From flux.adam at gmail.com Mon Oct 1 00:46:28 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Sun, 30 Sep 2018 17:46:28 -0700 Subject: [openstack-dev] [kolla][octavia] Containerize the amphora-agent In-Reply-To: References: Message-ID: I was coming to the same conclusion for a completely different goal -- baking lighter weight VMs (and eliminating a number of compatibility issues) by putting exactly what we need in containers and making the base OS irrelevant. So, I am interested in helping to do this in a way that will work well for both goals. My thought is that containerizing the agent AND using (existing?) containerized haproxy distributions, we can better standardize things between different amphora base OSes at the same time as setting up for full containerization. We should discuss further on IRC next week maybe, if we can find a good time. --Adam On Sun, Sep 30, 2018, 11:56 Hongbin Lu wrote: > Hi all, > > I am working on the Zun integration for Octavia. I did a preliminary > research and it seems what we need to do is to containerize the amphora > agent that was packaged and shipped by a VM image. I wonder if anyone > already has a containerized docker image that I can leverage. If not, I > will create one. > > Best regards, > Hongbin > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Mon Oct 1 04:22:41 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Mon, 1 Oct 2018 13:22:41 +0900 Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue In-Reply-To: References: Message-ID: hi, what kind of jobs should it be? eg. unit tests, tempest, etc... On Sun, Sep 30, 2018 at 9:43 AM Miguel Lavalle wrote: > > Dear networking Stackers, > > During the recent PTG in Denver, we discussed measures to prevent patches merged in the Neutron repo breaking Stadium and related networking projects in general. We decided to implement the following: > > 1) For Stadium projects, we want to add non-voting jobs to the Neutron check queue > 2) For non stadium projects, we are inviting them to add 3rd party CI jobs > > The next step is for each project to propose the jobs that they want to run against Neutron patches. > > Best regards > > Miguel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kor.jmlim at gmail.com Mon Oct 1 06:33:30 2018 From: kor.jmlim at gmail.com (Jea-Min Lim) Date: Mon, 1 Oct 2018 15:33:30 +0900 Subject: [openstack-dev] [Horizon] Horizon tutorial didn`t work Message-ID: Hello everyone, I`m following a tutorial of Building a Dashboard using Horizon. (link: https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard ) However, provided custom management command doesn't create boilerplate code. I typed tox -e manage -- startdash mydashboard --target openstack_dashboard/dashboards/mydashboard and the attached screenshot file is the execution result. 
Are there any recommendations to solve this problem? Regards. [image: result_jmlim.PNG] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: result_jmlim.PNG Type: image/png Size: 33958 bytes Desc: not available URL: From d.krol at samsung.com Mon Oct 1 07:41:01 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Mon, 1 Oct 2018 09:41:01 +0200 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> <20180928140517eucas1p285022a4ce8a367efadd9cdf05916a917~YlWEluavl1478514785eucas1p29@eucas1p2.samsung.com> Message-ID: <20181001074103eucas1p29c4c3ca08743afa2c610718b40f4eea5~ZbCc-4oe10677906779eucas1p2X@eucas1p2.samsung.com> Hello Doug, thanks for your explanation. I was a little bit confused by changes to stable branches with python3-first topic as I thought it has to do something with adding new test configuration for python3. But as you explained this is about moving zuul-related configuration, which is a part of python3-first goal (but it is not related to supporting python3 by projects IMHO :) ) Anyway, it is now clear to me and sorry for making this confusion. Best, Dariusz Krol On 9/28/18 6:05 PM, Doug Hellmann wrote: > Dariusz Krol writes: > >> Hello, >> >> >> I'm specifically referring to branches mentioned in: >> https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54 > I'm still not entirely sure what you're saying is happening that you do > not expect to have happening, but I'll take a guess. > > The zuul migration portion of the goal work needs to move *all* of the > Zuul settings for a repo into the correct branch because after the > migration the job settings will no longer be in project-config at all > and so zuul won't know which jobs to run on the stable branches if we > haven't imported the settings. > > The migration script tries to figure out which jobs apply to which > branches of each repo by looking at the branch specifier settings in > project-config, and then it creates an import patch for each branch with > the relevant jobs. Subsequent steps in the script change the > documentation and release notes jobs and then add new python 3.6 testing > jobs. Those steps only apply to the master branch. > > So, if you have a patch importing a python 3 job setting to a stable > branch of a repo where you aren't expecting it (and it isn't supported), > that's most likely because project-config has no branch specifiers for > the job (meaning it should run on all branches). We did find several > cases where that was true because projects added jobs without branch > specifiers after the branches were created, and then back-ported no > patches to the stable branch. See > http://lists.openstack.org/pipermail/openstack-dev/2018-August/133594.html > for details. > > Doug > >> I hope this helps. >> >> >> Best, >> >> Dariusz Krol >> >> >> On 09/27/2018 06:04 PM, Ben Nemec wrote: >>> >>> On 9/27/18 10:36 AM, Doug Hellmann wrote: >>>> Dariusz Krol writes: >>>> >>>>> Hello Champions :) >>>>> >>>>> >>>>> I work on the Trove project and we are wondering if python3 should be >>>>> supported in previous releases as well? >>>>> >>>>> Actually this question was asked by Alan Pevec from the stable branch >>>>> maintainers list. 
>>>>> >>>>> I saw you added releases up to ocata to support python3 and there are >>>>> already changes on gerrit waiting to be merged but after reading [1] I >>>>> have my doubts about this. >>>> I'm not sure what you're referring to when you say "added releases up to >>>> ocata" here. Can you link to the patches that you have questions about? >>> Possibly the zuul migration patches for all the stable branches? If >>> so, those don't change the status of python 3 support on the stable >>> branches, they just split the zuul configuration to make it easier to >>> add new python 3 jobs on master without affecting the stable branches. >>> >>>>> Could you elaborate why it is necessary to support previous releases ? >>>>> >>>>> >>>>> Best, >>>>> >>>>> Dariusz Krol >>>>> >>>>> >>>>> [1] https://docs.openstack.org/project-team-guide/stable-branches.html >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL: From majopela at redhat.com Mon Oct 1 08:01:18 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Mon, 1 Oct 2018 10:01:18 +0200 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: Thank you for the guidance and ping Doug. Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit? I'm working to make os-log-merger part of the OpenStack governance projects, and to make sure we release it as a tarball. It's a small tool I've been using for years making my life easier every time I've needed to debug complex scenarios. It's not a big project, but I hope the extra exposure will make developers, and admins life easier. Some projects use it as a way of aggregating logs [2] In a way that those can then be easily consumed by logstash/kibana. Best regards, Miguel Ángel Ajo [1] https://review.openstack.org/#/c/605641/ [2] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41 [3] http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann wrote: > zuul at openstack.org writes: > > > Build failed. > > > > - release-openstack-python > http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ > : FAILURE in 3m 57s > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > The error here is > > ERROR: unknown environment 'venv' > > It looks like os-log-merger is not set up for the > release-openstack-python job, which expects a specific tox setup. 
> > > http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Oct 1 08:26:15 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 01 Oct 2018 10:26:15 +0200 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> Message-ID: <1538382375.30016.0@smtp.office365.com> On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann wrote: > Nova, cinder and tempest run the nova-multiattach job in their check > and gate queues. The job was added in Queens and was a specific job > because we had to change the ubuntu cloud archive we used in Queens > to get multiattach working. Since Rocky, devstack defaults to a > version of the UCA that works for multiattach, so there isn't really > anything preventing us from running the tempest multiattach tests in > the integrated gate. The job tries to be as minimal as possible by > only running tempest.api.compute.* tests, but it still means spinning > up a new node and devstack for testing. > > Given the state of the gate recently, I'm thinking it would be good > if we dropped the nova-multiattach job in Stein and just enable the > multiattach tests in one of the other integrated gate jobs. +1 > I initially was just going to enable it in the nova-next job, but we > don't run that on cinder or tempest changes. I'm not sure if > tempest-full is a good place for this though since that job already > runs a lot of tests and has been timing out a lot lately [1][2]. > > The tempest-slow job is another option, but cinder doesn't currently > run that job (it probably should since it runs volume-related tests, > including the only tempest tests that use encrypted volumes). If the multiattach test qualifies as a slow test then I'm in favor of adding it to the tempest-slow and not lengthening the tempest-full further. gibi > > Are there other ideas/options for enabling multiattach in another job > that nova/cinder/tempest already use so we can drop the now mostly > redundant nova-multiattach job? 
> > [1] http://status.openstack.org/elastic-recheck/#1686542 > [2] http://status.openstack.org/elastic-recheck/#1783405 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From majopela at redhat.com Mon Oct 1 08:29:05 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Mon, 1 Oct 2018 10:29:05 +0200 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: Oh, ok 1.1.0 tag didn't have 'venv' in tox.ini, but master has it since: https://review.openstack.org/#/c/548618/7/tox.ini at 37 On Mon, Oct 1, 2018 at 10:01 AM Miguel Angel Ajo Pelayo wrote: > Thank you for the guidance and ping Doug. > > Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit? > > > I'm working to make os-log-merger part of the OpenStack governance > projects, and to make sure we release it as a tarball. > > It's a small tool I've been using for years making my life easier every > time I've needed to debug complex scenarios. It's not a big project, but I > hope the extra exposure will make developers, and admins life easier. > > > Some projects use it as a way of aggregating logs [2] In a way that those > can then be easily consumed by logstash/kibana. > > > Best regards, > Miguel Ángel Ajo > > [1] https://review.openstack.org/#/c/605641/ > [2] > http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41 > [3] > http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz > > > On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann > wrote: > >> zuul at openstack.org writes: >> >> > Build failed. >> > >> > - release-openstack-python >> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ >> : FAILURE in 3m 57s >> > - announce-release announce-release : SKIPPED >> > - propose-update-constraints propose-update-constraints : SKIPPED >> >> The error here is >> >> ERROR: unknown environment 'venv' >> >> It looks like os-log-merger is not set up for the >> release-openstack-python job, which expects a specific tox setup. >> >> >> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/ >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Miguel Ángel Ajo > OSP / Networking DFG, OVN Squad Engineering > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
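For context, the release-openstack-python job builds the release artifacts through a generic 'venv' tox environment, which is why the 1.1.0 tag failed with "ERROR: unknown environment 'venv'". A minimal sketch of the stanza the job needs, assuming the usual OpenStack cookiecutter layout rather than os-log-merger's exact file, is:

    [testenv:venv]
    commands = {posargs}

With that stanza present in the tagged release, 'tox -evenv -- <command>' resolves correctly and the release tooling can proceed.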
From john at johngarbutt.com Mon Oct 1 08:36:24 2018
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 1 Oct 2018 09:36:24 +0100
Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter
In-Reply-To: 
References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com>
Message-ID: 

On Fri, 28 Sep 2018 at 00:46, Jay Pipes wrote:
> On 09/27/2018 06:23 PM, Matt Riedemann wrote:
> > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >> A great example of this would be the proposed "deploy template" from
> >> [2]. This is nothing more than abusing the placement traits API in
> >> order to allow passthrough of instance configuration data from the
> >> nova flavor extra spec directly into the nodes.instance_info field in
> >> the Ironic database. It's a hack that is abusing the entire concept of
> >> the placement traits concept, IMHO.
> >>
> >> We should have a way *in Nova* of allowing instance configuration
> >> key/value information to be passed through to the virt driver's
> >> spawn() method, much the same way we provide for user_data that gets
> >> exposed after boot to the guest instance via configdrive or the
> >> metadata service API. What this deploy template thing is is just a
> >> hack to get around the fact that nova doesn't have a basic way of
> >> passing through some collated instance configuration key/value
> >> information, which is a darn shame and I'm really kind of annoyed with
> >> myself for not noticing this sooner. :(
> >
> > We talked about this in Dublin through right? We said a good thing to do
> > would be to have some kind of template/profile/config/whatever stored
> > off in glare where schema could be registered on that thing, and then
> > you pass a handle (ID reference) to that to nova when creating the
> > (baremetal) server, nova pulls it down from glare and hands it off to
> > the virt driver. It's just that no one is doing that work.
>
> No, nobody is doing that work.
> > I will if need be if it means not hacking the placement API to serve > this purpose (for which it wasn't intended). > Going back to the point Mark Goddard made, there are two things here: 1) Picking the correct resource provider 2) Telling Ironic to transform the picked node in some way Today we allow the use Capabilities for both. I am suggesting we move to using Traits only for (1), leaving (2) in place for now, while we decide what to do (i.e. future of "deploy template" concept). It feels like Ironic's plan to define the "deploy templates" in Ironic should replace the dependency on Glare for this use case, largely because the definition of the deploy template (in my mind) is very heavily related to inspector and driver properties, etc. Mark is looking at moving that forward at the moment. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Mon Oct 1 09:28:34 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 1 Oct 2018 10:28:34 +0100 Subject: [openstack-dev] [mistral] Extend created(updated)_at by started(finished)_at to clarify the duration of the task In-Reply-To: References: Message-ID: On Wed, 26 Sep 2018 at 12:03, Олег Овчарук wrote: > Hi everyone! Please take a look to the blueprint that i've just created > https://blueprints.launchpad.net/mistral/+spec/mistral-add-started-finished-at > > I'd like to implement this feature, also I want to update CloudFlow when > this will be done. Please let me know in the blueprint if I can start > implementing. > I agree with Renat, this sounds like a useful addition to me. I have added to to the Stein launchpad milestone. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Oct 1 09:52:43 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 1 Oct 2018 12:52:43 +0300 Subject: [openstack-dev] [Horizon] Horizon tutorial didn`t work In-Reply-To: References: Message-ID: Hi Jea-Min, Thank you for your report. I'll check the manual and fix it asap. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim wrote: > Hello everyone, > > I`m following a tutorial of Building a Dashboard using Horizon. > (link: > https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard > ) > > However, provided custom management command doesn't create boilerplate > code. > > I typed tox -e manage -- startdash mydashboard --target > openstack_dashboard/dashboards/mydashboard > > and the attached screenshot file is the execution result. > > Are there any recommendations to solve this problem? > > Regards. > > [image: result_jmlim.PNG] > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: result_jmlim.PNG Type: image/png Size: 33958 bytes Desc: not available URL: From jaypipes at gmail.com Mon Oct 1 12:02:30 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 1 Oct 2018 08:02:30 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On 10/01/2018 04:36 AM, John Garbutt wrote: > On Fri, 28 Sep 2018 at 00:46, Jay Pipes > wrote: > > On 09/27/2018 06:23 PM, Matt Riedemann wrote: > > On 9/27/2018 3:02 PM, Jay Pipes wrote: > >> A great example of this would be the proposed "deploy template" > from > >> [2]. This is nothing more than abusing the placement traits API in > >> order to allow passthrough of instance configuration data from the > >> nova flavor extra spec directly into the nodes.instance_info > field in > >> the Ironic database. It's a hack that is abusing the entire > concept of > >> the placement traits concept, IMHO. > >> > >> We should have a way *in Nova* of allowing instance configuration > >> key/value information to be passed through to the virt driver's > >> spawn() method, much the same way we provide for user_data that > gets > >> exposed after boot to the guest instance via configdrive or the > >> metadata service API. What this deploy template thing is is just a > >> hack to get around the fact that nova doesn't have a basic way of > >> passing through some collated instance configuration key/value > >> information, which is a darn shame and I'm really kind of > annoyed with > >> myself for not noticing this sooner. :( > > > > We talked about this in Dublin through right? We said a good > thing to do > > would be to have some kind of template/profile/config/whatever > stored > > off in glare where schema could be registered on that thing, and > then > > you pass a handle (ID reference) to that to nova when creating the > > (baremetal) server, nova pulls it down from glare and hands it > off to > > the virt driver. It's just that no one is doing that work. > > No, nobody is doing that work. > > I will if need be if it means not hacking the placement API to serve > this purpose (for which it wasn't intended). > > > Going back to the point Mark Goddard made, there are two things here: > > 1) Picking the correct resource provider > 2) Telling Ironic to transform the picked node in some way > > Today we allow the use Capabilities for both. > > I am suggesting we move to using Traits only for (1), leaving (2) in > place for now, while we decide what to do (i.e. future of "deploy > template" concept). > > It feels like Ironic's plan to define the "deploy templates" in Ironic > should replace the dependency on Glare for this use case, largely > because the definition of the deploy template (in my mind) is very > heavily related to inspector and driver properties, etc. Mark is looking > at moving that forward at the moment. That won't do anything about the flavor explosion problem, though, right John? -jay From sombrafam at gmail.com Mon Oct 1 12:22:46 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Mon, 1 Oct 2018 09:22:46 -0300 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? 
In-Reply-To: <1538382375.30016.0@smtp.office365.com>
References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com>
Message-ID: 

On Mon, Oct 1, 2018 at 05:26, Balázs Gibizer <balazs.gibizer at ericsson.com> wrote:

>
>
> On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann
> wrote:
> > Nova, cinder and tempest run the nova-multiattach job in their check
> > and gate queues. The job was added in Queens and was a specific job
> > because we had to change the ubuntu cloud archive we used in Queens
> > to get multiattach working. Since Rocky, devstack defaults to a
> > version of the UCA that works for multiattach, so there isn't really
> > anything preventing us from running the tempest multiattach tests in
> > the integrated gate. The job tries to be as minimal as possible by
> > only running tempest.api.compute.* tests, but it still means spinning
> > up a new node and devstack for testing.
> >
> > Given the state of the gate recently, I'm thinking it would be good
> > if we dropped the nova-multiattach job in Stein and just enable the
> > multiattach tests in one of the other integrated gate jobs.
>
> +1
>
> > I initially was just going to enable it in the nova-next job, but we
> > don't run that on cinder or tempest changes. I'm not sure if
> > tempest-full is a good place for this though since that job already
> > runs a lot of tests and has been timing out a lot lately [1][2].
> >
> > The tempest-slow job is another option, but cinder doesn't currently
> > run that job (it probably should since it runs volume-related tests,
> > including the only tempest tests that use encrypted volumes).
>
> If the multiattach test qualifies as a slow test then I'm in favor of
> adding it to the tempest-slow and not lengthening the tempest-full
> further.
>

+1 on having this in tempest-slow and adding it to Cinder, provided that it
would also cover encryption.

> gibi
>
> >
> > Are there other ideas/options for enabling multiattach in another job
> > that nova/cinder/tempest already use so we can drop the now mostly
> > redundant nova-multiattach job?
> >
> > [1] http://status.openstack.org/elastic-recheck/#1686542
> > [2] http://status.openstack.org/elastic-recheck/#1783405
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
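For reference, enabling the multiattach tests in an existing integrated job is mostly a devstack toggle rather than a whole new job. A rough sketch of a tempest-slow-based variant follows; the job name is illustrative, and it assumes devstack's ENABLE_VOLUME_MULTIATTACH setting that the dedicated nova-multiattach job relies on:

    - job:
        name: tempest-slow-multiattach
        parent: tempest-slow
        vars:
          devstack_localrc:
            ENABLE_VOLUME_MULTIATTACH: true

Wherever such a definition lives, nova, cinder and tempest could then share the one job instead of each gating on a separate single-purpose job.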
From gmann at ghanshyammann.com Mon Oct 1 12:46:04 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 01 Oct 2018 21:46:04 +0900
Subject: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv
In-Reply-To: <20180928141006.GA16108@zeong>
References: <20180928141006.GA16108@zeong>
Message-ID: <1662faa07f6.d035ba8630868.3059353454053199847@ghanshyammann.com>

 ---- On Fri, 28 Sep 2018 23:10:06 +0900 Matthew Treinish wrote ----
 > On Fri, Sep 28, 2018 at 02:39:24PM +0100, Chris Dent wrote:
 > >
 > > I'm still trying to figure out how to properly create a "modern" (as
 > > in zuul v3 oriented) integration test for placement using gabbi and
 > > tempest. That work is happening at https://review.openstack.org/#/c/601614/
 > >
 > > There was lots of progress made after the last message on this
 > > topic http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
 > > but I've reached another interesting impasse.
 > >
 > > From devstack's standpoint, the way to say "I want to use a tempest
 > > plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are.
 > > devstack:lib/tempest then does a:
 > >
 > > tox -evenv-tempest -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS
 > >
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163
 > >
 > > I have this part working as expected.
 > >
 > > However,
 > >
 > > The advice is then to create a new job that has a parent of
 > > devstack-tempest. That zuul job runs a variety of tox environments,
 > > depending on the setting of the `tox_envlist` var. If you wish to
 > > use a `tempest_test_regex` (I do) the preferred tox environment is
 > > 'all'.
 > >
 > > That venv doesn't have the plugin installed, thus no gabbi tests are
 > > found:
 > >
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683
 >
 > Right above this line it shows that the gabbi-tempest plugin is installed in
 > the venv:
 >
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661
 >
 > at version 0.1.1. It's a bit weird because it's line wrapped in my browser.
 > The devstack logs also shows the plugin:
 >
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/controller/logs/devstacklog.txt.gz#_2018-09-28_11_13_13_076
 >
 > All the tempest tox jobs that run tempest (and the tempest-venv command used by
 > devstack) run inside the same tox venv:
 >
 > https://github.com/openstack/tempest/blob/master/tox.ini#L52
 >
 > My guess is that the plugin isn't returning any tests that match the regex.
 >
 > I'm also a bit alarmed that tempest run is returning 0 there when no tests are
 > being run. That's definitely a bug because things should fail with no tests
 > being successfully run.

Tempest run does fail when no tests are run [1].

[1] https://github.com/openstack/tempest/blob/807f0dec66689aced05c2bb970f344cbb8a3c6a3/tempest/cmd/run.py#L182

-gmann

 >
 > -Matt Treinish
 >
 > >
 > > How do I get my plugin installed into the right venv while still
 > > following the guidelines for good zuul behavior?
 >
 > __________________________________________________________________________
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >
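As a side note, the devstack behavior quoted above can be reproduced by hand to check whether a plugin's tests are visible from the shared venv. A rough sequence, with illustrative paths, is:

    cd /opt/stack/tempest
    # install the plugin into the venv shared by tempest's tox targets
    tox -evenv-tempest -- pip install -c /opt/stack/requirements/upper-constraints.txt /opt/stack/gabbi-tempest
    # the 'all' env reuses the same envdir, so the plugin is importable there
    tox -eall -- 'your_test_regex'

Because all of tempest's test-running tox targets share one envdir (the tox.ini line Matt links above), anything installed via venv-tempest is visible to 'tox -eall' as well.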
From gmann at ghanshyammann.com Mon Oct 1 13:00:40 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 01 Oct 2018 22:00:40 +0900
Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names
In-Reply-To: 
References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
Message-ID: <1662fb764fa.cb961bbf31583.1579803421023633804@ghanshyammann.com>

 ---- On Fri, 21 Sep 2018 23:13:02 +0900 Lance Bragstad wrote ----
 > > On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote:
 > ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ----
 > tl;dr: +1 consistent names
 > I would make the names mirror the API... because the Operator setting them knows the API, not the code. Ignore the crazy names in Nova, I certainly hate them.
 >
 > Big +1 on consistent naming, which will help operators as well as developers to maintain those.
 >
 > > Lance Bragstad wrote:
 > > > I'm curious if anyone has context on the "os-" part of the format?
 > >
 > > My memory of the Nova policy mess...
 > > * Nova's policy rules traditionally followed the patterns of the code
 > > ** Yes, horrible, but it happened.
 > > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 > > * API used to expand with extensions, so the policy name is often based on extensions
 > > ** note most of the extension code has now gone, including lots of related policies
 > > * Policy in code was focused on getting us to a place where we could rename policy
 > > ** Whoop whoop by the way, it feels like we are really close to something sensible now!
 > >
 > > Lance Bragstad wrote:
 > > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention?
 > >
 > > I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc.
 > >
 > > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g. "compute" or "loadbalancer").
 > >
 > > +1. The API is known as "compute" in api-ref, so the policy should be for "compute", etc.
 >
 > Agree on mapping the policy name with api-ref as much as possible. Other than the policy name having 'os-', we have 'os-' in resource names also in nova API urls, like /os-agents, /os-aggregates etc (almost every resource except servers, flavors). As we cannot get rid of those from the API url, do we need to keep the same in policy naming too? Or we can have a policy name like compute:agents:create/post, but that mismatches the api-ref, where the agents resource url is os-agents.
 >
 > Good question. I think this depends on how the service does policy enforcement.
 > I know we did something like this in keystone, which required policy names and method names to be the same:
 > "identity:list_users": "..."
 > Because the initial implementation of policy enforcement used a decorator like this:
 > from keystone import controller
 >
 > @controller.protected
 > def list_users(self):
 >     ...
 >
 > Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method.
 > The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation.
 > While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone.
 > Hopefully, that helps explain why we needed them to match.
 > Also we have action APIs (I know from nova, not sure about other services) like POST /servers/{server_id}/action {addSecurityGroup}, and their current policy names are all inconsistent. A few have a policy name including their resource name, like "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action' in the policy name, like "os_compute_api:os-admin-actions:reset_state", and a few have the direct action name, like "os_compute_api:os-console-output".
 >
 > Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0].

Yes, the HTTP method is not necessary in the action policy name. The action name itself should be self-explanatory.

 > [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action
 > Maybe we can make them consistent with <service_type>:<resource>:<action>, or any better opinion.
 >
 > > From: Lance Bragstad
 > > The topic of having consistent policy names has popped up a few times this week.
 > >
 > > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following:
 > > * move to new consistent policy name, deprecate existing name
 > > * hardcode scope check to project, system or user
 > > ** (user, yes... keypairs, yuck, but its how they work)
 > > ** deprecate in rule scope checks, which are largely bogus in Nova anyway
 > > * make read/write/admin distinction
 > > ** therefore adding the "noop" role, among other things
 >
 > + policy granularity.
 >
 > It is a good idea to make the policy improvements all together and for all rules as you mentioned. But my worry is how much load it will put on the operator side to migrate all policy rules at the same time? What will be the deprecation period, etc., which I think we can discuss on the proposed spec - https://review.openstack.org/#/c/547850
 > Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here.
 > It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume.

Agreed that non-overridden policies will be OK for the name change cases. But they will still be affected when the default roles are changed.
-gmann > > -gmann > > > Thanks,John __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jim at jimrollenhagen.com Mon Oct 1 13:01:46 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 1 Oct 2018 09:01:46 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes wrote: > On 10/01/2018 04:36 AM, John Garbutt wrote: > > On Fri, 28 Sep 2018 at 00:46, Jay Pipes > > wrote: > > > > On 09/27/2018 06:23 PM, Matt Riedemann wrote: > > > On 9/27/2018 3:02 PM, Jay Pipes wrote: > > >> A great example of this would be the proposed "deploy template" > > from > > >> [2]. This is nothing more than abusing the placement traits API > in > > >> order to allow passthrough of instance configuration data from > the > > >> nova flavor extra spec directly into the nodes.instance_info > > field in > > >> the Ironic database. It's a hack that is abusing the entire > > concept of > > >> the placement traits concept, IMHO. > > >> > > >> We should have a way *in Nova* of allowing instance configuration > > >> key/value information to be passed through to the virt driver's > > >> spawn() method, much the same way we provide for user_data that > > gets > > >> exposed after boot to the guest instance via configdrive or the > > >> metadata service API. What this deploy template thing is is just > a > > >> hack to get around the fact that nova doesn't have a basic way of > > >> passing through some collated instance configuration key/value > > >> information, which is a darn shame and I'm really kind of > > annoyed with > > >> myself for not noticing this sooner. :( > > > > > > We talked about this in Dublin through right? We said a good > > thing to do > > > would be to have some kind of template/profile/config/whatever > > stored > > > off in glare where schema could be registered on that thing, and > > then > > > you pass a handle (ID reference) to that to nova when creating the > > > (baremetal) server, nova pulls it down from glare and hands it > > off to > > > the virt driver. It's just that no one is doing that work. > > > > No, nobody is doing that work. > > > > I will if need be if it means not hacking the placement API to serve > > this purpose (for which it wasn't intended). 
> >
> > Going back to the point Mark Goddard made, there are two things here:
> >
> > 1) Picking the correct resource provider
> > 2) Telling Ironic to transform the picked node in some way
> >
> > Today we allow the use of Capabilities for both.
> >
> > I am suggesting we move to using Traits only for (1), leaving (2) in
> > place for now, while we decide what to do (i.e. future of "deploy
> > template" concept).
> >
> > It feels like Ironic's plan to define the "deploy templates" in Ironic
> > should replace the dependency on Glare for this use case, largely
> > because the definition of the deploy template (in my mind) is very
> > heavily related to inspector and driver properties, etc. Mark is looking
> > at moving that forward at the moment.
>
> That won't do anything about the flavor explosion problem, though, right
> John?

Does nova still plan to allow passing additional desired traits into the server create request? I (we?) was kind of banking on that to solve the Baskin Robbins thing.

// jim

>
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Mon Oct 1 13:13:30 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 01 Oct 2018 22:13:30 +0900
Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names
In-Reply-To: 
References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com>
Message-ID: <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com>

 ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad wrote ----
 >
 > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote:
 > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 > wrote:
 > >
 > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there are no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type.
 > >
 > > I propose we consider
 > >
 > > <service-type>:<resource>:<action>[:<subaction>]
 > >
 > > Example for keystone (note, action names below are strictly examples; I am fine with whatever form those actions take):
 > > identity:projects:create
 > > identity:projects:delete
 > > identity:projects:list
 > > identity:projects:get
 > >
 > > It keeps things simple and consistent when you're looking through overrides / defaults.
 > > --Morgan
 > +1 -- I think the ordering if `resource` comes before
 > `action|subaction` will be more clean.
 >
 > ++
 > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead of projects). For example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)

Do we need the "action" word there? I think the action name itself should convey the operation. IMO the notation below, without the "action" word, looks clear enough. What do you say?

compute:server:reboot
compute:server:confirm_resize

-gmann

 >
 > Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nick-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list."
> In summary, the new convention based on the most recent feedback should be: > ::[:] > Rules:service-type is always defined in the service types authority > resources are always singular > Thanks to all for sticking through this tedious discussion. I appreciate it. > /R > > Harry > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: > >> > >> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: > >> > >> :: > >> > >> or > >> > >> :_ > >> > >> Where is the corresponding service type of the project [0], and is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > >> > >> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). > >> > >> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: > >> > >> ::: > >> > >> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. > >> > >> What color do you want the shed to be? > >> > >> [0] https://service-types.openstack.org/service-types.json > >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: > >>> > >>> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: > >>>> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > >>>> > tl;dr+1 consistent names > >>>> > I would make the names mirror the API... because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them > >>>> > >>>> Big +1 on consistent naming which will help operator as well as developer to maintain those. > >>>> > >>>> > > >>>> > Lance Bragstad wrote: > >>>> > > I'm curious if anyone has context on the "os-" part of the format? > >>>> > > >>>> > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code > >>>> > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! > >>>> > Lance Bragstad wrote: > >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? > >>>> > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). > >>>> > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. > >>>> > >>>> Agree on mapping the policy name with api-ref as much as possible. 
Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. > >>> > >>> > >>> Good question. I think this depends on how the service does policy enforcement. > >>> > >>> I know we did something like this in keystone, which required policy names and method names to be the same: > >>> > >>> "identity:list_users": "..." > >>> > >>> Because the initial implementation of policy enforcement used a decorator like this: > >>> > >>> from keystone import controller > >>> > >>> @controller.protected > >>> def list_users(self): > >>> ... > >>> > >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. > >>> > >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone. > >>> > >>> Hopefully, that helps explain why we needed them to match. > >>> > >>>> > >>>> > >>>> Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" > >>> > >>> > >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. > >>> > >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > >>> > >>>> > >>>> > >>>> May be we can make them consistent with :: or any better opinion. > >>>> > >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. > >>>> > > >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: > >>>> > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... 
keypairs, yuck, but it's how they work) > >>>> > ** deprecate in rule scope checks, which are largely bogus in Nova anyway > >>>> > * make read/write/admin distinction > >>>> > ** therefore adding the "noop" role, among other things > >>>> > >>>> + policy granularity. > >>>> > >>>> It is a good idea to make the policy improvements all together and for all rules as you mentioned. But my worry is how much load it will be on the operator side to migrate all policy rules at the same time. What will the deprecation period be, etc.? I think we can discuss that on the proposed spec - https://review.openstack.org/#/c/547850 > >>> > >>> > >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. > >>> > >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. > >>> > >>>> > >>>> > >>>> > >>>> -gmann > >>>> > >>>> > Thanks, John __________________________________________________________________________ > >>>> > OpenStack Development Mailing List (not for usage questions) > >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > >>>> > >>>> __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Mon Oct 1 13:14:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 01 Oct 2018 22:14:06 +0900 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <20180928203318.GA3769@sm-workstation> Message-ID: <1662fc3b44e.10c358bd832277.7078958170608467364@ghanshyammann.com> ---- On Sat, 29 Sep 2018 07:23:30 +0900 Lance Bragstad wrote ---- > Alright - I've worked up the majority of what we have in this thread and proposed a documentation patch for oslo.policy [0]. 
> I think we're at the point where we can finish the rest of this discussion in gerrit if folks are ok with that. > [0] https://review.openstack.org/#/c/606214/ +1, thanks for that. Let's start the discussion there. -gmann > On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis wrote: > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > > wrote: > > > > > > > > Ideally I would like to see it in the form of least specific to most > > > specific. But more importantly in a way that there are no additional > > > delimiters between the service type and the resource. Finally, I do not > > > like the change of plurality depending on action type. > > > > > > > > I propose we consider > > > > > > > > &lt;service-type&gt;:&lt;resource&gt;:[&lt;subaction&gt;:]&lt;action&gt; > > > > > > > > Example for keystone (note: action names below are strictly examples; I > > > am fine with whatever form those actions take): > > > > identity:projects:create > > > > identity:projects:delete > > > > identity:projects:list > > > > identity:projects:get > > > > > > > > It keeps things simple and consistent when you're looking through > > > overrides / defaults. > > > > --Morgan > > > +1 -- I think the ordering, if `resource` comes before > > > `action|subaction`, will be cleaner. > > > > > > > Great idea. This is looking better and better. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Mon Oct 1 13:25:09 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 1 Oct 2018 14:25:09 +0100 (BST) Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: On Sat, 29 Sep 2018, Jay Pipes wrote: > I don't think that's a fair statement. You absolutely *do* care which way we > go. You want to encode multiple bits of information into a trait string -- > such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to > understand that this trait string has multiple bits of information encoded in > it (the fact that it's a PCI device and that the PCI device is at > 01_AB_23_CD). > > You don't see a problem encoding these variants inside a string. Chris > doesn't either. Lest I be misconstrued, I'd like to clarify: What I was trying to say elsewhere in the thread was that placement should never be aware of _anything_ that is in the trait string (except CUSTOM_* when validating ones that are added, and MISC_SHARES[...] for sharing providers). On the placement server side, input is compared solely for equality with stored data and nothing else, and we should never allow value comparisons, string fragments, regex, etc. So from a code perspective _placement_ is completely agnostic to whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or "JAY_LIKES_CRUNCHIE_BARS". However, things which are using traits (e.g., nova, ironic) need to make their own decisions about how the value of traits are interpreted. 
I don't have a strong position on that except to say that _if_ we end up in a position of there being lots of traits willy nilly, people who have chosen to do that need to know that the contract presented by traits right now (present or not present, no value comprehension) is fixed. > I *do* see a problem with it, based on my experience in Nova where this kind > of thing leads to ugly, unmaintainable, and incomprehensible code as I have > pointed to in previous responses. I think there are many factors that have led to nova being incomprehensible and indeed bad representations is one of them, but I think reasonable people can disagree on which factors are the most important and with sufficient discussion come to some reasonable compromises. I personally feel that while the bad representations (encoding stuff in strings or json blobs) thing is a big deal, another major factor is a predilection to make new apis, new abstractions, and new representations rather than working with and adhering to the constraints of the existing ones. This leads to a lot of code that encodes business logic in itself (e.g., several different ways and layers of indirection to think about allocation ratios) rather than working within strong and constraining contracts. From my standpoint there isn't much to talk about here from a placement code standpoint. We should clearly document the functional contract (and stick to it) and we should come up with exemplars for how to make the best use of traits. I think this conversation could allow us to find those examples. I don't, however, want placement to be a traffic officer for how people do things. In the context of the orchestration between nova and ironic and how that interaction happens, nova has every right to set some guidelines if it needs to. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From gmann at ghanshyammann.com Mon Oct 1 13:37:49 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 01 Oct 2018 22:37:49 +0900 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> Message-ID: <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> ---- On Mon, 01 Oct 2018 21:22:46 +0900 Erlon Cruz wrote ---- > > > Em seg, 1 de out de 2018 às 05:26, Balázs Gibizer escreveu: > > > On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann > wrote: > > Nova, cinder and tempest run the nova-multiattach job in their check > > and gate queues. The job was added in Queens and was a specific job > > because we had to change the ubuntu cloud archive we used in Queens > > to get multiattach working. Since Rocky, devstack defaults to a > > version of the UCA that works for multiattach, so there isn't really > > anything preventing us from running the tempest multiattach tests in > > the integrated gate. The job tries to be as minimal as possible by > > only running tempest.api.compute.* tests, but it still means spinning > > up a new node and devstack for testing. > > > > Given the state of the gate recently, I'm thinking it would be good > > if we dropped the nova-multiattach job in Stein and just enable the > > multiattach tests in one of the other integrated gate jobs. > > +1 > > I initially was just going to enable it in the nova-next job, but we > > don't run that on cinder or tempest changes. 
I'm not sure if > > tempest-full is a good place for this though since that job already > > runs a lot of tests and has been timing out a lot lately [1][2]. > > > > The tempest-slow job is another option, but cinder doesn't currently > > run that job (it probably should since it runs volume-related tests, > > including the only tempest tests that use encrypted volumes). > > If the multiattach test qualifies as a slow test then I'm in favor of > adding it to the tempest-slow and not lengthening the tempest-full > further. > > +1 on having this in tempest-slow and adding it to Cinder, provided that it would also cover encryption. +1 on adding multiattach to the integrated job. It is always good to cover more features in the integrated gate instead of separate jobs. These tests do not take much time, so it should be OK to add them to tempest-full [1]. We should mark only really slow tests as 'slow'; otherwise it should be fine to run them in tempest-full. I thought adding tempest-slow on cinder was merged but it is not [2] [1] http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653 [2] https://review.openstack.org/#/c/591354/2 -gmann > gibi > > > > Are there other ideas/options for enabling multiattach in another job > > that nova/cinder/tempest already use so we can drop the now mostly > > redundant nova-multiattach job? > > > > [1] http://status.openstack.org/elastic-recheck/#1686542 > > [2] http://status.openstack.org/elastic-recheck/#1783405 > > > > -- > > > > Thanks, > > > > Matt > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Mon Oct 1 14:12:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 1 Oct 2018 10:12:47 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> On 10/01/2018 09:01 AM, Jim Rollenhagen wrote: > On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes > wrote: > > On 10/01/2018 04:36 AM, John Garbutt wrote: > > > On Fri, 28 Sep 2018 at 00:46, Jay Pipes > > >> wrote: > > > >     On 09/27/2018 06:23 PM, Matt Riedemann wrote: > >      > On 9/27/2018 3:02 PM, Jay Pipes wrote: > >      >> A great example of this would be the proposed "deploy template" > >     from > >      >> [2]. 
This is nothing more than abusing the placement > traits API in > >      >> order to allow passthrough of instance configuration data > from the > >      >> nova flavor extra spec directly into the nodes.instance_info > >     field in > >      >> the Ironic database. It's a hack that is abusing the entire > >     concept of > >      >> the placement traits concept, IMHO. > >      >> > >      >> We should have a way *in Nova* of allowing instance > configuration > >      >> key/value information to be passed through to the virt > driver's > >      >> spawn() method, much the same way we provide for > user_data that > >     gets > >      >> exposed after boot to the guest instance via configdrive > or the > >      >> metadata service API. What this deploy template thing is > is just a > >      >> hack to get around the fact that nova doesn't have a > basic way of > >      >> passing through some collated instance configuration > key/value > >      >> information, which is a darn shame and I'm really kind of > >     annoyed with > >      >> myself for not noticing this sooner. :( > >      > > >      > We talked about this in Dublin through right? We said a good > >     thing to do > >      > would be to have some kind of template/profile/config/whatever > >     stored > >      > off in glare where schema could be registered on that > thing, and > >     then > >      > you pass a handle (ID reference) to that to nova when > creating the > >      > (baremetal) server, nova pulls it down from glare and hands it > >     off to > >      > the virt driver. It's just that no one is doing that work. > > > >     No, nobody is doing that work. > > > >     I will if need be if it means not hacking the placement API > to serve > >     this purpose (for which it wasn't intended). > > > > > > Going back to the point Mark Goddard made, there are two things here: > > > > 1) Picking the correct resource provider > > 2) Telling Ironic to transform the picked node in some way > > > > Today we allow the use Capabilities for both. > > > > I am suggesting we move to using Traits only for (1), leaving (2) in > > place for now, while we decide what to do (i.e. future of "deploy > > template" concept). > > > > It feels like Ironic's plan to define the "deploy templates" in > Ironic > > should replace the dependency on Glare for this use case, largely > > because the definition of the deploy template (in my mind) is very > > heavily related to inspector and driver properties, etc. Mark is > looking > > at moving that forward at the moment. > > That won't do anything about the flavor explosion problem, though, > right > John? > > > Does nova still plan to allow passing additional desired traits into the > server create request? > I (we?) was kind of banking on that to solve the Baskin Robbins thing. That's precisely what I've been looking into. From what I can tell, Ironic was planning on using these CUSTOM_DEPLOY_TEMPLATE_XXX traits in two ways: 1) To tell Nova what scheduling constraints the instance needed -- e.g. "hey Nova, make sure I land on a node that supports UEFI boot mode because my boot image relies on that". 2) As a convenient (because it would require no changes to Nova) way of passing instance pre-spawn configuration data to the Ironic virt driver -- e.g. pass the entire set of traits that are in the RequestSpec's flavor and image extra specs to Ironic before calling the Ironic node provision API. 
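For reference, the #1 usage is already expressible today through flavor extra specs -- a rough sketch, with made-up flavor and trait names, not a real deployment's config:

    openstack flavor set bm.gold \
      --property resources:CUSTOM_BAREMETAL_GOLD=1 \
      --property trait:CUSTOM_BOOT_MODE_UEFI=required

Nova turns the required trait into a placement constraint, so only Ironic nodes exposing that trait come back as candidates.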
#1 is fine IMHO, since it (mostly) represents a "capability" that the resource provider (the Ironic baremetal node) must support in order for the instance to successfully boot. #2 is a problem, though, because it *doesn't* represent a capability. In fact, it can represent any and all sorts of key/value, JSON/dict or other information, and this information is not intended to be passed to the placement/scheduler service as a constraint. It is this data, also, that tends to create the flavor explosion problem because it means that Ironic deployers need to create dozens of flavors that vary only slightly based on a user's desired deployment configuration. So, deployers end up needing to create dozens of flavors varying only slightly by things like RAID level or some pre-defined disk partitioning layout. For example, the deployer might have dozens of flavors that look like this:

- 12CPU_128G_RAID10_DRIVE_LAYOUT_X
- 12CPU_128G_RAID5_DRIVE_LAYOUT_X
- 12CPU_128G_RAID01_DRIVE_LAYOUT_X
- 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
- 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
- 12CPU_128G_RAID01_DRIVE_LAYOUT_Y

which is the flavor explosion problem we talk about. Add into that mix the desire to have certain functionality "activated" -- like "this Ironic node supports UEFI boot mode but I need to twiddle some foo before I can provision using UEFI boot mode" -- and things just get ugly. I suppose the long-term solution I would like to see is a clean separation of #1 from #2, with traits being used as-is for the *required capabilities* (i.e. does this node support hardware RAID?) and separate command-line arguments for #2, with those command-line arguments being references to Glance metadef items. That way, Glance metadefs can be used for validating the data supplied to the command line instead of Nova needing to add that data validation logic. So, the long-term command line solution might look something like this:

# list the disk partitioning schemes from glance
openstack metadefs list --object disk_partitioning_scheme

# Choose the disk partitioning scheme you want...
# let's say that this disk partitioning scheme is a JSON
# document that looks like this (the schema for this document
# would belong to the metadef schema in Glance)
{
  "type": "disk_partitioning_scheme",
  "version": 123,
  "object": {
    "raid": {
      "level": 5
    },
    "partitioning_table_format": "gpt",
    "partitions": {
      "/dev/sda": {
        "format": "ext4",
        "label": "os",
        "size": "256MB",
        "mountpoint": "/"
      },
      "/dev/sdb": {
        "format": "ext4",
        "label": "home",
        "size": "2TB",
        "mountpoint": "/home"
      }...
    }
  },
  "placement": {
    "traits": {
      "required": [
        "STORAGE_DISK_SSD",
        "STORAGE_RAID_HARDWARE",
        "BOOT_MODE_UEFI"
      ]
    }
  }
}

openstack server create --flavor METAL_12CPU_128G \
  --image $IMAGE_UUID \
  --config-data $PARTITIONING_SCHEME_UUID

# nova would see the "placement:traits" collection in the config-data
# object and would merge those trait constraints into the constraints it
# constructs from the flavor extra specs and image metadata, passing
# those constraints directly to placement for scheduling filters.
# Note that this is virtually identical to the solution we developed
# with the Neutron team to put a "network:capabilities" section in the
# port binding profile to indicate the required traits for a port...

# nova would pass the config-data object to the virt driver
# to do with as it wants. 
One alternative could also have Ironic folks separate disk partitioning from RAID configuration, making separate disk_partitioning_scheme and raid_config metadef items in Glance, decoupling the RAID configuration from the disk partitioning configuration. The user could then specify different RAID config and disk partitioning on the command line, like so:

openstack server create --flavor METAL_12CPU_128G \
  --image $IMAGE_UUID \
  --config-data $PARTITIONING_SCHEME_UUID \
  --config-data $RAID_CONFIG_UUID

Anyway, these are the thoughts I'm typing up in the mentioned spec... -jay From zbitter at redhat.com Mon Oct 1 14:15:51 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 1 Oct 2018 10:15:51 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> Message-ID: <270d322a-cb22-7237-1c72-8f47ad3475ed@redhat.com> On 28/09/18 1:19 PM, Chris Dent wrote: >> They aren't arbitrary. They are there for a reason: a trait is a >> boolean capability. It describes something that either a provider is >> capable of supporting or it isn't. > > This is somewhat (maybe even only slightly) different from what I > think the definition of a trait is, and that nuance may be relevant. > > I describe a trait as a "quality that a resource provider has" (the > car is blue). This contrasts with a resource class which is a > "quantity that a resource provider has" (the car has 4 doors). I'm not sure that quality vs. quantity is actually the right distinction here... someone could equally argue that having 4 doors is itself a quality[1] of a car, and they could certainly come up with a formulation that obscures the role of quantity at all (the car is a sedan). I think the actual distinction you're describing is between discrete (or perhaps just enumerable) and continuous (or at least innumerable) values. What that misses is that if the car is blue, it cannot also be green. Since placement absolutely should not know anything at all about the meaning of traits, this means that clients will be required to implement a bunch of business logic to maintain consistency. Furthermore, should the colour of the car change from blue to green at some point in the future[2], I am assuming that placement will not offer an API that allows both traits to be updated atomically. Those are problems that key-value solves. It could be the case that those problems are not considered important in this context; if so I'd expect to see the reasons explained as part of this discussion. cheers, Zane. [1] Resisting the urge to quote Stalin here. 
[2] https://en.wikipedia.org/wiki/New_riddle_of_induction From jim at jimrollenhagen.com Mon Oct 1 14:35:15 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 1 Oct 2018 10:35:15 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> Message-ID: On Mon, Oct 1, 2018 at 10:13 AM Jay Pipes wrote: > On 10/01/2018 09:01 AM, Jim Rollenhagen wrote: > > On Mon, Oct 1, 2018 at 8:03 AM Jay Pipes > > wrote: > > > > On 10/01/2018 04:36 AM, John Garbutt wrote: > > > On Fri, 28 Sep 2018 at 00:46, Jay Pipes > > > > >> wrote: > > > > > > On 09/27/2018 06:23 PM, Matt Riedemann wrote: > > > > On 9/27/2018 3:02 PM, Jay Pipes wrote: > > > >> A great example of this would be the proposed "deploy > > template" > > > from > > > >> [2]. This is nothing more than abusing the placement > > traits API in > > > >> order to allow passthrough of instance configuration data > > from the > > > >> nova flavor extra spec directly into the > nodes.instance_info > > > field in > > > >> the Ironic database. It's a hack that is abusing the > entire > > > concept of > > > >> the placement traits concept, IMHO. > > > >> > > > >> We should have a way *in Nova* of allowing instance > > configuration > > > >> key/value information to be passed through to the virt > > driver's > > > >> spawn() method, much the same way we provide for > > user_data that > > > gets > > > >> exposed after boot to the guest instance via configdrive > > or the > > > >> metadata service API. What this deploy template thing is > > is just a > > > >> hack to get around the fact that nova doesn't have a > > basic way of > > > >> passing through some collated instance configuration > > key/value > > > >> information, which is a darn shame and I'm really kind of > > > annoyed with > > > >> myself for not noticing this sooner. :( > > > > > > > > We talked about this in Dublin through right? We said a > good > > > thing to do > > > > would be to have some kind of > template/profile/config/whatever > > > stored > > > > off in glare where schema could be registered on that > > thing, and > > > then > > > > you pass a handle (ID reference) to that to nova when > > creating the > > > > (baremetal) server, nova pulls it down from glare and > hands it > > > off to > > > > the virt driver. It's just that no one is doing that work. > > > > > > No, nobody is doing that work. > > > > > > I will if need be if it means not hacking the placement API > > to serve > > > this purpose (for which it wasn't intended). > > > > > > > > > Going back to the point Mark Goddard made, there are two things > here: > > > > > > 1) Picking the correct resource provider > > > 2) Telling Ironic to transform the picked node in some way > > > > > > Today we allow the use Capabilities for both. > > > > > > I am suggesting we move to using Traits only for (1), leaving (2) > in > > > place for now, while we decide what to do (i.e. future of "deploy > > > template" concept). 
> > > > > > It feels like Ironic's plan to define the "deploy templates" in > > Ironic > > > should replace the dependency on Glare for this use case, largely > > > because the definition of the deploy template (in my mind) is very > > > heavily related to inspector and driver properties, etc. Mark is > > looking > > > at moving that forward at the moment. > > > > That won't do anything about the flavor explosion problem, though, > > right > > John? > > > > > > Does nova still plan to allow passing additional desired traits into the > > server create request? > > I (we?) was kind of banking on that to solve the Baskin Robbins thing. > > That's precisely what I've been looking into. Right, well aware of that. > From what I can tell, > Ironic was planning on using these CUSTOM_DEPLOY_TEMPLATE_XXX traits in > two ways: > > 1) To tell Nova what scheduling constraints the instance needed -- e.g. > "hey Nova, make sure I land on a node that supports UEFI boot mode > because my boot image relies on that". > > 2) As a convenient (because it would require no changes to Nova) way of > passing instance pre-spawn configuration data to the Ironic virt driver > -- e.g. pass the entire set of traits that are in the RequestSpec's > flavor and image extra specs to Ironic before calling the Ironic node > provision API. > > #1 is fine IMHO, since it (mostly) represents a "capability" that the > resource provider (the Ironic baremetal node) must support in order for > the instance to successfully boot. > This is the sort of thing I want to initially target. I understand the deploy templates thing proposes solving both #1 and #2, but I want to back up a bit. So say the user requests a node that supports UEFI because their image needs UEFI. Which workflow would you want here? 1) The operator (or ironic?) has already configured the node to boot in UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait. 2) Any node that supports UEFI mode advertises the trait. Ironic ensures that UEFI mode is enabled before provisioning the machine. I imagine doing #2 by passing the traits which were specifically requested by the user, from Nova to Ironic, so that Ironic can do the right thing for the user. Your proposal suggests that the user request the "supports UEFI" trait, and *also* pass some glance UUID which the user understands will make sure the node actually boots in UEFI mode. Something like: openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI --config-data $TURN_ON_UEFI_UUID Note that I pass --trait because I hope that will one day be supported and we can slow down the flavor explosion. But I'm not sure if that's still planned because I think you misunderstood my question or wanted to talk more about RAID. :) Anyway, it seems like a pretty lame user experience to be able to explicitly request UEFI, but get a server that doesn't actually come up because I forgot to also tell it to turn on UEFI mode. FWIW, I do really like what you're proposing, for complex things like RAID (or really. But for the simple cases (UEFI being one that many people use today, VT bit being another example), feels a bit overkill. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alifshit at redhat.com Mon Oct 1 14:39:09 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 1 Oct 2018 10:39:09 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: > So from a code perspective _placement_ is completely agnostic to > whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or > "JAY_LIKES_CRUNCHIE_BARS". Right, but words have meanings, and everyone is better off if that meaning is common amongst everyone doing the talking. So if placement understands traits as "a unitary piece of information that is either true or false" (ex: HAS_SSD), but nova understands it as "multiple pieces of information, all of which are either true or false" (ex: HAS_PCI_DE_AD_BE_EF), then that's asking for trouble. Can it work out? Probably, but it'll be more by accident that by design, sort of like French and Spanish sharing certain words, but then having some similar sounding words mean something completely different. > However, things which are using traits (e.g., nova, ironic) need to > make their own decisions about how the value of traits are > interpreted. Well... if placement is saying "here's the primitives I can work with and can expose to my users", but nova is saying "well, we like this one primitive, but what we really need is this other primitive, and you don't have it, but we can totally hack this first primitive that you do have to do what we want"... That's ugly. From what I understand, Nova needs *resources* (not resources providers) to have *quantities* of things, and this is not something that placement can currently work with, which is why we're having this flamewar ;) > I don't have a strong position on that except to say > that _if_ we end up in a position of there being lots of traits > willy nilly, people who have chosen to do that need to know that the > contract presented by traits right now (present or not present, no > value comprehension) is fixed. > > > I *do* see a problem with it, based on my experience in Nova where this kind > > of thing leads to ugly, unmaintainable, and incomprehensible code as I have > > pointed to in previous responses. > > I think there are many factors that have led to nova being > incomprehensible and indeed bad representations is one of them, but > I think reasonable people can disagree on which factors are the most > important and with sufficient discussion come to some reasonable > compromises. I personally feel that while the bad representations > (encoding stuff in strings or json blobs) thing is a big deal, > another major factor is a predilection to make new apis, new > abstractions, and new representations rather than working with and > adhering to the constraints of the existing ones. This leads to a > lot of code that encodes business logic in itself (e.g., several > different ways and layers of indirection to think about allocation > ratios) rather than working within strong and constraining > contracts. > > From my standpoint there isn't much to talk about here from a > placement code standpoint. We should clearly document the functional > contract (and stick to it) and we should come up with exemplars > for how to make the best use of traits. > > I think this conversation could allow us to find those examples. 
> > I don't, however, want placement to be a traffic officer for how > people do things. In the context of the orchestration between nova > and ironic and how that interaction happens, nova has every right to > set some guidelines if it needs to. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From dms at danplanet.com Mon Oct 1 15:06:01 2018 From: dms at danplanet.com (Dan Smith) Date: Mon, 01 Oct 2018 08:06:01 -0700 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: (Chris Dent's message of "Mon, 1 Oct 2018 14:25:09 +0100 (BST)") References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: I was out when much of this conversation happened, so I'm going to summarize my opinion here. > So from a code perspective _placement_ is completely agnostic to > whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or > "JAY_LIKES_CRUNCHIE_BARS". > > However, things which are using traits (e.g., nova, ironic) need to > make their own decisions about how the value of traits are > interpreted. I don't have a strong position on that except to say > that _if_ we end up in a position of there being lots of traits > willy nilly, people who have chosen to do that need to know that the > contract presented by traits right now (present or not present, no > value comprehension) is fixed. I agree with what Chris holds sacred here, which is that placement shouldn't ever care about what the trait names are or what they mean to someone else. That also extends to me hoping we never implement a generic key=value store on resource providers in placement. >> I *do* see a problem with it, based on my experience in Nova where >> this kind of thing leads to ugly, unmaintainable, and >> incomprehensible code as I have pointed to in previous responses. I definitely agree with what Jay holds sacred here, which is that abusing the data model to encode key=value information into single trait strings is bad (which is what you're doing with something like PCI_ADDRESS_01_AB_23_CD). I don't want placement (the code) to try to put any technical restrictions on the meaning of trait names, in an attempt to try to prevent the above abuse. I agree that means people _can_ abuse it if they wish, which I think is Chris' point. However, I think it _is_ important for the placement team (the people) to care about how consumers (nova, etc) use traits, and thus provide guidance on that is necessary. Not everyone will follow that guidance, but we should provide it. Projects with history-revering developers on both sides of the fence can help this effort if they lead by example. If everyone goes off and implements their way around the perceived restriction of not being able to ask placement for RAID_LEVEL>=5, we're going to have a much larger mess than the steaming pile of extra specs in nova that we're trying to avoid. 
--Dan From miguel at mlavalle.com Mon Oct 1 15:20:24 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 1 Oct 2018 10:20:24 -0500 Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue In-Reply-To: References: Message-ID: Hi Takashi, We are open to your suggestion. What job do you think will be helpful in minimizing the possibility of Neutron patches breaking your project? Best regards On Sun, Sep 30, 2018 at 11:25 PM Takashi Yamamoto wrote: > hi, > > what kind of jobs should it be? eg. unit tests, tempest, etc... > On Sun, Sep 30, 2018 at 9:43 AM Miguel Lavalle > wrote: > > > > Dear networking Stackers, > > > > During the recent PTG in Denver, we discussed measures to prevent > patches merged in the Neutron repo breaking Stadium and related networking > projects in general. We decided to implement the following: > > > > 1) For Stadium projects, we want to add non-voting jobs to the Neutron > check queue > > 2) For non stadium projects, we are inviting them to add 3rd party CI > jobs > > > > The next step is for each project to propose the jobs that they want to > run against Neutron patches. > > > > Best regards > > > > Miguel > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Oct 1 15:28:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 1 Oct 2018 10:28:51 -0500 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> Message-ID: <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> On 10/1/2018 8:37 AM, Ghanshyam Mann wrote: > +1 on adding multiattach on integrated job. It is always good to cover more features in integrate-gate instead of separate jobs. These tests does not take much time, it should be ok to add in tempest-full [1]. We should make only really slow test as 'slow' otherwise it should be fine to run in tempest-full. > > I thought adding tempest-slow on cinder was merged but it is not[2] > > [1]http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653 > [2]https://review.openstack.org/#/c/591354/2 Actually it will be enabled in both tempest-full and tempest-slow, because there is also a multiattach test marked as 'slow': TestMultiAttachVolumeSwap. I'll push patches today. 
-- Thanks, Matt From doug at doughellmann.com Mon Oct 1 16:23:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 01 Oct 2018 12:23:36 -0400 Subject: [openstack-dev] [python3-first] support in stable branches In-Reply-To: <20181001074103eucas1p29c4c3ca08743afa2c610718b40f4eea5~ZbCc-4oe10677906779eucas1p2X@eucas1p2.samsung.com> References: <20180927144829eucas1p26786a6e62c869b8138066f8857dae267~YSSg9Kjph2467924679eucas1p2l@eucas1p2.samsung.com> <20180928140517eucas1p285022a4ce8a367efadd9cdf05916a917~YlWEluavl1478514785eucas1p29@eucas1p2.samsung.com> <20181001074103eucas1p29c4c3ca08743afa2c610718b40f4eea5~ZbCc-4oe10677906779eucas1p2X@eucas1p2.samsung.com> Message-ID: Dariusz Krol writes: > Hello Doug, > > thanks for your explanation. I was a little bit confused by changes to > stable branches with python3-first topic as I thought it has to do > something with adding new test configuration for python3. > > But as you explained this is about moving zuul-related configuration, > which is a part of python3-first goal (but it is not related to > supporting python3 by projects IMHO :) ) > > Anyway, it is now clear to me and sorry for making this confusion. Thanks for asking for clarification! Doug From doug at doughellmann.com Mon Oct 1 16:25:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 01 Oct 2018 12:25:21 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: Miguel Angel Ajo Pelayo writes: > Thank you for the guidance and ping Doug. > > Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit? The release jobs are always triggered by the git tagging event. The patches in openstack/releases run a job that adds tags, but the patch you linked to hasn't been merged yet, so it looks like it was caused by pushing the tag manually. Doug From duc.openstack at gmail.com Mon Oct 1 16:52:12 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 1 Oct 2018 09:52:12 -0700 Subject: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken) In-Reply-To: References: Message-ID: Hi Shu, Thanks for proposing your fix. It looks good to me. I have submitted a similar patch for senlin-dashboard to unblock the broken gate test [1]. [1] https://review.openstack.org/#/c/607003/ Regards, Duc (dtruong) On Fri, Sep 28, 2018 at 2:24 AM Shu M. wrote: > > Hi Ivan, > > Thank you for your help to our plugins and sorry for bothering you. > I found problem on installing horizon in "post-install", e.g. we should install horizon with upper-constraints.txt in "post-install". > I proposed patch[1] in zun-ui, please check it. If we can merge this, I will expand it the other remaining plugins. > > [1] https://review.openstack.org/#/c/606010/ > > Thanks, > Shu Muto > > 2018年9月28日(金) 3:34 Ivan Kolodyazhny : >> >> Hi, >> >> Unfortunately, this issue affects some of the plugins too :(. At least gates for the magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken now. I'm working both with project teams to fix it asap. Let's wait if [5] helps for senlin-dashboard and fix all the rest of plugins. >> >> >> [5] https://review.openstack.org/#/c/605826/ >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> >> On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny wrote: >>> >>> Hi all, >>> >>> Patch [1] is merged and our gates are un-blocked now. I went throw review list and post 'recheck' where it was needed. 
>>> >>> We need to cherry-pick this fix to stable releases too. I'll do it asap >>> >>> Regards, >>> Ivan Kolodyazhny, >>> http://blog.e0ne.info/ >>> >>> >>> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny wrote: >>>> >>>> Hi team, >>>> >>>> Unfortunately, horizon gates are broken now. We can't merge any patch due to the -1 from CI. >>>> I don't want to disable tests now, that's why I proposed a fix [1]. >>>> >>>> We'd got released some of XStatic-* packages last week. At least new XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for requirements repo [4] to prevent such issues in the future. >>>> >>>> Please, do not try 'recheck' until [1] will be merged. >>>> >>>> [1] https://review.openstack.org/#/c/604611/ >>>> [2] https://pypi.org/project/XStatic-jQuery/#history >>>> [3] https://bugs.launchpad.net/horizon/+bug/1794028 >>>> [4] https://review.openstack.org/#/c/604613/ >>>> >>>> Regards, >>>> Ivan Kolodyazhny, >>>> http://blog.e0ne.info/ >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at fried.cc Mon Oct 1 16:58:15 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 1 Oct 2018 11:58:15 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: <1bfdd43e-73d4-bf75-944d-60d2e9aae8b2@fried.cc> Dan- On 10/01/2018 10:06 AM, Dan Smith wrote: > I was out when much of this conversation happened, so I'm going to > summarize my opinion here. > >> So from a code perspective _placement_ is completely agnostic to >> whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or >> "JAY_LIKES_CRUNCHIE_BARS". >> >> However, things which are using traits (e.g., nova, ironic) need to >> make their own decisions about how the value of traits are >> interpreted. I don't have a strong position on that except to say >> that _if_ we end up in a position of there being lots of traits >> willy nilly, people who have chosen to do that need to know that the >> contract presented by traits right now (present or not present, no >> value comprehension) is fixed. > > I agree with what Chris holds sacred here, which is that placement > shouldn't ever care about what the trait names are or what they mean to > someone else. That also extends to me hoping we never implement a > generic key=value store on resource providers in placement. > >>> I *do* see a problem with it, based on my experience in Nova where >>> this kind of thing leads to ugly, unmaintainable, and >>> incomprehensible code as I have pointed to in previous responses. > > I definitely agree with what Jay holds sacred here, which is that > abusing the data model to encode key=value information into single trait > strings is bad (which is what you're doing with something like > PCI_ADDRESS_01_AB_23_CD). 
> > I don't want placement (the code) to try to put any technical > restrictions on the meaning of trait names, in an attempt to try to > prevent the above abuse. I agree that means people _can_ abuse it if > they wish, which I think is Chris' point. However, I think it _is_ > important for the placement team (the people) to care about how > consumers (nova, etc) use traits, and thus provide guidance on that is > necessary. Not everyone will follow that guidance, but we should provide > it. Projects with history-revering developers on both sides of the fence > can help this effort if they lead by example. > > If everyone goes off and implements their way around the perceived > restriction of not being able to ask placement for RAID_LEVEL>=5, we're > going to have a much larger mess than the steaming pile of extra specs > in nova that we're trying to avoid. Sorry, I'm having a hard time understanding where you're landing here. It sounds like you might be saying, "I would rather not see encoded trait names OR a new key/value primitive; but if the alternative is ending up with 'a much larger mess', I would accept..." ...which? Or is it, "We should not implement a key/value primitive, nor should we implement restrictions on trait names; but we should continue to discourage (ab)use of trait names by steering placement consumers to..." ...do what? The restriction is real, not perceived. Without key/value (either encoded or explicit), how should we steer placement consumers to satisfy e.g., "Give me disk from a provider with RAID5"? (Put aside the ability to do comparisons other than straight equality - so limiting the discussion to RAID_LEVEL=5, ignoring RAID_LEVEL>=5. Also limiting the discussion to what we want out of GET /a_c - so this excludes, "And then go configure RAID5 on my storage.") > > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dms at danplanet.com Mon Oct 1 17:17:50 2018 From: dms at danplanet.com (Dan Smith) Date: Mon, 01 Oct 2018 10:17:50 -0700 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <1bfdd43e-73d4-bf75-944d-60d2e9aae8b2@fried.cc> (Eric Fried's message of "Mon, 1 Oct 2018 11:58:15 -0500") References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> <1bfdd43e-73d4-bf75-944d-60d2e9aae8b2@fried.cc> Message-ID: > It sounds like you might be saying, "I would rather not see encoded > trait names OR a new key/value primitive; but if the alternative is > ending up with 'a much larger mess', I would accept..." ...which? > > Or is it, "We should not implement a key/value primitive, nor should we > implement restrictions on trait names; but we should continue to > discourage (ab)use of trait names by steering placement consumers to..." > ...do what? The second one. > The restriction is real, not perceived. Without key/value (either > encoded or explicit), how should we steer placement consumers to satisfy > e.g., "Give me disk from a provider with RAID5"? Sure, I'm not doubting the need to find providers with certain abilities. What I'm saying (and I assume Jay is as well), is that finding things with more domain-specific attributes is the job of the domain controller (i.e. nova). 
Placement's strength, IMHO, is the unified and extremely simple data model and consistency guarantees that it provides. It takes a lot of the work of searching and atomic accounting of enumerable and qualitative things out of the scheduler of the consumer. IMHO, it doesn't (i.e. won't ever) and shouldn't replace all the things that nova's scheduler needs to do. I think it's useful to draw the line in front of a full-blown key=value store and DSL grammar for querying everything with all the operations anyone could ever need. Unifying the simpler and more common bits into placement and keeping the domain-specific consideration and advanced filtering of the results in nova/ironic/etc is the right separation of responsibilities, IMHO. RAID level is, of course, an overly simplistic example to use, which makes the problem seem small, but we know more complicated examples exist. --Dan From openstack at fried.cc Mon Oct 1 17:20:32 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 1 Oct 2018 12:20:32 -0500 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: On 09/29/2018 10:40 AM, Jay Pipes wrote: > On 09/28/2018 04:36 PM, Eric Fried wrote: >> So here it is. Two of the top influencers in placement, one saying we >> shouldn't overload traits, the other saying we shouldn't add a primitive >> that would obviate the need for that. Historically, this kind of >> disagreement seems to result in an impasse: neither thing happens and >> those who would benefit are forced to find a workaround or punt. >> Frankly, I don't particularly care which way we go; I just want to be >> able to do the things. > > I don't think that's a fair statement. You absolutely *do* care which > way we go. You want to encode multiple bits of information into a trait > string -- such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the > caller to have to understand that this trait string has multiple bits of > information encoded in it (the fact that it's a PCI device and that the > PCI device is at 01_AB_23_CD). It was an oversimplification to say I don't care. I would like, ideally, long-term, to see a true key/value primitive, because I think it's much more powerful and less hacky. But am sympathetic to what Chris brought up about full plate and timeline. So while we're waiting for that to fit into the schedule, I wouldn't mind the ability to use encoded traits to some extent to satisfy the use cases. > You don't see a problem encoding these variants inside a string. Chris > doesn't either. Yeah, I see the problem, and I don't love the idea - as I say, I would prefer a true key/value primitive. But I would rather use encoded traits as a temporary measure to get stuff done than a) work around things with a mess of extra specs and/or b) wait, potentially until the heat death of the universe if we remain deadlocked on whether a key/value primitive should happen. > > I *do* see a problem with it, based on my experience in Nova where this > kind of thing leads to ugly, unmaintainable, and incomprehensible code > as I have pointed to in previous responses. > > Furthermore, your point isn't that "you just want to be able to do the > things". 
Your point (and the point of others, from Cyborg and Ironic) is > that you want to be able to use placement to pass various bits of > information to an instance, and placement wasn't designed for that > purpose. Nova was. > > So, instead of working out a solution with the Nova team for passing > configuration data about an instance, the proposed solution is instead > to hack/encode multiple bits of information into a trait string. This > proposed solution is seen as a way around having to work out a more > appropriate solution that has Nova pass that configuration data (as is > appropriate, since nova is the project that manages instances) to the > virt driver or generic device manager (i.e. Cyborg) before the instance > spawns. I agree that we should not overload placement as a mechanism to pass configuration information ("set up RAID5 on my storage, please") to the driver. So let's put that aside. (Looking forward to that spec.) I still want to use something like "Is capable of RAID5" and/or "Has RAID5 already configured" as part of a scheduling and placement decision. Being able to have the GET /a_c response filtered down to providers with those, ahem, traits is the exact purpose of that operation. While we're in the neighborhood, we agreed in Denver to use a trait to indicate which service "owns" a provider [1], so we can eventually coordinate a smooth handoff of e.g. a device provider from nova to cyborg. This is certainly not a capability (but it is a trait), and it can certainly be construed as a key/value (owning_service=cyborg). Are we rescinding that decision? [1] https://review.openstack.org/#/c/602160/ > I'm working on a spec that will describe a way for the user to instruct > Nova to pass configuration data to the virt driver (or device manager) > before instance spawn. This will have nothing to do with placement or > traits, since this configuration data is not modeling scheduling and > placement decisions. > > I hope to have that spec done by Monday so we can discuss on the spec. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Oct 1 17:36:30 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 1 Oct 2018 13:36:30 -0400 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> Message-ID: <7f449c27-faff-835c-b52e-61b45aadde08@gmail.com> On 10/01/2018 01:20 PM, Eric Fried wrote: > I agree that we should not overload placement as a mechanism to pass > configuration information ("set up RAID5 on my storage, please") to the > driver. So let's put that aside. (Looking forward to that spec.) ack. > I still want to use something like "Is capable of RAID5" and/or "Has > RAID5 already configured" as part of a scheduling and placement > decision. Being able to have the GET /a_c response filtered down to > providers with those, ahem, traits is the exact purpose of that operation. And yep, I have zero problem with this either, as I've noted. This is precisely what placement and traits were designed for. 
> While we're in the neighborhood, we agreed in Denver to use a trait to > indicate which service "owns" a provider [1], so we can eventually > coordinate a smooth handoff of e.g. a device provider from nova to > cyborg. This is certainly not a capability (but it is a trait), and it > can certainly be construed as a key/value (owning_service=cyborg). Are > we rescinding that decision? Unfortunately I have zero recollection of a conversation about using traits for indicating who "owns" a provider. :( I don't think I would support such a thing -- rather, I would support adding an attribute to the provider model itself for an owning service or such thing. That's a great example of where the attribute has specific conceptual meaning to placement (the concept of ownership) and should definitely not be tucked away, encoded into a trait string. OK, I'll get back to that spec now... :) Best, -jay > [1] https://review.openstack.org/#/c/602160/ > >> I'm working on a spec that will describe a way for the user to instruct >> Nova to pass configuration data to the virt driver (or device manager) >> before instance spawn. This will have nothing to do with placement or >> traits, since this configuration data is not modeling scheduling and >> placement decisions. >> >> I hope to have that spec done by Monday so we can discuss on the spec. >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jungleboyj at gmail.com Mon Oct 1 18:10:49 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 1 Oct 2018 13:10:49 -0500 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> Message-ID: <3c5ea780-b709-625c-788c-bf7944d09dbf@gmail.com> On 10/1/2018 10:28 AM, Matt Riedemann wrote: > On 10/1/2018 8:37 AM, Ghanshyam Mann wrote: >> +1 on adding multiattach on integrated job. It is always good to >> cover more features in integrate-gate instead of separate jobs. These >> tests does not take much time, it should be ok to add in tempest-full >> [1]. We should make only really slow test as 'slow' otherwise it >> should be fine to run in tempest-full. >> >> I thought adding tempest-slow on cinder was merged but it is not[2] >> >> [1]http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653 >> >> [2]https://review.openstack.org/#/c/591354/2 > Sean and I are working on getting those changes merged.  So, that will be good. > Actually it will be enabled in both tempest-full and tempest-slow, > because there is also a multiattach test marked as 'slow': > TestMultiAttachVolumeSwap. > > I'll push patches today. > Thank you!  I think this is the right way to go. 
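(Side note for anyone who wants to exercise this locally: if I remember the
option names right, the relevant tempest feature flag is

   [compute-feature-enabled]
   volume_multiattach = True

in tempest.conf, plus a Cinder backend that actually supports multiattach.)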
From dms at danplanet.com Mon Oct 1 18:16:05 2018 From: dms at danplanet.com (Dan Smith) Date: Mon, 01 Oct 2018 11:16:05 -0700 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <7f449c27-faff-835c-b52e-61b45aadde08@gmail.com> (Jay Pipes's message of "Mon, 1 Oct 2018 13:36:30 -0400") References: <2d478d26-02f0-12cf-8d60-368b780661d6@gmail.com> <7b2ae14e-5f3d-ff60-3ebe-8b8c62ee5994@fried.cc> <01d48897-d486-6ab0-2f64-a2093e4157dc@gmail.com> <7f449c27-faff-835c-b52e-61b45aadde08@gmail.com> Message-ID: >> I still want to use something like "Is capable of RAID5" and/or "Has >> RAID5 already configured" as part of a scheduling and placement >> decision. Being able to have the GET /a_c response filtered down to >> providers with those, ahem, traits is the exact purpose of that operation. > > And yep, I have zero problem with this either, as I've noted. This is > precisely what placement and traits were designed for. Same. >> While we're in the neighborhood, we agreed in Denver to use a trait to >> indicate which service "owns" a provider [1], so we can eventually >> coordinate a smooth handoff of e.g. a device provider from nova to >> cyborg. This is certainly not a capability (but it is a trait), and it >> can certainly be construed as a key/value (owning_service=cyborg). Are >> we rescinding that decision? > > Unfortunately I have zero recollection of a conversation about using > traits for indicating who "owns" a provider. :( I definitely do. > I don't think I would support such a thing -- rather, I would support > adding an attribute to the provider model itself for an owning service > or such thing. > > That's a great example of where the attribute has specific conceptual > meaning to placement (the concept of ownership) and should definitely > not be tucked away, encoded into a trait string. No, as I recall it means nothing to placement - it means something to the consumers. A gentleperson's agreement for identifying who owns what if we're going to, say, remove things that might be stale from placement when updating the provider tree. --Dan From isaku.yamahata at gmail.com Mon Oct 1 19:28:48 2018 From: isaku.yamahata at gmail.com (Isaku Yamahata) Date: Mon, 1 Oct 2018 12:28:48 -0700 Subject: [openstack-dev] [ironic][neutron] SmartNics with Ironic In-Reply-To: References: Message-ID: <20181001192848.GA21607@private.email.ne.jp> Hello. I'm willing to help with this. For a detailed tech discussion, gerrit with an updated spec would be better, though. I'd like to add some detail on Neutron port binding. On Sun, Sep 30, 2018 at 06:25:58AM +0000, Moshe Levi wrote: > Hi Julia, > > I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the PTG but I had a sync meeting with Isaku. > > As I see it there are 2 use-cases: > > 1. Running the neutron ovs agent in the smartnic > 2. Running the neutron super ovs agent which manages the ovs running on the smartnic. > > It seems that most of the discussion was around the second use-case. > > This is my understanding of the ironic neutron PTG meeting: > > 1. Ironic cores don't want to change the deployment interface as proposed in [1]. > 2. We should add a new network_interface for use case 2. But what about the first use case? Should it be a new network_interface as well? * Which component, Ironic or Neutron, should take care that the SmartNIC is up/ready? * How is the up/readiness of the smartnic defined? Common way or device-dependent way.
For example: - able to connect via ovsdb/openflow - the agent responsible for the smartNIC has reported to the Neutron agent DB - able to ssh into the smartnic - a device-specific way, with a driver for each device - etc. > 3. We should delay the port binding until the baremetal is powered on > and the ovs is running. > * For the first use case I was thinking to change the neutron server to just keep the port binding information in the neutron DB. Then when the neutron ovs agent is alive it will retrieve all the baremetal ports, add them to the ovsdb and start the neutron ovs agent fullsync. > * For the second use case the agent is alive, so the agent itself can monitor the ovsdb of the baremetal and configure it when it is up Can you please elaborate on why port binding for the smartnic should be delayed? I'm failing to see the necessity. Probably I'm missing something. Since we can skip the check of agent liveness with the assumption that the hostid given by Ironic is correct, we don't have to delay the port binding for both cases 1 and 2 as above. > 4. How to notify that the neutron agent successfully/unsuccessfully bound the port? > * In both use-cases we should use neutron-ironic notification to make sure the port binding was done successfully. I agree that a neutron-ironic notification is necessary. There seems to be some confusion between port binding and ovs programming. The success/failure of port binding is the result of a neutron port-update. The current code synchronously checks the ovs-agent liveness on the host and parameter validity. Port binding doesn't directly/synchronously program ovs. It only triggers ovs programming to start eventually. On the other hand, the success/failure of ovs programming is asynchronous with regard to the neutron REST API, because the ovs-agent does it asynchronously on behalf of the neutron server. So here a neutron-ironic notification is necessary. When the ovs programming is done, the agent sets port::status = UP from DOWN (and nova is notified that the port is ready to use). In the case of failure, port::status is set to ERROR. Thanks, > > Is my understanding correct? > > > > [1] - https://review.openstack.org/#/c/582767/ > > From: Miguel Lavalle > Sent: Sunday, September 30, 2018 3:20 AM > To: OpenStack Development Mailing List > Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic > > Hi, > > Yes, this also matches the recollection of the joint conversation in Denver. Please look at the "Ironic x-project discussion - Smartnics" section in http://lists.openstack.org/pipermail/openstack-dev/2018-September/135032.html > > Regards > > Miguel > > On Thu, Sep 27, 2018 at 1:31 PM Julia Kreger > wrote: > Greetings everyone, > > Now that the PTG is over, I would like to go ahead and get the > specification that was proposed to ironic-specs updated to represent > the discussions that took place at the PTG. > > A few highlights from my recollection: > > * Ironic being the source of truth for the hardware configuration for > the neutron agent to determine where to push configuration to. This > would include the address and credential information (certificates > right?). > * The information required is somehow sent to neutron (possibly as > part of the binding profile, which we could send each time port > actions are requested by Ironic.) > * The Neutron agent running on the control plane connects outbound to > the smartnic, using information supplied to perform the appropriate > network configuration.
> * In Ironic, this would likely be a new network_interface driver > module, with some additional methods that help facilitate the > work-flow logic changes needed in each deploy_interface driver module. > * Ironic would then be informed or gain awareness that the > configuration has been completed and that the deployment can proceed. > (A different spec has been proposed regarding this.) > > I have submitted a forum session based upon this, and the agreed-upon > goal at the PTG was to have the ironic spec written up to describe the > required changes. > > I guess the next question is, who wants to update the specification? > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Isaku Yamahata From juliaashleykreger at gmail.com Mon Oct 1 20:07:22 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 1 Oct 2018 13:07:22 -0700 Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic In-Reply-To: References: Message-ID: Greetings, Comments in-line. Thanks, -Julia On Sat, Sep 29, 2018 at 11:27 PM Moshe Levi wrote: > Hi Julia, > > I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the > PTG but I had a sync meeting with Isaku. > > As I see it there are 2 use-cases: > > 1. Running the neutron ovs agent in the smartnic > 2. Running the neutron super ovs agent which manages the ovs running on > the smartnic. > My takeaway from the meeting with neutron is that there would not be a neutron ovs agent running on the smartnic. The configuration would need to be pushed at all times, which is ultimately better security-wise: if the tenant NIC is somehow compromised, it reduces the control plane exposure. > 1. > > It seems that most of the discussion was around the second use-case. > By the time Ironic and Neutron met together, it seemed like the first use case was no longer under consideration. I may be wrong, but a very strong preference existed for the second scenario when we met the next day. > > This is my understanding of the ironic neutron PTG meeting: > 1. Ironic cores don't want to change the deployment interface as > proposed in [1]. > 2. We should add a new network_interface for use case 2. But what about > the first use case? Should it be a new network_interface as well? > 3. We should delay the port binding until the baremetal is powered on > and the ovs is running. > 1. For the first use case I was thinking to change the neutron > server to just keep the port binding information in the neutron DB. Then > when the neutron ovs agent is alive it will retrieve all the baremetal ports, > add them to the ovsdb and start the neutron ovs agent fullsync. > 2. For the second use case the agent is alive, so the agent itself > can monitor the ovsdb of the baremetal and configure it when it is up > 4. How to notify that the neutron agent successfully/unsuccessfully bound > the port? > 1. In both use-cases we should use neutron-ironic notification to > make sure the port binding was done successfully. > > Is my understanding correct?
> > Not quite. 1) We as in Ironic recognize that there would need to be changes; it is the method as to how that we would prefer to be explicit about and have chosen by the interface. The underlying behavior needs to be different, and the new network_interface should support both cases 1 and 2 because that interface contains the needed logic for the conductor to determine the appropriate path forward. We should likely also put some guards in to prevent non-smart interfaces from being used in the same configuration, due to the security issues that creates. 3) I believe this would be more of a matter of the network_interface knowing that the machine is powered up, and attempting to assert configuration through Neutron to push configuration to the smartnic. 3a) The consensus is that the information to access the smartnic is hardware configuration metadata and that ironic should be the source of truth for information about that hardware. The discussion was to push that as needed into neutron to help enable the attachment. I proposed just including it in the binding profile as a possibility, since it is transient information. 3b) As I understood it, this would ultimately be the default operating behavior. 4) Was not discussed, but something along the path is going to have to check and retry as necessary. That item could be in the network_interface code. 4a) This doesn't exist yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate.johnston at redhat.com Mon Oct 1 20:44:10 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 1 Oct 2018 16:44:10 -0400 Subject: [openstack-dev] [neutron] Bug deputy report week of Sept 24 Message-ID: <20181001204410.fin7brlpqfoudnk5@bishop> All, Here is my Bug Deputy report for the week of 9/24 - 10/1. It was a busy week! It is my recollection that pretty much anything that doesn't have a patchset next to it is probably unowned and available to work on. High priority ================== * https://bugs.launchpad.net/bugs/1794406 - neutron.objects lost PortForwarding in setup.cfg - Fix released: https://review.openstack.org/605302 * https://bugs.launchpad.net/bugs/1794545 - PlacementAPIClient.update_resource_class wrong client call, missing argument - Fix in progress: https://review.openstack.org/605455 * https://bugs.launchpad.net/bugs/1795280 - netns deletion on newer kernels fails with errno 16 - May or may not be a Neutron issue, applies at kernel version 4.18 but not 3.10. Left unconfirmed but checking it out within Red Hat.
* https://bugs.launchpad.net/bugs/1795482 - Deleting network namespaces sometimes fails in check/gate queue with ENOENT - Fix in progress: https://review.openstack.org/607009 Medium priority ================== * https://bugs.launchpad.net/bugs/1794305 - [dvr_no_external][ha] centralized fip show up in the backup snat-namespace on the restore host (restarting or down/up) - Fix in progress: https://review.openstack.org/605359 https://review.openstack.org/606384 * https://bugs.launchpad.net/bugs/1794865 - Failed to check policy on listing loggable resources * https://bugs.launchpad.net/bugs/1794870 - NetworkNotFound failures on network test teardown because of retries due to the initial request taking >60 seconds * https://bugs.launchpad.net/bugs/1794809 - Gateway ports are down after reboot of control plane nodes - Similar to https://bugs.launchpad.net/neutron/+bug/1793529 but not quite the same - Fix in progress: https://review.openstack.org/606085 * https://bugs.launchpad.net/bugs/1794259 - rocky upgrade path broken, requirements pecan too low - Fix released: https://review.openstack.org/605027 * https://bugs.launchpad.net/bugs/1794535 - Consider all router ports for dvr arp updates - Fix in progress: https://review.openstack.org/605434 * https://bugs.launchpad.net/bugs/1795126 - Race condition of DHCP agent updating port after ownership removed and given to another agent will cause extra port creation - Fix in progress: https://review.openstack.org/606383 * https://bugs.launchpad.net/bugs/1794424 - trunk: can not delete bound trunk for agent which allow create trunk on bound port - Fix in progress: https://review.openstack.org/605589 Low priority ================== * https://bugs.launchpad.net/bugs/1795127 - [dvr][ha] router state change cause unnecessary router_update Invalid/Won't Fix ================== * https://bugs.launchpad.net/bugs/1794569 - DVR with static routes may cause routed traffic to be dropped - Marked invalid since it was filed against Newton (neutron 9.4.1) * https://bugs.launchpad.net/bugs/1794695 - resources can't be filtered by tag-related parameters - Marked 'Won't Fix' because neutronclient is deprecated, but the functionality works correctly in openstackclient. * https://bugs.launchpad.net/bugs/1794919 - [RFE] To decide create port with specific IP version - Marked 'Opinion'; can accomplish the same goal with current parameters for creating a port.
RFE/Wishlist ================== * https://bugs.launchpad.net/bugs/1795212 - [RFE] Prevent DHCP agent from processing stale RPC messages when restarting up * https://bugs.launchpad.net/bugs/1794771 - SRIOV trunk port - multiple vlans on same VF Still Under Discussion ================== * https://bugs.launchpad.net/bugs/1794450 - When creating a server instance with an IPv4 and an IPv6 addresses, the IPv6 is not assigned * https://bugs.launchpad.net/bugs/1794991 - Inconsistent flows with DVR l2pop VxLAN on br-tun * https://bugs.launchpad.net/bugs/1795432 - neutron does not create the necessary iptables rules for dhcp agents when linuxbridge is used - Reporter notes that this is a variant scenario of https://bugs.launchpad.net/neutron/+bug/1720205, which was fixed in June Thanks, Nate From openstack at fried.cc Mon Oct 1 21:40:48 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 1 Oct 2018 16:40:48 -0500 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> Message-ID: > So say the user requests a node that supports UEFI because their image > needs UEFI. Which workflow would you want here? > > 1) The operator (or ironic?) has already configured the node to boot in > UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait. > > 2) Any node that supports UEFI mode advertises the trait. Ironic ensures > that UEFI mode is enabled before provisioning the machine. > > I imagine doing #2 by passing the traits which were specifically > requested by the user, from Nova to Ironic, so that Ironic can do the > right thing for the user. > > Your proposal suggests that the user request the "supports UEFI" trait, > and *also* pass some glance UUID which the user understands will make > sure the node actually boots in UEFI mode. Something like: > > openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI > --config-data $TURN_ON_UEFI_UUID > > Note that I pass --trait because I hope that will one day be supported > and we can slow down the flavor explosion. IMO --trait would be making things worse (but see below). I think UEFI with Jay's model would be more like: openstack server create --flavor METAL_12CPU_128G --config-data $UEFI where the UEFI profile would be pretty trivial, consisting of placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode = "uefi". I agree that this seems kind of heavy, and that it would be nice to be able to say "boot mode is UEFI" just once. OTOH I get Jay's point that we need to separate the placement decision from the instance configuration. That said, what if it was: openstack config-profile create --name BOOT_MODE_UEFI --json - { "type": "boot_mode_scheme", "version": 123, "object": { "boot_mode": "uefi" }, "placement": { "traits": { "required": [ "BOOT_MODE_UEFI" ] } } } ^D And now you could in fact say openstack server create --flavor foo --config-profile BOOT_MODE_UEFI using the profile name, which happens to be the same as the trait name because you made it so. Does that satisfy the yen for saying it once? (I mean, despite the fact that you first had to say it three times to get it set up.) 
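To be explicit about the split, a consumer of such a profile would do
something like this -- a pure sketch, none of these names exist anywhere
today:

   def split_profile(profile):
       # the placement-only bits drive the GET /allocation_candidates query
       required = profile["placement"]["traits"]["required"]
       # the config-only bits ride through, opaque, to the virt driver
       config = profile["object"]
       return required, config

Placement never sees "boot_mode"; the driver never has to parse a trait
string.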
======== I do want to zoom out a bit and point out that we're talking about implementing a new framework of substantial size and impact when the original proposal - using the trait for both - would just work out of the box today with no changes in either API. Is it really worth it? ======== By the way, with Jim's --trait suggestion, this: > ...dozens of flavors that look like this: > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y ...could actually become: openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait $WHICH_LAYOUT No flavor explosion. (Maybe if we called it something other than --trait, like maybe --config-option, it would let us pretend we're not really overloading a trait to do config - it's just a coincidence that the config option has the same name as the trait it causes to be required.) -efried . From juliaashleykreger at gmail.com Mon Oct 1 22:04:08 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 1 Oct 2018 15:04:08 -0700 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> Message-ID: On Mon, Oct 1, 2018 at 2:41 PM Eric Fried wrote: > > > So say the user requests a node that supports UEFI because their image > > needs UEFI. Which workflow would you want here? > > > > 1) The operator (or ironic?) has already configured the node to boot in > > UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait. > > > > 2) Any node that supports UEFI mode advertises the trait. Ironic ensures > > that UEFI mode is enabled before provisioning the machine. > > > > I imagine doing #2 by passing the traits which were specifically > > requested by the user, from Nova to Ironic, so that Ironic can do the > > right thing for the user. > > > > Your proposal suggests that the user request the "supports UEFI" trait, > > and *also* pass some glance UUID which the user understands will make > > sure the node actually boots in UEFI mode. Something like: > > > > openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI > > --config-data $TURN_ON_UEFI_UUID > > > > Note that I pass --trait because I hope that will one day be supported > > and we can slow down the flavor explosion. > > IMO --trait would be making things worse (but see below). I think UEFI > with Jay's model would be more like: > > openstack server create --flavor METAL_12CPU_128G --config-data $UEFI > > where the UEFI profile would be pretty trivial, consisting of > placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode = > "uefi". > > I agree that this seems kind of heavy, and that it would be nice to be > able to say "boot mode is UEFI" just once. OTOH I get Jay's point that > we need to separate the placement decision from the instance configuration. 
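(For contrast, the closest thing that exists today is burying each required
trait in flavor extra specs, e.g.

   openstack flavor set 12CPU_128G_RAID10_DRIVE_LAYOUT_X \
       --property trait:CUSTOM_RAID10=required

with CUSTOM_RAID10 standing in for whatever trait name a deployment actually
uses -- which is exactly the pattern that forces one flavor per combination.)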
> > That said, what if it was: > > openstack config-profile create --name BOOT_MODE_UEFI --json - > { > "type": "boot_mode_scheme", > "version": 123, > "object": { > "boot_mode": "uefi" > }, > "placement": { > "traits": { > "required": [ > "BOOT_MODE_UEFI" > ] > } > } > } > ^D > > And now you could in fact say > > openstack server create --flavor foo --config-profile BOOT_MODE_UEFI > > using the profile name, which happens to be the same as the trait name > because you made it so. Does that satisfy the yen for saying it once? (I > mean, despite the fact that you first had to say it three times to get > it set up.) > > ======== > > I do want to zoom out a bit and point out that we're talking about > implementing a new framework of substantial size and impact when the > original proposal - using the trait for both - would just work out of > the box today with no changes in either API. Is it really worth it? > > +1000. Reading both of these threads, it feels like we're basically trying to make something perfect. I think that is a fine goal, except it is unrealistic because the enemy of good is perfection. ======== > > By the way, with Jim's --trait suggestion, this: > > > ...dozens of flavors that look like this: > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y > > ...could actually become: > > openstack server create --flavor 12CPU_128G --trait $WHICH_RAID --trait > $WHICH_LAYOUT > > No flavor explosion. > ++ I believe this was where this discussion kind of ended up in.. ?Dublin? The desire and discussion that led us into complex configuration templates and profiles being submitted were for highly complex scenarios where users wanted to assert detailed raid configurations to disk. Naturally, there are many issues there. The ability to provide such detail would be awesome for those 10% of operators that need such functionality. Of course, if that is the only path forward, then we delay the 90% from getting the minimum viable feature they need. > > (Maybe if we called it something other than --trait, like maybe > --config-option, it would let us pretend we're not really overloading a > trait to do config - it's just a coincidence that the config option has > the same name as the trait it causes to be required.) > I feel like it might be confusing, but totally +1 to matching required trait name being a thing. That way scheduling is completely decoupled and if everything was correct then the request should already be scheduled properly. > -efried > . > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Mon Oct 1 22:36:56 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 1 Oct 2018 18:36:56 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> Message-ID: <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> On 10/01/2018 06:04 PM, Julia Kreger wrote: > On Mon, Oct 1, 2018 at 2:41 PM Eric Fried wrote: > > > > So say the user requests a node that supports UEFI because their > image > > needs UEFI. Which workflow would you want here? > > > > 1) The operator (or ironic?) has already configured the node to > boot in > > UEFI mode. Only pre-configured nodes advertise the "supports > UEFI" trait. > > > > 2) Any node that supports UEFI mode advertises the trait. Ironic > ensures > > that UEFI mode is enabled before provisioning the machine. > > > > I imagine doing #2 by passing the traits which were specifically > > requested by the user, from Nova to Ironic, so that Ironic can do the > > right thing for the user. > > > > Your proposal suggests that the user request the "supports UEFI" > trait, > > and *also* pass some glance UUID which the user understands will make > > sure the node actually boots in UEFI mode. Something like: > > > > openstack server create --flavor METAL_12CPU_128G --trait > SUPPORTS_UEFI > > --config-data $TURN_ON_UEFI_UUID > > > > Note that I pass --trait because I hope that will one day be > supported > > and we can slow down the flavor explosion. > > IMO --trait would be making things worse (but see below). I think UEFI > with Jay's model would be more like: > >   openstack server create --flavor METAL_12CPU_128G --config-data $UEFI > > where the UEFI profile would be pretty trivial, consisting of > placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode = > "uefi". > > I agree that this seems kind of heavy, and that it would be nice to be > able to say "boot mode is UEFI" just once. OTOH I get Jay's point that > we need to separate the placement decision from the instance > configuration. > > That said, what if it was: > >  openstack config-profile create --name BOOT_MODE_UEFI --json - >  { >   "type": "boot_mode_scheme", >   "version": 123, >   "object": { >       "boot_mode": "uefi" >   }, >   "placement": { >    "traits": { >     "required": [ >      "BOOT_MODE_UEFI" >     ] >    } >   } >  } >  ^D > > And now you could in fact say > >  openstack server create --flavor foo --config-profile BOOT_MODE_UEFI > > using the profile name, which happens to be the same as the trait name > because you made it so. Does that satisfy the yen for saying it once? (I > mean, despite the fact that you first had to say it three times to get > it set up.) > > ======== > > I do want to zoom out a bit and point out that we're talking about > implementing a new framework of substantial size and impact when the > original proposal - using the trait for both - would just work out of > the box today with no changes in either API. Is it really worth it? > > > +1000. Reading both of these threads, it feels like we're basically > trying to make something perfect. I think that is a fine goal, except it > is unrealistic because the enemy of good is perfection. 
> > ======== > > By the way, with Jim's --trait suggestion, this: > > > ...dozens of flavors that look like this: > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y > > ...could actually become: > >  openstack server create --flavor 12CPU_128G --trait $WHICH_RAID > --trait > $WHICH_LAYOUT > > No flavor explosion. > > > ++ I believe this was where this discussion kind of ended up in.. ?Dublin? > > The desire and discussion that led us into complex configuration > templates and profiles being submitted were for highly complex scenarios > where users wanted to assert detailed raid configurations to disk. > Naturally, there are many issues there. The ability to provide such > detail would be awesome for those 10% of operators that need such > functionality. Of course, if that is the only path forward, then we > delay the 90% from getting the minimum viable feature they need. > > > (Maybe if we called it something other than --trait, like maybe > --config-option, it would let us pretend we're not really overloading a > trait to do config - it's just a coincidence that the config option has > the same name as the trait it causes to be required.) > > > I feel like it might be confusing, but totally +1 to matching required > trait name being a thing. That way scheduling is completely decoupled > and if everything was correct then the request should already be > scheduled properly. I guess I'll just drop the idea of doing this properly then. It's true that the placement traits concept can be hacked up and the virt driver can just pass a list of trait strings to the Ironic API and that's the most expedient way to get what the 90% of people apparently want. It's also true that it will add a bunch of unmaintainable tribal knowledge into the interface between Nova and Ironic, but that has been the case for multiple years. The flavor explosion problem will continue to get worse for those of us who deal with its pain (Oath in particular feels this) because the interface between nova flavors and Ironic instance capabilities will continue to be super-tightly-coupled. For the record, I would have been happier if someone had proposed separating the instance configuration data in the flavor extra-specs from the notion of required placement constraints (i.e. traits). You could call the extra_spec "deploy_template_id" if you wanted and that extra spec value could have been passed to Ironic during node provisioning instead of the list of placement constraints (traits). So, you'd have a list of actual placement traits for an instance that looked like this: required=BOOT_MODE_UEFI,STORAGE_HARDWARE_RAID and you'd have a flavor extra spec called "deploy_template_id" with a value of the deploy template configuration data you wanted to communicate to Ironic. The Ironic virt driver could then just look for the "deploy_template_id" extra spec and pass the value of that to the Ironic API instead of passing a list of traits. That would have at least satisfied my desire to separate configuration data from placement constraints. Anyway, I'm done trying to please my own desires for a clean solution to this. 
Best, -jay From kennelson11 at gmail.com Mon Oct 1 23:57:30 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Oct 2018 16:57:30 -0700 Subject: [openstack-dev] Berlin Community Contributor Awards In-Reply-To: References: Message-ID: Hello :) I wanted to bring this to the top of people's inboxes as we have three weeks left to submit community members[1]. I can think of a dozen people right now that deserve an award and I am sure you all could do the same. It only takes a few minutes and its an easy way to make sure they get the recognition they deserve. Show your appreciation and nominate one person. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas On Fri, Aug 24, 2018 at 11:15 AM Kendall Nelson wrote: > Hello Everyone! > > As we approach the Summit (still a ways away thankfully), I thought I > would kick off the Community Contributor Award nominations early this > round. > > For those of you that already know what they are, here is the form[1]. > > For those of you that have never heard of the CCA, I'll briefly explain > what they are :) We all know people in the community that do the dirty > jobs, we all know people that will bend over backwards trying to help > someone new, we all know someone that is a savant in some area of the code > we could never hope to understand. These people rarely get the thanks they > deserve and the Community Contributor Awards are a chance to make sure they > know that they are appreciated for the amazing work they do and skills they > have. > > So go forth and nominate these amazing community members[1]! Nominations > will close on October 21st at 7:00 UTC and winners will be announced at the > OpenStack Summit in Berlin. > > -Kendall (diablo_rojo) > > [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Oct 2 00:31:34 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 1 Oct 2018 17:31:34 -0700 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> Message-ID: On Mon, Oct 1, 2018 at 3:37 PM Jay Pipes wrote: > On 10/01/2018 06:04 PM, Julia Kreger wrote: > > On Mon, Oct 1, 2018 at 2:41 PM Eric Fried wrote: > > > > > > > So say the user requests a node that supports UEFI because their > > image > > > needs UEFI. Which workflow would you want here? > > > > > > 1) The operator (or ironic?) has already configured the node to > > boot in > > > UEFI mode. Only pre-configured nodes advertise the "supports > > UEFI" trait. > > > > > > 2) Any node that supports UEFI mode advertises the trait. Ironic > > ensures > > > that UEFI mode is enabled before provisioning the machine. > > > > > > I imagine doing #2 by passing the traits which were specifically > > > requested by the user, from Nova to Ironic, so that Ironic can do > the > > > right thing for the user. 
> > > > > > Your proposal suggests that the user request the "supports UEFI" > > trait, > > > and *also* pass some glance UUID which the user understands will > make > > > sure the node actually boots in UEFI mode. Something like: > > > > > > openstack server create --flavor METAL_12CPU_128G --trait > > SUPPORTS_UEFI > > > --config-data $TURN_ON_UEFI_UUID > > > > > > Note that I pass --trait because I hope that will one day be > > supported > > > and we can slow down the flavor explosion. > > > > IMO --trait would be making things worse (but see below). I think > UEFI > > with Jay's model would be more like: > > > > openstack server create --flavor METAL_12CPU_128G --config-data > $UEFI > > > > where the UEFI profile would be pretty trivial, consisting of > > placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode = > > "uefi". > > > > I agree that this seems kind of heavy, and that it would be nice to > be > > able to say "boot mode is UEFI" just once. OTOH I get Jay's point > that > > we need to separate the placement decision from the instance > > configuration. > > > > That said, what if it was: > > > > openstack config-profile create --name BOOT_MODE_UEFI --json - > > { > > "type": "boot_mode_scheme", > > "version": 123, > > "object": { > > "boot_mode": "uefi" > > }, > > "placement": { > > "traits": { > > "required": [ > > "BOOT_MODE_UEFI" > > ] > > } > > } > > } > > ^D > > > > And now you could in fact say > > > > openstack server create --flavor foo --config-profile > BOOT_MODE_UEFI > > > > using the profile name, which happens to be the same as the trait > name > > because you made it so. Does that satisfy the yen for saying it > once? (I > > mean, despite the fact that you first had to say it three times to > get > > it set up.) > > > > ======== > > > > I do want to zoom out a bit and point out that we're talking about > > implementing a new framework of substantial size and impact when the > > original proposal - using the trait for both - would just work out of > > the box today with no changes in either API. Is it really worth it? > > > > > > +1000. Reading both of these threads, it feels like we're basically > > trying to make something perfect. I think that is a fine goal, except it > > is unrealistic because the enemy of good is perfection. > > > > ======== > > > > By the way, with Jim's --trait suggestion, this: > > > > > ...dozens of flavors that look like this: > > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X > > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X > > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X > > > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y > > > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y > > > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y > > > > ...could actually become: > > > > openstack server create --flavor 12CPU_128G --trait $WHICH_RAID > > --trait > > $WHICH_LAYOUT > > > > No flavor explosion. > > > > > > ++ I believe this was where this discussion kind of ended up in.. > ?Dublin? > > > > The desire and discussion that led us into complex configuration > > templates and profiles being submitted were for highly complex scenarios > > where users wanted to assert detailed raid configurations to disk. > > Naturally, there are many issues there. The ability to provide such > > detail would be awesome for those 10% of operators that need such > > functionality. Of course, if that is the only path forward, then we > > delay the 90% from getting the minimum viable feature they need. 
> > > > > > (Maybe if we called it something other than --trait, like maybe > > --config-option, it would let us pretend we're not really > overloading a > > trait to do config - it's just a coincidence that the config option > has > > the same name as the trait it causes to be required.) > > > > > > I feel like it might be confusing, but totally +1 to matching required > > trait name being a thing. That way scheduling is completely decoupled > > and if everything was correct then the request should already be > > scheduled properly. > > I guess I'll just drop the idea of doing this properly then. It's true > that the placement traits concept can be hacked up and the virt driver > can just pass a list of trait strings to the Ironic API and that's the > most expedient way to get what the 90% of people apparently want. It's > also true that it will add a bunch of unmaintainable tribal knowledge > into the interface between Nova and Ironic, but that has been the case > for multiple years. > A few things that need to be unpacked in this statement. But first, please don't stop. Your bringing a different perspective, and we need to find a common ground. My HUGE issue right now is just how frustrated people at this moment when this feels like the third time we've looked at pivoting on this. Does that mean we should stop? Absolutely not! In terms of hacked up. This is the way the virt driver behaves today[1]. In a sense, that contract has already been established. We absolutely have to be aware of what the scheduling was order to turn off UEFI. Additionally, there is logic[2] to prevent deployment in case something goes wrong and a trait is on the requested instance that does not match what is being offered as a trait for the baremetal node. So it seems reasonable to even continue to do regardless. I don't agree it is unmaintainable, But we do fall down on documenting the interaction for a whole slew of reasons. If we want to discuss that, we should get lots of tea. :) I'm totally open to improving that non-existent nova/ironic interaction documentation, and even open to improving the interaction moving forward but I would only ask that we take one deliberate step at a time. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/ironic/patcher.py?h=stable/rocky#n118 [2] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/conductor/utils.py?h=stable/rocky#n935 > > The flavor explosion problem will continue to get worse for those of us > who deal with its pain (Oath in particular feels this) because the > interface between nova flavors and Ironic instance capabilities will > continue to be super-tightly-coupled. > So what would alleviate some of that explosion pain? If something like "--required-trait" or even "--trait" is a possibility, which is where I thought things were going to go in nova based on past discussions... that seems like it would help it tremendously. Granted, I'm not looking at that list of flavors, but there must surely be some commonality that we can begin to use to identify the commonalities. If it is purely raid, then lets build a mechanism to help drive that forward first since we already have a field in our API for it. Of course, with all of this back and forth, I'm getting the feeling "--required-trait HW_CPU_X86_VMX" or "--trait HW_CPU_X86_VMX" will just not happen because of the impasse that exists. 
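For readers without the source handy, the contract in [1] amounts to roughly
this -- a paraphrased sketch, not the literal code:

   # flavor extra specs like capabilities:boot_mode=uefi are folded
   # into the node's instance_info before deploy
   capabilities = {}
   for key, value in flavor['extra_specs'].items():
       if key.startswith('capabilities:'):
           capabilities[key.split(':', 1)[1]] = value
   # the resulting dict is patched onto the ironic node as
   # instance_info/capabilities

So a "pass config through the flavor" channel already exists in embryonic
form; the question is really whether to formalize it.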
> > > For the record, I would have been happier if someone had proposed > separating the instance configuration data in the flavor extra-specs > from the notion of required placement constraints (i.e. traits). You > could call the extra_spec "deploy_template_id" if you wanted and that > extra spec value could have been passed to Ironic during node > provisioning instead of the list of placement constraints (traits). > But wouldn't this completely change all nova user's interaction with nova? > > So, you'd have a list of actual placement traits for an instance that > looked like this: > > required=BOOT_MODE_UEFI,STORAGE_HARDWARE_RAID > > and you'd have a flavor extra spec called "deploy_template_id" with a > value of the deploy template configuration data you wanted to > communicate to Ironic. The Ironic virt driver could then just look for > the "deploy_template_id" extra spec and pass the value of that to the > Ironic API instead of passing a list of traits. > Is the source desire instead of allowing for defaults or the desired state of the hardware to be expressed, a desire for an override facility? If so that would kind of make sense and I suspect would help reduce the flavor explosion. I can't imagine telling someone "You need to create this template file, put it in glance, in order to set 323MB of RAM." Without override facilities, I'm not sure how the actual flavor explosion would be scaled back (I guess we need data to actually understand the causes). For what it is worth, this seems reasonable path to start with as an override facility. I guess the only thing that we would need to do is maybe consider it a default node object field in ironic and prevent overwrites in the case of rebuild events. (Of course, someone will want to rebuild those machines. :\) > > That would have at least satisfied my desire to separate configuration > data from placement constraints. > > Anyway, I'm done trying to please my own desires for a clean solution to > this. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Oct 2 02:27:27 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 1 Oct 2018 22:27:27 -0400 Subject: [openstack-dev] [kolla][octavia] Containerize the amphora-agent In-Reply-To: References: Message-ID: On Sun, Sep 30, 2018 at 8:47 PM Adam Harwell wrote: > I was coming to the same conclusion for a completely different goal -- > baking lighter weight VMs (and eliminating a number of compatibility > issues) by putting exactly what we need in containers and making the base > OS irrelevant. So, I am interested in helping to do this in a way that will > work well for both goals. > My thought is that containerizing the agent AND using (existing?) > containerized haproxy distributions, we can better standardize things > between different amphora base OSes at the same time as setting up for full > containerization. > We should discuss further on IRC next week maybe, if we can find a good > time. > Sure. Feel free to ping me if you see me online. 
My IRC nick is hongbin and you will find me in #openstack-zun or #openstack-dev > > --Adam > > On Sun, Sep 30, 2018, 11:56 Hongbin Lu wrote: > >> Hi all, >> >> I am working on the Zun integration for Octavia. I did some preliminary >> research and it seems what we need to do is to containerize the amphora >> agent that was packaged and shipped in a VM image. I wonder if anyone >> already has a containerized docker image that I can leverage. If not, I >> will create one. >> >> Best regards, >> Hongbin >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kor.jmlim at gmail.com Tue Oct 2 03:55:38 2018 From: kor.jmlim at gmail.com (Jea-Min Lim) Date: Tue, 2 Oct 2018 12:55:38 +0900 Subject: [openstack-dev] [Horizon] Horizon tutorial didn't work In-Reply-To: References: Message-ID: Thanks for the reply. If you need any detailed information, let me know. Regards, On Mon, Oct 1, 2018 at 6:53 PM, Ivan Kolodyazhny wrote: > Hi Jea-Min, > > Thank you for your report. I'll check the manual and fix it asap. > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > > On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim wrote: > >> Hello everyone, >> >> I'm following the Building a Dashboard tutorial for Horizon. >> (link: >> https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard >> ) >> >> However, the provided custom management command doesn't create the boilerplate >> code. >> >> I typed tox -e manage -- startdash mydashboard --target >> openstack_dashboard/dashboards/mydashboard >> >> and the attached screenshot file is the execution result. >> >> Are there any recommendations to solve this problem? >> >> Regards. >> >> [image: result_jmlim.PNG] >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: result_jmlim.PNG Type: image/png Size: 33958 bytes Desc: not available URL: From wjstk16 at gmail.com Tue Oct 2 04:50:56 2018 From: wjstk16 at gmail.com (Won) Date: Tue, 2 Oct 2018 13:50:56 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. Message-ID: I have some problems with Prometheus alarms in vitrage. I receive the list of alarms from the Prometheus alert manager fine, but an alarm does not disappear when the problem (alarm) is resolved.
Once an alarm has appeared in both the alarm list and the entity graph, it never disappears from Vitrage. The alarm sent by Zabbix disappears when the alarm is resolved; I wonder how to clear a Prometheus alarm from Vitrage in the same way, and how to have alarms update automatically as they do with Zabbix. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Tue Oct 2 05:30:47 2018 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 02 Oct 2018 07:30:47 +0200 Subject: [openstack-dev] [I18n] No Office Hour this week Message-ID: <1f1e9b3f843d712df48f30e7956334b7@arcor.de> Hello, due to the national holiday in Germany tomorrow and the long weekend after that, there will be no I18n Office Hour this week. The next session will be held on 2018/10/11 13:00 UTC [1]. kind regards Frank [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting From bence.romsics at gmail.com Tue Oct 2 05:39:25 2018 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 2 Oct 2018 07:39:25 +0200 Subject: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> References: <1538145718.22269.0@smtp.office365.com> <543894f0-8ee2-68b8-ac02-898a5359a2c6@gmail.com> Message-ID: Hi All, I'm quite late to the discussion, because I'm on vacation and I missed the beginning of this thread, but let me share a few thoughts. On Fri, Sep 28, 2018 at 6:13 PM Jay Pipes wrote: > * Does the provider belong to physical network "corpnet" and also > support creation of virtual NICs of type either "DIRECT" or "NORMAL"? I'd like to split this question into two, because I think modeling vnic_types as traits and modeling physnets as traits are different problems. I'll start with the simpler: vnic_types. I may have missed some of the arguments in this very long thread, but I honestly do not see what the problem is with vnic_type traits. These are true capabilities of the backend - not binary, though. When it comes to DIRECT and NORMAL, the difference is basically whether the backend can do SR-IOV or not. On the other hand, I have my reservations about physnet traits. I have an item on my todo list to look into Placement aggregates and explore whether those are better at representing a physnet. Before committing to using aggregates for physnets I know I should fully explore the aggregates API though. And let me mention one concern which could lead to a usability problem today: aggregates seem to have no names. I think they should. The operator is helpless without them. On Fri, Sep 28, 2018 at 11:51 PM Jay Pipes wrote: > That's correct, because you're encoding >1 piece of information into the > single string (the fact that it's a temperature *and* the value of that > temperature are the two pieces of information encoded into the single > string). > > Now that there's multiple pieces of information encoded in the string > the reader of the trait string needs to know how to decode those bits of > information, which is exactly what we're trying to avoid doing [...]. Technically, Placement traits today can be used as a covert communication channel. And doing that is tempting. One component encodes information into a trait name. Another reads it (i.e. the trait on the allocated RP) and decodes it. Maybe that trait wasn't influencing placement at all. This is the metadata use case. (If it is a use case at all.) I think the most problematic case is when we unknowingly mix placement-influencing info and effectless metadata into a single blob (as a trait name).
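To spell the anti-pattern out -- a contrived sketch echoing Jay's
temperature example:

   # component A smuggles a value into a trait name it sets on an RP
   trait = 'CUSTOM_TEMPERATURE_35'
   # component B has to know the naming scheme to get the value back
   value = int(trait.rsplit('_', 1)[1])

Placement itself never interprets the 35; only the two conspiring components
do.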
One good way to avoid this is to fully and actively discourage the use of traits as a covert communication channel. I can totally support that. I want to mention that in the work-in-progress implementation of the minimum guaranteed bandwidth we considered and then consciously avoided using this covert communication channel. Neutron agents and servers use their usual communication channels to share resource information between them. None of them ever decodes a trait name. All we ever ask of them after allocation is this: Are you responsible for this RP UUID? (For example see https://review.openstack.org/574783.) Cheers, Bence From majopela at redhat.com Tue Oct 2 07:40:30 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Tue, 2 Oct 2018 09:40:30 +0200 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: Thanks for the info Doug. On Mon, Oct 1, 2018 at 6:25 PM Doug Hellmann wrote: > Miguel Angel Ajo Pelayo writes: > > > Thank you for the guidance and ping, Doug. > > > > Was this triggered by [1]? Or by the 1.1.0 tag pushed to gerrit? > > The release jobs are always triggered by the git tagging event. The > patches in openstack/releases run a job that adds tags, but the patch > you linked to hasn't been merged yet, so it looks like it was caused by > pushing the tag manually. > > Doug > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.morin at orange.com Tue Oct 2 08:01:37 2018 From: thomas.morin at orange.com (thomas.morin at orange.com) Date: Tue, 2 Oct 2018 10:01:37 +0200 Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue In-Reply-To: References: Message-ID: <18731_1538467298_5BB325E2_18731_20_1_580d078c-3d98-acf4-dda0-2e4b30205709@orange.com> Hi Miguel, all, The initiative is very welcome and will help make it more efficient to develop in stadium projects. legacy-tempest-dsvm-networking-bgpvpn-bagpipe would be a candidate for networking-bgpvpn and networking-bagpipe (it covers API and scenario tests for the BGPVPN API (networking-bgpvpn), and given that networking-bagpipe is used as the reference driver, it exercises a large portion of networking-bagpipe as well). Having this one will help a lot. Thanks, -Thomas On 9/30/18 2:42 AM, Miguel Lavalle wrote: > Dear networking Stackers, >
> > Best regards > > Miguel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _________________________________________________________________________________________________________________________ This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. From dtantsur at redhat.com Tue Oct 2 08:18:42 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 2 Oct 2018 10:18:42 +0200 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> Message-ID: On 10/2/18 12:36 AM, Jay Pipes wrote: > On 10/01/2018 06:04 PM, Julia Kreger wrote: >> On Mon, Oct 1, 2018 at 2:41 PM Eric Fried wrote: >> >> >>      > So say the user requests a node that supports UEFI because their >>     image >>      > needs UEFI. Which workflow would you want here? >>      > >>      > 1) The operator (or ironic?) has already configured the node to >>     boot in >>      > UEFI mode. Only pre-configured nodes advertise the "supports >>     UEFI" trait. >>      > >>      > 2) Any node that supports UEFI mode advertises the trait. Ironic >>     ensures >>      > that UEFI mode is enabled before provisioning the machine. >>      > >>      > I imagine doing #2 by passing the traits which were specifically >>      > requested by the user, from Nova to Ironic, so that Ironic can do the >>      > right thing for the user. >>      > >>      > Your proposal suggests that the user request the "supports UEFI" >>     trait, >>      > and *also* pass some glance UUID which the user understands will make >>      > sure the node actually boots in UEFI mode. Something like: >>      > >>      > openstack server create --flavor METAL_12CPU_128G --trait >>     SUPPORTS_UEFI >>      > --config-data $TURN_ON_UEFI_UUID >>      > >>      > Note that I pass --trait because I hope that will one day be >>     supported >>      > and we can slow down the flavor explosion. >> >>     IMO --trait would be making things worse (but see below).
I think UEFI >>     with Jay's model would be more like: >> >>        openstack server create --flavor METAL_12CPU_128G --config-data $UEFI >> >>     where the UEFI profile would be pretty trivial, consisting of >>     placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode = >>     "uefi". >> >>     I agree that this seems kind of heavy, and that it would be nice to be >>     able to say "boot mode is UEFI" just once. OTOH I get Jay's point that >>     we need to separate the placement decision from the instance >>     configuration. >> >>     That said, what if it was: >> >>       openstack config-profile create --name BOOT_MODE_UEFI --json - >>       { >>        "type": "boot_mode_scheme", >>        "version": 123, >>        "object": { >>            "boot_mode": "uefi" >>        }, >>        "placement": { >>         "traits": { >>          "required": [ >>           "BOOT_MODE_UEFI" >>          ] >>         } >>        } >>       } >>       ^D >> >>     And now you could in fact say >> >>       openstack server create --flavor foo --config-profile BOOT_MODE_UEFI >> >>     using the profile name, which happens to be the same as the trait name >>     because you made it so. Does that satisfy the yen for saying it once? (I >>     mean, despite the fact that you first had to say it three times to get >>     it set up.) >> >>     ======== >> >>     I do want to zoom out a bit and point out that we're talking about >>     implementing a new framework of substantial size and impact when the >>     original proposal - using the trait for both - would just work out of >>     the box today with no changes in either API. Is it really worth it? >> >> >> +1000. Reading both of these threads, it feels like we're basically trying to >> make something perfect. I think that is a fine goal, except it is unrealistic >> because the enemy of good is perfection. >> >>     ======== >> >>     By the way, with Jim's --trait suggestion, this: >> >>      > ...dozens of flavors that look like this: >>      > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X >>      > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X >>      > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X >>      > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y >>      > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y >>      > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y >> >>     ...could actually become: >> >>       openstack server create --flavor 12CPU_128G --trait $WHICH_RAID >>     --trait >>     $WHICH_LAYOUT >> >>     No flavor explosion. >> >> >> ++ I believe this was where this discussion kind of ended up in.. ?Dublin? >> >> The desire and discussion that led us into complex configuration templates and >> profiles being submitted were for highly complex scenarios where users wanted >> to assert detailed raid configurations to disk. Naturally, there are many >> issues there. The ability to provide such detail would be awesome for those >> 10% of operators that need such functionality. Of course, if that is the only >> path forward, then we delay the 90% from getting the minimum viable feature >> they need. >> >> >>     (Maybe if we called it something other than --trait, like maybe >>     --config-option, it would let us pretend we're not really overloading a >>     trait to do config - it's just a coincidence that the config option has >>     the same name as the trait it causes to be required.) >> >> >> I feel like it might be confusing, but totally +1 to matching required trait >> name being a thing. 
That way scheduling is completely decoupled and if >> everything was correct then the request should already be scheduled properly. > I guess I'll just drop the idea of doing this properly then. It's true that the > placement traits concept can be hacked up and the virt driver can just pass a > list of trait strings to the Ironic API and that's the most expedient way to get > what the 90% of people apparently want. It's also true that it will add a bunch > of unmaintainable tribal knowledge into the interface between Nova and Ironic, > but that has been the case for multiple years. Mostly a side note: this has always been the case and probably always will be :( For example, this is just some black magic for everyone outside of our teams I've ever talked to: $ openstack flavor set --property resources:VCPU=0 my-baremetal-flavor $ openstack flavor set --property resources:MEMORY_MB=0 my-baremetal-flavor $ openstack flavor set --property resources:DISK_GB=0 my-baremetal-flavor People got used to doing it, but for them it's just "magical commands to make Ironic work starting with Rocky". > > The flavor explosion problem will continue to get worse for those of us who deal > with its pain (Oath in particular feels this) because the interface between nova > flavors and Ironic instance capabilities will continue to be super-tightly-coupled. > > For the record, I would have been happier if someone had proposed separating the > instance configuration data in the flavor extra-specs from the notion of > required placement constraints (i.e. traits). You could call the extra_spec > "deploy_template_id" if you wanted and that extra spec value could have been > passed to Ironic during node provisioning instead of the list of placement > constraints (traits). I like it, but I guess it has two downsides: 1. Adding something to the Nova API just for Ironic. 2. The ability for operators to shoot themselves in the foot by requesting some RAID configuration via deploy_template_id but not requesting the ability to use RAID in traits. Also, will it really help with the flavor explosion, unless we allow passing this deploy_template_id via a user-facing API? Dmitry > So, you'd have a list of actual placement traits for an instance that looked > like this: > required=BOOT_MODE_UEFI,STORAGE_HARDWARE_RAID > and you'd have a flavor extra spec called "deploy_template_id" with a value of > the deploy template configuration data you wanted to communicate to Ironic. The > Ironic virt driver could then just look for the "deploy_template_id" extra spec > and pass the value of that to the Ironic API instead of passing a list of traits. > That would have at least satisfied my desire to separate configuration data from > placement constraints. > Anyway, I'm done trying to please my own desires for a clean solution to this. > Best, > -jay > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Tue Oct 2 09:09:37 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 2 Oct 2018 10:09:37 +0100 (BST) Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper" In-Reply-To: References: Message-ID: On Wed, 19 Sep 2018, Monty Taylor wrote: > Yes. Your life will be much better if you do not make more legacy jobs.
They > are brittle and hard to work with. > > New jobs should either use the devstack base job, the devstack-tempest base > job or the devstack-tox-functional base job - depending on what things are > intended. I have a thing mostly working at https://review.openstack.org/#/c/601614/ The commit message has some ideas on how it could be better and the various hacks I needed to do to get things working. One of the comments in there is about the idea of making a zuul job which is effectively "run the gabbits in these dirs" against a tempest set up. Doing so will require some minor changes to the tempest tox passenv settings but I think it ought to be straightforwardish. Some reviews from people who understand these things more than me would be most welcome. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From john at johngarbutt.com Tue Oct 2 09:47:05 2018 From: john at johngarbutt.com (John Garbutt) Date: Tue, 2 Oct 2018 10:47:05 +0100 Subject: Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> Message-ID: Back to the deprecation for a moment... My plan was to tell folks to use Traits to influence placement decisions, rather than capabilities. We probably can't remove the feature till we have deploy templates, but it seems wrong not to warn our users to avoid using capabilities, when 80% of the use cases can be moved to traits today, and get better performance, etc. Thoughts? On Mon, 1 Oct 2018 at 22:42, Eric Fried wrote: > I do want to zoom out a bit and point out that we're talking about > implementing a new framework of substantial size and impact when the > original proposal - using the trait for both - would just work out of > the box today with no changes in either API. Is it really worth it? Yeah, I think the simpler solution deals with a lot of the cases right now. Personally, I see using traits as about hiding complexity from the end user (not the operator). End users are requesting a host with a given capability (via flavor, image or otherwise), and they don't really care if the operator has statically configured it, or Ironic dynamically configures it. Operator still statically configures what deploy templates are possible on what nodes (last time I read the spec). For the common cases, I see us adding standard traits. They would also be useful to pick between nodes that are statically configured one way or the other. (Although MarkG keeps telling me (in a British way) that is probably rubbish, and he might be right...) I am +1 Jay's idea for the more complicated cases (a bit like what jroll was saying). For me, the user (gets bad interop and) has no visibility into what the crazy custom trait means (i.e. the LAYOUT_Y in efried's example). A validated blob in Glare doesn't seem terrible for that special case. But generally that seems like quite a different use case, and it's tempting to focus on something well typed that is disk configuration specific. Although, it is tempting not to block the simpler solution, while we work out how people use this for real.
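To make that guidance concrete -- a rough sketch, where the flavor name, node UUID and CUSTOM_* trait are invented for the example. Instead of the old capabilities pattern:

    openstack flavor set --property capabilities:boot_mode="uefi" bm.example

the trait-based equivalent would be roughly:

    openstack baremetal node add trait $NODE_UUID CUSTOM_BOOT_MODE_UEFI
    openstack flavor set --property trait:CUSTOM_BOOT_MODE_UEFI=required bm.example

so the decision is made by placement during scheduling rather than by the ComputeCapabilitiesFilter.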
Thanks, John From chris at openstack.org Tue Oct 2 12:27:35 2018 From: chris at openstack.org (Chris Hoge) Date: Tue, 2 Oct 2018 07:27:35 -0500 Subject: [openstack-dev] [k8s][magnum][zun] Notification of removal of in-tree K8s OpenStack Provider Message-ID: <49F4E92D-EB16-4DFF-B82B-0179D711C276@openstack.org> For those projects that use OpenStack as a cloud provider for K8s, there is a patch in flight[1] to remove the in-tree OpenStack provider from the kubernetes/kubernetes repository. The provider has been deprecated for two releases, with a replacement external provider available[2]. Before we merge this patch for the 1.13 K8s release cycle, we want to make sure that projects dependent on the in-tree provider (especially thinking about projects like Magnum and Zun) have an opportunity to express their readiness to switch over. [1] https://github.com/kubernetes/kubernetes/pull/67782 [2] https://github.com/kubernetes/cloud-provider-openstack From mark at stackhpc.com Tue Oct 2 12:58:51 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 2 Oct 2018 13:58:51 +0100 Subject: [openstack-dev] [ironic] Tenks Message-ID: Hi, In the most recent Ironic meeting we discussed [1] tenks, and the possibility of adding the project under Ironic governance. We agreed to move the discussion to the mailing list. I'll introduce the project here and give everyone a chance to ask questions. If things appear to move in the right direction, I'll propose a vote for inclusion under Ironic's governance. Tenks is a project for managing 'virtual bare metal clusters'. It aims to be a drop-in replacement for the various scripts and templates that exist in the Ironic devstack plugin for creating VMs to act as bare metal nodes in development and test environments. Similar code exists in Bifrost and TripleO, and probably other places too. By focusing on one project, we can ensure that it works well, and provides all the features necessary as support for bare metal in the cloud evolves. That's tenks the concept. Tenks in reality today is a working version 1.0, written in Ansible, built by Will Miller (w-miller) during his summer placement. Will has returned to his studies, and Will Szumski (jovial) has picked it up. You don't have to be called Will to work on Tenks, but it helps. There are various resources available for anyone wishing to find out more: * Ironic spec review: https://review.openstack.org/#/c/579583 * Documentation: https://tenks.readthedocs.io/en/latest/ * Source code: https://github.com/stackhpc/tenks * Blog: https://stackhpc.com/tenks.html * IRC: mgoddard or jovial in #openstack-ironic What does everyone think? Is this something that the ironic community could or should take ownership of? [1] http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-10-01-15.00.log.html#l-170 Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Tue Oct 2 12:59:06 2018 From: aspiers at suse.com (Adam Spiers) Date: Tue, 2 Oct 2018 13:59:06 +0100 Subject: [openstack-dev] [infra] Gerrit User Summit, November 2018 Message-ID: <20181002125906.doeoecdh6wrqhavz@pacific.linksys.moosehall> Hi all, The forthcoming Gerrit User Summit 2018 will be Nov 15th-16th in Palo Alto, hosted by Cloudera.
See the Gerrit User Summit page at: https://gerrit.googlesource.com/summit/2018/+/master/index.md and the event registration at: https://gus2018.eventbrite.com Hopefully some members of the OpenStack community can attend the event, not just so we can keep up to date with Gerrit but also so that our interests can be represented! Regards, Adam From jaypipes at gmail.com Tue Oct 2 13:02:52 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 2 Oct 2018 09:02:52 -0400 Subject: [openstack-dev] [ironic] Tenks In-Reply-To: References: Message-ID: On 10/02/2018 08:58 AM, Mark Goddard wrote: > Hi, > > In the most recent Ironic meeting we discussed [1] tenks, and the > possibility of adding the project under Ironic governance. We agreed to > move the discussion to the mailing list. I'll introduce the project here > and give everyone a chance to ask questions. If things appear to move in > the right direction, I'll propose a vote for inclusion under Ironic's > governance. > > Tenks is a project for managing 'virtual bare metal clusters'. It aims > to be a drop-in replacement for the various scripts and templates that > exist in the Ironic devstack plugin for creating VMs to act as bare > metal nodes in development and test environments. Similar code exists in > Bifrost and TripleO, and probably other places too. By focusing on one > project, we can ensure that it works well, and provides all the features > necessary as support for bare metal in the cloud evolves. > > That's tenks the concept. Tenks in reality today is a working version > 1.0, written in Ansible, built by Will Miller (w-miller) during his > summer placement. Will has returned to his studies, and Will Szumski > (jovial) has picked it up. You don't have to be called Will to work on > Tenks, but it helps. > > There are various resources available for anyone wishing to find out more: > > * Ironic spec review: https://review.openstack.org/#/c/579583 > * Documentation: https://tenks.readthedocs.io/en/latest/ > * Source code: https://github.com/stackhpc/tenks > * Blog: https://stackhpc.com/tenks.html > * IRC: mgoddard or jovial in #openstack-ironic > > What does everyone think? Is this something that the ironic community > could or should take ownership of? How does Tenks relate to OVB? https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html Best, -jay From mark at stackhpc.com Tue Oct 2 14:37:35 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 2 Oct 2018 15:37:35 +0100 Subject: [openstack-dev] [ironic] Tenks In-Reply-To: References: Message-ID: On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote: > On 10/02/2018 08:58 AM, Mark Goddard wrote: > > Hi, > > > > In the most recent Ironic meeting we discussed [1] tenks, and the > > possibility of adding the project under Ironic governance. We agreed to > > move the discussion to the mailing list. I'll introduce the project here > > and give everyone a chance to ask questions. If things appear to move in > > the right direction, I'll propose a vote for inclusion under Ironic's > > governance. > > > > Tenks is a project for managing 'virtual bare metal clusters'. It aims > > to be a drop-in replacement for the various scripts and templates that > > exist in the Ironic devstack plugin for creating VMs to act as bare > > metal nodes in development and test environments. Similar code exists in > > Bifrost and TripleO, and probably other places too. 
By focusing on one > > project, we can ensure that it works well, and provides all the features > > necessary as support for bare metal in the cloud evolves. > > > > That's tenks the concept. Tenks in reality today is a working version > > 1.0, written in Ansible, built by Will Miller (w-miller) during his > > summer placement. Will has returned to his studies, and Will Szumski > > (jovial) has picked it up. You don't have to be called Will to work on > > Tenks, but it helps. > > > > There are various resources available for anyone wishing to find out > more: > > > > * Ironic spec review: https://review.openstack.org/#/c/579583 > > * Documentation: https://tenks.readthedocs.io/en/latest/ > > * Source code: https://github.com/stackhpc/tenks > > * Blog: https://stackhpc.com/tenks.html > > * IRC: mgoddard or jovial in #openstack-ironic > > > > What does everyone think? Is this something that the ironic community > > could or should take ownership of? > > How does Tenks relate to OVB? > > > https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html Good question. As far as I'm aware, OVB is a tool for using an OpenStack cloud to host the virtual bare metal nodes, and is typically used for testing TripleO. Tenks does not rule out supporting this use case in future, but currently operates more like the Ironic devstack plugin, using libvirt/KVM/QEMU as the virtualisation provider. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Tue Oct 2 15:13:11 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 2 Oct 2018 11:13:11 -0400 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> Message-ID: On Mon, Oct 1, 2018 at 6:38 PM Jay Pipes wrote: > On 10/01/2018 06:04 PM, Julia Kreger wrote: > > On Mon, Oct 1, 2018 at 2:41 PM Eric Fried wrote: > > > > > That said, what if it was: > > > > openstack config-profile create --name BOOT_MODE_UEFI --json - > > { > > "type": "boot_mode_scheme", > > "version": 123, > > "object": { > > "boot_mode": "uefi" > > }, > > "placement": { > > "traits": { > > "required": [ > > "BOOT_MODE_UEFI" > > ] > > } > > } > > } > > ^D > > > > And now you could in fact say > > > > openstack server create --flavor foo --config-profile > BOOT_MODE_UEFI > > > > using the profile name, which happens to be the same as the trait > name > > because you made it so. Does that satisfy the yen for saying it > once? (I > > mean, despite the fact that you first had to say it three times to > get > > it set up.) > > > > > > > I feel like it might be confusing, but totally +1 to matching required > > trait name being a thing. 
That way scheduling is completely decoupled > > and if everything was correct then the request should already be > > scheduled properly. > > I guess I'll just drop the idea of doing this properly then. It's true > that the placement traits concept can be hacked up and the virt driver > can just pass a list of trait strings to the Ironic API and that's the > most expedient way to get what the 90% of people apparently want. It's > also true that it will add a bunch of unmaintainable tribal knowledge > into the interface between Nova and Ironic, but that has been the case > for multiple years. > > The flavor explosion problem will continue to get worse for those of us > who deal with its pain (Oath in particular feels this) because the > interface between nova flavors and Ironic instance capabilities will > continue to be super-tightly-coupled. > > For the record, I would have been happier if someone had proposed > separating the instance configuration data in the flavor extra-specs > from the notion of required placement constraints (i.e. traits). You > could call the extra_spec "deploy_template_id" if you wanted and that > extra spec value could have been passed to Ironic during node > provisioning instead of the list of placement constraints (traits). > > So, you'd have a list of actual placement traits for an instance that > looked like this: > > required=BOOT_MODE_UEFI,STORAGE_HARDWARE_RAID > > and you'd have a flavor extra spec called "deploy_template_id" with a > value of the deploy template configuration data you wanted to > communicate to Ironic. The Ironic virt driver could then just look for > the "deploy_template_id" extra spec and pass the value of that to the > Ironic API instead of passing a list of traits. > > That would have at least satisfied my desire to separate configuration > data from placement constraints. > > Anyway, I'm done trying to please my own desires for a clean solution to > this. > Jay, please don't stop - I think we aren't expressing ourselves well, or you're missing something, or both. I understand this is a frustrating conversation for everyone. But I think we're making good progress on the end goal (whether or not we do an intermediate step that hacks on top of traits). We all want a clean solution to this. What Eric is proposing (and Julia and I seem to be in favor of), is nearly the same as your proposal. The single difference is that these config templates or deploy templates or whatever could *also* require certain traits, and the scheduler would use that information to pick a node. While this does put some scheduling information into the config template, it also means that we can remove some of the flavor explosion *and* mostly separate scheduling from configuration. So, you'd have a list of traits on a flavor: required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC And you would also have a list of traits in the deploy template: {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": <config blob>} This allows for making flavors that are reasonably flexible (instead of two flavors that do VMX and IPSEC acceleration, one of which does RAID). It also allows users to specify a desired configuration without also needing to know how to correctly choose a flavor that can handle that configuration. I think it makes a lot of sense, doesn't impose more work on users, and can reduce the number of flavors operators need to manage. Does that make sense?
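To give the <config blob> some substance -- a purely illustrative sketch that borrows ironic's existing target_raid_config syntax for the configuration half; none of this schema is settled:

    {"traits": {"required": ["STORAGE_HARDWARE_RAID"]},
     "config": {"raid": {"logical_disks": [{"size_gb": "MAX",
                                            "raid_level": "10"}]}}}

The "traits" half is all the scheduler ever looks at; the "config" half would only ever be interpreted by ironic.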
// jim > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Oct 2 15:23:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 2 Oct 2018 10:23:47 -0500 Subject: [openstack-dev] [ironic][tripleo] Tenks In-Reply-To: References: Message-ID: On 10/2/18 9:37 AM, Mark Goddard wrote: > > > On Tue, 2 Oct 2018 at 14:03, Jay Pipes > wrote: > > On 10/02/2018 08:58 AM, Mark Goddard wrote: > > Hi, > > > > In the most recent Ironic meeting we discussed [1] tenks, and the > > possibility of adding the project under Ironic governance. We > agreed to > > move the discussion to the mailing list. I'll introduce the > project here > > and give everyone a chance to ask questions. If things appear to > move in > > the right direction, I'll propose a vote for inclusion under > Ironic's > > governance. > > > > Tenks is a project for managing 'virtual bare metal clusters'. It > aims > > to be a drop-in replacement for the various scripts and templates > that > > exist in the Ironic devstack plugin for creating VMs to act as bare > > metal nodes in development and test environments. Similar code > exists in > > Bifrost and TripleO, and probably other places too. By focusing > on one > > project, we can ensure that it works well, and provides all the > features > > necessary as support for bare metal in the cloud evolves. > > > > That's tenks the concept. Tenks in reality today is a working > version > > 1.0, written in Ansible, built by Will Miller (w-miller) during his > > summer placement. Will has returned to his studies, and Will Szumski > > (jovial) has picked it up. You don't have to be called Will to > work on > > Tenks, but it helps. > > > > There are various resources available for anyone wishing to find > out more: > > > > * Ironic spec review: https://review.openstack.org/#/c/579583 > > * Documentation: https://tenks.readthedocs.io/en/latest/ > > * Source code: https://github.com/stackhpc/tenks > > * Blog: https://stackhpc.com/tenks.html > > * IRC: mgoddard or jovial in #openstack-ironic > > > > What does everyone think? Is this something that the ironic > community > > could or should take ownership of? > > How does Tenks relate to OVB? > > https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html > > > Good question. As far as I'm aware, OVB is a tool for using an OpenStack > cloud to host the virtual bare metal nodes, and is typically used for > testing TripleO. Tenks does not rule out supporting this use case in > future, but currently operates more like the Ironic devstack plugin, > using libvirt/KVM/QEMU as the virtualisation provider. Yeah, sounds like this is more a replacement for the kvm virtual environment setup in tripleo-quickstart. I'm adding the tripleo tag for their attention. From cdent+os at anticdent.org Tue Oct 2 15:24:54 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 2 Oct 2018 16:24:54 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-40 Message-ID: HTML: https://anticdent.org/tc-report-18-40.html I'm going to take a break from writing the TC reports for a while. 
If other people (whether on the TC or not) are interested in producing their own form of a subjective review of the week's TC activity, I very much encourage you to do so. It's proven an effective way to help at least some people maintain engagement. I may pick it up again when I feel like I have sufficient focus and energy to produce something that has more value and interpretation than simply pointing at [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/). However, at this time, I'm not producing a product that is worth the time it takes me to do it and the time it takes away from doing other things. I'd rather make more significant progress on fewer things. In the meantime, please join me in congratulating and welcoming the newly elected members of the TC: Lance Bragstad, Jean-Philippe Evrard, Doug Hellmann, Julia Kreger, Ghanshyam Mann, and Jeremy Stanley. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Tue Oct 2 15:39:45 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 2 Oct 2018 10:39:45 -0500 Subject: Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> Message-ID: <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc> > What Eric is proposing (and Julia and I seem to be in favor of), is > nearly the same as your proposal. The single difference is that these > config templates or deploy templates or whatever could *also* require > certain traits, and the scheduler would use that information to pick a > node. While this does put some scheduling information into the config > template, it also means that we can remove some of the flavor explosion > *and* mostly separate scheduling from configuration. > > So, you'd have a list of traits on a flavor: > > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC > > And you would also have a list of traits in the deploy template: > > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": <config blob>} > > This allows for making flavors that are reasonably flexible (instead of > two flavors that do VMX and IPSEC acceleration, one of which does RAID). > It also allows users to specify a desired configuration without also > needing to know how to correctly choose a flavor that can handle that > configuration. > > I think it makes a lot of sense, doesn't impose more work on users, and > can reduce the number of flavors operators need to manage. > > Does that make sense? This is in fact exactly what Jay proposed. And both Julia and I are in favor of it as an ideal long-term solution. Where Julia and I deviated from Jay's point of view was in our desire to use "the hack" in the short term so we can satisfy the majority of use cases right away without having to wait for that ideal solution to materialize.
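For the record, "the hack" amounts to nothing more exotic on the flavor side than something like the following, with an invented trait name:

    openstack flavor set --property trait:CUSTOM_DEPLOY_RAID10=required bm.raid10

and the ironic virt driver simply forwarding the flavor's required traits to ironic, which is then free to map the trait to whatever deploy-time behaviour it implies.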
> > // jim > > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From miguel at mlavalle.com Tue Oct 2 15:41:58 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 2 Oct 2018 10:41:58 -0500 Subject: [openstack-dev] [neutron][stable] Stable Core Team Update Message-ID: Hi Stable Team, I want to nominate Bernard Cafarelli as a stable core reviewer for Neutron and related projects. Bernard has been increasing the number of stable reviews he is doing for the project [1]. Besides that, he is a stable maintainer downstream for his employer (Red Hat), so he can bring that valuable experience to the Neutron stable team. Thanks and regards Miguel [1] https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Tue Oct 2 16:09:44 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 2 Oct 2018 12:09:44 -0400 Subject: Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc> Message-ID: On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote: > > What Eric is proposing (and Julia and I seem to be in favor of), is > > nearly the same as your proposal. The single difference is that these > > config templates or deploy templates or whatever could *also* require > > certain traits, and the scheduler would use that information to pick a > > node. While this does put some scheduling information into the config > > template, it also means that we can remove some of the flavor explosion > > *and* mostly separate scheduling from configuration. > > > > So, you'd have a list of traits on a flavor: > > > > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC > > > > And you would also have a list of traits in the deploy template: > > > > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": <config blob>} > > > > This allows for making flavors that are reasonably flexible (instead of > > two flavors that do VMX and IPSEC acceleration, one of which does RAID). > > It also allows users to specify a desired configuration without also > > needing to know how to correctly choose a flavor that can handle that > > configuration. > > > > I think it makes a lot of sense, doesn't impose more work on users, and > > can reduce the number of flavors operators need to manage. > > > > Does that make sense?
> > This is in fact exactly what Jay proposed. And both Julia and I are in > favor of it as an ideal long-term solution. Where Julia and I deviated > from Jay's point of view was in our desire to use "the hack" in the > short term so we can satisfy the majority of use cases right away > without having to wait for that ideal solution to materialize. > Ah, good point, I had missed that initially. Thanks. Let's do that. So if we all agree Jay's proposal is the right thing to do, is there any reason to start working on a short-term hack instead of putting those efforts into the better solution? I don't see why we couldn't get that done in one cycle, if we're all in agreement on it. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Oct 2 16:17:30 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 2 Oct 2018 17:17:30 +0100 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc> Message-ID: On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen wrote: > On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote: > >> > What Eric is proposing (and Julia and I seem to be in favor of), is >> > nearly the same as your proposal. The single difference is that these >> > config templates or deploy templates or whatever could *also* require >> > certain traits, and the scheduler would use that information to pick a >> > node. While this does put some scheduling information into the config >> > template, it also means that we can remove some of the flavor explosion >> > *and* mostly separate scheduling from configuration. >> > >> > So, you'd have a list of traits on a flavor: >> > >> > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC >> > >> > And you would also have a list of traits in the deploy template: >> > >> > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": > blob>} >> > >> > This allows for making flavors that are reasonably flexible (instead of >> > two flavors that do VMX and IPSEC acceleration, one of which does RAID). >> > It also allows users to specify a desired configuration without also >> > needing to know how to correctly choose a flavor that can handle that >> > configuration. >> > >> > I think it makes a lot of sense, doesn't impose more work on users, and >> > can reduce the number of flavors operators need to manage. >> > >> > Does that make sense? >> >> This is in fact exactly what Jay proposed. And both Julia and I are in >> favor of it as an ideal long-term solution. Where Julia and I deviated >> from Jay's point of view was in our desire to use "the hack" in the >> short term so we can satisfy the majority of use cases right away >> without having to wait for that ideal solution to materialize. >> > > Ah, good point, I had missed that initially. Thanks. Let's do that. > > So if we all agree Jay's proposal is the right thing to do, is there any > reason to start working on a short-term hack instead of putting those > efforts into the better solution? I don't see why we couldn't get that done > in one cycle, if we're all in agreement on it. 
> I'm still unclear on the ironic side of this. I can see that config of some sort is stored in glance, and referenced upon nova server creation. Somehow this would be synced to ironic by the nova virt driver during node provisioning. The part that's missing in my mind is how to map from a config in glance to a set of actions performed by ironic. Does the config in glance reference a deploy template, or a set of ironic deploy steps? Or does ironic (or OpenStack) define some config schema that it supports, and use it to generate a set of deploy steps? > // jim > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Tue Oct 2 16:18:16 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Tue, 2 Oct 2018 16:18:16 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-40 In-Reply-To: References: Message-ID: Chris, Thanks for all the hard work you have put into this. FWIW I found value in your reports, but perhaps because I am not involved in the daily activities of the TC. Cheers -steve On 10/2/18, 8:25 AM, "Chris Dent" wrote: HTML: https://anticdent.org/tc-report-18-40.html I'm going to take a break from writing the TC reports for a while. If other people (whether on the TC or not) are interested in producing their own form of a subjective review of the week's TC activity, I very much encourage you to do so. It's proven an effective way to help at least some people maintain engagement. I may pick it up again when I feel like I have sufficient focus and energy to produce something that has more value and interpretation than simply pointing at [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/). However, at this time, I'm not producing a product that is worth the time it takes me to do it and the time it takes away from doing other things. I'd rather make more significant progress on fewer things. In the meantime, please join me in congratulating and welcoming the newly elected members of the TC: Lance Bragstad, Jean-Philippe Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy Stanley. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Tue Oct 2 16:30:36 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 2 Oct 2018 11:30:36 -0500 Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc> Message-ID: <3f4296a1-0eb0-92bd-84bf-f59f49abb9e6@fried.cc> On 10/02/2018 11:09 AM, Jim Rollenhagen wrote: > On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote: > > > What Eric is proposing (and Julia and I seem to be in favor of), is > > nearly the same as your proposal. The single difference is that these > > config templates or deploy templates or whatever could *also* require > > certain traits, and the scheduler would use that information to pick a > > node. 
While this does put some scheduling information into the config > > template, it also means that we can remove some of the flavor > explosion > > *and* mostly separate scheduling from configuration. > > > > So, you'd have a list of traits on a flavor: > > > > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC > > > > And you would also have a list of traits in the deploy template: > > > > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": > <config blob>} > > > > This allows for making flavors that are reasonably flexible > (instead of > > two flavors that do VMX and IPSEC acceleration, one of which does > RAID). > > It also allows users to specify a desired configuration without also > > needing to know how to correctly choose a flavor that can handle that > > configuration. > > > > I think it makes a lot of sense, doesn't impose more work on > users, and > > can reduce the number of flavors operators need to manage. > > > > Does that make sense? > > This is in fact exactly what Jay proposed. And both Julia and I are in > favor of it as an ideal long-term solution. Where Julia and I deviated > from Jay's point of view was in our desire to use "the hack" in the > short term so we can satisfy the majority of use cases right away > without having to wait for that ideal solution to materialize. > > > Ah, good point, I had missed that initially. Thanks. Let's do that. > > So if we all agree Jay's proposal is the right thing to do, is there any > reason to start working on a short-term hack instead of putting those > efforts into the better solution? I don't see why we couldn't get that > done in one cycle, if we're all in agreement on it. It takes more than agreement, though. It takes resources. I may have misunderstood a major theme of the PTG, but I think the Nova team is pretty overextended already. Even assuming authorship by wicked smaaht folks such as yourself, the spec and code reviews will require a nontrivial investment from Nova cores. The result would likely be de-/re-prioritization of things we just got done agreeing to work on. If that's The Right Thing, so be it. But we can't just say we're going to move forward with something of this magnitude without sacrificing something else. (Note that the above opinion is based on the assumption that the hacky way will require *much* less spec/code/review bandwidth to accomplish. If that's not true, then I totally agree with you that we should spend our time working on the right solution.) > > // jim > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Tue Oct 2 16:34:08 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 2 Oct 2018 11:34:08 -0500 Subject: Re: [openstack-dev] [placement] The "intended purpose" of traits In-Reply-To: References: <1538145718.22269.0@smtp.office365.com> Message-ID: <1a08f42a-9c97-9fa7-c2af-261375f7ae9a@fried.cc> On 09/28/2018 07:23 PM, Mohammed Naser wrote: > On Fri, Sep 28, 2018 at 7:17 PM Chris Dent wrote: >> >> On Fri, 28 Sep 2018, melanie witt wrote: >> >>> I'm concerned about a lot of repetition here and maintenance headache for >>> operators.
That's where the thoughts about whether we should provide >>> something like a key-value construct to API callers where they can instead >>> say: >>> >>> * OWNER=CINDER >>> * RAID=10 >>> * NUMA_CELL=0 >>> >>> for each resource provider. >>> >>> If I'm off base with my example, please let me know. I'm not a placement >>> expert. >>> >>> Anyway, I hope that gives an idea of what I'm thinking about in this >>> discussion. I agree we need to pick a direction and go with it. I'm just >>> trying to look out for the experience operators are going to be using this >>> and maintaining it in their deployments. >> >> Despite saying "let's never do this" with regard to having formal >> support for key/values in placement, if we did choose to do it (if >> that's what we chose, I'd live with it), when would we do it? We >> have a very long backlog of features that are not yet done. I >> believe (I hope obviously) that we will be able to accelerate >> placement's velocity with it being extracted, but that won't be >> enough to suddenly be able to quickly do all the things we have >> on the plate. >> >> Are we going to make people wait for some unknown amount of time, >> in the meantime? While there is a grammar that could do some of >> these things? >> >> Unless additional resources come on the scene I don't think it is >> either feasible or reasonable for us to consider doing any model >> extending at this time (irrespective of the merit of the idea). >> >> In some kind of weird belief way I'd really prefer we keep the >> grammar placement exposes simple, because my experience with HTTP >> APIs strongly suggests that's very important, and that experience is >> effectively why I am here, but I have no interest in being a >> fundamentalist about it. We should argue about it strongly to make >> sure we get the right result, but it's not a huge deal either way. > > Is there a spec up for this should anyone want to implement it? By "this" are you referring to a placement key/value primitive? There is not a spec or blueprint that I'm aware of. And I think the reason is the strong and immediate resistance to the very idea any time it is mentioned. Who would want to write a spec that's almost certain to be vetoed? > >> -- >> Chris Dent ٩◔̯◔۶ https://anticdent.org/ >> freenode: cdent tw: @anticdent__________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From doug at doughellmann.com Tue Oct 2 16:40:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 02 Oct 2018 12:40:47 -0400 Subject: [openstack-dev] [goal][python3] week 8 update Message-ID: This is week 8 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == Ongoing and Completed Work == I proposed a large set of new patches to update the tox settings for repositories that were still using python2 for doc, release note, linter, etc. jobs. Quite a few of those were duplicates, so if you find that someone else has already started that work please vote -1 on my patch with a link to the other one, and I'll abandon mine.
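The bulk of those patches share one small shape -- roughly this kind of tox.ini change, give or take each project's existing environment definitions:

    [testenv:docs]
    basepython = python3
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html

i.e. pinning the docs, release notes and linter environments to python3. The current status by team: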
+---------------------+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+
| Team                | zuul                | tox defaults | Docs    | 3.6 unit | Failing | Unreviewed | Total | Champion           |
+---------------------+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+
| adjutant            | +                   | 1/ 1         | -       | +        | 0       | 1          | 6     | Doug Hellmann      |
| barbican            | 7/ 13               | +            | 1/ 3    | +        | 6       | 4          | 20    | Doug Hellmann      |
| blazar              | +                   | +            | +       | +        | 0       | 0          | 25    | Nguyen Hai         |
| Chef OpenStack      | +                   | 2/ 2         | -       | -        | 1       | 1          | 3     | Doug Hellmann      |
| cinder              | +                   | 1/ 3         | +       | +        | 0       | 1          | 33    | Doug Hellmann      |
| cloudkitty          | +                   | +            | +       | +        | 0       | 0          | 26    | Doug Hellmann      |
| congress            | +                   | 1/ 3         | +       | +        | 1       | 1          | 25    | Nguyen Hai         |
| cyborg              | +                   | +            | +       | +        | 0       | 0          | 16    | Nguyen Hai         |
| designate           | +                   | 2/ 4         | +       | +        | 0       | 1          | 26    | Nguyen Hai         |
| Documentation       | +                   | 1/ 5         | +       | +        | 1       | 1          | 23    | Doug Hellmann      |
| dragonflow          | +                   | -            | +       | +        | 0       | 0          | 6     | Nguyen Hai         |
| ec2-api             | +                   | 2/ 2         | +       | +        | 2       | 2          | 14    |                    |
| freezer             | waiting for cleanup | 1/ 5         | +       | +        | 0       | 1          | 34    |                    |
| glance              | +                   | 1/ 4         | +       | +        | 0       | 0          | 26    | Nguyen Hai         |
| heat                | 3/ 27               | 4/ 8         | 1/ 6    | 1/ 7     | 2       | 4          | 48    | Doug Hellmann      |
| horizon             | +                   | 1/ 32        | +       | +        | 0       | 1          | 42    | Nguyen Hai         |
| I18n                | +                   | 1/ 1         | -       | -        | 0       | 0          | 3     | Doug Hellmann      |
| InteropWG           | +                   | 4/ 4         | +       | 1/ 3     | 2       | 4          | 14    | Doug Hellmann      |
| ironic              | +                   | 1/ 10        | +       | +        | 0       | 1          | 95    | Doug Hellmann      |
| karbor              | +                   | +            | +       | +        | 0       | 0          | 22    | Nguyen Hai         |
| keystone            | +                   | 1/ 7         | +       | +        | 0       | 0          | 48    | Doug Hellmann      |
| kolla               | +                   | 1/ 1         | +       | +        | 1       | 0          | 13    |                    |
| kuryr               | +                   | +            | +       | +        | 0       | 0          | 20    | Doug Hellmann      |
| magnum              | +                   | 2/ 5         | +       | +        | 0       | 1          | 27    |                    |
| manila              | +                   | 4/ 8         | +       | +        | 0       | 0          | 32    | Goutham Pacha Ravi |
| masakari            | +                   | 3/ 5         | +       | -        | 0       | 3          | 24    | Nguyen Hai         |
| mistral             | +                   | +            | +       | +        | 0       | 0          | 38    | Nguyen Hai         |
| monasca             | 1/ 66               | 5/ 17        | +       | +        | 3       | 4          | 100   | Doug Hellmann      |
| murano              | +                   | 2/ 5         | +       | +        | 0       | 2          | 39    |                    |
| neutron             | 15/ 73              | 12/ 18       | 2/ 14   | 2/ 13    | 18      | 18         | 118   | Doug Hellmann      |
| nova                | +                   | +            | +       | +        | 0       | 0          | 37    |                    |
| octavia             | +                   | 1/ 4         | +       | +        | 0       | 1          | 35    | Nguyen Hai         |
| OpenStack Charms    | 17/117              | 73/ 73       | -       | -        | 67      | 90         | 190   | Doug Hellmann      |
| OpenStack-Helm      | +                   | 1/ 2         | +       | -        | 1       | 0          | 6     |                    |
| OpenStackAnsible    | 6/270               | 1/ 91        | +       | -        | 6       | 2          | 424   |                    |
| OpenStackClient     | +                   | 2/ 4         | +       | +        | 0       | 1          | 27    |                    |
| OpenStackSDK        | +                   | +            | +       | +        | 0       | 0          | 25    |                    |
| oslo                | +                   | 1/ 31        | +       | +        | 0       | 1          | 220   | Doug Hellmann      |
| Packaging-rpm       | +                   | 3/ 3         | +       | +        | 0       | 1          | 10    | Doug Hellmann      |
| PowerVMStackers     | +                   | -            | -       | +        | 0       | 0          | 18    | Doug Hellmann      |
| Puppet OpenStack    | +                   | 1/ 1         | +       | -        | 0       | 1          | 237   | Doug Hellmann      |
| qinling             | +                   | +            | +       | +        | 0       | 0          | 12    |                    |
| Quality Assurance   | +                   | 11/ 14       | +       | +        | 1       | 9          | 63    | Doug Hellmann      |
| rally               | +                   | 2/ 3         | +       | -        | 2       | 2          | 7     | Nguyen Hai         |
| Release Management  | +                   | -            | -       | +        | 0       | 0          | 2     | Doug Hellmann      |
| requirements        | +                   | -            | +       | +        | 0       | 0          | 7     | Doug Hellmann      |
| sahara              | +                   | 1/ 6         | +       | +        | 0       | 1          | 40    | Doug Hellmann      |
| searchlight         | +                   | +            | +       | +        | 0       | 0          | 22    | Nguyen Hai         |
| senlin              | +                   | +            | +       | +        | 0       | 0          | 25    | Nguyen Hai         |
| SIGs                | +                   | 6/ 9         | +       | +        | 1       | 5          | 18    | Doug Hellmann      |
| solum               | +                   | 1/ 3         | +       | +        | 0       | 1          | 24    | Nguyen Hai         |
| storlets            | +                   | 1/ 2         | +       | +        | 1       | 1          | 9     |                    |
| swift               | +                   | 2/ 3         | +       | +        | 1       | 1          | 17    | Nguyen Hai         |
| tacker              | +                   | 3/ 4         | +       | +        | 1       | 2          | 25    | Nguyen Hai         |
| Technical Committee | +                   | 1/ 2         | -       | +        | 0       | 0          | 9     | Doug Hellmann      |
| Telemetry           | 14/ 31              | 1/ 7         | 2/ 6    | 2/ 6     | 5       | 6          | 50    | Doug Hellmann      |
| tricircle           | +                   | +            | +       | +        | 0       | 0          | 14    | Nguyen Hai         |
| tripleo             | waiting for cleanup | +            | +       | +        | 0       | 0          | 154   | Doug Hellmann      |
| trove               | 8/ 17               | 3/ 5         | +       | +        | 1       | 2          | 28    | Doug Hellmann      |
| User Committee      | +                   | 3/ 3         | 1/ 2    | -        | 0       | 2          | 9     | Doug Hellmann      |
| vitrage             | +                   | 1/ 3         | +       | +        | 0       | 1          | 26    | Nguyen Hai         |
| watcher             | +                   | +            | +       | +        | 0       | 0          | 27    | Nguyen Hai         |
| winstackers         | +                   | +            | +       | +        | 0       | 0          | 17    |                    |
| zaqar               | +                   | 1/ 3         | +       | +        | 1       | 1          | 25    |                    |
| zun                 | +                   | +            | +       | +        | 0       | 0          | 21    | Nguyen Hai         |
|                     | 55/ 65              | 17/ 61       | 53/ 58  | 52/ 56   | 125     | 183        | 2855  |                    |
+---------------------+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+

== Next Steps ==

We need to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3. Please document your team's status in the wiki as well: https://wiki.openstack.org/wiki/Python3

== How can you help? ==

Quite a few of the recent tox updates also exposed issues with using pylint under python 3, mostly due to having an older version of the tool pinned. This is a known issue, which was discussed in an earlier update email. The fixes are usually pretty straightforward, and good opportunities to contribute while you're waiting for tests to run or if you're just starting to get into the community. The series of patches preceding https://review.openstack.org/#/c/606676/ in the openstack/neutron repository are examples of some of the sorts of changes needed. If you're interested in helping to fix these sorts of issues, please leave a comment on the patch that changes the tox configuration so that we don't have multiple folks working on the same failures.

1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3 (a sketch follows below).

== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.
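For item 3 above, those functional jobs usually end up as a small zuul variant plus a matching tox environment -- a hypothetical sketch only, since the job and env names vary per project:

    - job:
        name: myproject-functional-py36
        parent: openstack-tox
        vars:
          tox_envlist: functional-py36

paired with a [testenv:functional-py36] section in the project's tox.ini.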
== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.

== Reference Material ==

Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open
Storyboard: https://storyboard.openstack.org/#!/board/104
Zuul migration notes: https://etherpad.openstack.org/p/python3-first
Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586
Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3

From dtantsur at redhat.com  Tue Oct 2 16:47:21 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 2 Oct 2018 18:47:21 +0200
Subject: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter
References: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> <2e4ce0e9-1ad1-abe7-2f6e-c10b84b3fdb4@gmail.com> <6de3c709-1d41-fa13-5c74-e38536a24587@gmail.com> <5c357c69-df9b-1c16-79c9-65f84e487f95@fried.cc>
Message-ID: <751dbe9e-ba84-c702-33ee-abd7c2bf0fed@redhat.com>

On 10/2/18 6:17 PM, Mark Goddard wrote:
> On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen wrote:
>> On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote:
>>>> What Eric is proposing (and Julia and I seem to be in favor of) is
>>>> nearly the same as your proposal. The single difference is that these
>>>> config templates or deploy templates or whatever could *also* require
>>>> certain traits, and the scheduler would use that information to pick a
>>>> node. While this does put some scheduling information into the config
>>>> template, it also means that we can remove some of the flavor explosion
>>>> *and* mostly separate scheduling from configuration.
>>>>
>>>> So, you'd have a list of traits on a flavor:
>>>>
>>>>     required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
>>>>
>>>> And you would also have a list of traits in the deploy template:
>>>>
>>>>     {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": <config blob>}
>>>>
>>>> This allows for making flavors that are reasonably flexible (instead of
>>>> two flavors that do VMX and IPSEC acceleration, one of which does RAID).
>>>> It also allows users to specify a desired configuration without also
>>>> needing to know how to correctly choose a flavor that can handle that
>>>> configuration.
>>>>
>>>> I think it makes a lot of sense, doesn't impose more work on users, and
>>>> can reduce the number of flavors operators need to manage.
>>>>
>>>> Does that make sense?
>>>
>>> This is in fact exactly what Jay proposed. And both Julia and I are in
>>> favor of it as an ideal long-term solution. Where Julia and I deviated
>>> from Jay's point of view was in our desire to use "the hack" in the
>>> short term so we can satisfy the majority of use cases right away
>>> without having to wait for that ideal solution to materialize.
>>
>> Ah, good point, I had missed that initially. Thanks. Let's do that.
>>
>> So if we all agree Jay's proposal is the right thing to do, is there any
>> reason to start working on a short-term hack instead of putting those
>> efforts into the better solution? I don't see why we couldn't get that
>> done in one cycle, if we're all in agreement on it.
>
> I'm still unclear on the ironic side of this. I can see that config of
> some sort is stored in glance, and referenced upon nova server creation.
> Somehow this would be synced to ironic by the nova virt driver during
> node provisioning. The part that's missing in my mind is how to map from
> a config in glance to a set of actions performed by ironic.
> Does the config in glance reference a deploy template, or a set of
> ironic deploy steps? Or does ironic (or OpenStack) define some config
> schema that it supports, and use it to generate a set of deploy steps?

I think the most straightforward way is through the same deploy steps mechanism we planned. Make the virt driver fetch the config from glance, then pass it to the provisioning API. As a bonus, we'll get the same API workflow in the standalone and Nova cases.

>> // jim
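As a concrete illustration of the flavor half of the thread above: required traits can already be attached to a flavor today via extra specs, while the deploy-template half remains a proposal only. A hedged sketch; the flavor name is a placeholder:

    # Existing Nova syntax for required placement traits on a flavor:
    openstack flavor set bm-ipsec \
      --property trait:HW_CPU_X86_VMX=required \
      --property trait:HW_NIC_ACCEL_IPSEC=required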
From dtantsur at redhat.com  Tue Oct 2 16:51:40 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 2 Oct 2018 18:51:40 +0200
Subject: [openstack-dev] [tc] [all] TC Report 18-40

++ that was very helpful, thanks Chris!

On 10/2/18 6:18 PM, Steven Dake (stdake) wrote:
> Chris,
>
> Thanks for all the hard work you have put into this. FWIW I found value
> in your reports, but perhaps because I am not involved in the daily
> activities of the TC.
>
> Cheers
> -steve
>
> On 10/2/18, 8:25 AM, "Chris Dent" wrote:
>
>     HTML: https://anticdent.org/tc-report-18-40.html
>
>     I'm going to take a break from writing the TC reports for a while.
>     If other people (whether on the TC or not) are interested in
>     producing their own form of a subjective review of the week's TC
>     activity, I very much encourage you to do so. It's proven an
>     effective way to help at least some people maintain engagement.
>
>     I may pick it up again when I feel like I have sufficient focus and
>     energy to produce something that has more value and interpretation
>     than simply pointing at
>     [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
>     However, at this time, I'm not producing a product that is worth the
>     time it takes me to do it and the time it takes away from doing
>     other things. I'd rather make more significant progress on fewer
>     things.
>
>     In the meantime, please join me in congratulating and welcoming the
>     newly elected members of the TC: Lance Bragstad, Jean-Philippe
>     Evrard, Doug Hellmann, Julia Kreger, Ghanshyam Mann, and Jeremy
>     Stanley.
>
>     --
>     Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
>     freenode: cdent                                         tw: @anticdent

From balazs.gibizer at ericsson.com  Tue Oct 2 16:52:24 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 2 Oct 2018 16:52:24 +0000
Subject: [openstack-dev] [nova] Cancelling the notification subteam weekly meeting indefinitely
Message-ID: <1538499139.13834.0@smtp.office365.com>

Hi,

Due to the low amount of ongoing work in the area there is little interest in keeping this meeting going, so I'm cancelling it indefinitely [1]. Of course I'm still interested in helping any notification-related work in the future, and you can reach me in #openstack-nova as usual.

cheers,
gibi

[1] https://review.openstack.org/#/c/607314/

From tpb at dyncloud.net  Tue Oct 2 17:58:43 2018
From: tpb at dyncloud.net (Tom Barron)
Date: Tue, 2 Oct 2018 13:58:43 -0400
Subject: [openstack-dev] [manila] nominating Amit Oren for manila core
Message-ID: <20181002175843.ik5mhqqz3hwqb42m@barron.net>

Amit Oren has contributed high quality reviews in the last couple of cycles, so I would like to nominate him for manila core.

Please respond with your +1 or -1 votes. We'll hold voting open for 7 days.

Thanks,

-- Tom Barron (tbarron)

From xingyang105 at gmail.com  Tue Oct 2 18:00:11 2018
From: xingyang105 at gmail.com (Xing Yang)
Date: Tue, 2 Oct 2018 14:00:11 -0400
Subject: [openstack-dev] [manila] nominating Amit Oren for manila core

+1

On Tue, Oct 2, 2018 at 1:58 PM Tom Barron wrote:
> [...]

From victoria at vmartinezdelacruz.com  Tue Oct 2 18:40:58 2018
From: victoria at vmartinezdelacruz.com (Victoria Martínez de la Cruz)
Date: Tue, 2 Oct 2018 15:40:58 -0300
Subject: [openstack-dev] [manila] nominating Amit Oren for manila core

+1 :D

On Tue, Oct 2, 2018 at 3:00 PM, Xing Yang (xingyang105 at gmail.com) wrote:
> +1
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mriedemos at gmail.com Tue Oct 2 18:45:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 2 Oct 2018 13:45:38 -0500 Subject: [openstack-dev] [neutron][stable] Stable Core Team Update In-Reply-To: References: Message-ID: <57f927a7-3c5b-8b96-e0b8-2d27fb66cff0@gmail.com> On 10/2/2018 10:41 AM, Miguel Lavalle wrote: > Hi Stable Team, > > I want to nominate Bernard Cafarrelli as a stable core reviewer for > Neutron and related projects. Bernard has been increasing the number of > stable reviews he is doing for the project [1]. Besides that, he is a > stable maintainer downstream for his employer (Red Hat), so he can bring > that valuable experience to the Neutron stable team. > > Thanks and regards > > Miguel > > [1] > https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 > +1 from me. -- Thanks, Matt From sean.mcginnis at gmx.com Tue Oct 2 19:20:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 2 Oct 2018 14:20:02 -0500 Subject: [openstack-dev] [neutron][stable] Stable Core Team Update In-Reply-To: <57f927a7-3c5b-8b96-e0b8-2d27fb66cff0@gmail.com> References: <57f927a7-3c5b-8b96-e0b8-2d27fb66cff0@gmail.com> Message-ID: <20181002192002.GA22035@sm-workstation> On Tue, Oct 02, 2018 at 01:45:38PM -0500, Matt Riedemann wrote: > On 10/2/2018 10:41 AM, Miguel Lavalle wrote: > > Hi Stable Team, > > > > I want to nominate Bernard Cafarrelli as a stable core reviewer for > > Neutron and related projects. Bernard has been increasing the number of > > stable reviews he is doing for the project [1]. Besides that, he is a > > stable maintainer downstream for his employer (Red Hat), so he can bring > > that valuable experience to the Neutron stable team. > > > > Thanks and regards > > > > Miguel > > > > [1] https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 > > +1 from me. > > -- > > Thanks, > > Matt +1 from me as well. Sean From haleyb.dev at gmail.com Tue Oct 2 19:23:09 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 2 Oct 2018 15:23:09 -0400 Subject: [openstack-dev] [neutron][stable] Stable Core Team Update In-Reply-To: References: Message-ID: <94a09f21-a4a8-0a50-cd8d-a37c6b7ef8fd@gmail.com> +1 from me :) -Brian On 10/02/2018 11:41 AM, Miguel Lavalle wrote: > Hi Stable Team, > > I want to nominate Bernard Cafarrelli as a stable core reviewer for > Neutron and related projects. Bernard has been increasing the number of > stable reviews he is doing for the project [1]. Besides that, he is a > stable maintainer downstream for his employer (Red Hat), so he can bring > that valuable experience to the Neutron stable team. 
> > Thanks and regards > > Miguel > > [1] > https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jungleboyj at gmail.com Tue Oct 2 20:02:16 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 2 Oct 2018 15:02:16 -0500 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: <20181002175843.ik5mhqqz3hwqb42m@barron.net> References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> Message-ID: <55e28e98-6893-88c4-97b5-9c2a224bd99d@gmail.com> As a friend of Manila I am definitely +1 except that Cinder would like him back full time.  ;-) Jay On 10/2/2018 12:58 PM, Tom Barron wrote: > Amit Oren has contributed high quality reviews in the last couple of > cycles so I would like to nominated him for manila core. > > Please respond with your +1 or -1 votes.  We'll hold voting open for 7 > days. > > Thanks, > > -- Tom Barron (tbarron) > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rodrigo.barbieri2010 at gmail.com Tue Oct 2 20:09:24 2018 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Tue, 2 Oct 2018 17:09:24 -0300 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: <55e28e98-6893-88c4-97b5-9c2a224bd99d@gmail.com> References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> <55e28e98-6893-88c4-97b5-9c2a224bd99d@gmail.com> Message-ID: +1 -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos On Tue, Oct 2, 2018, 17:02 Jay S Bryant wrote: > As a friend of Manila I am definitely +1 except that Cinder would like > him back full time. ;-) > > Jay > > > On 10/2/2018 12:58 PM, Tom Barron wrote: > > Amit Oren has contributed high quality reviews in the last couple of > > cycles so I would like to nominated him for manila core. > > > > Please respond with your +1 or -1 votes. We'll hold voting open for 7 > > days. > > > > Thanks, > > > > -- Tom Barron (tbarron) > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From tobias.rydberg at citynetwork.eu  Tue Oct 2 21:24:18 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Tue, 2 Oct 2018 23:24:18 +0200
Subject: [openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG

Hi everyone,

Time for a new meeting for PCWG - 3rd October 0700 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg

Talk to you in a couple of hours!

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg
www.citynetwork.eu | www.citycloud.com
INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From gjayavelu at vmware.com  Tue Oct 2 22:15:14 2018
From: gjayavelu at vmware.com (Giridhar Jayavelu)
Date: Tue, 2 Oct 2018 22:15:14 +0000
Subject: [openstack-dev] [helm] multiple nova compute nodes
Message-ID: <010E15C0-C006-4072-B250-2B4CFF40CD4C@vmware.com>

Hi,

Currently, all nova components are packaged in the same helm chart, "nova". Are there any plans to separate nova-compute from the rest of the services? What should be the approach for deploying multiple nova compute nodes using OpenStack helm charts?

Thanks,
Giri

From chris.friesen at windriver.com  Tue Oct 2 23:04:24 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 2 Oct 2018 17:04:24 -0600
Subject: [openstack-dev] [helm] multiple nova compute nodes
In-Reply-To: <010E15C0-C006-4072-B250-2B4CFF40CD4C@vmware.com>

On 10/2/2018 4:15 PM, Giridhar Jayavelu wrote:
> Currently, all nova components are packaged in the same helm chart,
> "nova". Are there any plans to separate nova-compute from the rest of
> the services? What should be the approach for deploying multiple nova
> compute nodes using OpenStack helm charts?

The nova-compute pods are part of a daemonset which will automatically create a nova-compute pod on each node that has the "openstack-compute-node=enabled" label.

Chris
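To make the label mechanism above concrete, a hedged one-liner; the node name is a placeholder:

    # Marking a node as a compute host makes the nova-compute daemonset
    # schedule a pod there; this label key/value is the chart default and
    # can be overridden in values.yaml.
    kubectl label node <node-name> openstack-compute-node=enabled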
From wilkers.steve at gmail.com  Tue Oct 2 23:27:17 2018
From: wilkers.steve at gmail.com (Steve Wilkerson)
Date: Tue, 2 Oct 2018 18:27:17 -0500
Subject: [openstack-dev] [helm] multiple nova compute nodes

In addition to targeting nodes by labels (these labels are exposed for overrides in the nova chart's values.yaml, so they can be whatever labels you wish them to be), you can also disable particular templates in the nova chart. You can find these under the 'manifests:' key in the chart's values.yaml. Each template in the nova chart will have a key that can toggle whether you deploy that template or not, and these keys should be named similar to the templates they control. With this, you can exclude particular nova components if you desire.

Hope that helps clear things up.

Cheers,
Steve

On Tue, Oct 2, 2018 at 6:04 PM Chris Friesen wrote:
> [...]
> Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thomas.morin at orange.com  Wed Oct 3 07:28:21 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Wed, 3 Oct 2018 09:28:21 +0200
Subject: [openstack-dev] [neutron][stable] Stable Core Team Update
Message-ID: <069018e7-c32b-73c6-2bd3-377baf2b300f@orange.com>

+1 !

-Thomas

On 10/2/18 5:41 PM, Miguel Lavalle wrote:
> Hi Stable Team,
> [...]

From majopela at redhat.com  Wed Oct 3 08:14:49 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Wed, 3 Oct 2018 10:14:49 +0200
Subject: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

Hi folks,

I was trying to deploy neutron with networking-ovn via tripleo-quickstart scripts on master, and this config file [1]. It doesn't work; overcloud deploy cries with:

1) trying to deploy ovn I end up with a 2018-10-02 17:48:12 | "2018-10-02 17:47:51,864 DEBUG: 26691 -- Error: image tripleomaster/centos-binary-ovn-controller:current-tripleo not found"

It seems like the overcloud_prep-containers.sh is not there anymore (I guess overcloud deploy handles it automatically now? but it fails to generate the ovn containers for some reason).

Also, if you look at [2], which are our ansible migration scripts to migrate ml2/ovs to ml2/networking-ovn, you will see that we make use of overcloud_prep-containers.sh. I guess that if we make sure [1] works, we will get [2] for free.

[1] https://github.com/openstack/networking-ovn/blob/master/tripleo/ovn.yml
[2] https://docs.openstack.org/networking-ovn/latest/install/migration.html

--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dalvarez at redhat.com  Wed Oct 3 08:34:31 2018
From: dalvarez at redhat.com (Daniel Alvarez Sanchez)
Date: Wed, 3 Oct 2018 10:34:31 +0200
Subject: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

Hi Miguel,

This patch should fix it [0]. I ran into the same issues and had to manually patch and/or generate the OVN containers myself. Try it out and let me know if the problem persists.
To confirm that this is the same issue, try to check which images you got in your local registry (ODL images may be present while OVN ones are not).

[0] https://review.openstack.org/#/c/604953/5

Cheers,
Daniel

On Wed, Oct 3, 2018 at 10:15 AM Miguel Angel Ajo Pelayo wrote:
> Hi folks
> [...]

From jistr at redhat.com  Wed Oct 3 08:44:02 2018
From: jistr at redhat.com (Jiří Stránský)
Date: Wed, 3 Oct 2018 10:44:02 +0200
Subject: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

On 03/10/2018 10:14, Miguel Angel Ajo Pelayo wrote:
> Hi folks
> [...]

Hi Miguel,

I'm not a subject matter expert, but here's some relevant info:

* overcloud_prep-containers.sh is not a production thing, it's automation from TripleO Quickstart, which is not part of production deployments. We shouldn't depend on it in docs/automation for OVN migration.

* For production envs, the image preparation steps used to be documented and performed manually. This is now changing in Rocky+, as Steve Baker integrated the image prep into the deployment itself. There are docs about the current method [3].

* I hit similar issues with incorrect Neutron images being uploaded to the undercloud registry; you can try deploying with this patch [4], which aims to fix that problem (also the depends-on patch is necessary).
Jirka

[1] https://github.com/openstack/networking-ovn/blob/master/tripleo/ovn.yml
[2] https://docs.openstack.org/networking-ovn/latest/install/migration.html
[3] http://tripleo.org/install/advanced_deployment/container_image_prepare.html
[4] https://review.openstack.org/#/c/604953/

From ifatafekn at gmail.com  Wed Oct 3 08:46:29 2018
From: ifatafekn at gmail.com (Ifat Afek)
Date: Wed, 3 Oct 2018 11:46:29 +0300
Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.

Hi,

In the alertmanager.yml file you should have a receiver for Vitrage. Please verify that it includes "send_resolved: true". This is required for Prometheus to notify Vitrage when an alarm is resolved. The full Vitrage receiver definition should be:

    - name: **
      webhook_configs:
      - url: **  # example: 'http://127.0.0.1:8999/v1/event'
        send_resolved: true
        http_config:
          basic_auth:
            username: **
            password: **

Hope it helps,
Ifat

On Tue, Oct 2, 2018 at 7:51 AM Won wrote:
> I have some problems with Prometheus alarms in vitrage. I receive a list
> of alarms from the Prometheus alertmanager well, but the alarm does not
> disappear when the problem (alarm) is resolved. The alarm that came once
> does not disappear from either the alarm list or the entity graph in
> vitrage. The alarm sent by Zabbix disappears when the alarm is solved.
> I wonder how to clear the Prometheus alarm from vitrage and how to update
> the alarm automatically like Zabbix. Thank you.

From jean-philippe at evrard.me  Wed Oct 3 08:47:22 2018
From: jean-philippe at evrard.me (jean-philippe at evrard.me)
Date: Wed, 03 Oct 2018 10:47:22 +0200
Subject: [openstack-dev] [helm] multiple nova compute nodes

> The nova-compute pods are part of a daemonset which will automatically
> create a nova-compute pod on each node that has the
> "openstack-compute-node=enabled" label.

Hello,

Should we add this in the documentation, maybe with an architecture diagram?

Regards,
Jean-Philippe Evrard (evrardjp)

From jean-philippe at evrard.me  Wed Oct 3 08:59:18 2018
From: jean-philippe at evrard.me (jean-philippe at evrard.me)
Date: Wed, 03 Oct 2018 10:59:18 +0200
Subject: [openstack-dev] [infra] Gerrit User Summit, November 2018

On Tuesday, October 02, 2018 14:59 CEST, Adam Spiers wrote:
> Hi all,
>
> The forthcoming Gerrit User Summit 2018 will be Nov 15th-16th in
> Palo Alto, hosted by Cloudera.
>
> See the Gerrit User Summit page at:
>
>     https://gerrit.googlesource.com/summit/2018/+/master/index.md
>
> and the event registration at:
>
>     https://gus2018.eventbrite.com
>
> Hopefully some members of the OpenStack community can attend the
> event, not just so we can keep up to date with Gerrit but also so that
> our interests can be represented!
> Regards,
> Adam

Good catch! When I read your email, I assume you won't be going? Palo Alto is indeed not very close to Europe and it's indeed a long trip for a two-day effort. Maybe there is someone closer who can help there and attend this?

Regards,
Jean-Philippe Evrard (evrardjp)

From majopela at redhat.com  Wed Oct 3 09:25:56 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Wed, 3 Oct 2018 11:25:56 +0200
Subject: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

Hi Jirka & Daniel, thanks for your answers... more inline.

On Wed, Oct 3, 2018 at 10:44 AM Jiří Stránský wrote:
> * overcloud_prep-containers.sh is not a production thing, it's
> automation from TripleO Quickstart, which is not part of production
> deployments. We shouldn't depend on it in docs/automation for OVN
> migration.

Yes, I know, but based on the deployment details we have for networking-ovn it should be enough. We will have to update those documents with the new changes anyway, because surprisingly this change came for "Rocky" at the last minute. Why did we have such a last-minute change? :-/

I understand the value of simplifying workflows for cloud operators, but when we make workflow changes at the last minute we make others' lives harder (now I need to rework something I want to be available in Rocky, the migration scripts/document).

> * For production envs, the image preparation steps used to be documented
> and performed manually. This is now changing in Rocky+, as Steve Baker
> integrated the image prep into the deployment itself. There are docs
> about the current method [3].

Oops, I see that

    openstack tripleo container image prepare default \
      --output-env-file containers-prepare-parameter.yaml

always outputs "neutron_driver: null". @Emilien Macchi, @Steve Baker: how can I make sure it provides "ovn", for example? I know I could manually change the file, but then how would I run "--local-push-destination"?

> * I hit similar issues with incorrect Neutron images being uploaded to
> the undercloud registry; you can try deploying with this patch [4], which
> aims to fix that problem (also the depends-on patch is necessary).

Thanks a lot!

[1] https://github.com/openstack/networking-ovn/blob/master/tripleo/ovn.yml
[2] https://docs.openstack.org/networking-ovn/latest/install/migration.html
[3] http://tripleo.org/install/advanced_deployment/container_image_prepare.html
[4] https://review.openstack.org/#/c/604953/

--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
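A hedged sketch of the Rocky-style flow under discussion; whether a dedicated option exists to set the driver directly is exactly Miguel's open question, so the neutron_driver edit is shown as a manual step:

    # Generate the default parameter file, pushing images to the local
    # undercloud registry:
    openstack tripleo container image prepare default \
      --local-push-destination \
      --output-env-file containers-prepare-parameter.yaml

    # Then edit containers-prepare-parameter.yaml so that the parameter
    # reads "neutron_driver: ovn" before passing the file to the
    # overcloud deploy with -e.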
From majopela at redhat.com  Wed Oct 3 09:37:39 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Wed, 3 Oct 2018 11:37:39 +0200
Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue

That's fantastic,

I believe we could add some of the networking-ovn jobs; we need to decide which one would be more beneficial.

On Tue, Oct 2, 2018 at 10:02 AM wrote:
> Hi Miguel, all,
>
> The initiative is very welcome and will help make it more efficient to
> develop in stadium projects.
>
> legacy-tempest-dsvm-networking-bgpvpn-bagpipe would be a candidate, for
> networking-bgpvpn and networking-bagpipe (it covers API and scenario
> tests for the BGPVPN API (networking-bgpvpn) and, given that
> networking-bagpipe is used as the reference driver, it exercises a large
> portion of networking-bagpipe as well).
>
> Having this one will help a lot.
>
> Thanks,
>
> -Thomas
> [remainder of quoted thread and legal disclaimer trimmed]
--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jistr at redhat.com  Wed Oct 3 10:23:17 2018
From: jistr at redhat.com (Jiří Stránský)
Date: Wed, 3 Oct 2018 12:23:17 +0200
Subject: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script
Message-ID: <2134199f-6ff5-528f-2ab5-fa7c6049d7d5@redhat.com>

>> Yes, I know, but based on the deployment details we have for
>> networking-ovn it should be enough [...]

Yea, it's hard to strike a good balance there. It threw off updates and upgrades too. The upload is internally done via external_deploy_tasks, which must not be run during `upgrade prepare` and `upgrade run` (or `update prepare` and `update run`). So likely Rocky updates/upgrades don't work as expected right now, and we'll need `external-upgrade run --tags image_prepare` inserted into the workflows. It's in my "queue of things to look into ASAP" ;D.

Jirka

From liliueecg at gmail.com  Wed Oct 3 12:00:10 2018
From: liliueecg at gmail.com (Li Liu)
Date: Wed, 3 Oct 2018 08:00:10 -0400
Subject: [openstack-dev] [cyborg] No irc meeting this week

Hi guys,

I have to cancel the meeting this week as I realize all of the folks in China are on their National Day vacation. We will resume the meeting next week.

Thank you

Regards
Li

From doug at doughellmann.com  Wed Oct 3 12:58:46 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 03 Oct 2018 08:58:46 -0400
Subject: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch

There is one more patch to import the zuul configuration for the heat-agents repository's stable/ocata branch. That branch is apparently broken, and Zane suggested on the review [1] that we abandon the patch and close the branch.

That patch is the only thing blocking the cleanup patch in project-config, so I would like to get a definitive answer about what to do. Should we close the branch, or does someone want to try to fix things up?

Doug

[1] https://review.openstack.org/#/c/597272/

From e0ne at e0ne.info  Wed Oct 3 13:06:49 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 3 Oct 2018 16:06:49 +0300
Subject: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken)

Thanks for working on it, Shu. Let's unblock gates and find a better long-term solution.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Oct 1, 2018 at 7:55 PM Duc Truong wrote:
> Hi Shu,
>
> Thanks for proposing your fix. It looks good to me.
I have submitted > a similar patch for senlin-dashboard to unblock the broken gate test [1]. > > [1] https://review.openstack.org/#/c/607003/ > > Regards, > > Duc (dtruong) > On Fri, Sep 28, 2018 at 2:24 AM Shu M. wrote: > > > > Hi Ivan, > > > > Thank you for your help to our plugins and sorry for bothering you. > > I found problem on installing horizon in "post-install", e.g. we should > install horizon with upper-constraints.txt in "post-install". > > I proposed patch[1] in zun-ui, please check it. If we can merge this, I > will expand it the other remaining plugins. > > > > [1] https://review.openstack.org/#/c/606010/ > > > > Thanks, > > Shu Muto > > > > 2018年9月28日(金) 3:34 Ivan Kolodyazhny : > >> > >> Hi, > >> > >> Unfortunately, this issue affects some of the plugins too :(. At least > gates for the magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken > now. I'm working both with project teams to fix it asap. Let's wait if [5] > helps for senlin-dashboard and fix all the rest of plugins. > >> > >> > >> [5] https://review.openstack.org/#/c/605826/ > >> > >> Regards, > >> Ivan Kolodyazhny, > >> http://blog.e0ne.info/ > >> > >> > >> On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny > wrote: > >>> > >>> Hi all, > >>> > >>> Patch [1] is merged and our gates are un-blocked now. I went throw > review list and post 'recheck' where it was needed. > >>> > >>> We need to cherry-pick this fix to stable releases too. I'll do it asap > >>> > >>> Regards, > >>> Ivan Kolodyazhny, > >>> http://blog.e0ne.info/ > >>> > >>> > >>> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny > wrote: > >>>> > >>>> Hi team, > >>>> > >>>> Unfortunately, horizon gates are broken now. We can't merge any patch > due to the -1 from CI. > >>>> I don't want to disable tests now, that's why I proposed a fix [1]. > >>>> > >>>> We'd got released some of XStatic-* packages last week. At least new > XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for > requirements repo [4] to prevent such issues in the future. > >>>> > >>>> Please, do not try 'recheck' until [1] will be merged. > >>>> > >>>> [1] https://review.openstack.org/#/c/604611/ > >>>> [2] https://pypi.org/project/XStatic-jQuery/#history > >>>> [3] https://bugs.launchpad.net/horizon/+bug/1794028 > >>>> [4] https://review.openstack.org/#/c/604613/ > >>>> > >>>> Regards, > >>>> Ivan Kolodyazhny, > >>>> http://blog.e0ne.info/ > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From cdent+os at anticdent.org  Wed Oct 3 13:22:24 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 3 Oct 2018 14:22:24 +0100 (BST)
Subject: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

On Tue, 2 Oct 2018, Chris Dent wrote:
> One of the comments in there is about the idea of making a zuul job
> which is effectively "run the gabbits in these dirs" against a
> tempest set up. Doing so will require some minor changes to the
> tempest tox passenv settings but I think it ought to be
> straightforwardish.

I've made a first stab at this:

* Small number of changes to tempest: https://review.openstack.org/#/c/607507/
  (The important change here, the one that strictly required changes to
  tempest, is adjusting passenv in tox.ini)
* Much smaller job on the placement side: https://review.openstack.org/#/c/607508/

I'd really like to see this become a real thing, so if I could get some help from tempest people on how to make it in line with expectations that would be great.

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From nate.johnston at redhat.com  Wed Oct 3 13:31:22 2018
From: nate.johnston at redhat.com (Nate Johnston)
Date: Wed, 3 Oct 2018 09:31:22 -0400
Subject: [openstack-dev] [neutron][stable] Stable Core Team Update
Message-ID: <20181003133122.tqrn5b7sbbviutbu@bishop>

On Tue, Oct 02, 2018 at 10:41:58AM -0500, Miguel Lavalle wrote:
> I want to nominate Bernard Cafarelli as a stable core reviewer for Neutron
> and related projects. Bernard has been increasing the number of stable
> reviews he is doing for the project [1]. Besides that, he is a stable
> maintainer downstream for his employer (Red Hat), so he can bring that
> valuable experience to the Neutron stable team.

I'm not on the stable team, but an enthusiastic +1 from me!

Nate

From mriedemos at gmail.com  Wed Oct 3 13:42:41 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 3 Oct 2018 08:42:41 -0500
Subject: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch
In-Reply-To: <3cda96c8-adfe-e9ff-be17-1cba08e6e63f@gmail.com>

On 10/3/2018 7:58 AM, Doug Hellmann wrote:
> There is one more patch to import the zuul configuration for the
> heat-agents repository's stable/ocata branch. That branch is apparently
> broken, and Zane suggested on the review [1] that we abandon the patch
> and close the branch.
> [...]
> [1] https://review.openstack.org/#/c/597272/

I'm assuming heat-agents is a service, not a library, since it doesn't show up in upper-constraints. Based on that, does heat itself plan on putting its stable/ocata branch into extended maintenance mode, and if so, does that mean EOLing the heat-agents stable/ocata branch could cause problems for the heat stable/ocata branch? In other words, will it be reasonable to run CI for stable/ocata heat changes against a heat-agents ocata-eol tag?
--
Thanks,

Matt

From doug at doughellmann.com  Wed Oct 3 13:47:24 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 03 Oct 2018 09:47:24 -0400
Subject: [openstack-dev] [Release-job-failures][neutron] Release of openstack/networking-bigswitch failed

zuul at openstack.org writes:

> Build failed.
>
> - release-openstack-python http://logs.openstack.org/d1/d1df2f75e0e8259ddaaf5136f421567733ba7f5b/release/release-openstack-python/8a89663/ : FAILURE in 6m 19s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

It looks like there's something wrong with the versioning in the stable/queens branch of the networking-bigswitch repository.

The error I see when I run "python setup.py --name" locally is:

    ValueError: git history requires a target version of
    pbr.version.SemanticVersion(13.0.1), but target version is
    pbr.version.SemanticVersion(12.0.5)

This is being caused by re-tagging the 13.0.1 release as 12.0.5.

I think if we tag a *newer* commit as 12.0.6 that will work (it seems to work locally).

Doug

From wolverine.av at gmail.com  Wed Oct 3 14:31:19 2018
From: wolverine.av at gmail.com (AdityaVaja)
Date: Wed, 3 Oct 2018 22:31:19 +0800
Subject: [openstack-dev] [Release-job-failures][neutron] Release of openstack/networking-bigswitch failed

Hello Doug,

I was going to send out an email or drop in on IRC to check how to fix that. We accidentally pushed 13.0.1 on stable/queens, instead of 12.0.5 (13.x.x is Rocky and 12.x.x is Queens).

Tried reverting the situation by pushing 12.0.5 for the same commit hash on queens, but that didn't work. Releases with commit hashes can be compared here [1].

Deleting the release from PyPI can be done, but deleting a tag from Gerrit is not possible without allowing forcePush from project-config, AFAIK. That seems like an extreme step.

Not sure how to fix the situation, so I thought I'd check with openstack-release to get an idea. If your original suggestion of pushing 12.0.6 still stands, I can go ahead and do that.

Thanks!

[1] https://github.com/openstack/networking-bigswitch/releases

On Wed, Oct 3, 2018 at 9:47 PM Doug Hellmann wrote:
> [...]

--
via telegram
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
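A hedged sketch of what Doug's suggestion might look like from a release checkout; the commit reference is a placeholder, and in practice OpenStack deliverables are normally tagged via the openstack/releases repository rather than by hand:

    # Tag a commit newer than the mis-tagged 13.0.1 so that pbr sees a
    # monotonically increasing version on stable/queens:
    git checkout stable/queens
    git tag -s 12.0.6 -m "networking-bigswitch 12.0.6" <newer-commit-sha>
    git push gerrit 12.0.6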
From miguel at mlavalle.com  Wed Oct 3 14:38:41 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 3 Oct 2018 09:38:41 -0500
Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue

Thomas, Miguel,

The next step is just to push a patch for review with the definition of your job. We can discuss the details in Gerrit.

Cheers

On Wed, Oct 3, 2018 at 4:39 AM Miguel Angel Ajo Pelayo wrote:
> That's fantastic,
>
> I believe we could add some of the networking-ovn jobs; we need to
> decide which one would be more beneficial.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jungleboyj at gmail.com  Wed Oct 3 14:43:10 2018
From: jungleboyj at gmail.com (Jay S. Bryant)
Date: Wed, 3 Oct 2018 09:43:10 -0500
Subject: [openstack-dev] [cinder] Follow-up on core team changes ...
Message-ID: <88e9cb04-b74c-7fb2-7db0-f10a2446fe51@gmail.com>

Team,

I wanted to follow up on the note I sent a week or so ago about changes to the Core team. I talked to Winston-D (Huang Zhiteng) and it sounded like he would not be able to take a more active role. There were no other objections, so I am removing him from the Core list.

John Griffith indicated an interest in staying on and thinks that he will be able to get more time for Cinder. As a result we have decided to keep him on.

This leaves Cinder with 9 people on the core team.

Thanks!
Jay
(jungleboyj)

From jungleboyj at gmail.com  Wed Oct 3 14:45:25 2018
From: jungleboyj at gmail.com (Jay S. Bryant)
Date: Wed, 3 Oct 2018 09:45:25 -0500
Subject: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

Team,

We had discussed the possibility of adding Gorka to the stable core team during the PTG. He does review a number of our backport patches and is active in that area.

If there are no objections in the next week I will add him to the list.

Thanks!
Jay
(jungleboyj)

From e0ne at e0ne.info  Wed Oct 3 14:50:55 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 3 Oct 2018 17:50:55 +0300
Subject: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

+1 from me to Gorka!

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Oct 3, 2018 at 5:47 PM Jay S. Bryant wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From eharney at redhat.com Wed Oct 3 15:03:46 2018 From: eharney at redhat.com (Eric Harney) Date: Wed, 3 Oct 2018 11:03:46 -0400 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: <8da94dd5-5411-d4ea-cfb7-36e4de9e55e9@redhat.com> On 9/27/18 1:36 PM, Markus Hentsch wrote: > Dear OpenStack developers, > > we would like to propose the introduction of an encrypted image format > in OpenStack. We already created a basic implementation involving Nova, > Cinder, OSC and Glance, which we'd like to contribute. > > We originally created a full spec document but since the official > cross-project contribution workflow in OpenStack is a thing of the past, > we have no single repository to upload it to. Thus, the Glance team > advised us to post this on the mailing list [1]. > > Ironically, Glance is the least affected project since the image > transformation processes affected are taking place elsewhere (Nova and > Cinder mostly). > > Below you'll find the most important parts of our spec that describe our > proposal - which our current implementation is based on. We'd love to > hear your feedback on the topic and would like to encourage all affected > projects to join the discussion. > > Subsequently, we'd like to receive further instructions on how we may > contribute to all of the affected projects in the most effective and > collaborative way possible. The Glance team suggested starting with a > complete spec in the glance-specs repository, followed by individual > specs/blueprints for the remaining projects [1]. Would that be alright > for the other teams? > > [1] > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html > > Best regards, > Markus Hentsch > > (excerpts from our image encryption spec below) > > Problem description > =================== > > An image, when uploaded to Glance or being created through Nova from an > existing server (VM), may contain sensitive information. The already > provided signature functionality only protects images against > alteration. Images may be stored on several hosts over long periods of > time. First and foremost this includes the image storage hosts of Glance > itself. Furthermore it might also involve caches on systems like compute > hosts. In conclusion they are exposed to a multitude of potential > scenarios involving different hosts with different access patterns and > attack surfaces. The OpenStack components involved in those scenarios do > not protect the confidentiality of image data. That’s why we propose the > introduction of an encrypted image format. > > Use Cases > --------- > > * A user wants to upload an image, which includes sensitive information. > To ensure the integrity of the image, a signature can be generated and > used for verification. Additionally, the user wants to protect the > confidentiality of the image data through encryption. The user generates > or uploads a key in the key manager (e.g. Barbican) and uses it to > encrypt the image locally using the OpenStack client (osc) when > uploading it. Consequently, the image stored on the Glance host is > encrypted. > > * A user wants to create an image from an existing server with ephemeral > storage. This server may contain sensitive user data. The corresponding > compute host then generates the image based on the data of the ephemeral > storage disk. 
To protect the confidentiality of the data within the
> image, the user wants Nova to also encrypt the image using a key from
> the key manager, specified by its secret ID. Consequently, the image
> stored on the Glance host is encrypted.
>
> * A user wants to create a new server or volume based on an encrypted
> image created by any of the use cases described above. The corresponding
> compute or volume host has to be able to decrypt the image using the
> symmetric key stored in the key manager and transform it into the
> requested resource (server disk or volume).
>
> Although not required on a technical level, all of the use cases
> described above assume the usage of encrypted volume types and encrypted
> ephemeral storage as provided by OpenStack.
>
>
> Proposed changes
> ================
>
> * Glance: Adding a container type for encrypted images that supports
> different mechanisms (format, cipher algorithms, secret ID) via a
> metadata property. Whether to introduce several container types or to
> outsource the mechanism definition into metadata properties may still
> be up for discussion, although we do favor the latter.
>
> * Nova: Adding support for decrypting an encrypted image when a server's
> ephemeral disk is created. This includes direct decryption streaming for
> encrypted disks. Nova should select a suitable mechanism according to
> the image container type and metadata. The symmetric key will be
> retrieved from the key manager (e.g. Barbican).
>
> * Cinder: Adding support for decrypting an encrypted image when a volume
> is created from it. Cinder should select a suitable mechanism according
> to the image container type and metadata. The symmetric key will be
> retrieved from the key manager (e.g. Barbican).

Are you aware of the existing Cinder support for similar functionality?

When encrypted volumes are uploaded to Glance images from Cinder,
encryption keys are cloned in Barbican and tied to Glance images as
metadata.  Then, volumes created from those images can consume the
Barbican key to use the volume.

The keys I mention here are used for the LUKS encryption layer -- which
is different from what you are proposing.  But I'd like to point it out
to make sure that the interaction between the two different encryption
methods is understood when designing this and considering use cases.

(Note: there is still at least one big TODO pending around this
functionality, namely that the Glance service doesn't know to remove a
key from Barbican when the image is deleted from Glance.)

> * OpenStack Client / SDK: Adding support for encrypting images using a
> secret ID which references the symmetric key in the key manager (e.g.
> Barbican). This also involves new CLI arguments to specify the secret ID
> and encryption method.
>
> We propose to use an implementation of symmetric AES 256 encryption
> provided by GnuPG as a basic mechanism supported by this draft. It is a
> well-established implementation of PGP and supports streamable
> encryption/decryption processes, which is important as illustrated below.
>
> We also explored the possibility of using more elaborate and dynamic
> approaches like PKCS#7 (CMS) but ultimately failed to find a free
> open-source implementation (e.g. OpenSSL) that supports streamable
> decryption of CMS-wrapped encrypted data. More precisely, no
> implementation we tested was able to decrypt a symmetrically encrypted,
> CMS-wrapped container without trying to completely load it into memory
> or suffering from other limitations regarding big files.
>
> We require the streamability of the encryption/decryption mechanism for
> two reasons:
>
> 1. Loading entire images into the memory of compute hosts or a user's
> system is unacceptable.
>
> 2. We propose direct decryption-streaming into the target storage (e.g.
> encrypted volume) to prevent the creation of temporary unencrypted files.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From gcerami at redhat.com  Wed Oct  3 15:08:41 2018
From: gcerami at redhat.com (Gabriele Cerami)
Date: Wed, 3 Oct 2018 16:08:41 +0100
Subject: [openstack-dev] [TripleO] Promotion jobs migration to zuulv3-aware workflow
Message-ID: <20181003150841.nnoigmccwlulrqko@localhost>

Hi,

as part of the process to migrate all the jobs to use a workflow that
takes advantage of zuulv3, we need to migrate the jobs that form the
promotion pipeline in rdo sf.

There are a total of 95 jobs that need migration in rdo sf. Of these, 55
are in the various promotion pipelines for all the branches. We already
tested some of them, sampling the various branch/topology configurations,
and we had good results.

We tried to plan a soft migration, but it would take ages to finish by
migrating one job at a time and being sure everything is working 100%
before moving to the next. Since we are in a good spot with promotion (we
just had a promotion) our next plan is to bulk migrate all the promotion
jobs at the same time, and spend a few days dealing with the consequences.

Tomorrow at 13 UTC, we'll pull the trigger on the migration. From that
moment, expect some delays in the promotions, as we'll try to fix anything
that we missed in our previous tests in the following days. We don't
expect the fixes to take more than one week, and we are ready to revert
to the previous configuration if the need arises.

Thanks

From bodenvmw at gmail.com  Wed Oct  3 15:43:25 2018
From: bodenvmw at gmail.com (Boden Russell)
Date: Wed, 3 Oct 2018 09:43:25 -0600
Subject: [openstack-dev] [neutron] Please opt-in for neutron-lib patches
Message-ID: <238b8195-cb07-fb9c-a0ad-c66c19bfefba@gmail.com>

Just a friendly reminder that networking projects now need to opt-in for
neutron-lib consumption patches [1].

Starting next week (October 8) I'd like to start basing consumption
patches on those projects that have opted-in. If there are exceptions
please let me know so we can track them accordingly.

Thanks

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135063.html

From markus.hentsch at secustack.com  Wed Oct  3 15:42:39 2018
From: markus.hentsch at secustack.com (Markus Hentsch)
Date: Wed, 3 Oct 2018 17:42:39 +0200
Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)
In-Reply-To: <8da94dd5-5411-d4ea-cfb7-36e4de9e55e9@redhat.com>
References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> <8da94dd5-5411-d4ea-cfb7-36e4de9e55e9@redhat.com>
Message-ID: <91b8b9a8-70e1-93d1-0173-2a90b6c88161@secustack.com>

Hello Eric,

Eric Harney wrote:
>
> Are you aware of the existing Cinder support for similar functionality?
>
> When encrypted volumes are uploaded to Glance images from Cinder,
> encryption keys are cloned in Barbican and tied to Glance images as
> metadata.  Then, volumes created from those images can consume the
> Barbican key to use the volume.
>
> The keys I mention here are used for the LUKS encryption layer -- which
> is different from what you are proposing.  But I'd like to point it out
> to make sure that the interaction between the two different encryption
> methods is understood when designing this and considering use cases.
>
> (Note: there is still at least one big TODO pending around this
> functionality, namely that the Glance service doesn't know to remove a
> key from Barbican when the image is deleted from Glance.)
>

We are aware of this and plan to integrate this existing case with our
new approach. This would mean that the container_format property of such
an image - which currently is simply 'bare' iirc - would be changed to
the new one for encrypted images and its metadata adjusted to contain
appropriate information about the encryption used. Such metadata would
indicate the Cinder-specific encryption in this case and differentiate
it from our general encryption mechanism.

Regarding the key deletion: that's an interesting point actually.
Although OpenStack itself automatically creates and deletes keys for the
encryption of volumes and ephemeral storage, we didn't plan to do that in
our current proposal for image encryption. As our quoted use cases
describe, the user is expected to upload or order a key in Barbican
beforehand. This means the key management (including deletion) for
encrypted images was meant to be in the hands of the user. I wonder,
would this be undesired from the OpenStack perspective?

Best regards,
Markus

From jlabarre at redhat.com  Wed Oct  3 16:09:16 2018
From: jlabarre at redhat.com (James LaBarre)
Date: Wed, 3 Oct 2018 12:09:16 -0400
Subject: [openstack-dev] [ironic] Tenks
In-Reply-To: 
References: 
Message-ID: <93f279e1-24e3-3668-7f10-d9c795d00327@redhat.com>

On 10/2/18 10:37 AM, Mark Goddard wrote:
>
>
> On Tue, 2 Oct 2018 at 14:03, Jay Pipes > wrote:
>
>     On 10/02/2018 08:58 AM, Mark Goddard wrote:
>     > Tenks is a project for managing 'virtual bare metal clusters'. It aims
>     > to be a drop-in replacement for the various scripts and templates that
>     > exist in the Ironic devstack plugin for creating VMs to act as bare
>     > metal nodes in development and test environments. Similar code exists in
>     > Bifrost and TripleO, and probably other places too. By focusing on one
>     > project, we can ensure that it works well, and provides all the features
>     > necessary as support for bare metal in the cloud evolves.
>
>     How does Tenks relate to OVB?
>
>     https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html
>
>
> Good question. As far as I'm aware, OVB is a tool for using an OpenStack
> cloud to host the virtual bare metal nodes, and is typically used for
> testing TripleO. Tenks does not rule out supporting this use case in
> future, but currently operates more like the Ironic devstack plugin, using
> libvirt/KVM/QEMU as the virtualisation provider.

I'm presuming, as Tenks is supposed to support multiple hypervisors, that
a multi-arch environment would be supported (different node types on
different architectures).  Or does this even enter into the consideration?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openstack at fried.cc  Wed Oct  3 16:16:16 2018
From: openstack at fried.cc (Eric Fried)
Date: Wed, 3 Oct 2018 11:16:16 -0500
Subject: [openstack-dev] [neutron] Please opt-in for neutron-lib patches
In-Reply-To: <238b8195-cb07-fb9c-a0ad-c66c19bfefba@gmail.com>
References: <238b8195-cb07-fb9c-a0ad-c66c19bfefba@gmail.com>
Message-ID: 

Hi Boden. Love this initiative.

We would like networking-powervm to be included, and have proposed [5],
but are wondering why we weren't picked up in [6]. Your email [1] says

"If your project isn't in [3][4],
but you think it should be; it maybe missing a recent neutron-lib
version in your requirements.txt."

What's "recent"? I see the latest (per the requirements project) is
1.19.0 and we have 1.18.0. Should we bump?

Thanks,
efried

[5] https://review.openstack.org/#/c/607625/
[6] https://etherpad.openstack.org/p/neutron-sibling-setup

On 10/03/2018 10:43 AM, Boden Russell wrote:
> Just a friendly reminder that networking projects now need to opt-in for
> neutron-lib consumption patches [1].
>
> Starting next week (October 8) I'd like to start basing consumption
> patches on those projects that have opted-in. If there are exceptions
> please let me know so we can track them accordingly.
>
> Thanks
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/135063.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From mark at stackhpc.com  Wed Oct  3 16:17:09 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 3 Oct 2018 17:17:09 +0100
Subject: [openstack-dev] [ironic] Tenks
In-Reply-To: <93f279e1-24e3-3668-7f10-d9c795d00327@redhat.com>
References: <93f279e1-24e3-3668-7f10-d9c795d00327@redhat.com>
Message-ID: 

On Wed, 3 Oct 2018 at 17:10, James LaBarre wrote:

> On 10/2/18 10:37 AM, Mark Goddard wrote:
>
>
> On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote:
>
>> On 10/02/2018 08:58 AM, Mark Goddard wrote:
>> > Tenks is a project for managing 'virtual bare metal clusters'. It aims
>> > to be a drop-in replacement for the various scripts and templates that
>> > exist in the Ironic devstack plugin for creating VMs to act as bare
>> > metal nodes in development and test environments. Similar code exists
>> in
>> > Bifrost and TripleO, and probably other places too. By focusing on one
>> > project, we can ensure that it works well, and provides all the
>> features
>> > necessary as support for bare metal in the cloud evolves.
>>
>> How does Tenks relate to OVB?
>>
>> https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html
>
>
> Good question. As far as I'm aware, OVB is a tool for using an OpenStack
> cloud to host the virtual bare metal nodes, and is typically used for
> testing TripleO. Tenks does not rule out supporting this use case in
> future, but currently operates more like the Ironic devstack plugin, using
> libvirt/KVM/QEMU as the virtualisation provider.
>
>
> I'm presuming, as Tenks is supposed to support multiple hypervisors, that
> a multi-arch environment would be supported (different node types on
> different architectures).  Or does this even enter into the consideration?
>

I think that would be a good feature to consider in future, although it's
not something that works currently.
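For reference, Tenks describes the virtual cluster it should build in a
small YAML overrides file of node types and specs. A rough sketch, with
the caveat that the attribute names here are assumptions and the real
schema should be taken from the Tenks documentation:

    # Hypothetical Tenks overrides: two identical VMs to act as bare
    # metal nodes; attribute names are illustrative.
    node_types:
      type0:
        memory_mb: 1024
        vcpus: 2
        volumes:
          - capacity: 10GiB
    specs:
      - type: type0
        count: 2
        ironic_config:
          resource_class: test-rc

A multi-arch environment like the one James asks about could then, in
principle, be expressed as one node type per architecture.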
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Oct 3 16:21:03 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 3 Oct 2018 12:21:03 -0400 Subject: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch In-Reply-To: <3cda96c8-adfe-e9ff-be17-1cba08e6e63f@gmail.com> References: <3cda96c8-adfe-e9ff-be17-1cba08e6e63f@gmail.com> Message-ID: On 3/10/18 9:42 AM, Matt Riedemann wrote: > On 10/3/2018 7:58 AM, Doug Hellmann wrote: >> There is one more patch to import the zuul configuration for the >> heat-agents repository's stable/ocata branch. That branch is apparently >> broken, and Zane suggested on the review [1] that we abandon the patch >> and close the branch. >> >> That patch is the only thing blocking the cleanup patch in >> project-config, so I would like to get a definitive answer about what to >> do. Should we close the branch, or does someone want to try to fix >> things up? I think we agreed on closing the branch, and Rico was looking into the procedure for how to actually do that. >> Doug >> >> [1]https://review.openstack.org/#/c/597272/ > > I'm assuming heat-agents is a service, not a library, since it doesn't > show up in upper-constraints. It's a guest agent, so neither :) > Based on that, does heat itself plan on > putting its stable/ocata branch into extended maintenance mode and if Wearing my Red Hat hat, I would be happy to EOL it. But wearing my upstream hat, I'm happy to keep maintaining it, and I was not proposing that we EOL heat's stable/ocata as well. > so, does that mean EOLing the heat-agents stable/ocata branch could > cause problems for the heat stable/ocata branch? In other words, will it > be reasonable to run CI for stable/ocata heat changes against a > heat-agents ocata-eol tag? I don't think that's a problem. The guest agents rarely change, and I don't think there's ever been a patch backported by 4 releases. cheers, Zane. From sean.mcginnis at gmx.com Wed Oct 3 16:26:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 3 Oct 2018 11:26:17 -0500 Subject: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ... In-Reply-To: References: Message-ID: <20181003162616.GA4424@sm-workstation> On Wed, Oct 03, 2018 at 09:45:25AM -0500, Jay S. Bryant wrote: > Team, > > We had discussed the possibility of adding Gorka to the stable core team > during the PTG.  He does review a number of our backport patches and is > active in that area. > > If there are no objections in the next week I will add him to the list. > > Thanks! > > Jay (jungleboyj) > +1 from me. Gorka has shown to understand the stable policies and I think his coming from a company that has a vested interest in stable backports would make him a good candidate for stable core. 
From whayutin at redhat.com Wed Oct 3 16:26:50 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 3 Oct 2018 10:26:50 -0600 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> Message-ID: On Fri, Sep 28, 2018 at 3:02 PM Matt Riedemann wrote: > On 9/28/2018 3:12 PM, Clark Boylan wrote: > > I was asked to write a followup to this as the long Zuul queues have > persisted through this week. Largely because the situation from last week > hasn't changed much. We were down the upgraded cloud region while we worked > around a network configuration bug, then once that was addressed we ran > into neutron port assignment and deletion issues. We think these are both > fixed and we are running in this region again as of today. > > > > Other good news is our classification rate is up significantly. We can > use that information to go through the top identified gate bugs: > > > > Network Connectivity issues to test nodes [2]. This is the current top > of the list, but I think its impact is relatively small. What is happening > here is jobs fail to connect to their test nodes early in the pre-run > playbook and then fail. Zuul will rerun these jobs for us because they > failed in the pre-run step. Prior to zuulv3 we had nodepool run a ready > script before marking test nodes as ready, this script would've caught and > filtered out these broken network nodes early. We now notice them late > during the pre-run of a job. > > > > Pip fails to find distribution for package [3]. Earlier in the week we > had the in region mirror fail in two different regions for unrelated > errors. These mirrors were fixed and the only other hits for this bug come > from Ara which tried to install the 'black' package on python3.5 but this > package requires python>=3.6. > > > > yum, no more mirrors to try [4]. At first glance this appears to be an > infrastructure issue because the mirror isn't serving content to yum. On > further investigation it turned out to be a DNS resolution issue caused by > the installation of designate in the tripleo jobs. Tripleo is aware of this > issue and working to correct it. > > > > Stackviz failing on py3 [5]. This is a real bug in stackviz caused by > subunit data being binary not utf8 encoded strings. I've written a fix for > this problem athttps://review.openstack.org/606184, but in doing so found > that this was a known issue back in March and there was already a proposed > fix,https://review.openstack.org/#/c/555388/3. It would be helpful if the > QA team could care for this project and get a fix in. Otherwise, we should > consider disabling stackviz on our tempest jobs (though the output from > stackviz is often useful). > > > > There are other bugs being tracked by e-r. Some are bugs in the > openstack software and I'm sure some are also bugs in the infrastructure. I > have not yet had the time to work through the others though. It would be > helpful if project teams could prioritize the debugging and fixing of these > issues though. > > > > [2]http://status.openstack.org/elastic-recheck/gate.html#1793370 > > [3]http://status.openstack.org/elastic-recheck/gate.html#1449136 > > [4]http://status.openstack.org/elastic-recheck/gate.html#1708704 > > [5]http://status.openstack.org/elastic-recheck/gate.html#1758054 > > Thanks for the update Clark. 
>
> Another thing this week is the logstash indexing is behind by at least
> half a day. That's because workers were hitting OOM errors due to giant
> screen log files that aren't formatted properly so that we only index
> INFO+ level logs, and were instead trying to index the entire file,
> some of which are 33MB *compressed*. So indexing of those identified
> problematic screen logs has been disabled:
>
> https://review.openstack.org/#/c/606197/
>
> I've reported bugs against each related project.
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Greetings Clark and all,

The TripleO team would like to announce a significant change to the
upstream CI the project has in place today.

TripleO can at times consume a large share of the compute resources [1]
provided by the OpenStack upstream infrastructure team and OpenStack
providers. The TripleO project has a large code base and a high velocity
of change, which alone can tax the upstream CI system [3]. Additionally,
like other projects, the issue is particularly acute when gate jobs are
reset at a high rate. Unlike most other projects in OpenStack, TripleO
uses multiple nodepool nodes in each job to more closely emulate
customer-like deployments. While using multiple nodes per job helps to
uncover bugs that are not found in other projects, the resources used,
the run time of each job, and usability have proven to be challenging.
It has been a challenge to maintain job run times, quality and usability
for TripleO and a challenge for the infra team to provide the required
compute resources for the project. A simplification of our upstream
deployments to check and gate changes is in order.

The TripleO project has created a single node container based composable
OpenStack deployment [2]. It is the project's intention to replace most
of the TripleO upstream jobs with the Standalone deployment. We would
like to reduce our multi-node usage to a total of two or three multinode
jobs to handle a basic overcloud deployment, updates and upgrades.
Currently in master we are relying on multiple multi-node scenario jobs
to test many of the OpenStack services in a single job. Our intention is
to move these multinode scenario jobs to single node job(s) that test a
smaller subset of services. The goal of this would be to target the
specific areas of the TripleO code base that affect these services and
only run those there. This would replace the existing 2-3 hour two node
job(s) with single node job(s) for specific services that complete in
about half the time. This unfortunately will reduce the overall coverage
upstream but still allows us a basic smoke test of the supported
OpenStack services and their deployment upstream.

Ideally projects other than TripleO would make use of the Standalone
deployment to test their particular service with containers, upgrades or
for various other reasons. Additional projects using this deployment
would help ensure bugs are found quickly and resolved, providing
additional resilience to the upstream gate jobs. The TripleO team will
begin review to scope out and create estimates for the above work
starting on October 18, 2018. One should expect to see updates on our
progress posted to the list. Below are some details on the proposed
changes.
Thank you all for your time and patience!

Performance improvements:
* Standalone jobs use half the nodes of multinode jobs
* The standalone job has an average run time of 60-80 minutes, about half
the run time of our multinode jobs

Base TripleO Job Definitions (Stein onwards):

Multi-node jobs
* containers-multinode
* containers-multinode-updates
* containers-multinode-upgrades

Single node jobs
* undercloud
* undercloud-upgrade
* standalone

Jobs to be removed (Stein onwards):

Multi-node jobs
* scenario001-multinode
* scenario002-multinode
* scenario003-multinode
* scenario004-multinode
* scenario006-multinode
* scenario007-multinode
* scenario008-multinode
* scenario009-multinode
* scenario010-multinode
* scenario011-multinode

Jobs that may need to be created to cover additional services[4] (Stein
onwards):

Single node jobs
* standalone-barbican
* standalone-ceph
* standalone-designate
* standalone-manila
* standalone-octavia
* standalone-openshift
* standalone-sahara
* standalone-telemetry

[1] https://gist.github.com/notmyname/8bf3dbcb7195250eb76f2a1a8996fb00
[2] https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134867.html
[4] https://github.com/openstack/tripleo-heat-templates/blob/master/README.rst#service-testing-matrix

--

Wes Hayutin

Associate MANAGER

Red Hat

whayutin at redhat.com    T: +1919 <+19197544114>4232509     IRC:  weshay

View my calendar and check my availability for meetings HERE

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com  Wed Oct  3 16:28:29 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 3 Oct 2018 11:28:29 -0500
Subject: [openstack-dev] [nova] We should fail to boot a server if PF passthrough is requested and we don't honor it, right?
Message-ID: <3ddbcdb8-3a9c-925a-eb95-caa4e773189f@gmail.com>

I came across [1] today while triaging a bug [2]. Unless I'm mistaken,
the user has requested SR-IOV PF passthrough for their server and for
whatever reason we can't find the PCI device for the PF passthrough port
so we don't reflect the actual device MAC address on the port.

Is that worth stopping the server create? Or is logging an ERROR enough
here? The reason being we get an IndexError here [3].

Ultimately if we found a PCI device but it's not whitelisted, we'll raise
an exception anyway when building the port binding profile [4]. So is it
reasonable to just raise PciDeviceNotFound whenever we can't find a PCI
device on a compute host given a pci_request_id? In other words, it seems
something failed earlier during scheduling and/or the PCI device resource
claim if we get this far and things are still messed up.
[1] https://github.com/openstack/nova/blob/237ced4737a28728408eb30c3d20c6b2c13b4a8d/nova/network/neutronv2/api.py#L1426 [2] https://bugs.launchpad.net/nova/+bug/1795064 [3] https://github.com/openstack/nova/blob/237ced4737a28728408eb30c3d20c6b2c13b4a8d/nova/network/neutronv2/api.py#L1404 [4] https://github.com/openstack/nova/blob/237ced4737a28728408eb30c3d20c6b2c13b4a8d/nova/network/neutronv2/api.py#L1393 -- Thanks, Matt From doug at doughellmann.com Wed Oct 3 16:28:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 03 Oct 2018 12:28:30 -0400 Subject: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch In-Reply-To: References: <3cda96c8-adfe-e9ff-be17-1cba08e6e63f@gmail.com> Message-ID: Zane Bitter writes: > On 3/10/18 9:42 AM, Matt Riedemann wrote: >> On 10/3/2018 7:58 AM, Doug Hellmann wrote: >>> There is one more patch to import the zuul configuration for the >>> heat-agents repository's stable/ocata branch. That branch is apparently >>> broken, and Zane suggested on the review [1] that we abandon the patch >>> and close the branch. >>> >>> That patch is the only thing blocking the cleanup patch in >>> project-config, so I would like to get a definitive answer about what to >>> do. Should we close the branch, or does someone want to try to fix >>> things up? > > I think we agreed on closing the branch, and Rico was looking into the > procedure for how to actually do that. OK. I am going to abandon the patch to import the zuul settings then. Abandoning all open patches is one of the early steps anyway, and doing that now for this 1 patch will allow us to proceed with the cleanup. Doug From alifshit at redhat.com Wed Oct 3 16:31:06 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 3 Oct 2018 12:31:06 -0400 Subject: [openstack-dev] State of NUMA live migration Message-ID: Yes, this is still happening. Mea culpa for not carrying the ball and maintaining visibility. There's work in nova to actually get it working, and in intel-nfv-ci to lay down the groundwork for eventual CI. In nova, the spec has been re-proposed for Stein [1]. There are some differences from the Rocky version, but based on what I've heard was discussed at Denver, it shouldn't be too controversial. There's a couple of nova patches up as well [2], but that's still pretty WIP given the changes in the spec. A bunch of patches from Rocky were abandoned because they're no longer applicable. In intel-nfv-ci, there's a whole stack of changes [3] that are mostly about technical debt and laying the groundwork to support multinode test environments, but there's also a WIP patch in there [4] that'll eventually actually test live migration. For now we have no upstream/public environment to run that on, so anyone who's involved will need their own env if they want to run the tests and/or play with the feature. Longer-term, I would like to have some form of upstream CI testing this, be it in the vanilla nodepool with nested virt and "fake" NUMA topologies, or a 3rd party CI with resources provided by an interested stakeholder. 
[1] https://review.openstack.org/#/c/599587/ [2] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/numa-aware-live-migration [3] https://review.openstack.org/#/c/576602/ [4] https://review.openstack.org/#/c/574871/6 From alifshit at redhat.com Wed Oct 3 16:37:29 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 3 Oct 2018 12:37:29 -0400 Subject: [openstack-dev] [nova] State of NUMA live migration In-Reply-To: References: Message-ID: > Yes, this is still happening. Mea culpa for not carrying the ball and > maintaining visibility. There's work in nova to actually get it > working, and in intel-nfv-ci to lay down the groundwork for eventual > CI. > > In nova, the spec has been re-proposed for Stein [1]. There are some > differences from the Rocky version, but based on what I've heard was > discussed at Denver, it shouldn't be too controversial. There's a > couple of nova patches up as well [2], but that's still pretty WIP > given the changes in the spec. A bunch of patches from Rocky were > abandoned because they're no longer applicable. > > In intel-nfv-ci, there's a whole stack of changes [3] that are mostly > about technical debt and laying the groundwork to support multinode > test environments, but there's also a WIP patch in there [4] that'll > eventually actually test live migration. > > For now we have no upstream/public environment to run that on, so > anyone who's involved will need their own env if they want to run the > tests and/or play with the feature. Longer-term, I would like to have > some form of upstream CI testing this, be it in the vanilla nodepool > with nested virt and "fake" NUMA topologies, or a 3rd party CI with > resources provided by an interested stakeholder. Forgot the nova tag :( > [1] https://review.openstack.org/#/c/599587/ > [2] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/numa-aware-live-migration > [3] https://review.openstack.org/#/c/576602/ > [4] https://review.openstack.org/#/c/574871/6 -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From ben at swartzlander.org Wed Oct 3 16:41:36 2018 From: ben at swartzlander.org (Ben Swartzlander) Date: Wed, 3 Oct 2018 12:41:36 -0400 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: <20181002175843.ik5mhqqz3hwqb42m@barron.net> References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> Message-ID: <640ef66f-443d-d962-11aa-26fca6f582df@swartzlander.org> On 10/02/2018 01:58 PM, Tom Barron wrote: > Amit Oren has contributed high quality reviews in the last couple of > cycles so I would like to nominated him for manila core. > > Please respond with your +1 or -1 votes.  We'll hold voting open for 7 > days. 
+1 > Thanks, > > -- Tom Barron (tbarron) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Oct 3 16:43:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 03 Oct 2018 12:43:25 -0400 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> Message-ID: Wesley Hayutin writes: [snip] > The TripleO project has created a single node container based composable > OpenStack deployment [2]. It is the projects intention to replace most of > the TripleO upstream jobs with the Standalone deployment. We would like to > reduce our multi-node usage to a total of two or three multinode jobs to > handle a basic overcloud deployment, updates and upgrades[a]. Currently in > master we are relying on multiple multi-node scenario jobs to test many of > the OpenStack services in a single job. Our intention is to move these > multinode scenario jobs to single node job(s) that tests a smaller subset > of services. The goal of this would be target the specific areas of the > TripleO code base that affect these services and only run those there. This > would replace the existing 2-3 hour two node job(s) with single node job(s) > for specific services that completes in about half the time. This > unfortunately will reduce the overall coverage upstream but still allows us > a basic smoke test of the supported OpenStack services and their deployment > upstream. > > Ideally projects other than TripleO would make use of the Standalone > deployment to test their particular service with containers, upgrades or > for various other reasons. Additional projects using this deployment would > help ensure bugs are found quickly and resolved providing additional > resilience to the upstream gate jobs. The TripleO team will begin review to > scope out and create estimates for the above work starting on October 18 > 2018. One should expect to see updates on our progress posted to the > list. Below are some details on the proposed changes. [snip] Thanks for all of the details, Wes. I know the current situation has been hurting the TripleO team as well, so I'm glad to see a good plan in place to address it. I look forward to seeing updates about the progress. Doug From bodenvmw at gmail.com Wed Oct 3 16:47:49 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 3 Oct 2018 10:47:49 -0600 Subject: [openstack-dev] [neutron] Please opt-in for neutron-lib patches In-Reply-To: References: <238b8195-cb07-fb9c-a0ad-c66c19bfefba@gmail.com> Message-ID: <32dfdbce-c133-b851-3faf-0522b706e612@gmail.com> On 10/3/18 10:16 AM, Eric Fried wrote: > We would like networking-powervm to be included, and have proposed [5], > but are wondering why we weren't picked up in [6]. Your email [1] says > > "If your project isn't in [3][4], > but you think it should be; it maybe missing a recent neutron-lib > version in your requirements.txt." > > What's "recent"? I see the latest (per the requirements project) is > 1.19.0 and we have 1.18.0. Should we bump? I must've accidentally missed powervm; thanks for following up. The fact that you're not using neutron-lib 1.19.0 has nothing to do with it; this was a user error. 
It's probably a good idea to move up to 1.19.0 once neutron does [1].
Typically I will propose the bump for all such projects, but I was
waiting for the list of opt-in consumers before doing so.

[1] https://review.openstack.org/#/c/605690

From gouthampravi at gmail.com  Wed Oct  3 16:51:37 2018
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Wed, 3 Oct 2018 09:51:37 -0700
Subject: [openstack-dev] [manila] nominating Amit Oren for manila core
In-Reply-To: <20181002175843.ik5mhqqz3hwqb42m@barron.net>
References: <20181002175843.ik5mhqqz3hwqb42m@barron.net>
Message-ID: 

+1

--
Goutham Pacha Ravi


On Tue, Oct 2, 2018 at 10:59 AM Tom Barron wrote:
>
> Amit Oren has contributed high quality reviews in the last couple of
> cycles so I would like to nominated him for manila core.
>
> Please respond with your +1 or -1 votes.  We'll hold voting open for 7
> days.
>
> Thanks,
>
> -- Tom Barron (tbarron)
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com  Wed Oct  3 17:44:06 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 03 Oct 2018 13:44:06 -0400
Subject: [openstack-dev] [Release-job-failures][neutron] Release of openstack/networking-bigswitch failed
In-Reply-To: 
References: 
Message-ID: 

AdityaVaja writes:

> Hello Doug,
>
> I was going to send out an email or drop in on IRC to check how to fix that.
>
> We accidentally pushed 13.0.1 on stable/queens, instead of 12.0.5. (13.x.x
> is rocky and 12.x.x is queens)
> Tried reverting the situation by pushing 12.0.5 for the same commit hash on
> queens, but that didn't work. Releases with the commit hash can be compared
> here [1].
>
> Deleting the release from PyPI can be done, but deleting a tag from gerrit
> is not possible without allowing forcePush from project-config - afaik. But
> that seems like an extreme step.
> Not sure how to fix the situation, so I thought I'd check with
> openstack-release to get an idea.
>
> If your original suggestion of pushing 12.0.6 still stands, I can go ahead
> and do that.

I think that pushing 12.0.6 to a point on that branch after the
12.0.5/13.0.1 commit will produce a new release with version 12.0.6 that
works correctly.

Unfortunately, you're right that we don't have a good way to remove the
old 13.0.1 release and tag. In the past we have just ignored that and
moved on, although I think in some cases we did remove the package from
PyPI to avoid having it installed accidentally. I don't know how many of
your users are using pip to install the packages you build, so I don't
know how important it will be for you to do that.

Doug

From juliaashleykreger at gmail.com  Wed Oct  3 17:55:43 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Wed, 3 Oct 2018 10:55:43 -0700
Subject: [openstack-dev] [ironic] Tenks
In-Reply-To: 
References: <93f279e1-24e3-3668-7f10-d9c795d00327@redhat.com> 
Message-ID: 

On Wed, Oct 3, 2018 at 9:17 AM Mark Goddard wrote:
>
>
> On Wed, 3 Oct 2018 at 17:10, James LaBarre wrote:
>
>> On 10/2/18 10:37 AM, Mark Goddard wrote:
>>
>>
>> On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote:
>>
>>> On 10/02/2018 08:58 AM, Mark Goddard wrote:
>>> > Tenks is a project for managing 'virtual bare metal clusters'.
It aims >>> > to be a drop-in replacement for the various scripts and templates that >>> > exist in the Ironic devstack plugin for creating VMs to act as bare >>> > metal nodes in development and test environments. Similar code exists >>> in >>> > Bifrost and TripleO, and probably other places too. By focusing on one >>> > project, we can ensure that it works well, and provides all the >>> features >>> > necessary as support for bare metal in the cloud evolves. >>> >>> How does Tenks relate to OVB? >>> >>> >>> https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html >> >> >> Good question. As far as I'm aware, OVB is a tool for using an OpenStack >> cloud to host the virtual bare metal nodes, and is typically used for >> testing TripleO. Tenks does not rule out supporting this use case in >> future, but currently operates more like the Ironic devstack plugin, using >> libvirt/KVM/QEMU as the virtualisation provider. >> >> >> I'm presuming, as Tenks is supposed to support multiple hypervisors, that >> a multi-arch environment would be supported (different node types on >> different architectures). Or does this even enter into the consideration? >> > > I think that would be a good feature to consider in future, although it's > not something that works currently. > I think it would be a very important thing to have as we branch out into different architectures. I would personally love to see an ARM64 CI job. To actually put that job in place would mean major changes to VM creation which would be beneficial to put in one place. Long story short, I'm all for additional spoons. > __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Oct 3 19:03:06 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 3 Oct 2018 19:03:06 +0000 Subject: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing Message-ID: <12853469-784F-4606-92CD-F94A14FE819B@windriver.com> Glance Team, I am following up on discussions in the edge-computing PTG meetings. There were discussions on potential enhancements to Glance API Caching for support of the proposed MVP Edge Architecture. And I took the action to write up a blueprint and a specification for these enhancements ... and will follow up with implementation. I thought I’d start the discussions on the mailing list ... and if everyone is still in agreement, then I’ll move the high-level definition/requirements to Glance’s Launchpad (or Storyboard?) and write up a Glance spec and review it in Gerrit. THE PROPOSAL: Enhance Glance API Caching such that a) It works between two Glance Services (i.e. Glance at a Central OpenStack Cloud and Glance at an Edge OpenStack Cloud) · i.e. current Glance API Caching only works with external webservers b) Enable the Edge Cloud Glance API Caching to support the ability to locally use those cached images (e.g. nova boot ...) 
even when network connectivity is lost to the Central Cloud Glance Service

· i.e. image meta-data is required in order to service a 'nova boot ...',
and today image meta-data is NOT cached by Glance API Caching.

The proposed solution should generically work between any two Glance
Services.

e.g.

· in a Multi-Region Environment,

· in a StarlingX Distributed Cloud,

· etc.

The proposed solution need only deal with the Edge Cloud Glance talking to
a single Central Cloud Glance.

Let me know if you have any questions or comments,

Greg.

p.s.

*Background:*

More info on the edge-computing group's MVP Edge Architecture can be found
here:

https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0

and

https://docs.google.com/document/d/1Mq6bSm_lES56S4gygEuhmMbEeCI2nBFl_U5vl50wih8/edit?ts=5baa654e#heading=h.ncllqse6iw0u

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dschoenb at redhat.com  Wed Oct  3 19:11:50 2018
From: dschoenb at redhat.com (Dustin Schoenbrun)
Date: Wed, 3 Oct 2018 15:11:50 -0400
Subject: [openstack-dev] [manila] nominating Amit Oren for manila core
In-Reply-To: 
References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> 
Message-ID: 

+1

---
Dustin Schoenbrun
Senior OpenStack Quality Engineer
Red Hat, Inc.
dschoenb at redhat.com

On Wed, Oct 3, 2018 at 12:52 PM Goutham Pacha Ravi wrote:

> +1
>
> --
> Goutham Pacha Ravi
>
>
> On Tue, Oct 2, 2018 at 10:59 AM Tom Barron wrote:
> >
> > Amit Oren has contributed high quality reviews in the last couple of
> > cycles so I would like to nominated him for manila core.
> >
> > Please respond with your +1 or -1 votes.  We'll hold voting open for 7
> > days.
> >
> > Thanks,
> >
> > -- Tom Barron (tbarron)
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com  Wed Oct  3 19:50:07 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 03 Oct 2018 15:50:07 -0400
Subject: [openstack-dev] [goals][python3][barbican][monasca][neutron][charms][ansible][telemetry][trove][stable] completing zuul settings migration
Message-ID: 

We have 7 teams still working on the zuul setting migrations as part of
the python3-first goal [1]. Quite a few of the changes are on stable
branches, so will need attention from the stable teams for each project.

If you need help reviewing those stable branch patches, please speak up
so we can find some folks on the stable team to provide +2 votes.

Some of the patches are failing the test jobs, probably because of
bitrot on older stable branches. Those may need extra attention from
project team members to fix, depending on the nature of the problems.

I would like to have this portion of the work done by the first
milestone, and I think we're close enough to do it. Please look through
the list of patches for your project and ensure they are in your review
queues.
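For reviewers who have not seen these patches yet, an import patch of
this kind mostly just lands the project's job settings in the branch's
in-repo zuul configuration. A hedged sketch (the template names below are
placeholders; the real ones come from the project-config export for each
project and branch):

    # Illustrative .zuul.yaml content carried by a settings-import patch
    # on a stable branch; actual templates vary per project and branch.
    - project:
        templates:
          - openstack-python-jobs
          - openstack-python35-jobs
          - check-requirements
          - publish-openstack-sphinx-docs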
Thanks, Doug [1] https://review.openstack.org/#/q/topic:python3-first+is:open+message:%22import+zuul+job+settings+from+project-config%22 From melwittt at gmail.com Wed Oct 3 20:19:53 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 3 Oct 2018 13:19:53 -0700 Subject: [openstack-dev] [nova] team photos from the Stein PTG Message-ID: <99b37749-ea2a-6d54-d5c7-049b485e9b76@gmail.com> Hey all, Sorry for not posting this earlier but here's a direct link to our photo folder on dropbox for the Stein PTG team photos: https://www.dropbox.com/sh/2pmvfkstudih2wf/AAB--3TRAFaU2qN7GKDj_eZha/Nova?dl=0&subfolder_nav_tracking=1 You can view and download them from there ^. I think they came out really nice and the funny ones gave me a chuckle. :) Cheers, -melanie From mriedemos at gmail.com Wed Oct 3 21:14:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 3 Oct 2018 16:14:51 -0500 Subject: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ... In-Reply-To: References: Message-ID: On 10/3/2018 9:45 AM, Jay S. Bryant wrote: > Team, > > We had discussed the possibility of adding Gorka to the stable core team > during the PTG.  He does review a number of our backport patches and is > active in that area. > > If there are no objections in the next week I will add him to the list. > > Thanks! > > Jay (jungleboyj) +1 from me in the stable-maint-core peanut gallery. -- Thanks, Matt From mriedemos at gmail.com Wed Oct 3 21:18:18 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 3 Oct 2018 16:18:18 -0500 Subject: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch In-Reply-To: References: <3cda96c8-adfe-e9ff-be17-1cba08e6e63f@gmail.com> Message-ID: <8a474e5e-306c-5ca6-f6ad-25d12307d2e5@gmail.com> On 10/3/2018 11:21 AM, Zane Bitter wrote: >>> That patch is the only thing blocking the cleanup patch in >>> project-config, so I would like to get a definitive answer about what to >>> do. Should we close the branch, or does someone want to try to fix >>> things up? > > I think we agreed on closing the branch, and Rico was looking into the > procedure for how to actually do that. > >>> Doug >>> >>> [1]https://review.openstack.org/#/c/597272/ >> >> I'm assuming heat-agents is a service, not a library, since it doesn't >> show up in upper-constraints. > > It's a guest agent, so neither :) > >> Based on that, does heat itself plan on putting its stable/ocata >> branch into extended maintenance mode and if > > Wearing my Red Hat hat, I would be happy to EOL it. But wearing my > upstream hat, I'm happy to keep maintaining it, and I was not proposing > that we EOL heat's stable/ocata as well. > >> so, does that mean EOLing the heat-agents stable/ocata branch could >> cause problems for the heat stable/ocata branch? In other words, will >> it be reasonable to run CI for stable/ocata heat changes against a >> heat-agents ocata-eol tag? > > I don't think that's a problem. The guest agents rarely change, and I > don't think there's ever been a patch backported by 4 releases. OK, cool, sounds like killing the heat-agent ocata branch is the thing to do then. -- Thanks, Matt From melwittt at gmail.com Wed Oct 3 22:07:38 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 3 Oct 2018 15:07:38 -0700 Subject: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service? 
Message-ID: <8682e817-4ba9-8f76-173d-619896050176@gmail.com>

Greetings Devs and Ops,

Today I noticed that our code does not handle the 'nova-console' service
properly in a multi-cell deployment and given that no one has complained
or reported bugs about it, we're wondering if anyone still uses the
nova-console service.

The documentation [1] says that the nova-console service is a
"XenAPI-specific service that most recent VNC proxy architectures do not
use."

Can anyone from xenapi land shed some light on whether the nova-console
service is still useful in deployments using the xenapi driver, or is it
an old relic that we should deprecate and remove?

Thanks for your help,
-melanie

[1] https://docs.openstack.org/nova/latest/admin/remote-console-access.html

From doug at doughellmann.com  Thu Oct  4 01:42:18 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 03 Oct 2018 21:42:18 -0400
Subject: [openstack-dev] [goal][python3] week 7 update
In-Reply-To: 
References: 
Message-ID: 

Doug Hellmann writes:

> Doug Hellmann writes:
>
>> Doug Hellmann writes:
>>
>>> == Things We Learned This Week ==
>>>
>>> When we updated the tox.ini settings for jobs like pep8 and release
>>> notes early in the Rocky session we only touched some of the official
>>> repositories. I'll be working on making a list of the ones we missed so
>>> we can update them by the end of Stein.
>>
>> I see quite a few repositories with tox settings out of date (about 350,
>> see below). Given the volume, I'm going to prepare the patches and
>> propose them a few at a time over the next couple of weeks.
>
> Zuul looked bored this morning so I went ahead and proposed a few of the
> larger batches of these changes for the Charms, OpenStack Ansible, and
> Horizon teams. TripleO also has quite a few, but since we know the gate
> is unstable I will hold off on those for now.
>
> There will be more patches when there is CI capacity again.
>
> Doug

All of these patches have now been proposed.

Doug

From wjstk16 at gmail.com  Thu Oct  4 04:12:01 2018
From: wjstk16 at gmail.com (Won)
Date: Thu, 4 Oct 2018 13:12:01 +0900
Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.
In-Reply-To: 
References: 
Message-ID: 

Thank you for your reply Ifat.

The alertmanager.yml file already contains 'send_resolved: true'.
However, the alarm does not disappear from the alarm list or the entity
graph even if the alarm is resolved, the alarm manager creates a silence,
or the alarm rule is removed from Prometheus. The only way to remove
alarms is to manually remove them from the db. Is there any other way to
remove the alarm?

Entities (VMs) that run on multiple nodes in the Rocky version show
similar symptoms: Entities created on the multi-node setup would not
disappear from the Entity Graph even after deletion. Is this a bug in the
Rocky version?

Best Regards,
Won

On Wed, Oct 3, 2018 at 5:46 PM, Ifat Afek wrote:

> Hi,
>
> In the alertmanager.yml file you should have a receiver for Vitrage.
> Please verify that it includes "send_resolved: true". This is required for
> Prometheus to notify Vitrage when an alarm is resolved.
>
> The full Vitrage receiver definition should be:
>
> - name: **
>   webhook_configs:
>   - url: ** # example: 'http://127.0.0.1:8999/v1/event'
>     send_resolved: true
>     http_config:
>       basic_auth:
>         username: **
>         password: **
>
> Hope it helps,
> Ifat
>
>
> On Tue, Oct 2, 2018 at 7:51 AM Won wrote:
>
>> I have some problems with Prometheus alarms in vitrage.
>> I receive a list of alarms from the Prometheus alarm manager well, but the alarm does not disappear when the problem (alarm) is resolved. An alarm that came in once does not disappear from either the alarm list or the entity graph in vitrage. An alarm sent by Zabbix disappears when the alarm is resolved, so I wonder how to clear the Prometheus alarm from vitrage and how to update the alarm automatically like Zabbix.
>> Thank you.
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akekane at redhat.com Thu Oct 4 06:08:25 2018
From: akekane at redhat.com (Abhishek Kekane)
Date: Thu, 4 Oct 2018 11:38:25 +0530
Subject: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing
In-Reply-To: <12853469-784F-4606-92CD-F94A14FE819B@windriver.com>
References: <12853469-784F-4606-92CD-F94A14FE819B@windriver.com>
Message-ID:

Hi Greg,

Have you actually filed a blueprint or specs for this in glance? If yes could you please provide a reference for the same.

Thanks & Best Regards,

Abhishek Kekane

On Thu, Oct 4, 2018 at 12:34 AM Waines, Greg wrote:

> Glance Team,
>
> I am following up on discussions in the edge-computing PTG meetings. There were discussions on potential enhancements to Glance API Caching for support of the proposed MVP Edge Architecture. And I took the action to write up a blueprint and a specification for these enhancements ... and will follow up with implementation.
>
> I thought I’d start the discussions on the mailing list ... and if everyone is still in agreement, then I’ll move the high-level definition/requirements to Glance’s Launchpad (or Storyboard?) and write up a Glance spec and review it in Gerrit.
>
> *THE PROPOSAL:*
>
> Enhance Glance API Caching such that
>
> a) It works between two Glance Services (i.e. Glance at a Central OpenStack Cloud and Glance at an Edge OpenStack Cloud)
>
> · i.e. current Glance API Caching only works with external webservers
>
> b) Enable the Edge Cloud Glance API Caching to support the ability to locally use those cached images (e.g. nova boot ...) even when network connectivity is lost to the Central Cloud Glance Service
>
> · i.e. image meta-data is required in order to service a ‘nova boot ...’, and today image meta-data is NOT cached by Glance API Caching.
>
> The proposed solution should generically work between any two Glance Services. e.g.
>
> · in a Multi-Region Environment,
> · in a StarlingX Distributed Cloud,
> · etc.
>
> The proposed solution need only deal with the Edge Cloud Glance talking to a single Central Cloud Glance.
>
> Let me know if you have any questions or comments,
>
> Greg.
>
> p.s.
>
> *Background:*
>
> More info on the edge-computing group’s MVP Edge Architecture can be found here:
> https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0
> and
> https://docs.google.com/document/d/1Mq6bSm_lES56S4gygEuhmMbEeCI2nBFl_U5vl50wih8/edit?ts=5baa654e#heading=h.ncllqse6iw0u
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From akekane at redhat.com Thu Oct 4 07:16:30 2018
From: akekane at redhat.com (Abhishek Kekane)
Date: Thu, 4 Oct 2018 12:46:30 +0530
Subject: [openstack-dev] [all] Zuul job backlog
In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com>
Message-ID:

Hi,

Could you please point out some of the glance functional tests which are failing and causing these resets? I would like to put some effort towards fixing those.

Thanks & Best Regards,

Abhishek Kekane

On Wed, Oct 3, 2018 at 10:14 PM Doug Hellmann wrote:

> Wesley Hayutin writes:
>
> [snip]
>
> > The TripleO project has created a single node container based composable OpenStack deployment [2]. It is the project's intention to replace most of the TripleO upstream jobs with the Standalone deployment. We would like to reduce our multi-node usage to a total of two or three multinode jobs to handle a basic overcloud deployment, updates and upgrades[a]. Currently in master we are relying on multiple multi-node scenario jobs to test many of the OpenStack services in a single job. Our intention is to move these multinode scenario jobs to single node job(s) that test a smaller subset of services. The goal of this would be to target the specific areas of the TripleO code base that affect these services and only run those there. This would replace the existing 2-3 hour two node job(s) with single node job(s) for specific services that complete in about half the time. This unfortunately will reduce the overall coverage upstream but still allows us a basic smoke test of the supported OpenStack services and their deployment upstream.
> >
> > Ideally projects other than TripleO would make use of the Standalone deployment to test their particular service with containers, upgrades or for various other reasons. Additional projects using this deployment would help ensure bugs are found quickly and resolved, providing additional resilience to the upstream gate jobs. The TripleO team will begin review to scope out and create estimates for the above work starting on October 18 2018. One should expect to see updates on our progress posted to the list. Below are some details on the proposed changes.
>
> [snip]
>
> Thanks for all of the details, Wes. I know the current situation has been hurting the TripleO team as well, so I'm glad to see a good plan in place to address it. I look forward to seeing updates about the progress.
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Oct 4 10:43:42 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 4 Oct 2018 10:43:42 +0000 Subject: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing In-Reply-To: References: <12853469-784F-4606-92CD-F94A14FE819B@windriver.com> Message-ID: <6D715CA4-7524-4C9F-8C7B-1C215B104097@windriver.com> I have not yet done a blueprint or spec for this in Glance. But can do that now. Is Glance using Launchpad or Storyboard ? Greg. From: Abhishek Kekane Reply-To: "openstack-dev at lists.openstack.org" Date: Thursday, October 4, 2018 at 2:08 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing Hi Greg, Have you actually filed a blueprint or specs for this in glance? If yes could you please provide a reference for the same. Thanks & Best Regards, Abhishek Kekane On Thu, Oct 4, 2018 at 12:34 AM Waines, Greg > wrote: Glance Team, I am following up on discussions in the edge-computing PTG meetings. There were discussions on potential enhancements to Glance API Caching for support of the proposed MVP Edge Architecture. And I took the action to write up a blueprint and a specification for these enhancements ... and will follow up with implementation. I thought I’d start the discussions on the mailing list ... and if everyone is still in agreement, then I’ll move the high-level definition/requirements to Glance’s Launchpad (or Storyboard?) and write up a Glance spec and review it in Gerrit. THE PROPOSAL: Enhance Glance API Caching such that a) It works between two Glance Services (i.e. Glance at a Central OpenStack Cloud and Glance at an Edge OpenStack Cloud) • i.e. current Glance API Caching only works with external webservers b) Enable the Edge Cloud Glance API Caching to support the ability to locally use those cached images (e.g. nova boot ...) even when network connectivity is lost to the Central Cloud Glance Service • i.e. image meta-data is required in order to service a ‘nova boot ...’, and today image meta-data is NOT cached by Glance API Caching. The proposed solution should generically work between any two Glance Services. e.g. • in a Multi-Region Environment, • in a StarlingX Distributed Cloud, • etc. The proposed solution need only deal with the Edge Cloud Glance talking to a single Central Cloud Glance. Let me know if you have any a questions or comments, Greg. p.s. Background: More info on the edge-computing group’s MVP Edge Architecture can be found here: https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0 and https://docs.google.com/document/d/1Mq6bSm_lES56S4gygEuhmMbEeCI2nBFl_U5vl50wih8/edit?ts=5baa654e#heading=h.ncllqse6iw0u __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From akekane at redhat.com Thu Oct 4 10:54:57 2018
From: akekane at redhat.com (Abhishek Kekane)
Date: Thu, 4 Oct 2018 16:24:57 +0530
Subject: Re: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing
In-Reply-To: <6D715CA4-7524-4C9F-8C7B-1C215B104097@windriver.com>
References: <12853469-784F-4606-92CD-F94A14FE819B@windriver.com> <6D715CA4-7524-4C9F-8C7B-1C215B104097@windriver.com>
Message-ID:

Greg,

Glance uses Launchpad for blueprints, and for specs you can refer to the glance-specs repo.

Thank you,

Abhishek

On Thu 4 Oct, 2018, 16:14 Waines, Greg, wrote:

> I have not yet done a blueprint or spec for this in Glance.
>
> But can do that now.
>
> Is Glance using Launchpad or Storyboard?
>
> Greg.
>
> *From: *Abhishek Kekane
> *Reply-To: *"openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
> *Date: *Thursday, October 4, 2018 at 2:08 AM
> *To: *"openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
> *Subject: *Re: [openstack-dev] [glance] Glance API Caching Enhancements for Edge Computing
>
> Hi Greg,
>
> Have you actually filed a blueprint or specs for this in glance? If yes could you please provide a reference for the same.
>
> Thanks & Best Regards,
> Abhishek Kekane
>
> On Thu, Oct 4, 2018 at 12:34 AM Waines, Greg wrote:
>
> Glance Team,
>
> I am following up on discussions in the edge-computing PTG meetings. There were discussions on potential enhancements to Glance API Caching for support of the proposed MVP Edge Architecture. And I took the action to write up a blueprint and a specification for these enhancements ... and will follow up with implementation.
>
> I thought I’d start the discussions on the mailing list ... and if everyone is still in agreement, then I’ll move the high-level definition/requirements to Glance’s Launchpad (or Storyboard?) and write up a Glance spec and review it in Gerrit.
>
> *THE PROPOSAL:*
>
> Enhance Glance API Caching such that
>
> a) It works between two Glance Services (i.e. Glance at a Central OpenStack Cloud and Glance at an Edge OpenStack Cloud)
>
> · i.e. current Glance API Caching only works with external webservers
>
> b) Enable the Edge Cloud Glance API Caching to support the ability to locally use those cached images (e.g. nova boot ...) even when network connectivity is lost to the Central Cloud Glance Service
>
> · i.e. image meta-data is required in order to service a ‘nova boot ...’, and today image meta-data is NOT cached by Glance API Caching.
>
> The proposed solution should generically work between any two Glance Services. e.g.
>
> · in a Multi-Region Environment,
> · in a StarlingX Distributed Cloud,
> · etc.
>
> The proposed solution need only deal with the Edge Cloud Glance talking to a single Central Cloud Glance.
>
> Let me know if you have any questions or comments,
>
> Greg.
>
> p.s.
>
> *Background:*
>
> More info on the edge-computing group’s MVP Edge Architecture can be found here:
> https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0
> and
> https://docs.google.com/document/d/1Mq6bSm_lES56S4gygEuhmMbEeCI2nBFl_U5vl50wih8/edit?ts=5baa654e#heading=h.ncllqse6iw0u
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sfinucan at redhat.com Thu Oct 4 11:10:44 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Thu, 04 Oct 2018 12:10:44 +0100
Subject: [openstack-dev] Sphinx testing fun
Message-ID: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com>

The tests in os-api-ref recently broke:

http://logs.openstack.org/62/606362/1/check/openstack-tox-py35/8b709de/testr_results.html.gz

Specifically, we're seeing errors like this:

ft1.1: os_api_ref.tests.test_basic_example.TestBasicExample.test_rest_method_StringException: Traceback (most recent call last):
  File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/.tox/py35/lib/python3.5/site-packages/sphinx_testing/util.py", line 143, in decorator
    func(*(args + (app, status, warning)), **kwargs)
  File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/os_api_ref/tests/test_basic_example.py", line 41, in setUp
    self.html = (app.outdir / 'index.html').read_text(encoding='utf-8')
TypeError: unsupported operand type(s) for /: 'str' and 'str'

Which is wrong because 'app.outdir' is not supposed to be an instance of 'unicode' but rather 'sphinx_testing.path.path' [1] (which overrides the '/' operator to act as concatenation because operator overloading is always a good idea 😒) [2].

Anyway, we can go figure out what's changed here and handle it but this is, at best, going to be a band aid. The fact is 'sphinx_testing' is unmaintained and has been for some time now. The new hotness is 'sphinx.testing' [3], which is provided (with zero documentation) as part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm pretty sure Monty (and a few others?) are vehemently against using in OpenStack. That leaves us with three options:

* Take over 'sphinx_testing' and bring it up-to-date. Maintain forever.
* Start using 'sphinx.testing' and everything it comes with
* Delete any tests that use 'sphinx_testing' and deal with the lack of coverage

For what it's worth, I use 'sphinx.testing' in 'sphinxcontrib-apidoc' [5] without *too* many issues, but that lives outside the 'openstack' namespace and isn't bound by upper-constraints and the likes.
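In case it helps anyone picking this up, the pattern our tests rely on is roughly the following (a trimmed sketch of the os-api-ref test, not the exact code):

    from sphinx_testing import with_app

    @with_app(buildername='html', srcdir='examples')
    def test_rest_method(app, status, warning):
        app.build()
        # 'app.outdir' used to be a sphinx_testing.path.path, for which
        # '/' means "join"; when it comes back as a plain str this line
        # raises TypeError: unsupported operand type(s) for /: 'str' and 'str'
        html = (app.outdir / 'index.html').read_text(encoding='utf-8')
        assert 'GET' in html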
Stephen

[1] https://github.com/sphinx-doc/sphinx-testing/blob/0.7.2/src/sphinx_testing/path.py#L217
[2] https://github.com/sphinx-doc/sphinx-testing/blob/0.7.2/src/sphinx_testing/util.py#L45-L63
[3] https://github.com/sphinx-doc/sphinx/tree/v1.8.0/sphinx/testing
[4] https://docs.pytest.org/en/latest/fixture.html
[5] https://github.com/sphinx-contrib/apidoc/blob/0.3.0/tests/test_ext.py

From doug at doughellmann.com Thu Oct 4 11:21:14 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 04 Oct 2018 07:21:14 -0400
Subject: Re: [openstack-dev] Sphinx testing fun
In-Reply-To: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com>
References: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com>
Message-ID:

Stephen Finucane writes:

> The tests in os-api-ref recently broke:
>
> http://logs.openstack.org/62/606362/1/check/openstack-tox-py35/8b709de/testr_results.html.gz
>
> Specifically, we're seeing errors like this:
>
> ft1.1: os_api_ref.tests.test_basic_example.TestBasicExample.test_rest_method_StringException: Traceback (most recent call last):
>   File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/.tox/py35/lib/python3.5/site-packages/sphinx_testing/util.py", line 143, in decorator
>     func(*(args + (app, status, warning)), **kwargs)
>   File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/os_api_ref/tests/test_basic_example.py", line 41, in setUp
>     self.html = (app.outdir / 'index.html').read_text(encoding='utf-8')
> TypeError: unsupported operand type(s) for /: 'str' and 'str'
>
> Which is wrong because 'app.outdir' is not supposed to be an instance of 'unicode' but rather 'sphinx_testing.path.path' [1] (which overrides the '/' operator to act as concatenation because operator overloading is always a good idea 😒) [2].

Is that a change in Sphinx's API? Or sphinx_testing's?

> Anyway, we can go figure out what's changed here and handle it but this is, at best, going to be a band aid. The fact is 'sphinx_testing' is unmaintained and has been for some time now. The new hotness is 'sphinx.testing' [3], which is provided (with zero documentation) as part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm pretty sure Monty (and a few others?) are vehemently against using in OpenStack. That leaves us with three options:
>
> * Take over 'sphinx_testing' and bring it up-to-date. Maintain forever.
> * Start using 'sphinx.testing' and everything it comes with
> * Delete any tests that use 'sphinx_testing' and deal with the lack of coverage

Could we change our tests to use pathlib to wrap app.outdir and get the same results as before?

Doug

From bob.ball at citrix.com Thu Oct 4 12:03:41 2018
From: bob.ball at citrix.com (Bob Ball)
Date: Thu, 4 Oct 2018 12:03:41 +0000
Subject: Re: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service?
In-Reply-To: <8682e817-4ba9-8f76-173d-619896050176@gmail.com>
References: <8682e817-4ba9-8f76-173d-619896050176@gmail.com>
Message-ID: <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net>

Hi Melanie,

We recommend using novncproxy_base_url with vncserver_proxyclient_address set to the dom0's management IP address.

We don't currently use nova-console, so deprecation would be the best approach.
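In nova.conf on the compute node that looks something like the following (the host addresses below are placeholders, not real values):

    [vnc]
    enabled = true
    # URL that console clients are redirected to (placeholder hostname)
    novncproxy_base_url = http://controller.example.com:6080/vnc_auto.html
    # the dom0's management IP address (placeholder)
    vncserver_proxyclient_address = 192.0.2.10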
Thanks,

Bob

-----Original Message-----
From: melanie witt [mailto:melwittt at gmail.com]
Sent: 03 October 2018 23:08
To: OpenStack Development Mailing List (not for usage questions); openstack-operators at lists.openstack.org
Subject: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service?

Greetings Devs and Ops,

Today I noticed that our code does not handle the 'nova-console' service properly in a multi-cell deployment and given that no one has complained or reported bugs about it, we're wondering if anyone still uses the nova-console service. The documentation [1] says that the nova-console service is a "XenAPI-specific service that most recent VNC proxy architectures do not use."

Can anyone from xenapi land shed some light on whether the nova-console service is still useful in deployments using the xenapi driver, or is it an old relic that we should deprecate and remove?

Thanks for your help,
-melanie

[1] https://docs.openstack.org/nova/latest/admin/remote-console-access.html

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sfinucan at redhat.com Thu Oct 4 12:57:52 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Thu, 04 Oct 2018 13:57:52 +0100
Subject: Re: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service?
In-Reply-To: <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net>
References: <8682e817-4ba9-8f76-173d-619896050176@gmail.com> <0233724203a34d82a08b567a79a1f4a5@AMSPEX02CL01.citrite.net>
Message-ID:

On Thu, 2018-10-04 at 12:03 +0000, Bob Ball wrote:
> Hi Melanie,
>
> We recommend using novncproxy_base_url with vncserver_proxyclient_address set to the dom0's management IP address.
>
> We don't currently use nova-console, so deprecation would be the best approach.
>
> Thanks,
>
> Bob

What about nova-xvpvncproxy [1]? This would be configured using xvpvncproxy_base_url. This is also Xen-specific (as the name, Xen VNC Proxy, would suggest). If the noVNC-based console is now recommended, can we also deprecate the XVP one?

Stephen

[1] https://review.openstack.org/#/c/606148/5/doc/source/admin/remote-console-access.rst at 313

> -----Original Message-----
> From: melanie witt [mailto:melwittt at gmail.com]
> Sent: 03 October 2018 23:08
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>; openstack-operators at lists.openstack.org
> Subject: [openstack-dev] [nova][xenapi] can we deprecate the xenapi-specific 'nova-console' service?
>
> Greetings Devs and Ops,
>
> Today I noticed that our code does not handle the 'nova-console' service properly in a multi-cell deployment and given that no one has complained or reported bugs about it, we're wondering if anyone still uses the nova-console service. The documentation [1] says that the nova-console service is a "XenAPI-specific service that most recent VNC proxy architectures do not use."
>
> Can anyone from xenapi land shed some light on whether the nova-console service is still useful in deployments using the xenapi driver, or is it an old relic that we should deprecate and remove?
>
> Thanks for your help,
> -melanie
>
> [1] https://docs.openstack.org/nova/latest/admin/remote-console-access.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org Thu Oct 4 13:32:22 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 4 Oct 2018 14:32:22 +0100 (BST)
Subject: Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
In-Reply-To: References: Message-ID:

On Wed, 3 Oct 2018, Chris Dent wrote:

> I'd really like to see this become a real thing, so if I could get some help from tempest people on how to make it in line with expectations that would be great.

I've written up the end game of what I'm trying to achieve in a bit more detail at https://anticdent.org/gabbi-in-the-gate.html

-- 
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From ifatafekn at gmail.com Thu Oct 4 13:46:22 2018
From: ifatafekn at gmail.com (Ifat Afek)
Date: Thu, 4 Oct 2018 16:46:22 +0300
Subject: Re: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.
In-Reply-To: References: Message-ID:

Hi,

Can you please give us some more details about your scenario with Prometheus? Please try and give as many details as possible, so we can try to reproduce the bug.

What do you mean by "if the alarm is resolved, the alarm manager makes a silence, or removes the alarm rule from Prometheus"? These are different cases. None of them works in your environment? Which Prometheus and Alertmanager versions are you using?

Please try to change the Vitrage log level to DEBUG (set "debug = true" in /etc/vitrage/vitrage.conf) and send me the Vitrage collector, graph and api logs.

Regarding the multi nodes, I'm not sure I understand your configuration. Do you mean there is more than one OpenStack and Nova? More than one host? More than one vm?

Basically, vms are deleted from Vitrage in two cases:

1. After each periodic call to get_all of the nova.instance datasource. By default this is done once in 10 minutes.
2. Immediately, if you have the following configuration in /etc/nova/nova.conf: notification_topics = notifications,vitrage_notifications

So, please check your nova.conf and also whether the vms are deleted after 10 minutes.

Thanks,
Ifat

On Thu, Oct 4, 2018 at 7:12 AM Won wrote:

> Thank you for your reply Ifat.
>
> The alertmanager.yml file already contains 'send_resolved: true'. However, the alarm does not disappear from the alarm list or the entity graph even if the alarm is resolved, the alarm manager makes a silence, or removes the alarm rule from Prometheus. The only way to remove alarms is to manually remove them from the DB. Is there any other way to remove the alarm?
>
> Entities (VMs) that run on multiple nodes have similar symptoms in the Rocky version. There was a symptom where entities created on a multi-node setup would not disappear from the entity graph even after deletion.
>
> Is this a bug in the Rocky version?
>
> Best Regards,
> Won

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dangtrinhnt at gmail.com Thu Oct 4 14:11:22 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Thu, 4 Oct 2018 23:11:22 +0900
Subject: [openstack-dev] [all][Searchlight] Always build universal wheels
Message-ID:

Hi team,

Not sure if anyone is aware of [1] in openstack-infra that is trying to always build universal wheels by default for all projects. Please avoid adding universal wheels to the project setup.cfg.

Bests,

[1] https://review.openstack.org/#/c/607902/

-- 
*Trinh Nguyen*
*www.edlab.xyz*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org Thu Oct 4 14:13:21 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 04 Oct 2018 07:13:21 -0700
Subject: Re: [openstack-dev] [all] Zuul job backlog
In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com>
Message-ID: <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com>

On Thu, Oct 4, 2018, at 12:16 AM, Abhishek Kekane wrote:

> Hi,
> Could you please point out some of the glance functional tests which are failing and causing these resets?
> I would like to put some effort towards fixing those.

http://status.openstack.org/elastic-recheck/data/integrated_gate.html is a good place to start. That shows you a list of tests that failed in the OpenStack integrated gate that elastic-recheck could not identify the failure for, including those for several functional jobs. If you'd like to start looking at identified bugs first then http://status.openstack.org/elastic-recheck/gate.html shows identified failures that happened in the gate.

For glance functional jobs the first link points to:

http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional/fc13eca/
http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional/b7c487c/
http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional-py35/b166313/
http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional-py35/ce262ab/

Clark

From ifatafekn at gmail.com Thu Oct 4 14:27:31 2018
From: ifatafekn at gmail.com (Ifat Afek)
Date: Thu, 4 Oct 2018 17:27:31 +0300
Subject: Re: [openstack-dev] Does anyone use Vitrage to build a mature project for RCA or any other purpose?
In-Reply-To: <544dd684.4fbf.165a7742cea.Coremail.18800173600@163.com>
References: <544dd684.4fbf.165a7742cea.Coremail.18800173600@163.com>
Message-ID:

On Wed, Sep 5, 2018 at 4:59 AM zhangwenqing <18800173600 at 163.com> wrote:

> Thanks for your reply!
> So how do you analyse the RCA? Do you ever use any statistical methods like time series or machine learning methods? Or just use the method that Vitrage provides?
>
> zhangwenqing

Hi,

Sorry for the late reply, I somehow missed your mail.

Vitrage is not a monitoring tool. It does not perform statistical calculations, health checks, etc. Instead, it gets topology and alarms from several datasources (OpenStack or external), combines them in a topology graph, and performs RCA analysis on top of the graph.

The RCA rules are defined in Vitrage templates [1]. In these templates, you can define topology conditions and actions to be executed as a result, like raise an additional alarm, modify a resource state, or mark a root-cause relationship between two alarms.
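As a rough illustration, a template that marks a host alarm as the root cause of an instance alarm looks something along these lines (a simplified sketch with made-up alarm names; see [1] below for the exact syntax):

    metadata:
      version: 2
      name: host_down_rca
      type: standard
    definitions:
      entities:
        - entity:
            category: ALARM
            name: host_down          # hypothetical alarm name
            template_id: host_alarm
        - entity:
            category: RESOURCE
            type: nova.host
            template_id: host
        - entity:
            category: ALARM
            name: vm_unreachable     # hypothetical alarm name
            template_id: vm_alarm
        - entity:
            category: RESOURCE
            type: nova.instance
            template_id: instance
      relationships:
        - relationship:
            source: host_alarm
            target: host
            relationship_type: on
            template_id: alarm_on_host
        - relationship:
            source: vm_alarm
            target: instance
            relationship_type: on
            template_id: alarm_on_instance
        - relationship:
            source: host
            target: instance
            relationship_type: contains
            template_id: host_contains_instance
    scenarios:
      - scenario:
          condition: alarm_on_host and alarm_on_instance and host_contains_instance
          actions:
            - action:
                action_type: add_causal_relationship
                action_target:
                  source: host_alarm
                  target: vm_alarm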
[1] https://docs.openstack.org/vitrage/latest/contributor/vitrage-template-format.html

Hope it helps,
Ifat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aschultz at redhat.com Thu Oct 4 14:28:28 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Thu, 4 Oct 2018 08:28:28 -0600
Subject: Re: [openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI
In-Reply-To: References: Message-ID:

And master is blocked again. We need https://review.openstack.org/#/c/607952/

Thanks,
-Alex

On Fri, Sep 28, 2018 at 9:02 AM Alex Schultz wrote:

> Hey Folks,
>
> Currently the tripleo gate is at 21 hours and we continue to have timeouts, and now scenario001/004 (in queens/pike) appear to be broken. Additionally we've got some patches in puppet-openstack that we need to land in order to resolve broken puppet unit tests which is affecting both projects.
>
> Currently we need to wait for the following to land in puppet:
> https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
> https://review.openstack.org/#/c/605350/
>
> In tripleo we currently have not identified the root cause for any of the timeout failures, so I'd like for us to work on that before trying to land anything else because the gate resets are killing us and not helping anything. We have landed a few patches that have improved the situation but we're still hitting issues.
>
> https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the scenario001/004 issues. It appears that we're ending up with a newer version of ansible on the system than what the packages provide. Still working on figuring out where it's coming from.
>
> Please do not approve anything or recheck unless it's to address CI issues at this time.
>
> Thanks,
> -Alex

From sfinucan at redhat.com Thu Oct 4 15:20:23 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Thu, 04 Oct 2018 16:20:23 +0100
Subject: Re: [openstack-dev] Sphinx testing fun
In-Reply-To: References: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com>
Message-ID:

On Thu, 2018-10-04 at 07:21 -0400, Doug Hellmann wrote:

> Stephen Finucane writes:
>
> > The tests in os-api-ref recently broke:
> >
> > http://logs.openstack.org/62/606362/1/check/openstack-tox-py35/8b709de/testr_results.html.gz
> >
> > Specifically, we're seeing errors like this:
> >
> > ft1.1: os_api_ref.tests.test_basic_example.TestBasicExample.test_rest_method_StringException: Traceback (most recent call last):
> >   File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/.tox/py35/lib/python3.5/site-packages/sphinx_testing/util.py", line 143, in decorator
> >     func(*(args + (app, status, warning)), **kwargs)
> >   File "/home/zuul/src/git.openstack.org/openstack/os-api-ref/os_api_ref/tests/test_basic_example.py", line 41, in setUp
> >     self.html = (app.outdir / 'index.html').read_text(encoding='utf-8')
> > TypeError: unsupported operand type(s) for /: 'str' and 'str'
> >
> > Which is wrong because 'app.outdir' is not supposed to be an instance of 'unicode' but rather 'sphinx_testing.path.path' [1] (which overrides the '/' operator to act as concatenation because operator overloading is always a good idea 😒) [2].
>
> Is that a change in Sphinx's API? Or sphinx_testing's?

It's a change in Sphinx, though not in the API [1]. I should really stop playing with Sphinx.

> > Anyway, we can go figure out what's changed here and handle it but this is, at best, going to be a band aid.
> > The fact is 'sphinx_testing' is unmaintained and has been for some time now. The new hotness is 'sphinx.testing' [3], which is provided (with zero documentation) as part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm pretty sure Monty (and a few others?) are vehemently against using in OpenStack. That leaves us with three options:
> >
> > * Take over 'sphinx_testing' and bring it up-to-date. Maintain forever.
> > * Start using 'sphinx.testing' and everything it comes with
> > * Delete any tests that use 'sphinx_testing' and deal with the lack of coverage
>
> Could we change our tests to use pathlib to wrap app.outdir and get the same results as before?

That's what I've done [2], which is kind of based on how I fixed this in Sphinx. However, this is at best a stopgap. The fact remains that 'sphinx_testing' is dead and the large changes that Sphinx is undergoing (2.0 will be Python 3 only, with multiple other fixes) will make further breakages more likely. Unless we want a repeat of the Mox situation, I do think we should start thinking about this sooner rather than later.

Stephen

[1] https://github.com/sphinx-doc/sphinx/commit/3a85b3502f
[2] https://review.openstack.org/607984

> Doug

From fungi at yuggoth.org Thu Oct 4 16:13:20 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 4 Oct 2018 16:13:20 +0000
Subject: Re: [openstack-dev] [all][Searchlight] Always build universal wheels
In-Reply-To: References: Message-ID: <20181004161320.erqxq7erbw3armyt@yuggoth.org>

On 2018-10-04 23:11:22 +0900 (+0900), Trinh Nguyen wrote:
[...]
> Please avoid adding universal wheels to the project setup.cfg.
[...]

Why would you avoid also adding it to the setup.cfg? The change you cited is merely to be able to continue building universal wheels for projects while the setup.cfg files are corrected over time, to reduce the urgency of doing so. Wheel building happens in more places than just our CI system, so only fixing it in CI is not a good long-term strategy.

-- 
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From jean-philippe at evrard.me Thu Oct 4 16:23:13 2018
From: jean-philippe at evrard.me (jean-philippe)
Date: Thu, 04 Oct 2018 18:23:13 +0200
Subject: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists!
Message-ID:

Agreed with the merge.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com Thu Oct 4 17:12:06 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 4 Oct 2018 12:12:06 -0500
Subject: [openstack-dev] [release] Release countdown for week R-26 and R-25, October 8-19
Message-ID: <20181004171205.GA320@sm-workstation>

Welcome to the (biweekly for now) release countdown email. Just a quick reminder for this one.

Development Focus
-----------------

Team focus should be on spec approval and implementation for priority features.

General Information
-------------------

Teams should now be making progress towards the cycle goals [1]. Please prioritize reviews for these appropriately.
[1] https://governance.openstack.org/tc/goals/stein/index.html

Upcoming Deadlines & Dates
--------------------------

Stein-1 milestone: October 25 (R-24 week)
Forum at OpenStack Summit in Berlin: November 13-15

-- 
Sean McGinnis (smcginnis)

From doug at doughellmann.com Thu Oct 4 17:40:05 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 04 Oct 2018 13:40:05 -0400
Subject: [openstack-dev] [tc] bringing back formal TC meetings
Message-ID:

During the TC office hours today [1] we discussed the question of resuming having formal meetings of the TC to ensure we are in compliance with the section of the bylaws that says "The Technical Committee shall meet at least quarterly." [2] As not all TC members were present today, we decided to move the discussion to the mailing list to ensure all members have input into the decision.

A bit of background
-------------------

The TC used to hold formal weekly meetings with agendas, roll call, etc. We stopped doing that in an attempt to encourage more asynchronous communication and to include folks in all time zones. Those meetings were replaced with less formal "office hours" where TC members were encouraged to be present on IRC in case the community had questions or issues to raise in a synchronous forum.

The bylaws section that describes the membership and some of the expectations for the Technical Committee specifically requires us to meet at least once a quarter. We have had meetings at the PTGs and summits, which while not recorded via a roll call were open and documented afterwards with summaries to this mailing list.

With the change in event schedule, we will no longer have obvious opportunities to hold 4 in-person meetings each year. During the discussion today, we established the general consensus that our current informal office hours do not constitute a "meeting" in the sense that any of us understand this requirement. As a result, we need to consider changes to our current meeting policy to ensure we are in compliance with the foundation bylaws.

Today's discussion
------------------

A few folks expressed concern that we work to ensure these meetings *not* be seen as a replacement for asynchronous communication, and that we continue to encourage ad hoc discussions to continue to happen on the mailing list or during office hours. I think we agreed we could do that by managing the agenda carefully (i.e., the chair or vice chair would need to add topics to the agenda, rather than allowing anyone to add anything as we have done in the past). We also talked about only allowing recurring topics on the agenda, but I would prefer that we not write too many hard rules at the outset.

We discussed how frequently we should meet, and everyone seemed to agree that weekly was too often and quarterly was not often enough. I proposed monthly, and there was some general support for that. We also talked about whether to find a new meeting time or to use one of the office hour times.

As things stand now, the proposal is to try to find a time a few hours earlier than office hours on the first Thursday of each month for the meetings. The earlier time is so that APAC participants (Ghanshyam, in particular) do not need to stay up until midnight or later to participate.

Next steps
----------

TC members, please reply to this thread and indicate if you would find meeting at 1300 UTC on the first Thursday of every month acceptable, and of course include any other comments you might have (including alternate times).
Thanks, Doug [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-04.log.html#t2018-10-04T15:02:31 [2] https://www.openstack.org/legal/technical-committee-member-policy/ (item #4) From mnaser at vexxhost.com Thu Oct 4 17:43:48 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 4 Oct 2018 13:43:48 -0400 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: <1538674956.local-8a7d046f-13e1-v1.4.2-f587b7b7@getmailspring.com> On Oct 4 2018, at 1:40 pm, Doug Hellmann wrote: > > During the TC office hours today [1] we discussed the question of > resuming having formal meetings of the TC to ensure we are in compliance > with the section of the bylaws that says "The Technical Committee shall > meet at least quarterly." [2] As not all TC members were present today, > we decide to move the discussion to the mailing list to ensure all > members have input into the decision. > > A bit of background > ------------------- > > The TC used to hold formal weekly meetings with agendas, roll call, > etc. We stopped doing that in an attempt to encourage more asynchronous > communication and to include folks in all time zones. Those meetings > were replaced with less formal "office hours" where TC members were > encouraged to be present on IRC in case the community had questions or > issues to raise in a synchronous forum. > > The bylaws section that describes the membership and some of the > expectations for the Technical Committee specifically requires us to > meet at least once quarter year. We have had meetings at the PTGs and > summits, which while not recorded via a roll call were open and > documented afterwards with summaries to this mailing list. > > With the change in event schedule, we will no longer have obvious > opportunities to hold 4 in-person meetings each year. During the > discussion today, we established the general consensus that our current > informal office hours do not constitute a "meeting" in the sense that > any of us understand this requirement. As a result, we need to consider > changes to our current meeting policy to ensure we are in compliance > with the foundation bylaws. > > Today's discussion > ------------------ > > A few folks expressed concern that we work to ensure these meetings > *not* be seen as a replacement for asynchronous communication, and that > we continue to encourage ad hoc discussions to continue to happen on the > mailing list or during office hours. I think we agreed we could do that > by managing the agenda carefully (i.e., the chair or vice chair would > need to add topics to the agenda, rather than allowing anyone to add > anything as we have done in the past). We also talked about only > allowing recurring topics on the agenda, but I would prefer that we not > write too many hard rules at the outset. > > We discussed how frequently we should meet, and everyone seemed to agree > that weekly was too often and quarterly was not often enough. I proposed > monthly, and there was some general support for that. We also talked > about whether to find a new meeting time or to use one of the office > hour times. > > As things stand now, the proposal is to try to find a time a few hours > earlier than office hours on the first Thursday of each month for the > meetings. The earlier time is so that APAC participants (Ghanshyam, in > particular) do not need to stay up until midnight or later to > participate. 
>
> Next steps
> ----------
>
> TC members, please reply to this thread and indicate if you would find meeting at 1300 UTC on the first Thursday of every month acceptable, and of course include any other comments you might have (including alternate times).

I think meeting only every month leaves too long a gap. Especially if we're going to rotate times to be able to accommodate those in APAC time zones, some members might not be able to make a proper meeting for 2 months. I don't think a very brief weekly meeting that's planned, but which we can skip, is a huge burden on all of us; finding an hour of time isn't too unreasonable (and we mostly are already doing it during office hours anyways).

> Thanks,
> Doug
>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-04.log.html#t2018-10-04T15:02:31
> [2] https://www.openstack.org/legal/technical-committee-member-policy/ (item #4)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Thu Oct 4 17:47:53 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 4 Oct 2018 17:47:53 +0000
Subject: Re: [openstack-dev] [tc] bringing back formal TC meetings
In-Reply-To: References: Message-ID: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org>

On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
[...]
> TC members, please reply to this thread and indicate if you would find meeting at 1300 UTC on the first Thursday of every month acceptable, and of course include any other comments you might have (including alternate times).

This time is acceptable to me. As long as we ensure that community feedback continues more frequently in IRC and on the ML (for example by making it clear that this meeting is expressly *not* for that) then I'm fine with resuming formal meetings.

-- 
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From zbitter at redhat.com Thu Oct 4 18:03:45 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Thu, 4 Oct 2018 14:03:45 -0400
Subject: Re: [openstack-dev] [tc] bringing back formal TC meetings
In-Reply-To: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org>
References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org>
Message-ID: <636696c5-0c3e-c66d-2dfa-97cf9ab9ef60@redhat.com>

On 4/10/18 1:47 PM, Jeremy Stanley wrote:
> On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> TC members, please reply to this thread and indicate if you would find meeting at 1300 UTC on the first Thursday of every month acceptable, and of course include any other comments you might have (including alternate times).
>
> This time is acceptable to me. As long as we ensure that community feedback continues more frequently in IRC and on the ML (for example by making it clear that this meeting is expressly *not* for that) then I'm fine with resuming formal meetings.
+1

From whayutin at redhat.com Thu Oct 4 18:11:42 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Thu, 4 Oct 2018 12:11:42 -0600
Subject: Re: [openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI
In-Reply-To: References: Message-ID:

On Thu, Oct 4, 2018 at 8:29 AM Alex Schultz wrote:

> And master is blocked again. We need https://review.openstack.org/#/c/607952/
>
> Thanks,
> -Alex

I just got infra to put that patch on the top of the queue. We also need https://review.openstack.org/#/c/607904/ reviewed ASAP

Thanks

> On Fri, Sep 28, 2018 at 9:02 AM Alex Schultz wrote:
> >
> > Hey Folks,
> >
> > Currently the tripleo gate is at 21 hours and we continue to have timeouts, and now scenario001/004 (in queens/pike) appear to be broken. Additionally we've got some patches in puppet-openstack that we need to land in order to resolve broken puppet unit tests which is affecting both projects.
> >
> > Currently we need to wait for the following to land in puppet:
> > https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
> > https://review.openstack.org/#/c/605350/
> >
> > In tripleo we currently have not identified the root cause for any of the timeout failures, so I'd like for us to work on that before trying to land anything else because the gate resets are killing us and not helping anything. We have landed a few patches that have improved the situation but we're still hitting issues.
> >
> > https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the scenario001/004 issues. It appears that we're ending up with a newer version of ansible on the system than what the packages provide. Still working on figuring out where it's coming from.
> >
> > Please do not approve anything or recheck unless it's to address CI issues at this time.
> >
> > Thanks,
> > -Alex
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Wes Hayutin
Associate Manager, Red Hat
whayutin at redhat.com
T: +1 919 754 4114
IRC: weshay

View my calendar and check my availability for meetings HERE

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From moshele at mellanox.com Thu Oct 4 18:31:20 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Thu, 4 Oct 2018 18:31:20 +0000
Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic
In-Reply-To: References: Message-ID:

Hi Julia,

Apologies that we were not able to be there to better represent the use case. PSB.

From: Julia Kreger
Sent: Monday, October 1, 2018 11:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: isaku.yamahata at intel.com; Eyal Lavee
Subject: Re: [openstack-dev] [ironic][neutron] SmartNics with Ironic

Greetings,

Comments in-line.

Thanks,
-Julia

On Sat, Sep 29, 2018 at 11:27 PM Moshe Levi wrote:

Hi Julia,

I don't mind updating the ironic spec [1]. Unfortunately, I wasn't at the PTG, but I had a sync meeting with Isaku. As I see it there are 2 use-cases:

1. Running the neutron ovs agent on the smartnic
2. Running the neutron super ovs agent which manages the ovs running on the smartnic.

My takeaway from the meeting with neutron is that there would not be a neutron ovs agent running on the smartnic.
That the configuration would need to be pushed at all times, which is ultimately better security-wise: if the tenant NIC is somehow compromised, it reduces the control plane exposure.

[ML] - Can you elaborate on the security concerns with running the neutron ovs agent on the smart NIC? If you compare this to the standard virtualization use case, this is as secure if not more secure. The tenant image runs on the bare metal host and receives only a network interface/port. The host has no way to access the OS/services/agents running on the smart NIC CPUs, in the same way that a tenant image running in a VM has no way to access the services/agents running in the hypervisor. It is in fact even more secure, as they are running in physically disjoint hardware and memory (thus not accessible even through side-channel vulnerabilities such as meltdown/spectre).

1. It seems that most of the discussion was around the second use-case. By the time Ironic and Neutron met together, it seemed like the first use case was no longer under consideration. I may be wrong, but a very strong preference existed for the second scenario when we met the next day.

[ML] – We are seeing great interest in smart NICs for bare metal use cases, to allow providing services (networking, storage and others) to bare metal servers that were previously only possible for VMs. Conceptually the smart NIC can be thought of as an isolated hypervisor layer for the bare metal host. The first service we are targeting in this spec is aligning the bare metal networking with the standard neutron ovs agent. The target is to align (as much as possible) the bare metal implementation with the virtualization use case, up to the point of actually running (as much as possible) the same agents on the smart NIC (again acting as a hypervisor for the bare metal host use case). This allows us to reuse/align the implementation, and it naturally scales with the number of bare metal servers, as opposed to running the agents on controller nodes, which would potentially require scaling the controllers to match the number of bare metal servers. It also provides a path to providing more advanced services on the smart NIC in the next steps (not limiting the implementation to be OVSDB protocol specific).

This is my understanding of the ironic neutron PTG meeting:

1. Ironic cores don't want to change the deployment interface as proposed in [1].
2. We should add a new network_interface for use case 2. But what about the first use case? Should it be a new network_interface as well?
3. We should delay the port binding until the baremetal is powered on and the ovs is running.
   * For the first use case I was thinking to change the neutron server to just keep the port binding information in the neutron DB. Then when the neutron ovs agent is alive it will retrieve all the baremetal ports, add them to the ovsdb, and start the neutron ovs agent fullsync.
   * For the second use case the agent is alive, so the agent itself can monitor the ovsdb of the baremetal and configure it when it is up.
4. How do we get notified that the neutron agent successfully/unsuccessfully bound the port?
   * In both use-cases we should use neutron-ironic notifications to make sure the port binding was done successfully.

Is my understanding correct?

Not quite.

1) We as in Ironic recognize that there would need to be changes; it is the method as to how that we would prefer to be explicit and have chosen by the interface.
The underlying behavior needs to be different, and the new network_interface should support both cases 1 and 2 because that interface contains the needed logic for the conductor to determine the appropriate path forward. We should likely also put some guards in to prevent non-smart interfaces from being used in the same configuration, due to the security issues that creates.

3) I believe this would be more of a matter of the network_interface knowing that the machine is powered up, and attempting to assert configuration through Neutron to push configuration to the smartnic.
For a little context about the proposed plan for strategic project governance, you can read Jonathan’s email to the Foundation mailing list: http://lists.openstack.org/pipermail/foundation/2018-August/002617.html This meeting will be recorded and made publicly available. This is part of our plan to introduce bi-weekly OpenStack Foundation community meetings that will cover topics like Foundation strategic area updates, project demonstrations, and other community efforts. We expect the next meeting to take place October 24 and focus on the anticipated StarlingX release. Do you have something you'd like to discuss or share with the community? Please share your ideas with me so that I can schedule them for future meetings. OpenStack Community Meeting - Strategic Area Governance Update Date & Time: October 10, 8:00 PT / 15:00 UTC Zoom Meeting Link: https://zoom.us/j/312447172 Agenda: https://etherpad.openstack.org/p/openstack-community-meeting Thanks! Chris Hoge Strategic Program Manager OpenStack Foundation From doug at doughellmann.com Thu Oct 4 21:54:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 04 Oct 2018 17:54:32 -0400 Subject: [openstack-dev] [all][Searchlight] Always build universal wheels In-Reply-To: <20181004161320.erqxq7erbw3armyt@yuggoth.org> References: <20181004161320.erqxq7erbw3armyt@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-10-04 23:11:22 +0900 (+0900), Trinh Nguyen wrote: > [...] >> Please avoid adding universal wheels to the project setup.cfg. > [...] > > Why would you avoid also adding it to the setup.cfg? The change you > cited is merely to be able to continue building universal wheels for > projects while the setup.cfg files are corrected over time, to > reduce the urgency of doing so. Wheel building happens in more > places than just our CI system, so only fixing it in CI is not a > good long-term strategy. I abandoned a couple of dozen patches submitted today by someone who was not coordinating with the goal champions with a message that said those patches were not needed because I didn't want folks to be dealing with this right now. Teams who want to update their setup.cfg can do so, but my intent is to ensure it is not required in order to produce releases with the automation in the short term. Doug From doug at doughellmann.com Thu Oct 4 21:57:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 04 Oct 2018 17:57:06 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. > > - release-openstack-python http://logs.openstack.org/b4/b4553baa59b1f13a57973f2a3cff9b57acf3d522/release/release-openstack-python/68d0da8/ : POST_FAILURE in 5m 19s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures It looks like there is a bad classifier in the os-log-merger setup.cfg file. HTTPError: 400 Client Error: Invalid value for classifiers. Error: 'Topic:: Utilities' is not a valid choice for this field for url: https://upload.pypi.org/legacy/ I'm not aware of any way to test those values easily before doing an upload. If someone knows of a way, please let me know. 
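In the meantime, a rough manual check is at least possible. An untested sketch (it assumes the legacy list_classifiers endpoint on pypi.org still serves the plain-text classifier list, and that the project declares pbr-style classifiers under [metadata] in setup.cfg):

    import configparser
    import urllib.request

    # Fetch the canonical classifier list, one classifier per line.
    url = 'https://pypi.org/pypi?%3Aaction=list_classifiers'
    valid = set(
        urllib.request.urlopen(url).read().decode('utf-8').splitlines())

    # Read the classifiers declared in the local setup.cfg.
    cfg = configparser.ConfigParser()
    cfg.read('setup.cfg')
    declared = [line.strip() for line in
                cfg.get('metadata', 'classifier', fallback='').splitlines()
                if line.strip()]

    for classifier in declared:
        if classifier not in valid:
            print('invalid classifier: %r' % classifier)

Something along those lines would have flagged 'Topic:: Utilities' before the upload.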
Doug From doug at doughellmann.com Thu Oct 4 22:00:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 04 Oct 2018 18:00:59 -0400 Subject: [openstack-dev] Sphinx testing fun In-Reply-To: References: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com> Message-ID: Stephen Finucane writes: > On Thu, 2018-10-04 at 07:21 -0400, Doug Hellmann wrote: >> Stephen Finucane writes: [snip] >> > Anyway, we can go figure out what's changed here and handle it but this >> > is, at best, going to be a band aid. The fact is 'sphinx_testing' is >> > unmaintained and has been for some time now. The new hotness is >> > 'sphinx.testing' [3], which is provided (with zero documentation) as >> > part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm >> > pretty sure Monty (and a few others?) are vehemently against using in >> > OpenStack. That leaves us with three options: >> > >> > * Take over 'sphinx_testing' and bring it up-to-date. Maintain >> > forever. >> > * Start using 'sphinx.testing' and everything it comes with >> > * Delete any tests that use 'sphinx_testing' and deal with the lack of >> > coverage >> >> Could we change our tests to use pathlib to wrap app.outdir and get the >> same results as before? > > That's what I've done [2], which is kind of based on how I fixed this > in Sphinx. However, this is at best a stopgap. The fact remains that > 'sphinx_testing' is dead and the large changes that Sphinx is > undergoing (2.0 will be Python 3 only, with multiple other fixes) will > make further breakages more likely. Unless we want a repeat of the Mox > situation, I do think we should start thinking about this sooner rather > than later. Yeah, it sounds like we need to deal with the change. It looks like only the os-api-ref repo uses sphinx-testing. How many tests are we talking about having to rewrite/update there? Doug From fungi at yuggoth.org Thu Oct 4 22:07:10 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Oct 2018 22:07:10 +0000 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: Message-ID: <20181004220710.s47ar6pdsiunen2d@yuggoth.org> On 2018-10-04 17:57:06 -0400 (-0400), Doug Hellmann wrote: [...] > HTTPError: 400 Client Error: Invalid value for classifiers. Error: > 'Topic:: Utilities' is not a valid choice for this field for url: > https://upload.pypi.org/legacy/ > > I'm not aware of any way to test those values easily before doing an > upload. If someone knows of a way, please let me know. I started writing a distcheck utility based on some validation code flit uses, but now that twine has a check option there is expressed intent by Dustin to do it there (see description of https://github.com/pypa/twine/pull/395 for details) which seems like a more natural solution anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From chris.friesen at windriver.com Thu Oct 4 22:06:30 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 4 Oct 2018 16:06:30 -0600 Subject: [openstack-dev] [nova] agreement on how to specify options that impact scheduling and configuration Message-ID: While discussing the "Add HPET timer support for x86 guests" blueprint[1] one of the items that came up was how to represent what are essentially flags that impact both scheduling and configuration. 
Eric Fried posted a spec to start a discussion[2], and a number of nova developers met on a hangout to hash it out. This is the result. In this specific scenario the goal was to allow the user to specify that their image required a virtual HPET. For efficient scheduling we wanted this to map to a placement trait, and the virt driver also needed to enable the feature when booting the instance. (This can be generalized to other similar problems, including how to specify scheduling and configuration information for Ironic.) We discussed two primary approaches: The first approach was to specify an arbitrary "key=val" in flavor extra-specs or image properties, which nova would automatically translate into the appropriate placement trait before passing it to placement. Once scheduled to a compute node, the virt driver would look for "key=val" in the flavor/image to determine how to proceed. The second approach was to directly specify the placement trait in the flavor extra-specs or image properties. Once scheduled to a compute node, the virt driver would look for the placement trait in the flavor/image to determine how to proceed. Ultimately, the decision was made to go with the second approach. The result is that it is officially acceptable for virt drivers to key off placement traits specified in the image/flavor in order to turn on/off configuration options for the instance. If we do get down to the virt driver and the trait is set, and the driver for whatever reason determines it's not capable of flipping the switch, it should fail. It should be noted that it only makes sense to use placement traits for things that affect scheduling. If it doesn't affect scheduling, then it can be stored in the flavor extra-specs or image properties separate from the placement traits. Also, this approach only makes sense for simple booleans. Anything requiring more complex configuration will likely need additional extra-spec and/or config and/or unicorn dust. Chris [1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest [2] https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst From doug at doughellmann.com Thu Oct 4 22:21:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 04 Oct 2018 18:21:50 -0400 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: <20181004220710.s47ar6pdsiunen2d@yuggoth.org> References: <20181004220710.s47ar6pdsiunen2d@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-10-04 17:57:06 -0400 (-0400), Doug Hellmann wrote: > [...] >> HTTPError: 400 Client Error: Invalid value for classifiers. Error: >> 'Topic:: Utilities' is not a valid choice for this field for url: >> https://upload.pypi.org/legacy/ >> >> I'm not aware of any way to test those values easily before doing an >> upload. If someone knows of a way, please let me know. > > I started writing a distcheck utility based on some validation code > flit uses, but now that twine has a check option there is expressed > intent by Dustin to do it there (see description of > https://github.com/pypa/twine/pull/395 for details) which seems like > a more natural solution anyway. OK, good. The existing check job for packaging already runs 'twine check' so we should be covered when that feature is merged and released. 
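Until then, anyone who wants the same safety net locally can wire the check into their own release steps. A sketch (it assumes the artifacts have already been built into dist/ and that the installed twine release includes the check command):

    import glob
    import subprocess

    # Run twine's metadata checks over everything about to be uploaded.
    artifacts = glob.glob('dist/*')
    subprocess.check_call(['twine', 'check'] + artifacts)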
Doug From fungi at yuggoth.org Thu Oct 4 22:36:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Oct 2018 22:36:03 +0000 Subject: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed In-Reply-To: References: <20181004220710.s47ar6pdsiunen2d@yuggoth.org> Message-ID: <20181004223602.ekacp7yetjeyntcs@yuggoth.org> On 2018-10-04 18:21:50 -0400 (-0400), Doug Hellmann wrote: > Jeremy Stanley writes: > > > On 2018-10-04 17:57:06 -0400 (-0400), Doug Hellmann wrote: > > [...] > >> HTTPError: 400 Client Error: Invalid value for classifiers. Error: > >> 'Topic:: Utilities' is not a valid choice for this field for url: > >> https://upload.pypi.org/legacy/ > >> > >> I'm not aware of any way to test those values easily before doing an > >> upload. If someone knows of a way, please let me know. > > > > I started writing a distcheck utility based on some validation code > > flit uses, but now that twine has a check option there is expressed > > intent by Dustin to do it there (see description of > > https://github.com/pypa/twine/pull/395 for details) which seems like > > a more natural solution anyway. > > OK, good. The existing check job for packaging already runs 'twine > check' so we should be covered when that feature is merged and released. Well, the TODO comment at https://github.com/pypa/warehouse/blob/55230d8/warehouse/forklift/legacy.py#L341-L343 expresses an interest in seeing Warehouse's (the current PyPI implementation) metadata validation code move to the https://pypi.org/project/packaging/ library, which should be clear to progress now that https://www.python.org/dev/peps/pep-0459/ (which supplanted the withdrawn PEP 426 mentioned in the TODO) has been finalized. Someone still needs to do that work, and then update `twine check` to use it (probably won't be me either, unless I somehow grow some extra spare time). This would be an awesome project for *someone* interested in making new inroads in the Python packaging community. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dangtrinhnt at gmail.com Fri Oct 5 00:47:21 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 5 Oct 2018 09:47:21 +0900 Subject: [openstack-dev] [all][Searchlight] Always build universal wheels In-Reply-To: References: <20181004161320.erqxq7erbw3armyt@yuggoth.org> Message-ID: Thanks Jeremy, Doug for explaining. On Fri, Oct 5, 2018 at 6:54 AM Doug Hellmann wrote: > Jeremy Stanley writes: > > > On 2018-10-04 23:11:22 +0900 (+0900), Trinh Nguyen wrote: > > [...] > >> Please avoid adding universal wheels to the project setup.cfg. > > [...] > > > > Why would you avoid also adding it to the setup.cfg? The change you > > cited is merely to be able to continue building universal wheels for > > projects while the setup.cfg files are corrected over time, to > > reduce the urgency of doing so. Wheel building happens in more > > places than just our CI system, so only fixing it in CI is not a > > good long-term strategy. > > I abandoned a couple of dozen patches submitted today by someone who was > not coordinating with the goal champions with a message that said those > patches were not needed because I didn't want folks to be dealing with > this right now. > > Teams who want to update their setup.cfg can do so, but my intent is to > ensure it is not required in order to produce releases with the > automation in the short term. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Oct 5 03:09:44 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 4 Oct 2018 23:09:44 -0400 Subject: [openstack-dev] [glance] spec-lite: Embed validation data in locations Message-ID: I put together an etherpad outlining the changes to Iain's spec-lite [0] that we discussed at today's glance meeting: https://etherpad.openstack.org/p/glance-spec-lite-stein-locations-with-checksum It's probably best to leave comments on the etherpad. cheers, brian [0] https://review.openstack.org/#/c/597648/ From akekane at redhat.com Fri Oct 5 04:11:51 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 5 Oct 2018 09:41:51 +0530 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Hi Clark, Thank you for the inputs. I have verified the logs and found that mostly the tests related to the web-download image import method are failing. Now in this test [1] we are trying to download a file from 'https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip' in glance. Here we are assuming the image will be downloaded and active within 20 seconds, and if not it will be marked as failed. Now this test never fails in a local environment, but there might be a problem connecting to the remote server while this test is executed in Zuul jobs. Do you have any alternative idea how we can test this scenario? It is very hard to reproduce in a local environment. Thanks & Best Regards, Abhishek Kekane On Thu, Oct 4, 2018 at 7:43 PM Clark Boylan wrote: > On Thu, Oct 4, 2018, at 12:16 AM, Abhishek Kekane wrote: > > Hi, > > Could you please point out some of the glance functional tests which are > > failing and causing these resets? > > I would like to put some effort towards fixing those. > > http://status.openstack.org/elastic-recheck/data/integrated_gate.html is > a good place to start. That shows you a list of tests that failed in the > OpenStack Integrated gate that elastic-recheck could not identify the > failure for, including those for several functional jobs. > > If you'd like to start looking at identified bugs first then > http://status.openstack.org/elastic-recheck/gate.html shows identified > failures that happened in the gate. 
> > For glance functional jobs the first link points to: > > http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional/fc13eca/ > > http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional/b7c487c/ > > http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional-py35/b166313/ > > http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional-py35/ce262ab/ > > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri Oct 5 04:44:31 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 5 Oct 2018 14:44:31 +1000 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> Message-ID: <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> Hey Neutron folks, I'm just reiterating the request. Thanks On 20/06/18 11:34, Gilles Dubreuil wrote: > Could someone from the Neutron release group rebase feature/graphql > branch against master/HEAD branch please? > > Regards, > Gilles > > From mtreinish at kortar.org Fri Oct 5 04:50:07 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Fri, 05 Oct 2018 00:50:07 -0400 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: On October 5, 2018 12:11:51 AM EDT, Abhishek Kekane wrote: >Hi Clark, > >Thank you for the inputs. I have verified the logs and found that >mostly >image import web-download import method related tests are failing. >Now in this test [1] we are trying to download a file from ' >https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip' >in glance. Here we are assuming image will be downloaded and active >within >20 seconds of time and if not it will be marked as failed. Now this >test >never fails in local environment but their might be a problem of >connecting >to remote while this test is executed in zuul jobs. > >Do you have any alternative idea how we can test this scenario, as it >is >very hard to reproduce this in local environment. > External networking will always be unreliable from the ci environment, nothing is 100% reliable and just given the sheer number of jobs we execute there will be an appreciable number of failures just from that. That being said this exact problem you've described is one we fixed in devstack/tempest over 5 years ago: https://bugs.launchpad.net/tempest/+bug/1190623 It'd be nice if we didn't keep repeating problems. The solution for that bug is likely to be the same thing here, and not relying on pulling something from the external network in the test. Just use something else hosted on the local apache httpd of the test node and use that as the url to import in the test. 
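Even without apache, the test could stand up its own throwaway server so that the download never leaves the node. A sketch (the names are illustrative, not glance's actual test helpers):

    import http.server
    import threading

    # Serve the current working directory (where the test's data file
    # lives) on an ephemeral localhost port.
    server = http.server.HTTPServer(
        ('127.0.0.1', 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Use this as the web-download source instead of an external URL.
    url = 'http://127.0.0.1:%d/test-image.png' % server.server_port
    # ... run the import with `url`, then call server.shutdown() in
    # the test cleanup.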
-Matt Treinish > > >On Thu, Oct 4, 2018 at 7:43 PM Clark Boylan >wrote: > >> On Thu, Oct 4, 2018, at 12:16 AM, Abhishek Kekane wrote: >> > Hi, >> > Could you please point out some of the glance functional tests >which are >> > failing and causing this resets? >> > I will like to put some efforts towards fixing those. >> >> http://status.openstack.org/elastic-recheck/data/integrated_gate.html >is >> a good place to start. That shows you a list of tests that failed in >the >> OpenStack Integrated gate that elastic-recheck could not identify the >> failure for including those for several functional jobs. >> >> If you'd like to start looking at identified bugs first then >> http://status.openstack.org/elastic-recheck/gate.html shows >identified >> failures that happened in the gate. >> >> For glance functional jobs the first link points to: >> >> >http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional/fc13eca/ >> >> >http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional/b7c487c/ >> >> >http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional-py35/b166313/ >> >> >http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional-py35/ce262ab/ >> >> Clark >> >> >__________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From akekane at redhat.com Fri Oct 5 05:28:26 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 5 Oct 2018 10:58:26 +0530 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Hi Matt, Thanks for the input, I guess I should use ' http://git.openstack.org/static/openstack.png' which will definitely work. Clark, Matt, Kindly let me know your opinion about the same. Thanks & Best Regards, Abhishek Kekane On Fri, Oct 5, 2018 at 10:20 AM Matthew Treinish wrote: > > > On October 5, 2018 12:11:51 AM EDT, Abhishek Kekane > wrote: > >Hi Clark, > > > >Thank you for the inputs. I have verified the logs and found that > >mostly > >image import web-download import method related tests are failing. > >Now in this test [1] we are trying to download a file from ' > > > https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip > ' > >in glance. Here we are assuming image will be downloaded and active > >within > >20 seconds of time and if not it will be marked as failed. Now this > >test > >never fails in local environment but their might be a problem of > >connecting > >to remote while this test is executed in zuul jobs. > > > >Do you have any alternative idea how we can test this scenario, as it > >is > >very hard to reproduce this in local environment. > > > > External networking will always be unreliable from the ci environment, > nothing is 100% reliable and just given the sheer number of jobs we execute > there will be an appreciable number of failures just from that. 
That being > said this exact problem you've described is one we fixed in > devstack/tempest over 5 years ago: > > https://bugs.launchpad.net/tempest/+bug/1190623 > > It'd be nice if we didn't keep repeating problems. The solution for that > bug is likely to be the same thing here, and not relying on pulling > something from the external network in the test. Just use something else > hosted on the local apache httpd of the test node and use that as the url > to import in the test. > > -Matt Treinish > > > > > > >On Thu, Oct 4, 2018 at 7:43 PM Clark Boylan > >wrote: > > > >> On Thu, Oct 4, 2018, at 12:16 AM, Abhishek Kekane wrote: > >> > Hi, > >> > Could you please point out some of the glance functional tests > >which are > >> > failing and causing this resets? > >> > I will like to put some efforts towards fixing those. > >> > >> http://status.openstack.org/elastic-recheck/data/integrated_gate.html > >is > >> a good place to start. That shows you a list of tests that failed in > >the > >> OpenStack Integrated gate that elastic-recheck could not identify the > >> failure for including those for several functional jobs. > >> > >> If you'd like to start looking at identified bugs first then > >> http://status.openstack.org/elastic-recheck/gate.html shows > >identified > >> failures that happened in the gate. > >> > >> For glance functional jobs the first link points to: > >> > >> > > > http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional/fc13eca/ > >> > >> > > > http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional/b7c487c/ > >> > >> > > > http://logs.openstack.org/99/595299/1/gate/openstack-tox-functional-py35/b166313/ > >> > >> > > > http://logs.openstack.org/44/569644/3/gate/openstack-tox-functional-py35/ce262ab/ > >> > >> Clark > >> > >> > >__________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri Oct 5 06:15:26 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 5 Oct 2018 16:15:26 +1000 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> Message-ID: Hi Edison, Sorry for the delay. Please see inline... Cheers, Gilles On 07/09/18 12:03, Edison Xiang wrote: > Hey gilles, > > Thanks your introduction for GraphQL and Relay. > > > GraphQL and OpenAPI have a different feature scope and both have pros and cons. > > I totally agree with you. They can work together. > Right now, I think we have no more work to adapt OpenStack APIs for > Open API. > Firstly we could sort out Open API schemas base on the current > OpenStack APIs. > and then we can discuss how to use it. I think a big question is going to be about the effort required to bring OpenStack API to be Open API v3.0 compliant. 
This is challenging because of the various projects involved and the need to validate a new solution across all of them. The best approach is likely to first demonstrate that a new solution is viable and then eventually bring it to be accepted globally. Also, because we don't have unlimited resources, I doubt we're going to be able to bring both Open API and GraphQL to the table(s). There is no doubt that OpenStack APIs can benefit from features such as schema definitions, self documentation and better performance, especially if they are built-in or derived from a standard. Meanwhile, a practical example shows those features in action (for the skeptical) but also demonstrates how to do it, which clarifies the effort involved along with the pros and cons. I want to make clear that I'm not against OpenAPI; I was actually keen to get it on board because of those benefits, and it will also help compare the solutions (Open API, GraphQL). So, what do you think about an Open API proof of concept with Neutron? > About the micro version, we discuss with your team mate dmitry in > another email [1] Obviously the micro version is a point of contention. My take on this is that consuming microversions has proven harder than developing them. The beauty of GraphQL is that there is no need to deal with versions at all. New fields appear when needed and old ones are marked deprecated. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134202.html > > Best Regards, > Edison Xiang > > On Tue, Sep 4, 2018 at 8:37 AM Gilles Dubreuil > wrote: > > > > On 30/08/18 13:56, Edison Xiang wrote: >> Hi Ed Leafe, >> >> Thanks your reply. >> Open API defines a standard interface description for REST APIs. >> Open API 3.0 can make a description(schema) for current OpenStack >> REST API. >> It will not change current OpenStack API. >> I am not a GraphQL expert. I look up something about GraphQL. >> In my understanding, GraphQL will get current OpenAPI together >> and provide another APIs based on Relay, > > Not sure what you mean here, could you please develop? > > >> and Open API is used to describe REST APIs and GraphQL is used to >> describe Relay APIs. > > There is no such thing as "Relay APIs". > GraphQL provides a de-facto API Schema and Relay provides > extensions on top to facilitate re-fetching, paging and more. > GraphQL and OpenAPI have a different feature scope and both have > pros and cons. > GraphQL is delivering an API without using REST verbs, as all requests > are done using POST and its data. > Beyond that, what would be great (and it will ultimately come) is > to have both of them working together. > > The idea of the GraphQL Proof of Concept is to see what it can bring > and at what cost, such as effort and trade-offs. > And to compare this against the effort to adapt OpenStack APIs to > use Open API. > > BTW what's the status of Open API 3.0 in regards of Microversion? > > Regards, > Gilles > >> >> Best Regards, >> Edison Xiang >> >> On Wed, Aug 29, 2018 at 9:33 PM Ed Leafe > > wrote: >> >> On Aug 29, 2018, at 1:36 AM, Edison Xiang >> > wrote: >> > >> > As we know, Open API 3.0 was released on July, 2017, it is >> about one year ago. >> > Open API 3.0 support some new features like anyof, oneof >> and allof than Open API 2.0(Swagger 2.0). >> > Now OpenStack projects do not support Open API. >> > Also I found some old emails in the Mail List about >> supporting Open API 2.0 in OpenStack. >> >> There is currently an effort by some developers to >> investigate the possibility of using GraphQL with OpenStack >> APIs. 
What would Open API 3.0 provide that GraphQL would not? >> I’m asking because I don’t know enough about Open API to >> compare them. >> >> >> -- Ed Leafe >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email:gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Oct 5 08:02:45 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 05 Oct 2018 17:02:45 +0900 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> Message-ID: <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> ---- On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley wrote ---- > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: > [...] > > TC members, please reply to this thread and indicate if you would > > find meeting at 1300 UTC on the first Thursday of every month > > acceptable, and of course include any other comments you might > > have (including alternate times). > > This time is acceptable to me. As long as we ensure that community > feedback continues more frequently in IRC and on the ML (for example > by making it clear that this meeting is expressly *not* for that) > then I'm fine with resuming formal meetings. +1. Time works fine for me, Thanks for considering the APAC TZ. I agree that we should keep encouraging the usual discussion in existing office hours, IRC or ML. I will be definitely able to attend other 2 office hours (Tuesday and Wednesday) which are suitable for my TZ. -gmann > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Fri Oct 5 08:55:32 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 5 Oct 2018 10:55:32 +0200 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> Message-ID: Ghanshyam Mann wrote: > ---- On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley wrote ---- > > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: > > [...] 
> > > TC members, please reply to this thread and indicate if you would > > > find meeting at 1300 UTC on the first Thursday of every month > > > acceptable, and of course include any other comments you might > > > have (including alternate times). > > > > This time is acceptable to me. As long as we ensure that community > > feedback continues more frequently in IRC and on the ML (for example > > by making it clear that this meeting is expressly *not* for that) > > then I'm fine with resuming formal meetings. > > +1. Time works fine for me, Thanks for considering the APAC TZ. > > I agree that we should keep encouraging the usual discussion in existing office hours, IRC or ML. I will be definitely able to attend other 2 office hours (Tuesday and Wednesday) which are suitable for my TZ. 1300 UTC is obviously good for me, but once we are off DST that will mean 5am for our Pacific Time people (do we have any left ?). Maybe 1400 UTC would be a better trade-off? Regarding frequency, I agree with mnaser that once per month might be too rare. That means only 5-ish meetings for a given 6-month membership. But that can work if we use the meeting as a formal progress status checkpoint, rather than a way to discuss complex topics. -- Thierry Carrez (ttx) From cdent+os at anticdent.org Fri Oct 5 08:59:27 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 5 Oct 2018 09:59:27 +0100 (BST) Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: On Thu, 4 Oct 2018, Doug Hellmann wrote: > TC members, please reply to this thread and indicate if you would find > meeting at 1300 UTC on the first Thursday of every month acceptable, and > of course include any other comments you might have (including alternate > times). +1 Also, if we're going to set aside a time for a semi-formal meeting, I hope we will have some form of agenda and minutes, with a fairly clear process for setting that agenda as well as a process for making sure that the fast and/or rude typers do not dominate the discussion during the meetings, as they used to back in the day when there were weekly meetings. The "raising hands" thing that came along towards the end sort of worked, so a variant on that may be sufficient. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dangtrinhnt at gmail.com Fri Oct 5 09:30:21 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 5 Oct 2018 18:30:21 +0900 Subject: [openstack-dev] [release][searchlight] What should I do with the missing releases? Message-ID: Dear release team, One question that comes to my mind while preparing for the stein-1 release of Searchlight is: what should we do with the missing releases (i.e. Rocky)? Can I just ignore it or do I have to create a branch for it? Thanks and regards, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From amal.kammoun.2 at gmail.com Fri Oct 5 11:16:04 2018 From: amal.kammoun.2 at gmail.com (amal kammoun) Date: Fri, 5 Oct 2018 13:16:04 +0200 Subject: [openstack-dev] Monasca agent problem Message-ID: Hello, I have an issue with the monasca Agent. In fact, I installed monasca with Openstack using devstack. I now want to monitor the instances deployed using Openstack. For that I installed the monasca agent on each instance following this link: https://github.com/openstack/monasca-agent/blob/master/docs/Agent.md The problem is that I cannot define alarms for the concerned instance. 
Example on my agent: [image: image.png] On my monitoring system I found the alarm definition but it is not activated. Also, the instance where the agent is running is not declared as a server in the Monasca servers list. Regards, Amal Kammoun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 7090 bytes Desc: not available URL: From doug at doughellmann.com Fri Oct 5 11:20:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:20:01 -0400 Subject: [openstack-dev] [all][Searchlight] Always build universal wheels In-Reply-To: References: <20181004161320.erqxq7erbw3armyt@yuggoth.org> Message-ID: Trinh Nguyen writes: > Thanks Jeremy, Doug for explaining. > > On Fri, Oct 5, 2018 at 6:54 AM Doug Hellmann wrote: > >> Jeremy Stanley writes: >> >> > On 2018-10-04 23:11:22 +0900 (+0900), Trinh Nguyen wrote: >> > [...] >> >> Please avoid adding universal wheels to the project setup.cfg. >> > [...] >> > >> > Why would you avoid also adding it to the setup.cfg? The change you >> > cited is merely to be able to continue building universal wheels for >> > projects while the setup.cfg files are corrected over time, to >> > reduce the urgency of doing so. Wheel building happens in more >> > places than just our CI system, so only fixing it in CI is not a >> > good long-term strategy. >> >> I abandoned a couple of dozen patches submitted today by someone who was >> not coordinating with the goal champions with a message that said those >> patches were not needed because I didn't want folks to be dealing with >> this right now. >> >> Teams who want to update their setup.cfg can do so, but my intent is to >> ensure it is not required in order to produce releases with the >> automation in the short term. 
If someone is building their own wheels for internal consumption, they can easily choose to keep the version-specific packages, or add the --universal flag like we're doing in the CI job. So, I think this all means we can leave the setup.cfg files as they are and not worry about updating the wheel format flag. Doug From doug at doughellmann.com Fri Oct 5 11:27:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:27:32 -0400 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Abhishek Kekane writes: > Hi Matt, > > Thanks for the input, I guess I should use ' > http://git.openstack.org/static/openstack.png' which will definitely work. > Clark, Matt, Kindly let me know your opinion about the same. That URL would not be on the local node running the test, and would eventually exhibit the same problems. In fact we have seen issues cloning git repositories as part of the tests in the past. You need to use a localhost URL to ensure that the download doesn't have to go off of the node. That may mean placing something into the directory where Apache is serving files as part of the test setup. Doug From doug at doughellmann.com Fri Oct 5 11:30:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:30:09 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> Message-ID: Gilles Dubreuil writes: >> About the micro version, we discuss with your team mate dmitry in >> another email [1] > > Obviously micro version is a point of contention. > My take on this is because consuming them has been proven harder than > developing them. > The beauty of GraphQL is that there is no need to deal with version at all. > New fields appears when needed and old one are marked deprecated. How does someone using GraphQL to use a cloud know when a specific field is available? How can they differentiate what is supported in one cloud from what is supported in another, running a different version of the same service? Doug From doug at doughellmann.com Fri Oct 5 11:38:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:38:19 -0400 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> Message-ID: Thierry Carrez writes: > Ghanshyam Mann wrote: >> ---- On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley wrote ---- >> > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: >> > [...] >> > > TC members, please reply to this thread and indicate if you would >> > > find meeting at 1300 UTC on the first Thursday of every month >> > > acceptable, and of course include any other comments you might >> > > have (including alternate times). >> > >> > This time is acceptable to me. As long as we ensure that community >> > feedback continues more frequently in IRC and on the ML (for example >> > by making it clear that this meeting is expressly *not* for that) >> > then I'm fine with resuming formal meetings. >> >> +1. Time works fine for me, Thanks for considering the APAC TZ. >> >> I agree that we should keep encouraging the usual discussion in existing office hours, IRC or ML. 
I will be definitely able to attend other 2 office hours (Tuesday and Wednesday) which are suitable for my TZ. > > 1300 UTC is obviously good for me, but once we are off DST that will > mean 5am for our Pacific Time people (do we have any left ?). > > Maybe 1400 UTC would be a better trade-off? Julia is out west, but I think not all the way to PST. > Regarding frequency, I agree with mnaser that once per month might be > too rare. That means only 5-ish meetings for a given a 6-month > membership. But that can work if we use the meeting as a formal progress > status checkpoint, rather than a way to discuss complex topics. I think we can definitely manage the agenda to minimize the number of complex discussions. If that proves to be too hard, I wouldn't mind meeting more often, but there does seem to be a lot of support for preferring other venues for those conversations. Doug From doug at doughellmann.com Fri Oct 5 11:40:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:40:00 -0400 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: Chris Dent writes: > On Thu, 4 Oct 2018, Doug Hellmann wrote: > >> TC members, please reply to this thread and indicate if you would find >> meeting at 1300 UTC on the first Thursday of every month acceptable, and >> of course include any other comments you might have (including alternate >> times). > > +1 > > Also, if we're going to set aside a time for a semi-formal meeting, I > hope we will have some form of agenda and minutes, with a fairly > clear process for setting that agenda as well as a process for I had in mind "email the chair your topic suggestion" and then "the chair emails the agenda to openstack-dev tagged [tc] a bit in advance of the meeting". There would also probably be some standing topics, like updates for ongoing projects. Does that work for everyone? Doug > making sure that the fast and/or rude typers do not dominate the > discussion during the meetings, as they used to back in the day when > there were weekly meetings. > > The "raising hands" thing that came along towards the end sort of > worked, so a variant on that may be sufficient. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Oct 5 11:42:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:42:03 -0400 Subject: [openstack-dev] [release][searchlight] What should I do with the missing releases? In-Reply-To: References: Message-ID: Trinh Nguyen writes: > Dear release team, > > One thing comes up in my mind when preparing for the stein-1 release of > Searchlight that is what should we do with the missing releases (i.e. > Rocky)? Can I just ignore it or do I have to create a branch for it? There was no rocky release, so I don't really see any reason to create the branch. There isn't anything to maintain. 
Doug From fungi at yuggoth.org Fri Oct 5 11:46:32 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 5 Oct 2018 11:46:32 +0000 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: <20181005114631.2j5uiooysany2zoi@yuggoth.org> On 2018-10-05 07:40:00 -0400 (-0400), Doug Hellmann wrote: [...] > I had in mind "email the chair your topic suggestion" and then "the > chair emails the agenda to openstack-dev tagged [tc] a bit in advance of > the meeting". There would also probably be some standing topics, like > updates for ongoing projects. > > Does that work for everyone? [...] Seems fine to me. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Oct 5 11:53:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 5 Oct 2018 11:53:41 +0000 Subject: [openstack-dev] [all][Searchlight] Always build universal wheels In-Reply-To: References: <20181004161320.erqxq7erbw3armyt@yuggoth.org> Message-ID: <20181005115340.fm3j5ajd4avwpmf3@yuggoth.org> On 2018-10-05 07:20:01 -0400 (-0400), Doug Hellmann wrote: [...] > So, I think this all means we can leave the setup.cfg files as > they are and not worry about updating the wheel format flag. I continue to agree that, because of the reasons you state, it is not urgent to update setup.cfg (either to start supporting universal wheels or to follow the deprecation/transition on the section name in the latest wheel release), at least for projects relying on the OpenStack-specific release jobs. It is still technically more correct and I don't think we should forbid individual teams from also updating the setup.cfg files in their repositories should they choose to do so. That's all I've been trying to say. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jim at jimrollenhagen.com Fri Oct 5 11:54:07 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 5 Oct 2018 07:54:07 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> Message-ID: GraphQL has introspection features that allow clients to pull the schema (types, queries, mutations, etc): https://graphql.org/learn/introspection/ That said, it seems like using this in a client like OpenStackSDK would get messy quickly. Instead of asking for which versions are supported, you'd have to fetch the schema, map it to actual features somehow, and adjust queries based on this info. I guess there might be a middleground where we could fetch the REST API version, and know from that what GraphQL queries can be made. // jim On Fri, Oct 5, 2018 at 7:30 AM Doug Hellmann wrote: > Gilles Dubreuil writes: > > >> About the micro version, we discuss with your team mate dmitry in > >> another email [1] > > > > Obviously micro version is a point of contention. > > My take on this is because consuming them has been proven harder than > > developing them. > > The beauty of GraphQL is that there is no need to deal with version at > all. > > New fields appears when needed and old one are marked deprecated. > > How does someone using GraphQL to use a cloud know when a specific field > is available? 
How can they differentiate what is supported in one cloud > from what is supported in another, running a different version of the > same service? > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Oct 5 11:55:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 07:55:51 -0400 Subject: [openstack-dev] [Distutils] pip 18.1 has been released! In-Reply-To: References: Message-ID: Watch for changes in pip's behavior. - Doug Pradyun Gedam writes: > On behalf of the PyPA, I am pleased to announce that pip 18.1 has just > been released. > > To install pip 18.1, you can run:: > > python -m pip install --upgrade pip > > or use get-pip as described in > https://pip.pypa.io/en/latest/installing. Note that > if you are using a version of pip supplied by your distribution > vendor, vendor-supplied > upgrades will be available in due course. > > The highlights of this release are: > > - Python 3.7 is now supported > - Dependency Links support is now scheduled for removal in pip 19.0 > (the next pip > release, scheduled in January 2019). > - Plaform specific options can now be used with the --target option, > to enable certain > workflows. > - Much more helpful error messages on invalid configuration files > - Many bug fixes and minor improvements > > Thanks to everyone who put so much effort into the new release. Many of the > contributions came from community members, whether in the form of code, > participation in design discussions and/or bug reports. The pip development > team is extremely grateful to everyone in the community for their contributions. > > Thanks, > Pradyun > -- > Distutils-SIG mailing list -- distutils-sig at python.org > To unsubscribe send an email to distutils-sig-leave at python.org > https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/ > Message archived at https://mail.python.org/mm3/archives/list/distutils-sig at python.org/message/YBYAYIXJ2WUUYCJLM7EWMQETJOW5W6ZZ/ From nickgrwl3 at gmail.com Fri Oct 5 11:56:06 2018 From: nickgrwl3 at gmail.com (Niket Agrawal) Date: Fri, 5 Oct 2018 13:56:06 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> Message-ID: Hi, Thanks for the help. I am trying to run a custom Ryu app from the nova compute node and have all the openvswitches connected to this new controller. However, to be able to run this new app, I have to first stop the existing neutron openvswitch agents in the same node as they run Ryu app (integrated in Openstack) by default. Ryu in Openstack provides basic functionalities like L2 switching but does not support launching a custom app at the same time. I'd like to have a single instance of Ryu controller control all the openvswtich instances rather than having openvswitch agents in each node managing the openvswitches separately. For this, I'll probably have to migrate the existing functionality provided by Ryu app to this new app of mine. 
Could you share some suggestions or are you aware of any previous work done towards this, that I can read about? Regards, Niket On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski wrote: > Hi, > > Code of app is in > https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py > and classes for specific bridge types are in > https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native > > > Wiadomość napisana przez Niket Agrawal w dniu > 27.09.2018, o godz. 00:08: > > > > Hi, > > > > Thanks for your reply. Is there a way to access the code that is running > in the app to see what is the logic implemented in the app? > > > > Regards, > > Niket > > > > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski > wrote: > > Hi, > > > > > Wiadomość napisana przez Niket Agrawal w dniu > 26.09.2018, o godz. 18:11: > > > > > > Hello, > > > > > > I have a question regarding the Ryu integration in Openstack. By > default, the openvswitch bridges (br-int, br-tun and br-ex) are registered > to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl > get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute > node. However there is a different instance of the same Ryu controller > running on the neutron gateway as well and the three openvswitch bridges > (br-int, br-tun and br-ex) are registered to this instance of Ryu > controller. If I stop neutron-openvswitch agent on the nova compute node, > the bridges there are no longer connected to the controller, but the > bridges in the neutron gateway continue to remain connected to the > controller. Only when I stop the neutron openvswitch agent in the neutron > gateway as well, the bridges there get disconnected. > > > > > > I'm unable to find where in the Openstack code I can access this > implementation, because I intend to make a few tweaks to this architecture > which is present currently. Also, I'd like to know which app is the Ryu SDN > controller running by default at the moment. I feel the information in the > code can help me find it too. > > > > Ryu app is started by neutron-openvswitch-agent in: > https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 > > Is it what You are looking for? 
> > > > > > Regards, > > > Niket > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > — > Slawek Kaplonski > Senior software engineer > Red Hat > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Fri Oct 5 12:36:09 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 5 Oct 2018 14:36:09 +0200 Subject: [openstack-dev] [puppet] Heads up for changes causing restarts! Message-ID: <7e95fa90-448d-cdd2-5541-2187f7f81f6d@binero.se> Hello, Due to bugs and fixes that have been needed, we are probably going to merge some changes to Puppet modules which will cause a refresh of their services, meaning they will be restarted. If you are following the stable branches (stable/rocky in this case) and not using tagged releases when you are pulling in the Puppet OpenStack modules, we want to alert you that restarts of services might happen if you deploy new changes. These changes, for example, are bug fixes which are probably going to cause a refresh and restart of the Horizon and Cinder services [1] [2] [3]. Feel free to reach out to us at #puppet-openstack if you have any concerns. [1] https://review.openstack.org/#/c/608244/ [2] https://review.openstack.org/#/c/607964/ (if backported to Rocky later on) [3] https://review.openstack.org/#/c/605071/ Best regards Tobias From juliaashleykreger at gmail.com Fri Oct 5 13:16:36 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 5 Oct 2018 07:16:36 -0600 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> Message-ID: +1 to bringing back formal meetings. A few replies below regarding time/agenda. On Fri, Oct 5, 2018 at 5:38 AM Doug Hellmann wrote: > Thierry Carrez writes: > > > Ghanshyam Mann wrote: > >> ---- On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley < fungi at yuggoth.org> wrote ---- > >> > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: > >> > [...] > >> > > TC members, please reply to this thread and indicate if you would > >> > > find meeting at 1300 UTC on the first Thursday of every month > >> > > acceptable, and of course include any other comments you might > >> > > have (including alternate times).
> >> > > >> > This time is acceptable to me. As long as we ensure that community > >> > feedback continues more frequently in IRC and on the ML (for example > >> > by making it clear that this meeting is expressly *not* for that) > >> > then I'm fine with resuming formal meetings. > >> > >> +1. Time works fine for me. Thanks for considering the APAC TZ. > >> > >> I agree that we should keep encouraging the usual discussion in existing office hours, IRC or ML. I will definitely be able to attend the other 2 office hours (Tuesday and Wednesday), which are suitable for my TZ. > > > > 1300 UTC is obviously good for me, but once we are off DST that will > > mean 5am for our Pacific Time people (do we have any left?). > > > > Maybe 1400 UTC would be a better trade-off? > > Julia is out west, but I think not all the way to PST. > My home time zone is PST. It would be awesome if we could hold the meeting an hour later, but I can get up early in the morning once a month. If we choose to meet more regularly, then a one hour later start would be more appreciated if it is not too much of an inconvenience to APAC TC members. That being said, I do typically get up early, just not 0500 early that often. > > > Regarding frequency, I agree with mnaser that once per month might be > > too rare. That means only 5-ish meetings for a given 6-month > > membership. But that can work if we use the meeting as a formal progress > > status checkpoint, rather than a way to discuss complex topics. > > I think we can definitely manage the agenda to minimize the number of > complex discussions. If that proves to be too hard, I wouldn't mind > meeting more often, but there does seem to be a lot of support for > preferring other venues for those conversations. > > +1 I think there is a point where we need to recognize there is a time and place for everything, and some of those long-running complex conversations might not be well suited for what would essentially be "review business status" meetings. If we have any clue that something is going to be a very long and drawn-out discussion, then I feel like we should make an effort to schedule it individually. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Oct 5 13:24:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 5 Oct 2018 09:24:39 -0400 Subject: [openstack-dev] [nova] [ironic] agreement on how to specify options that impact scheduling and configuration In-Reply-To: References: Message-ID: Added [ironic] topic. On 10/04/2018 06:06 PM, Chris Friesen wrote: > While discussing the "Add HPET timer support for x86 guests" > blueprint[1] one of the items that came up was how to represent what are > essentially flags that impact both scheduling and configuration.  Eric > Fried posted a spec to start a discussion[2], and a number of nova > developers met on a hangout to hash it out.  This is the result. > > In this specific scenario the goal was to allow the user to specify that > their image required a virtual HPET.  For efficient scheduling we wanted > this to map to a placement trait, and the virt driver also needed to > enable the feature when booting the instance.  (This can be generalized > to other similar problems, including how to specify scheduling and > configuration information for Ironic.)
> > We discussed two primary approaches: > > The first approach was to specify an arbitrary "key=val" in flavor > extra-specs or image properties, which nova would automatically > translate into the appropriate placement trait before passing it to > placement.  Once scheduled to a compute node, the virt driver would look > for "key=val" in the flavor/image to determine how to proceed. > > The second approach was to directly specify the placement trait in the > flavor extra-specs or image properties.  Once scheduled to a compute > node, the virt driver would look for the placement trait in the > flavor/image to determine how to proceed. > > Ultimately, the decision was made to go with the second approach.  The > result is that it is officially acceptable for virt drivers to key off > placement traits specified in the image/flavor in order to turn on/off > configuration options for the instance.  If we do get down to the virt > driver and the trait is set, and the driver for whatever reason > determines it's not capable of flipping the switch, it should fail. Ironicers, pay attention to the above! :) It's a green light from Nova to use the traits list contained in the flavor extra specs and image metadata when (pre-)configuring an instance. > It should be noted that it only makes sense to use placement traits for > things that affect scheduling.  If it doesn't affect scheduling, then it > can be stored in the flavor extra-specs or image properties separate > from the placement traits.  Also, this approach only makes sense for > simple booleans.  Anything requiring more complex configuration will > likely need additional extra-spec and/or config and/or unicorn dust. Ironicers, also pay close attention to the advice above. Things that are not "scheduleable" -- in other words, things that don't filter the list of hosts that a workload can land on -- should not go in traits. Finally, here's the HPET os-traits patch. Reviews welcome (it's a tiny patch): https://review.openstack.org/608258 Best, -jay > Chris > > [1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest > [2] > https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Fri Oct 5 13:31:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 5 Oct 2018 14:31:05 +0100 (BST) Subject: [openstack-dev] [placement] update 18-40 Message-ID: HTML: https://anticdent.org/placement-update-18-40.html

Here's this week's placement update. We remain focused on specs and pressing issues with extraction, mostly because until the extraction is "done" in some form doing much other work is a bit premature.

# Most Important

There have been several discussions recently about what to do with options that impact both scheduling and configuration. Some of this was in the thread about [intended purposes of traits](http://lists.openstack.org/pipermail/openstack-dev/2018-October/thread.html#135301), but more recently there was discussion on how to support guests that want an HPET.
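(To make the hangout's conclusion concrete: the agreed pattern is that the capability is requested as a required trait, which placement uses to filter hosts and which the virt driver then keys off when configuring the guest. A hedged sketch follows; the trait name COMPUTE_TIME_HPET is an assumption based on the os-traits patch linked above, and the helper is illustrative rather than actual nova code.)

    # Hedged sketch of "approach two": one trait drives both scheduling
    # (via the flavor extra spec) and guest configuration.
    REQUIRED = "required"
    HPET_TRAIT = "trait:COMPUTE_TIME_HPET"  # assumed trait name

    def wants_hpet(flavor_extra_specs, image_properties):
        """True if the flavor or image requires the HPET trait."""
        return (flavor_extra_specs.get(HPET_TRAIT) == REQUIRED
                or image_properties.get(HPET_TRAIT) == REQUIRED)

    # A flavor asking placement to pick only hosts exposing the trait;
    # a virt driver could call wants_hpet() to decide whether to enable
    # the device, and should fail if it cannot.
    extra_specs = {"trait:COMPUTE_TIME_HPET": "required"}
    assert wants_hpet(extra_specs, {})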
Chris Friesen [summarized a hangout](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135446.html) that happened yesterday that will presumably be reflected in an [in-progress spec](https://review.openstack.org/#/c/607989/1).

The work to get [grenade upgrading to placement](https://review.openstack.org/#/c/604454/) is very close. After several iterations of tweaking, the grenade jobs are now passing. There are still some adjustments to get devstack jobs working, but the way is relatively clear. More on this in "extraction" below, but the reason this is most important is that this stuff allows us to do proper integration and upgrade testing, without which it is hard to have confidence.

# What's Changed

In both placement and nova, placement is no longer using `get_legacy_facade()`. This will remove some annoying deprecation warnings.

The nova->placement database migration script for MySQL has merged. The postgresql version is still [up for review](https://review.openstack.org/#/c/604028/).

Consumer generations are now being used in some allocation handling in nova.

# Questions

* What should we do about nova calling the placement db, like in [nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416) and [nova-status](https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L254).
* Should we consider starting a new extraction etherpad? The [old one](https://etherpad.openstack.org/p/placement-extract-stein-3) has become a bit noisy and out of date.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17. -1.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 8. -1.

# Specs

Many of these specs don't seem to be getting much attention. Can the dead ones be abandoned?

* Account for host agg allocation ratio in placement (Still in rocky/)
* Add subtree filter for GET /resource_providers
* Resource provider - request group mapping in allocation candidate
* VMware: place instances on resource pool (still in rocky/)
* Standardize CPU resource tracking
* Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float)
* List resource providers having inventory
* Bi-directional enforcement of traits
* allow transferring ownership of instance
* Modelling passthrough devices for report to placement
* Propose counting quota usage from placement and API database (A bit out of date but may be worth resurrecting)
* Spec: allocation candidates in tree
* [WIP] generic device discovery policy
* Nova Cyborg interaction specification.
* supporting virtual NVDIMM devices
* Spec: Support filtering by forbidden aggregate
* Proposes NUMA topology with RPs
* Support initial allocation ratios
* Count quota based on resource class
* WIP: High Precision Event Timer (HPET) on x86 guests
* Add support for emulated virtual TPM
* Limit instance create max_count (spec) (has some concurrency issues related to placement)
* Adds spec for instance live resize

So many specs.

# Main Themes

## Making Nested Useful

Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. This is creeping ahead. There is plenty of discussion going along nearby with regards to various ways they are being used, notably GPUs.

* I feel like I'm missing some things in this area. Please let me know if there are others.

This is related:

* Pass allocations to virt drivers when resizing

## Extraction

There continue to be three main tasks in regard to placement extraction:

1. upgrade and integration testing
2. database schema migration and management
3. documentation publishing
The upgrade aspect of (1) is in progress with a [patch to grenade](https://review.openstack.org/#/c/604454/) and a [patch to devstack](https://review.openstack.org/#/c/600162/). This is very close to working. The remaining failures are with jobs that do not have `openstack/placement` in `$PROJECTS`. Once devstack is happy then we can start thinking about integration testing using tempest. I've started some experiments with [using gabbi](https://review.openstack.org/#/c/601614/) for that. I've explained my reasoning in [a blog post](https://anticdent.org/gabbi-in-the-gate.html).

Successful devstack is dependent on us having a reasonable solution to (2). For the moment [a hacked up script](https://review.openstack.org/#/c/600161/) is being used to create tables. This works, but is not sufficient for deployers nor for any migrations we might need to do. Moving to alembic seems a reasonable thing to do, as a part of that.

We have work in progress to tune up the documentation but we are not yet publishing documentation (3). We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out.

# Other

Going to start highlighting some specific changes across several projects. If you're aware of something I'm missing, please let me know.

* Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.)
* Some efforts by Eric to reduce code complexity
* Improve handling of default allocation ratios
* Neutron minimum bandwidth implementation
* TripleO: Use valid_interfaces instead of os_interface for placement
* Puppet: Separate placement database is not deprecated
* Add OWNERSHIP $SERVICE traits
* Puppet: Initial cookiecutter and import from nova::placement
* WIP: Add placement to devstack-gate PROJECTS
* zun: Use placement for unified resource management

# End

I'm going to be away next week, so if any of my pending code needs some fixes and is blocking other stuff, please fix it. Also, there will be no pupdate next week (unless someone else does one).

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From colleen at gazlene.net Fri Oct 5 14:03:33 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 05 Oct 2018 16:03:33 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 1 October 2018 Message-ID: <1538748213.2759972.1531950176.4C4033DA@webmail.messagingengine.com>

# Keystone Team Update - Week of 1 October 2018

## News

### JSON-home

As Morgan works through the flaskification project, it's been clear that some of the JSON-home[1] code could use some refactoring and that the document itself is inconsistent[2], but we're unclear whether anyone uses this or cares if it changes. If you have ever used keystone's JSON-home implementation, come talk to us.

[1] https://adam.younglogic.com/2018/01/using-json-home-keystone/
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T18:22:25

## Open Specs

Search query: https://bit.ly/2Pi6dGj

We still only have three specs targeted at Stein, but Adam has revived several "ongoing" specs that can use some eyes; please take a look[3].

[3] https://bit.ly/2OyDLTh

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT

We merged 19 changes this week.
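(Returning briefly to the JSON-home question in the News section above: JSON-home is served by content negotiation, so checking whether anything you run depends on it is a short script. A hedged sketch, with the endpoint URL assumed for a local deployment:)

    # Sketch: fetching keystone's JSON-home document. The URL is an
    # assumption for a local deployment; the Accept header carries the
    # json-home media type described in the post linked above.
    import requests

    resp = requests.get("http://localhost:5000/v3",
                        headers={"Accept": "application/json-home"})
    resp.raise_for_status()
    home = resp.json()
    # Keys are relationship URLs; values hold an href or href-template
    # for the corresponding API resource.
    for rel, desc in sorted(home["resources"].items())[:5]:
        print(rel, "->", desc.get("href") or desc.get("href-template"))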
## Changes that need Attention

Search query: https://bit.ly/2PUk84S

There are 41 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.

One of these is a proposal to add rate-limiting to keystoneauth[4]; it would be good to get some more reactions to it.

Another is the flaskification patch of doom[5], which will definitely need some close attention.

[4] https://review.openstack.org/605043
[5] https://review.openstack.org/603461

## Bugs

This week we opened 5 new bugs and closed 7.

Bugs opened (5)
Bug #1795487 (keystone:Undecided) opened by Amy Marrich https://bugs.launchpad.net/keystone/+bug/1795487
Bug #1795800 (keystone:Undecided) opened by Andy Ngo https://bugs.launchpad.net/keystone/+bug/1795800
Bug #1796077 (keystone:Undecided) opened by Ching Kuo https://bugs.launchpad.net/keystone/+bug/1796077
Bug #1796247 (keystone:Undecided) opened by Yang Youseok https://bugs.launchpad.net/keystone/+bug/1796247
Bug #1795496 (oslo.policy:Undecided) opened by Adam Young https://bugs.launchpad.net/oslo.policy/+bug/1795496

Bugs closed (3)
Bug #1782687 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1782687
Bug #1796077 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1796077
Bug #1796247 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1796247

Bugs fixed (4)
Bug #1794552 (keystone:Medium) fixed by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1794552
Bug #1753585 (keystone:Low) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1753585
Bug #1615076 (keystone:Undecided) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1615076
Bug #1615076 (python-keystoneclient:Undecided) fixed by Vishakha Agarwal https://bugs.launchpad.net/python-keystoneclient/+bug/1615076

## Milestone Outlook

https://releases.openstack.org/stein/schedule.html

Now just 3 weeks away from the spec proposal freeze.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

From e0ne at e0ne.info Fri Oct 5 14:10:16 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 5 Oct 2018 17:10:16 +0300 Subject: [openstack-dev] [Horizon] Horizon tutorial didn`t work In-Reply-To: References: Message-ID: Hi Jea-Min, I filed a bug [1] and proposed a fix for it [2] [1] https://bugs.launchpad.net/horizon/+bug/1796312 [2] https://review.openstack.org/608274 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Oct 2, 2018 at 6:55 AM Jea-Min Lim wrote: > Thanks for the reply. > > If you need any detailed information, let me know. > > Regards, > > On Mon, Oct 1, 2018 at 6:53 PM, Ivan Kolodyazhny wrote: >> Hi Jea-Min, >> >> Thank you for your report. I'll check the manual and fix it asap. >> >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> >> On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim wrote: >> >>> Hello everyone, >>> >>> I'm following a tutorial of Building a Dashboard using Horizon. >>> (link: https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard ) >>> >>> However, the provided custom management command doesn't create boilerplate >>> code. >>> >>> I typed tox -e manage -- startdash mydashboard --target >>> openstack_dashboard/dashboards/mydashboard >>> >>> and the attached screenshot file is the execution result.
>>> Are there any recommendations to solve this problem? >>> >>> Regards. >>> >>> [image: result_jmlim.PNG] >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: result_jmlim.PNG Type: image/png Size: 33958 bytes Desc: not available URL: From majopela at redhat.com Fri Oct 5 14:15:13 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Fri, 5 Oct 2018 16:15:13 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> Message-ID: have a look at dragonflow project, maybe it's similar to what you're trying to accomplish On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal wrote: > Hi, > > Thanks for the help. I am trying to run a custom Ryu app from the nova > compute node and have all the Open vSwitch instances connected to this new > controller. However, to be able to run this new app, I have to first stop > the existing neutron openvswitch agents on the same node, as they run the Ryu > app (integrated in Openstack) by default. Ryu in Openstack provides basic > functionality such as L2 switching, but it does not support launching a custom > app at the same time. > I'd like to have a single Ryu controller instance control all the > openvswitch instances rather than having openvswitch agents on each node > managing the openvswitches separately. For this, I'll probably have to > migrate the existing functionality provided by the Ryu app to this new app of > mine. Could you share some suggestions or are you aware of any previous > work done towards this, that I can read about? > > Regards, > Niket > > On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski > wrote: > >> Hi, >> >> Code of app is in >> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py >> and classes for specific bridge types are in >> https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native >> >> > Message written by Niket Agrawal on >> 27.09.2018 at 00:08: >> > >> > Hi, >> > >> > Thanks for your reply. Is there a way to access the code that is >> running in the app to see what is the logic implemented in the app? >> > >> > Regards, >> > Niket >> > >> > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski < >> skaplons at redhat.com> wrote: >> > Hi, >> > >> > > Message written by Niket Agrawal on >> 26.09.2018 at 18:11: >> > > >> > > Hello, >> > > >> > > I have a question regarding the Ryu integration in Openstack. By >> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered >> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl >> get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute >> node.
However there is a different instance of the same Ryu controller >> running on the neutron gateway as well and the three openvswitch bridges >> (br-int, br-tun and br-ex) are registered to this instance of Ryu >> controller. If I stop neutron-openvswitch agent on the nova compute node, >> the bridges there are no longer connected to the controller, but the >> bridges in the neutron gateway continue to remain connected to the >> controller. Only when I stop the neutron openvswitch agent in the neutron >> gateway as well, the bridges there get disconnected. >> > > >> > > I'm unable to find where in the Openstack code I can access this >> implementation, because I intend to make a few tweaks to this architecture >> which is present currently. Also, I'd like to know which app is the Ryu SDN >> controller running by default at the moment. I feel the information in the >> code can help me find it too. >> > >> > Ryu app is started by neutron-openvswitch-agent in: >> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 >> > Is it what You are looking for? >> > >> > > >> > > Regards, >> > > Niket >> > > >> __________________________________________________________________________ >> > > OpenStack Development Mailing List (not for usage questions) >> > > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > — >> > Slawek Kaplonski >> > Senior software engineer >> > Red Hat >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Oct 5 14:37:41 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 5 Oct 2018 09:37:41 -0500 Subject: [openstack-dev] [placement] update 18-40 In-Reply-To: References: Message-ID: <031ab237-3593-4a66-47c9-05630edb96f2@fried.cc> > * What should we do about nova calling the placement db, like in >   > [nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416) This should be purely a placement-side migration, nah? >   and >   > [nova-status](https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L254). 
For others' reference, Chris and I have been discussing this [1] in the spec review that was prompted by the above. As of the last episode: a) we're not convinced this status check is worth having in the first place; but if it is, b) the algorithm being used currently is pretty weak, and will soon be actually bogus; and c) there's a suggestion for a "better" (if not particularly efficient) alternative that uses the API instead of going directly to the db. -efried [1] https://review.openstack.org/#/c/600016/3/specs/stein/approved/list-rps-having.rst at 49 From nickgrwl3 at gmail.com Fri Oct 5 15:03:17 2018 From: nickgrwl3 at gmail.com (Niket Agrawal) Date: Fri, 5 Oct 2018 17:03:17 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> Message-ID: Thank you. I will have a look. Regards, Niket On Fri, Oct 5, 2018 at 4:15 PM Miguel Angel Ajo Pelayo wrote: > have a look at dragonflow project, may be it's similar to what you're > trying to accomplish > > On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal wrote: > >> Hi, >> >> Thanks for the help. I am trying to run a custom Ryu app from the nova >> compute node and have all the openvswitches connected to this new >> controller. However, to be able to run this new app, I have to first stop >> the existing neutron openvswitch agents in the same node as they run Ryu >> app (integrated in Openstack) by default. Ryu in Openstack provides basic >> functionalities like L2 switching but does not support launching a custom >> app at the same time. >> I'd like to have a single instance of Ryu controller control all the >> openvswtich instances rather than having openvswitch agents in each node >> managing the openvswitches separately. For this, I'll probably have to >> migrate the existing functionality provided by Ryu app to this new app of >> mine. Could you share some suggestions or are you aware of any previous >> work done towards this, that I can read about? >> >> Regards, >> Niket >> >> On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski >> wrote: >> >>> Hi, >>> >>> Code of app is in >>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py >>> and classes for specific bridge types are in >>> https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native >>> >>> > Wiadomość napisana przez Niket Agrawal w dniu >>> 27.09.2018, o godz. 00:08: >>> > >>> > Hi, >>> > >>> > Thanks for your reply. Is there a way to access the code that is >>> running in the app to see what is the logic implemented in the app? >>> > >>> > Regards, >>> > Niket >>> > >>> > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski < >>> skaplons at redhat.com> wrote: >>> > Hi, >>> > >>> > > Wiadomość napisana przez Niket Agrawal w dniu >>> 26.09.2018, o godz. 18:11: >>> > > >>> > > Hello, >>> > > >>> > > I have a question regarding the Ryu integration in Openstack. By >>> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered >>> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl >>> get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute >>> node. However there is a different instance of the same Ryu controller >>> running on the neutron gateway as well and the three openvswitch bridges >>> (br-int, br-tun and br-ex) are registered to this instance of Ryu >>> controller. 
If I stop neutron-openvswitch agent on the nova compute node, >>> the bridges there are no longer connected to the controller, but the >>> bridges in the neutron gateway continue to remain connected to the >>> controller. Only when I stop the neutron openvswitch agent in the neutron >>> gateway as well, the bridges there get disconnected. >>> > > >>> > > I'm unable to find where in the Openstack code I can access this >>> implementation, because I intend to make a few tweaks to this architecture >>> which is present currently. Also, I'd like to know which app is the Ryu SDN >>> controller running by default at the moment. I feel the information in the >>> code can help me find it too. >>> > >>> > Ryu app is started by neutron-openvswitch-agent in: >>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 >>> > Is it what You are looking for? >>> > >>> > > >>> > > Regards, >>> > > Niket >>> > > >>> __________________________________________________________________________ >>> > > OpenStack Development Mailing List (not for usage questions) >>> > > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > — >>> > Slawek Kaplonski >>> > Senior software engineer >>> > Red Hat >>> > >>> > >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> — >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nickgrwl3 at gmail.com Fri Oct 5 15:31:00 2018 From: nickgrwl3 at gmail.com (Niket Agrawal) Date: Fri, 5 Oct 2018 17:31:00 +0200 Subject: [openstack-dev] Ryu integration with Openstack In-Reply-To: References: <418337EF-FF89-4B26-8EF7-B8C5CF752E24@redhat.com> <70907B1D-9D78-4004-9059-5F154FA6EDE9@redhat.com> Message-ID: Hi, From what I read so far about the Dragonflow project, it implements a distributed SDN controller, i.e., there is a controller running in each of the compute nodes managing the openvswitch instance in that node. This is also what currently happens with the openvswitch agent on each node running a Ryu app. I'm not sure if you misread my previous email; I'd like to remove this distributed style of SDN controllers running in each node, and have a central controller managing every switch. I prefer to have Ryu as my central controller, as designing a Ryu app is quite simple. Nevertheless, thanks for mentioning the Dragonflow project. Regards, Niket On Fri, Oct 5, 2018 at 5:03 PM Niket Agrawal wrote: > Thank you. I will have a look. > > Regards, > Niket > > On Fri, Oct 5, 2018 at 4:15 PM Miguel Angel Ajo Pelayo < > majopela at redhat.com> wrote: > >> have a look at dragonflow project, maybe it's similar to what you're >> trying to accomplish >> >> On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal wrote: >> >>> Hi, >>> >>> Thanks for the help. I am trying to run a custom Ryu app from the nova >>> compute node and have all the Open vSwitch instances connected to this new >>> controller. However, to be able to run this new app, I have to first stop >>> the existing neutron openvswitch agents on the same node, as they run the Ryu >>> app (integrated in Openstack) by default. Ryu in Openstack provides basic >>> functionality such as L2 switching, but it does not support launching a custom >>> app at the same time. >>> I'd like to have a single Ryu controller instance control all the >>> openvswitch instances rather than having openvswitch agents on each node >>> managing the openvswitches separately. For this, I'll probably have to >>> migrate the existing functionality provided by the Ryu app to this new app of >>> mine. Could you share some suggestions or are you aware of any previous >>> work done towards this, that I can read about? >>> >>> Regards, >>> Niket >>> >>> On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski >>> wrote: >>> >>>> Hi, >>>> >>>> Code of app is in >>>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py >>>> and classes for specific bridge types are in >>>> https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native >>>> >>>> > Message written by Niket Agrawal on >>>> 27.09.2018 at 00:08: >>>> > >>>> > Hi, >>>> > >>>> > Thanks for your reply. Is there a way to access the code that is >>>> running in the app to see what is the logic implemented in the app? >>>> > >>>> > Regards, >>>> > Niket >>>> > >>>> > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski < >>>> skaplons at redhat.com> wrote: >>>> > Hi, >>>> > >>>> > > Message written by Niket Agrawal on >>>> 26.09.2018 at 18:11: >>>> > > >>>> > > Hello, >>>> > > >>>> > > I have a question regarding the Ryu integration in Openstack. By >>>> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered >>>> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl >>>> get-manager is ptcp:127.0.0.1:6640.
This is noticed on the nova >>>> compute node. However there is a different instance of the same Ryu >>>> controller running on the neutron gateway as well and the three openvswitch >>>> bridges (br-int, br-tun and br-ex) are registered to this instance of Ryu >>>> controller. If I stop neutron-openvswitch agent on the nova compute node, >>>> the bridges there are no longer connected to the controller, but the >>>> bridges in the neutron gateway continue to remain connected to the >>>> controller. Only when I stop the neutron openvswitch agent in the neutron >>>> gateway as well, the bridges there get disconnected. >>>> > > >>>> > > I'm unable to find where in the Openstack code I can access this >>>> implementation, because I intend to make a few tweaks to this architecture >>>> which is present currently. Also, I'd like to know which app is the Ryu SDN >>>> controller running by default at the moment. I feel the information in the >>>> code can help me find it too. >>>> > >>>> > Ryu app is started by neutron-openvswitch-agent in: >>>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34 >>>> > Is it what You are looking for? >>>> > >>>> > > >>>> > > Regards, >>>> > > Niket >>>> > > >>>> __________________________________________________________________________ >>>> > > OpenStack Development Mailing List (not for usage questions) >>>> > > Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> > — >>>> > Slawek Kaplonski >>>> > Senior software engineer >>>> > Red Hat >>>> > >>>> > >>>> > >>>> __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> — >>>> Slawek Kaplonski >>>> Senior software engineer >>>> Red Hat >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From morgan.fainberg at gmail.com Fri Oct 5 15:40:32 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 5 Oct 2018 08:40:32 -0700 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 1 October 2018 In-Reply-To: <1538748213.2759972.1531950176.4C4033DA@webmail.messagingengine.com> References: <1538748213.2759972.1531950176.4C4033DA@webmail.messagingengine.com> Message-ID: On Fri, Oct 5, 2018, 07:04 Colleen Murphy wrote: > # Keystone Team Update - Week of 1 October 2018 > > ## News > > ### JSON-home > > As Morgan works through the flaskification project, it's been clear that > some of the JSON-home[1] code could use some refactoring and that the > document itself is inconsistent[2], but we're unclear whether anyone uses > this or cares if it changes. If you have ever used keystone's JSON-home > implementation, come talk to us. > > [1] https://adam.younglogic.com/2018/01/using-json-home-keystone/ > [2] > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T18:22:25 > > ## Open Specs > > Search query: https://bit.ly/2Pi6dGj > > We still only have three specs targeted at Stein, but Adam has revived > several "ongoing" specs that can use some eyes, please take a look[3]. > > [3] https://bit.ly/2OyDLTh > > ## Recently Merged Changes > > Search query: https://bit.ly/2pquOwT > > We merged 19 changes this week. > > ## Changes that need Attention > > Search query: https://bit.ly/2PUk84S > > There are 41 changes that are passing CI, not in merge conflict, have no > negative reviews and aren't proposed by bots. > > One of these is a proposal to add rate-limiting to keystoneauth[4], would > be good to get some more reactions to it. > > Another is the flaskification patch of doom[5] which will definitely need > some close attention. > > [4] https://review.openstack.org/605043 > [5] https://review.openstack.org/603461 > > ## Bugs > > This week we opened 5 new bugs and closed 7. > > Bugs opened (5) > Bug #1795487 (keystone:Undecided) opened by Amy Marrich > https://bugs.launchpad.net/keystone/+bug/1795487 > Bug #1795800 (keystone:Undecided) opened by Andy Ngo > https://bugs.launchpad.net/keystone/+bug/1795800 > Bug #1796077 (keystone:Undecided) opened by Ching Kuo > https://bugs.launchpad.net/keystone/+bug/1796077 > Bug #1796247 (keystone:Undecided) opened by Yang Youseok > https://bugs.launchpad.net/keystone/+bug/1796247 > Bug #1795496 (oslo.policy:Undecided) opened by Adam Young > https://bugs.launchpad.net/oslo.policy/+bug/1795496 > > Bugs closed (3) > Bug #1782687 (keystone:Undecided) > https://bugs.launchpad.net/keystone/+bug/1782687 > Bug #1796077 (keystone:Undecided) > https://bugs.launchpad.net/keystone/+bug/1796077 > Bug #1796247 (keystone:Undecided) > https://bugs.launchpad.net/keystone/+bug/1796247 > > Bugs fixed (4) > Bug #1794552 (keystone:Medium) fixed by Morgan Fainberg > https://bugs.launchpad.net/keystone/+bug/1794552 > Bug #1753585 (keystone:Low) fixed by Vishakha Agarwal > https://bugs.launchpad.net/keystone/+bug/1753585 > Bug #1615076 (keystone:Undecided) fixed by Vishakha Agarwal > https://bugs.launchpad.net/keystone/+bug/1615076 > Bug #1615076 (python-keystoneclient:Undecided) fixed by Vishakha Agarwal > https://bugs.launchpad.net/python-keystoneclient/+bug/1615076 > > ## Milestone Outlook > > https://releases.openstack.org/stein/schedule.html > > Now just 3 weeks away from the spec proposal freeze. 
> > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: > https://etherpad.openstack.org/p/keystone-team-newsletter > Dashboard generated using gerrit-dash-creator and > https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 As an update to JSON Home bits, I have worked around the possible needed changes. The document should remain the same as before. --Morgan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Oct 5 15:49:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 5 Oct 2018 10:49:48 -0500 Subject: [openstack-dev] [nova][stable] Preparing for ocata-em (extended maintenance) In-Reply-To: References: Message-ID: <58ea2b38-fe8d-d265-f0f1-1b15e4c23277@gmail.com> The ocata-em tag request is up for review: https://review.openstack.org/#/c/608296/ On 9/28/2018 11:21 AM, Matt Riedemann wrote: > Per the other thread on this [1] I've created an etherpad [2] to track > what needs to happen to get nova's stable/ocata branch ready for > Extended Maintenance [3] which means we need to flush our existing Ocata > backports that we want in the final Ocata release before tagging the > branch as ocata-em, after which point we won't do releases from that > branch anymore. > > The etherpad lists each open ocata backport along with any of its > related backports on newer branches like pike/queens/etc. Since we need > the backports to go in order, we need to review and merge the changes on > the newer branches first. With the state of the gate lately, we really > can't sit on our hands here because it will probably take up to a week > just to merge all of the changes for each branch. > > Once the Ocata backports are flushed through, we'll cut the final > release and tag the branch as being in extended maintenance. > > Do we want to coordinate a review day next week for the > nova-stable-maint core team, like Tuesday, or just trust that you all > know who you are and will help out as necessary in getting these reviews > done? Non-stable cores are also welcome to help review here to make sure > we're not missing something, which is also a good way to get noticed as > caring about stable branches and eventually get you on the stable maint > core team. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/thread.html#134810 > > [2] https://etherpad.openstack.org/p/nova-ocata-em > [3] > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance -- Thanks, Matt From dangtrinhnt at gmail.com Fri Oct 5 16:02:27 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 6 Oct 2018 01:02:27 +0900 Subject: [openstack-dev] [release][searchlight] What should I do with the missing releases? In-Reply-To: References: Message-ID: Thank Doug. On Fri, Oct 5, 2018 at 8:42 PM Doug Hellmann wrote: > Trinh Nguyen writes: > > > Dear release team, > > > > One thing comes up in my mind when preparing for the stein-1 release of > > Searchlight that is what should we do with the missing releases (i.e. > > Rocky)? Can I just ignore it or do I have to create a branch for it? > > There was no rocky release, so I don't really see any reason to create > the branch. There isn't anything to maintain. > > Doug > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sfinucan at redhat.com Fri Oct 5 16:21:49 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 05 Oct 2018 17:21:49 +0100 Subject: Re: [openstack-dev] Sphinx testing fun In-Reply-To: References: <21b70f530e4aec2c0be2a496bc0a1ed646fd3d51.camel@redhat.com> Message-ID: On Thu, 2018-10-04 at 18:00 -0400, Doug Hellmann wrote: > Stephen Finucane writes: > > > On Thu, 2018-10-04 at 07:21 -0400, Doug Hellmann wrote: > > > Stephen Finucane writes: > > [snip] > > > > > Anyway, we can go figure out what's changed here and handle it but this > > > > is, at best, going to be a band-aid. The fact is 'sphinx_testing' is > > > > unmaintained and has been for some time now. The new hotness is > > > > 'sphinx.testing' [3], which is provided (with zero documentation) as > > > > part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm > > > > pretty sure Monty (and a few others?) are vehemently against using in > > > > OpenStack. That leaves us with three options: > > > > > > > > * Take over 'sphinx_testing' and bring it up-to-date. Maintain > > > > forever. > > > > * Start using 'sphinx.testing' and everything it comes with > > > > * Delete any tests that use 'sphinx_testing' and deal with the lack of > > > > coverage > > > > > > Could we change our tests to use pathlib to wrap app.outdir and get the > > > same results as before? > > > > That's what I've done [2], which is kind of based on how I fixed this > > in Sphinx. However, this is at best a stopgap. The fact remains that > > 'sphinx_testing' is dead and the large changes that Sphinx is > > undergoing (2.0 will be Python 3 only, with multiple other fixes) will > > make further breakages more likely. Unless we want a repeat of the Mox > > situation, I do think we should start thinking about this sooner rather > > than later. > > Yeah, it sounds like we need to deal with the change. > > It looks like only the os-api-ref repo uses sphinx-testing. How many > tests are we talking about having to rewrite/update there? > > Doug That's good news. I'd expected other projects would use it but then nothing I've worked on does (and that likely constitutes a large percentage of Sphinx extensions in OpenStack). I see four failing tests so I guess, if they break again, we can opt for option 3 above and deal with it. I can't see os-api-ref changing too much in the future (barring adding PDF support at some point). Stephen From sombrafam at gmail.com Fri Oct 5 16:42:11 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Fri, 5 Oct 2018 13:42:11 -0300 Subject: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default Message-ID: Hey folks, Following up on the discussions that we had at the Denver PTG, the Cinder team is planning to enable online volume_extend tests[1] to be run by default. Currently, those tests are only run by some CI systems and infra jobs that explicitly enable them. We are also adding a negative test and an associated option in tempest[2] to allow vendor drivers that do not support online extending to be tested. This patch will be merged first and, after a reasonable time for people to check whether their backends support it or not, we will proceed and merge the devstack patch[1] triggering the tests in all CIs and infra jobs. Please let us know if you have any questions or concerns about it. Kind regards, Erlon _________________ [1] https://review.openstack.org/#/c/572188/ [2] https://review.openstack.org/#/c/578463/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Fri Oct 5 17:48:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 13:48:34 -0400 Subject: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs In-Reply-To: References: Message-ID: Doug Hellmann writes: > Doug Hellmann writes: > >> I think we are ready to go ahead and switch all of the python packaging >> jobs to the new set defined in the publish-to-pypi-python3 template >> [1]. We still have some cleanup patches for projects that have not >> completed their zuul migration, but there are only a few and rebasing >> those will be easy enough. >> >> The template adds a new check job that runs when any files related to >> packaging are changed (readme, setup, etc.). Otherwise it switches from >> the python2-based PyPI job to use python3. >> >> I have the patch to switch all official projects ready in [2]. >> >> Doug >> >> [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218 >> [2] https://review.openstack.org/#/c/598323/ > > This change is now in place. The Ironic team discovered one issue, and > the fix is proposed as https://review.openstack.org/606152 > > This change has also reopened the question of how to publish some of the > projects for which we do not own names on PyPI. > > I registered manila, qinling, and zaqar-ui by uploading Rocky series > releases of those projects and then added openstackci as an owner so we > can upload new packages this cycle. > > I asked the owners of the name "heat" to allow us to use it, and they > rejected the request. So, I proposed a change to heat to update the > sdist name to "openstack-heat". > > * https://review.openstack.org/606160 > > We don't own "magnum" but there is already an "openstack-magnum" set up > with old releases, so I have proposed a change to the magnum repo to > change the dist name there, so we can resume using it. > > * https://review.openstack.org/606162 The owner of the name "magnum" has given us access, so I have set it up with permission for the CI system to publish and I have abandoned the rename patch. > I have filed requests with the maintainers of PyPI to claim the names > "keystone" and "congress". That may take some time. Please let me know > if you're willing to simply use "openstack-keystone" and > "openstack-congress" instead. I will take care of configuring PyPI and > proposing the patch to update your setup.cfg (that way you can approve > the change). > > * https://github.com/pypa/warehouse/issues/4770 > * https://github.com/pypa/warehouse/issues/4771 > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Oct 5 17:57:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 05 Oct 2018 13:57:08 -0400 Subject: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs In-Reply-To: References: Message-ID: Doug Hellmann writes: > Doug Hellmann writes: > >> Doug Hellmann writes: >> >>> I think we are ready to go ahead and switch all of the python packaging >>> jobs to the new set defined in the publish-to-pypi-python3 template >>> [1]. 
We still have some cleanup patches for projects that have not >>> completed their zuul migration, but there are only a few and rebasing >>> those will be easy enough. >>> >>> The template adds a new check job that runs when any files related to >>> packaging are changed (readme, setup, etc.). Otherwise it switches from >>> the python2-based PyPI job to use python3. >>> >>> I have the patch to switch all official projects ready in [2]. >>> >>> Doug >>> >>> [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218 >>> [2] https://review.openstack.org/#/c/598323/ >> >> This change is now in place. The Ironic team discovered one issue, and >> the fix is proposed as https://review.openstack.org/606152 >> >> This change has also reopened the question of how to publish some of the >> projects for which we do not own names on PyPI. >> >> I registered manila, qinling, and zaqar-ui by uploading Rocky series >> releases of those projects and then added openstackci as an owner so we >> can upload new packages this cycle. >> >> I asked the owners of the name "heat" to allow us to use it, and they >> rejected the request. So, I proposed a change to heat to update the >> sdist name to "openstack-heat". >> >> * https://review.openstack.org/606160 >> >> We don't own "magnum" but there is already an "openstack-magnum" set up >> with old releases, so I have proposed a change to the magnum repo to >> change the dist name there, so we can resume using it. >> >> * https://review.openstack.org/606162 > > The owner of the name "magnum" has given us access, so I have set it up > with permission for the CI system to publish and I have abandoned the > rename patch. > >> I have filed requests with the maintainers of PyPI to claim the names >> "keystone" and "congress". That may take some time. Please let me know >> if you're willing to simply use "openstack-keystone" and >> "openstack-congress" instead. I will take care of configuring PyPI and >> proposing the patch to update your setup.cfg (that way you can approve >> the change). >> >> * https://github.com/pypa/warehouse/issues/4770 >> * https://github.com/pypa/warehouse/issues/4771 We haven't heard back about either of these requests, so I filed changes with congress and keystone to change the dist names. * https://review.openstack.org/608332 (congress) * https://review.openstack.org/608331 (keystone) Doug From tpb at dyncloud.net Fri Oct 5 19:44:09 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 5 Oct 2018 15:44:09 -0400 Subject: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch] In-Reply-To: <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> Message-ID: <20181005194409.5guhyofmuwc6ryxc@barron.net> Clark, would you be so kind, at your convenience, as to remove the manila driverfixes/ocata branch? There are no open changes on the branch and `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` reveals no commits that we need to preserve. Thanks much! -- Tom Barron (tbarron) On 17/09/18 08:36 -0700, Clark Boylan wrote: >On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote: >> Hello Cinder and Infra teams. Cinder needs some help from infra or some >> pointers on how to proceed.
>> >> tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for >> fixes that no longer met the more restrictive phase II stable policy criteria. >> Extended maintenance has changed that and we want to delete driverfixes/ocata >> to make sure patches are going to the right place. >> >> Background >> ---------- >> Before the extended maintenance changes, the Cinder team found a lot of vendors >> were maintaining their own forks to keep backported driver fixes that we were >> not allowing upstream due to the stable policy being more restrictive for older >> (or deleted) branches. We created the driverfixes/* branches as a central place >> for these to go so distros would have one place to grab these fixes, if they >> chose to do so. >> >> This has worked great IMO, and we do occasionally still have things that need >> to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of >> fixes to driverfixes/ocata, but with the changes to stable policy with extended >> maintenance, that is no longer needed. >> >> Extended Maintenance Changes >> ---------------------------- >> With things being somewhat relaxed with the extended maintenance changes, we >> are now able to backport bug fixes to stable/ocata that we couldn't before and >> we don't have to worry as much about that branch being deleted. >> >> I had gone through and identified all patches backported to driverfixes/ocata >> but not stable/ocata and cherry-picked them over to get the two branches in >> sync. The stable/ocata should now be identical or ahead of driverfixes/ocata >> and we want to make sure nothing more gets accidentally merged to >> driverfixes/ocata instead of the official stable branch. >> >> Plan >> ---- >> We would now like to have the driverfixes/ocata branch deleted so there is no >> confusion about where backports should go and we don't accidentally get these >> out of sync again. >> >> Infra team, please delete this branch or let me know if there is a process >> somewhere I should follow to have this removed. > >The first step is to make sure that all changes on the branch are in a non open state (merged or abandoned). https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open shows that there are no open changes. > >Next you will want to make sure that the commits on this branch are preserved somehow. Git garbage collection will delete and cleanup commits if they are not discoverable when working backward from some ref. This is why our old stable branch deletion process required we tag the stable branch as $release-eol first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on the driverfixes branch that are not on the stable branch, but that appears to be due to cherry pick writing new commits. You have indicated above that you believe the two branches are in sync at this point. A quick sampling of commits seems to confirm this as well. > >If you can go ahead and confirm that you are ready to delete the driverfixes/ocata branch I will go ahead and remove it. 
> >Clark >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Fri Oct 5 20:06:04 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 05 Oct 2018 13:06:04 -0700 Subject: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch] In-Reply-To: <20181005194409.5guhyofmuwc6ryxc@barron.net> References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> <20181005194409.5guhyofmuwc6ryxc@barron.net> Message-ID: <1538769964.3960508.1532333704.6EFDCA93@webmail.messagingengine.com> On Fri, Oct 5, 2018, at 12:44 PM, Tom Barron wrote: > Clark, would you be so kind, at your convenience, as to remove the > manila driverfixes/ocata branch? > > There are no open changes on the branch and `git log > origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` > reveals no commits that we need to preserve. > > Thanks much! > Done. The old head of that branch was d9c0f8fa4b15a595ed46950b6e5b5d1b4514a7e4. Clark From tpb at dyncloud.net Fri Oct 5 20:45:25 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 5 Oct 2018 16:45:25 -0400 Subject: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch] In-Reply-To: <1538769964.3960508.1532333704.6EFDCA93@webmail.messagingengine.com> References: <20180917150056.GA10750@sm-workstation> <1537198616.4138246.1510937528.5167ACBD@webmail.messagingengine.com> <20181005194409.5guhyofmuwc6ryxc@barron.net> <1538769964.3960508.1532333704.6EFDCA93@webmail.messagingengine.com> Message-ID: <20181005204525.rfnoev3moejdgi4k@barron.net> On 05/10/18 13:06 -0700, Clark Boylan wrote: >On Fri, Oct 5, 2018, at 12:44 PM, Tom Barron wrote: >> Clark, would you be so kind, at your convenience, as to remove the >> manila driverfixes/ocata branch? >> >> There are no open changes on the branch and `git log >> origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` >> reveals no commits that we need to preserve. >> >> Thanks much! >> > >Done. The old head of that branch was d9c0f8fa4b15a595ed46950b6e5b5d1b4514a7e4. > >Clark Awesome, and thanks again! -- Tom From melwittt at gmail.com Fri Oct 5 21:20:30 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 5 Oct 2018 14:20:30 -0700 Subject: [openstack-dev] [placement] update 18-40 In-Reply-To: References: Message-ID: <7244aef7-fb29-fcfe-75c2-c28ace480a23@gmail.com> On Fri, 5 Oct 2018 14:31:05 +0100 (BST), Chris Dent wrote: > * > Propose counting quota usage from placement and API database > (A bit out of date but may be worth resurrecting) I'd like to resurrect this spec but it really depends on being able to ask for usage scoped only to a particular instance of Nova (GET /usages for NovaA vs GET /usages for NovaB). From what I understand, we don't have a concrete plan for being able to differentiate ownership of allocations yet. Until then, we will be using a policy-based switch to control the quota behavior in the event of down/slow cells in a multi-cell deployment (fail build if project has servers in down/slow cells vs allow potentially violating quota limits if project has servers in down/slow cells).
So, being able to leverage the placement API for /usages is not considered critical, since we have an interim plan. -melanie From jean-philippe at evrard.me Fri Oct 5 23:19:52 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Sat, 06 Oct 2018 01:19:52 +0200 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: <72d7478e860b84a8197f01b5c0da83159266f972.camel@evrard.me> On Fri, 2018-10-05 at 07:40 -0400, Doug Hellmann wrote: > Chris Dent writes: > > > On Thu, 4 Oct 2018, Doug Hellmann wrote: > > > > > TC members, please reply to this thread and indicate if you would > > > find > > > meeting at 1300 UTC on the first Thursday of every month > > > acceptable, and > > > of course include any other comments you might have (including > > > alternate > > > times). > > > > +1 > > > > Also, if we're going to set aside a time for a semi-formal meeting, > > I > > hope we will have some form of agenda and minutes, with a fairly > > clear process for setting that agenda as well as a process for > > I had in mind "email the chair your topic suggestion" and then "the > chair emails the agenda to openstack-dev tagged [tc] a bit in advance > of > the meeting". There would also probably be some standing topics, like > updates for ongoing projects. > > Does that work for everyone? > > Fine for me From melwittt at gmail.com Fri Oct 5 23:59:28 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 5 Oct 2018 16:59:28 -0700 Subject: [openstack-dev] [nova] Rocky RC time regression analysis Message-ID: Hey everyone, During our Rocky retrospective discussion at the PTG [1], we talked about the spec freeze deadline (milestone 2, historically it had been milestone 1) and whether or not it was related to the hectic late-breaking regression RC time we had last cycle. I had an action item to go through the list of RC time bugs [2] and dig into each one, examining: when the patch that introduced the bug landed vs when the bug was reported, why it wasn't caught sooner, and report back so we can take a look together and determine whether they were related to the spec freeze deadline. I used this etherpad to make notes [3], which I will [mostly] copy-paste here. These are all after RC1 and I'll paste them in chronological order of when the bug was reported. Milestone 1 r-1 was 2018-04-19. Spec freeze was milestone 2 r-2 was 2018-06-07. Feature freeze (FF) was on 2018-07-26. RC1 was on 2018-08-09. 1) Broken live migration bandwidth minimum => maximum based on neutron event https://bugs.launchpad.net/nova/+bug/1786346 - Bug was reported on 2018-08-09, the day of RC1 - The patch that caused the regression landed on 2018-03-30 https://review.openstack.org/497457 - Unrelated to a blueprint, the regression was part of a bug fix - Was found because prometheanfire was doing live migrations and noticed they seemed to be stuck at 1MiB/s for linuxbridge VMs - The bug was due to a race, so the gate didn't hit it - Comment on the regression bug from dansmith: "The few hacked up gate jobs we used to test this feature at merge time likely didn't notice the race because the migrations finished before the potential timeout and/or are on systems so loaded that the neutron event came late enough for us to win the race repeatedly." 
2) Docs for the zvm driver missing - All zvm driver code changes were merged by 2018-07-17, but the documentation was overlooked; the omission was noticed only near RC time - Blueprint was approved on 2018-02-12 3) Volume status remains "detaching" after a failure to detach a volume due to DeviceDetachFailed https://bugs.launchpad.net/nova/+bug/1786318 - Bug was reported on 2018-08-09, the day of RC1 - The change that introduced the regression landed on 2018-02-21 https://review.openstack.org/546423 - Unrelated to a blueprint, the regression was part of a bug fix - Question: why wasn't this caught earlier? - Answer: Unit tests were not asserting the call to the roll_detaching volume API. Coverage has since been added along with the bug fix https://review.openstack.org/590439 4) OVB overcloud deploy fails on nova placement errors https://bugs.launchpad.net/nova/+bug/1787910 - Bug was reported on 2018-08-20 - Change that caused the regression landed on 2018-07-26, FF day https://review.openstack.org/517921 - Blueprint was approved on 2018-05-16 - Was found because of a failure in the legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master CI job. The ironic-inspector CI upstream also failed because of this, as noted by dtantsur. - Question: why did it take nearly a month for the failure to be noticed? Is there any way we can cover this in our ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa job? 5) when live migration fails due to a internal error rollback is not handled correctly https://bugs.launchpad.net/nova/+bug/1788014 - Bug was reported on 2018-08-20 - The change that caused the regression landed on 2018-07-26, FF day https://review.openstack.org/434870 - Unrelated to a blueprint, the regression was part of a bug fix - Was found because sean-k-mooney was doing live migrations and found that when a LM failed because of a QEMU internal error, the VM remained ACTIVE but the VM no longer had network connectivity. - Question: why wasn't this caught earlier? - Answer: We would need a live migration job scenario that intentionally initiates and fails a live migration, then verify network connectivity after the rollback occurs. - Question: can we add something like that? 6) nova-manage db online_data_migrations hangs on instances with no host set https://bugs.launchpad.net/nova/+bug/1788115 - Bug was reported on 2018-08-21 - The patch that introduced the bug landed on 2018-05-30 https://review.openstack.org/567878 - Unrelated to a blueprint, the regression was part of a bug fix - Question: why wasn't this caught earlier? - Answer: To hit the bug, you had to have had instances with no host set (that failed to schedule) in your database during an upgrade. This does not happen during the grenade job. - Question: could we add anything to the grenade job that would leave some instances with no host set to cover cases like this? 7) release notes erroneously say that nova-consoleauth doesn't have to run in Rocky https://bugs.launchpad.net/nova/+bug/1788470 - Bug was reported on 2018-08-22 - The patches that conveyed the wrong information for the docs landed on 2018-05-07 https://review.openstack.org/565367 - Blueprint was approved on 2018-03-12 - Question: why wasn't this caught earlier? - Answer: The patches should have been tested by a devstack change that runs without the nova-consoleauth service, to verify the system can work without the service. - Question: can we add test coverage for that?
- Answer: Yes, it's proposed as a WIP https://review.openstack.org/607070 8) libvirt driver rx_queue_size changes break SR-IOV https://bugs.launchpad.net/nova/+bug/1789074 - Bug was reported on 2018-08-25 - The change that caused the regression landed on 2018-04-23 https://review.openstack.org/484997 - Blueprint was approved on 2018-03-23 - Was found because moshele tried to create a server with an SRIOV interface (PF or VF) with rx_queue_size and tx_queue_size set in nova.conf and it failed. - Question: why wasn't this caught earlier? - Answer: Exposing the bug required both setting rx/tx queue sizes and booting a server with a SRIOV interface. We don't have hardware for testing SRIOV in the gate. - Question: is there any other way to test this via functional tests? From what I understand, there isn't. Based on all of this, I don't believe the spec freeze at milestone 2 was related to the late-breaking regressions we had around RC time. We approved 10 additional blueprints between r-1 2018-04-19 and r-2 2018-06-07 [4]. Half of the regressions were unrelated to feature work and were introduced as part of bug fixes. Of the other half, 3 out of 4 had blueprints approved before r-1. Only one involved a blueprint approved after r-1. In a couple of cases, the patch that introduced the regression landed on feature freeze day, with the bugs being found about a month later, which was about two weeks after RC1. In most cases, the regression landed months before the bug was found, because of lack of test coverage. It seems like if we could do anything about this, it would be to move feature freeze day sooner so we have more time between FF and RC. But I have a feeling that some of the bugs get found when people take the RC and try it out. Based on what I've found here, I think we are fine to stick with using milestone 2 as the spec freeze date. And we might want to consider moving feature freeze day sooner to give more time between feature freeze and RC. This cycle, even though it is a longer cycle, we still have only 2 weeks between s-3 and rc1 [5]. I'm ambivalent about changing the usual milestone 3 feature freeze date because I have a feeling that people tend to try things out once RC is released, but maybe I'm wrong on that. What are your thoughts? Finally, please do jump in and reply if you have info to share or questions to ask about the regressions listed above. I'm especially interested in getting answers to the questions I posed earlier inline about whether there's anything we can do to cover some of the cases with our CI. Cheers, -melanie [1] https://etherpad.openstack.org/p/nova-rocky-retrospective [2] https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo [3] https://etherpad.openstack.org/p/nova-rocky-rc-regression-analysis [4] https://docs.google.com/spreadsheets/d/e/2PACX-1vQicKStmnQFcOdnZU56ynJmn8e0__jYsr4FWXs3GrDsDzg1hwHofvJnuSieCH3ExbPngoebmEeY0waH/pubhtml?gid=128173249&single=true [5] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule From ildiko.vancsa at gmail.com Sat Oct 6 09:15:13 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sat, 6 Oct 2018 11:15:13 +0200 Subject: [openstack-dev] [os-upstream-institute] Sync calls with the mentors for Berlin Message-ID: <5E88AB2F-9D53-47D2-8C95-086A5C3C2C41@gmail.com> Hi, We are planning two calls to prepare for the Berlin Summit. One is some time between Oct 15 and 23 and the other is the week of November 5 which is the week before the Summit. 
The purpose of the first call is to go through the training material and prepare for the training. We will check the slides and the exercises and check whether we have any missing items or things to correct in the Contributor guide. The second call will be for checking back on action items and discussing any last-minute administrativa. I created a Doodle poll to find the best times within the two aforementioned periods: https://doodle.com/poll/3ctqzkikn7prnpgg Please mark __all the slots__ that can work for you, even if it is not the most convenient time of the day as we will only have two of these calls. If you plan to join the crew for Berlin and you haven’t signed up yet on the wiki please do it here: https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Sat Oct 6 09:19:22 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sat, 6 Oct 2018 11:19:22 +0200 Subject: [openstack-dev] [os-upstream-institute] Cancelling the bi-weekly meetings Message-ID: Hi Training Team, As a follow up from our last call we agreed to cancel the regular IRC meetings as we do not have that many regular topics to discuss. We will keep scheduling ad-hoc calls to prepare for the trainings before the Summits and utilize the mailing list and the IRC channel for communication. Please raise your concern __before October 12__ in case you don’t agree with this decision. If I don’t receive any concerns or preference on keeping the meeting till that date I will go ahead and remove it from the community calendar. Please let me know if you have questions. Thanks and Best Regards, Ildikó From dangtrinhnt at gmail.com Sat Oct 6 13:51:30 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 6 Oct 2018 22:51:30 +0900 Subject: [openstack-dev] [Searchlight] Weekly report Stein R-27 Message-ID: Dear team, I think it's good to maintain a weekly report to keep everyone informed of what's going on. So, from now on, I will try to write weekly reports. Here is the report for the week of Oct 1 - Oct 5, or Stein R-27: https://www.dangtrinh.com/2018/10/searchlight-report-stein-r-27.html All comments are welcome! Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeology.lab at gmail.com Sat Oct 6 16:13:58 2018 From: codeology.lab at gmail.com (Cody) Date: Sat, 6 Oct 2018 12:13:58 -0400 Subject: [openstack-dev] [tripleo][nova] Unable to launch instances after fresh deployment Message-ID: Hello stackers, I have a 3-controller based cluster deployed using TripleO (Queens, tripleo-current). Every time after a fresh deployment, I am unable to launch instances for the first few times. New launches either get stuck at BUILD or go to an ERROR state. The issue continues until the "nova_placement" container on the controller with the InternalAPI VIP becomes "unhealthy" and receives a manual docker restart. Only after that, everything goes back to normal. The problem is reproducible and consistent every time with fresh deployments. I found a similar description of the syndrome in this bug [1]. The report is for Rocky, but the syndrome is similar to mine using Queens. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1630069 Here is an excerpt from the /var/log/containers/nova/nova-scheduler.log from one of the controller nodes: ...
2018-10-06 04:57:37.926 28 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Host mapping not found for host overcloud-novacompute-1.localdomain. Not tracking instance info for this host. 2018-10-06 04:57:37.926 25 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Host mapping not found for host overcloud-novacompute-1.localdomain. Not tracking instance info for this host. 2018-10-06 04:57:37.926 30 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Host mapping not found for host overcloud-novacompute-1.localdomain. Not tracking instance info for this host. 2018-10-06 04:57:37.926 32 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Host mapping not found for host overcloud-novacompute-1.localdomain. Not tracking instance info for this host. 2018-10-06 04:57:37.926 26 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Host mapping not found for host overcloud-novacompute-1.localdomain. Not tracking instance info for this host. 2018-10-06 04:57:37.927 28 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Received a sync request from an unknown host 'overcloud-novacompute-1.localdomain'. Re-created its InstanceList. 2018-10-06 04:57:37.927 25 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Received a sync request from an unknown host 'overcloud-novacompute-1.localdomain'. Re-created its InstanceList. 2018-10-06 04:57:37.927 30 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Received a sync request from an unknown host 'overcloud-novacompute-1.localdomain'. Re-created its InstanceList. 2018-10-06 04:57:37.927 32 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Received a sync request from an unknown host 'overcloud-novacompute-1.localdomain'. Re-created its InstanceList. 2018-10-06 04:57:37.927 26 INFO nova.scheduler.host_manager [req-dfa89ff4-cbde-482e-ad40-0696294ffdd1 - - - - -] Received a sync request from an unknown host 'overcloud-novacompute-1.localdomain'. Re-created its InstanceList. Could this be the same bug for Queens, too? Cody -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sat Oct 6 21:43:21 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 6 Oct 2018 17:43:21 -0400 Subject: [openstack-dev] [openstack-ansible] blocked gates Message-ID: Hi everyone: Our gates have unfortunately been blocked because of OpenSUSE failures and Bionic timeouts, so I have submitted a patch to set both to non-voting. Bionic: it seems to take a really long time for APT installs; I'm investigating, and thanks to fungi I hope to have an instance to get it up and running soon. OpenSUSE: package conflicts; nicolasbock is looking into it. I have created a patch to disable both jobs, as well as two patches to enable them so we can be ready to enable them once they're fixed. remote: https://review.openstack.org/608427 Set OpenSUSE and Bionic jobs to non-voting remote: https://review.openstack.org/608428 Restore Bionic jobs to voting remote: https://review.openstack.org/608429 Restore OpenSUSE jobs Thanks everyone for your patience!
Mohammed From ekcs.openstack at gmail.com Sat Oct 6 23:54:38 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Sat, 6 Oct 2018 16:54:38 -0700 Subject: [openstack-dev] [masakari][congress] what is a host? Message-ID: Hi all, I'm working on a potential integration between masakari and congress. But I am stuck on some basic usage questions I could not answer in my search of docs and demos. Any clarification or references would be much appreciated! 1. What does a host refer to in masakari API? Here's the explanation in API doc: "Host can be any kind of virtual machine which can have compute service running on it." (https://developer.openstack.org/api-ref/instance-ha/#hosts-hosts) So is a masakari host usually a nova instance/server instead of a host/hypervisor? 2. Through the masakari api, how does one go about configuring a VM to be managed by masakari instance HA? Thanks so much! Eric Kao From sam47priya at gmail.com Sun Oct 7 03:29:37 2018 From: sam47priya at gmail.com (Sam P) Date: Sun, 7 Oct 2018 12:29:37 +0900 Subject: [openstack-dev] [masakari][congress] what is a host? In-Reply-To: References: Message-ID: Hi Eric, (1) "virtual machine" is a bug. This needs to be corrected to physical machine or hypervisor. A masakari host is a physical host/hypervisor. I will correct this. (2) Not through the masakari APIs. You have to add the metadata key 'HA_Enabled=True' to each VM by using the nova API. Masakari monitors check for failures and send a notification to the masakari API if any failure is detected (i.e. host, VM or process failures). In a host failure (hypervisor down) scenario, the Masakari engine gets the VM list on that hypervisor and starts evacuating VMs. The operator can configure masakari to evacuate all VMs or only the VMs with the metadata key 'HA_Enabled=True'. Please see the config file [1] section [host_failure] for more details. Let me know if you need more info on this. [1] https://docs.openstack.org/masakari/latest/sample_config.html --- Regards, Sampath On Sun, Oct 7, 2018 at 8:55 AM Eric K wrote: > Hi all, I'm working on a potential integration between masakari and > congress. But I am stuck on some basic usage questions I could not > answer in my search of docs and demos. Any clarification or references > would be much appreciated! > > 1. What does a host refer to in masakari API? Here's the explanation in > API doc: > "Host can be any kind of virtual machine which can have compute > service running on it." > (https://developer.openstack.org/api-ref/instance-ha/#hosts-hosts) > > So is a masakari host usually a nova instance/server instead of a > host/hypervisor? > > 2. Through the masakari api, how does one go about configuring a VM to > be managed by masakari instance HA? > > Thanks so much! > > Eric Kao > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sun Oct 7 09:10:39 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Sun, 7 Oct 2018 11:10:39 +0200 Subject: [openstack-dev] [goals][upgrade-checkers] Oslo library status Message-ID: <301224F2-16C9-4817-9B14-BC05DE051E5A@redhat.com> Hi, I started working on the „neutron-status upgrade check” tool, with a noop operation for now.
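Roughly, the noop version is just the base class from the lib with one placeholder check — only a sketch, the class and check names below are mine, and it follows the naming in current master, which may still change:

import sys

from oslo_config import cfg
from oslo_upgradecheck import upgradecheck


class Checks(upgradecheck.UpgradeCommands):
    """Checks run by the „neutron-status upgrade check” command."""

    def _check_noop(self):
        # Placeholder check which always passes.
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)

    _upgrade_checks = (('noop', _check_noop),)


def main():
    # Wired up as the neutron-status console script entry point.
    return upgradecheck.main(
        cfg.CONF, project='neutron', upgrade_command=Checks())


if __name__ == '__main__':
    sys.exit(main())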
Patch is in [1]. I started using this new oslo_upgradecheck library at version 0.0.1.dev15, which is available on pypi.org, but I see that in master there are some changes already (like shortened names of base classes). So my question here is: should I just wait a bit more for a more or less „stable” version of this lib and then push the neutron patch for review (do you have any ETA for that?), or should we not rely on this oslo library in this release and implement it all on our own, as is currently done in nova? [1] https://review.openstack.org/#/c/608444/ — Slawek Kaplonski Senior software engineer Red Hat From gmann at ghanshyammann.com Sun Oct 7 11:27:11 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Oct 2018 20:27:11 +0900 Subject: Re: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> Message-ID: <1664e47f98a.ea4f29ba21783.499931815572605632@ghanshyammann.com> ---- On Fri, 05 Oct 2018 22:16:36 +0900 Julia Kreger wrote ---- > +1 to bringing back formal meetings. A few replies below regarding time/agenda. > > On Fri, Oct 5, 2018 at 5:38 AM Doug Hellmann wrote: > Thierry Carrez writes: > > > Ghanshyam Mann wrote: > >> ---- On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley wrote ---- > >> > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: > >> > [...] > >> > > TC members, please reply to this thread and indicate if you would > >> > > find meeting at 1300 UTC on the first Thursday of every month > >> > > acceptable, and of course include any other comments you might > >> > > have (including alternate times). > >> > > >> > This time is acceptable to me. As long as we ensure that community > >> > feedback continues more frequently in IRC and on the ML (for example > >> > by making it clear that this meeting is expressly *not* for that) > >> > then I'm fine with resuming formal meetings. > >> > >> +1. Time works fine for me, Thanks for considering the APAC TZ. > >> > >> I agree that we should keep encouraging the usual discussion in existing office hours, IRC or ML. I will be definitely able to attend other 2 office hours (Tuesday and Wednesday) which are suitable for my TZ. > > > > 1300 UTC is obviously good for me, but once we are off DST that will > > mean 5am for our Pacific Time people (do we have any left ?). > > > > Maybe 1400 UTC would be a better trade-off? > > Julia is out west, but I think not all the way to PST. > > My home time zone is PST. It would be awesome if we could hold the meeting an hour later, but I can get up early in the morning once a month. If we choose to meet more regularly, then a one hour later start would be more appreciated if it is not too much of an inconvenience to APAC TC members. That being said, I do typically get up early, just not 0500 early that often. One hour later (1400 UTC) also works for me. -gmann > > Regarding frequency, I agree with mnaser that once per month might be > > too rare. That means only 5-ish meetings for a given 6-month > > membership. But that can work if we use the meeting as a formal progress > > status checkpoint, rather than a way to discuss complex topics. > > I think we can definitely manage the agenda to minimize the number of > complex discussions. If that proves to be too hard, I wouldn't mind > meeting more often, but there does seem to be a lot of support for > preferring other venues for those conversations.
> > +1 I think there is a point where we need to recognize there is a time and place for everything, and some of those long running complex conversations might not be well suited for what would essentially be "review business status" meetings. If we have any clue that something is going to be a very long and drawn out discussion, then I feel like we should make an effort to schedule individually. __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Sun Oct 7 12:02:53 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 07 Oct 2018 21:02:53 +0900 Subject: Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default In-Reply-To: References: Message-ID: <1664e68a792.cea6625622063.3163129957725641398@ghanshyammann.com> ---- On Sat, 06 Oct 2018 01:42:11 +0900 Erlon Cruz wrote ---- > Hey folks, > Following up on the discussions that we had on the Denver PTG, the Cinder team is planning to enable online volume_extend tests[1] to be run by default. Currently, those tests are only run by some CI systems and infra jobs that explicitly set it to be so. > We are also adding a negative test and an associated option in tempest[2] to allow vendor drivers that do not support online extending to be tested. This patch will be merged first and, after a reasonable time for people to check whether their backends support that or not, we will proceed and merge the devstack patch[1] triggering the tests in all CIs and infra jobs. Thanks Erlon. +1 on running those tests in the gate. Though I have a concern over running those tests by default (making the config options True by default), because it is not confirmed that all cinder backends implement this functionality, and it only works for the nova libvirt driver. We need to keep the config options' default as False, and Devstack/CI can set them to True to run the tests. If this feature becomes mandatory functionality for every backend to implement (a cinder "standard feature", I think) and it works with all nova drivers also (in terms of instance action events), then we can enable these feature tests by default. But until then, we should keep them disabled by default in Tempest, while we can enable them in the gate via Devstack (the patch you mentioned) and test them daily on the integrated gate. Overall, I am OK with the Devstack change to enable these tests for every Cinder backend, but we need to keep the config options false in Tempest. I will review those patches and leave comments on gerrit (I saw those patches introduce a new config option rather than using the existing one). -gmann > Please let us know if you have any questions or concerns about it.
> Kind regards, Erlon _________________ [1] https://review.openstack.org/#/c/572188/ [2] https://review.openstack.org/#/c/578463/ __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From hongbin034 at gmail.com Sun Oct 7 17:48:46 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 7 Oct 2018 13:48:46 -0400 Subject: [openstack-dev] [kolla-ansible][zun] Request to make another release for stable/queens Message-ID: Hi Kolla team, I have a fixup on the configuration of Zun service: https://review.openstack.org/#/c/591256/ . The fix has been backported to stable/queens and I wonder if it is possible to release kolla-ansible 6.2.0 that contains this patch. The deployment of Zun service needs this patch to work properly. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From litao3721 at 126.com Mon Oct 8 01:33:33 2018 From: litao3721 at 126.com (Tao Li) Date: Mon, 8 Oct 2018 09:33:33 +0800 Subject: [openstack-dev] Re: [nova][python-novaclient] A Test issue in python-novaclient. In-Reply-To: <6796d5e5-614b-2cb9-3fa3-24cfe5fe0978@gmail.com> References: <004901d45869$f0713b70$d153b250$@126.com> <6796d5e5-614b-2cb9-3fa3-24cfe5fe0978@gmail.com> Message-ID: <006801d45ea6$ed964f60$c8c2ee20$@126.com> Hi Matt, Sorry for the late reply because of the National Day holiday in China. I will report a bug. -----Original Message----- From: Matt Riedemann Sent: September 30, 2018 23:34 To: Tao Li ; OpenStack Development Mailing List (not for usage questions) ; Brin Zhang (张百林) Subject: Re: [nova][python-novaclient] A Test issue in python-novaclient. On 9/29/2018 10:01 PM, Tao Li wrote: > I found this test was added about ten days ago in this patch > https://review.openstack.org/#/c/599276/, > > I checked it and don’t know why it failed. I think my commit shouldn’t > cause this issue. So do you have any suggestions for me? > Yes it must be an intermittent race bug introduced by that change for the 2.66 microversion. Since it deals with filtering based on time, we might not have a time window that is big enough (we expect to get a result of changes < $before but are getting <= $before). http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A %5C%22%7C%20%20%20%20%20testtools.matchers._impl.MismatchError%3A%20%5B'crea te'%5D%20!%3D%20%5B'create'%2C%20'stop'%5D%5C%22%20AND%20tags%3A%5C%22consol e%5C%22&from=7d Please report a bug against python-novaclient. The 2.66 test is based on a similar changes_since test, so we should see why they are behaving differently. -- Thanks, Matt From soulxu at gmail.com Mon Oct 8 01:53:36 2018 From: soulxu at gmail.com (Alex Xu) Date: Mon, 8 Oct 2018 09:53:36 +0800 Subject: [openstack-dev] [nova] [ironic] agreement on how to specify options that impact scheduling and configuration In-Reply-To: References: Message-ID: Jay Pipes wrote on Fri, Oct 5, 2018 at 9:25 PM: > Added [ironic] topic. > > On 10/04/2018 06:06 PM, Chris Friesen wrote: > > While discussing the "Add HPET timer support for x86 guests" > > blueprint[1] one of the items that came up was how to represent what are > > essentially flags that impact both scheduling and configuration. Eric > > Fried posted a spec to start a discussion[2], and a number of nova > > developers met on a hangout to hash it out. This is the result.
> > > > In this specific scenario the goal was to allow the user to specify that > > their image required a virtual HPET. For efficient scheduling we wanted > > this to map to a placement trait, and the virt driver also needed to > > enable the feature when booting the instance. (This can be generalized > > to other similar problems, including how to specify scheduling and > > configuration information for Ironic.) > > > > We discussed two primary approaches: > > > > The first approach was to specify an arbitrary "key=val" in flavor > > extra-specs or image properties, which nova would automatically > > translate into the appropriate placement trait before passing it to > > placement. Once scheduled to a compute node, the virt driver would look > > for "key=val" in the flavor/image to determine how to proceed. > > > > The second approach was to directly specify the placement trait in the > > flavor extra-specs or image properties. Once scheduled to a compute > > node, the virt driver would look for the placement trait in the > > flavor/image to determine how to proceed. > > > > Ultimately, the decision was made to go with the second approach. The > > result is that it is officially acceptable for virt drivers to key off > > placement traits specified in the image/flavor in order to turn on/off > > configuration options for the instance. If we do get down to the virt > > driver and the trait is set, and the driver for whatever reason > > determines it's not capable of flipping the switch, it should fail. > > Ironicers, pay attention to the above! :) It's a green light from Nova > to use the traits list contained in the flavor extra specs and image > metadata when (pre-)configuring an instance. > > > It should be noted that it only makes sense to use placement traits for > > things that affect scheduling. If it doesn't affect scheduling, then it > > can be stored in the flavor extra-specs or image properties separate > > from the placement traits. Also, this approach only makes sense for > > simple booleans. Anything requiring more complex configuration will > > likely need additional extra-spec and/or config and/or unicorn dust. > > Ironicers, also pay close attention to the advice above. Things that are > not "scheduleable" -- in other words, things that don't filter the list > of hosts that a workload can land on -- should not go in traits. > ++, see, I talked about the same thing before https://review.openstack.org/#/c/504952/5/specs/approved/config-template-traits.rst at 95 :) > > Finally, here's the HPET os-traits patch. Reviews welcome (it's a tiny > patch): > > https://review.openstack.org/608258 > > Best, > -jay > > > Chris > > > > [1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest > > [2] > > > https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jun.zhongjun2 at gmail.com Mon Oct 8 02:57:30 2018 From: jun.zhongjun2 at gmail.com (jun zhong) Date: Mon, 8 Oct 2018 10:57:30 +0800 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> Message-ID: +1 Dustin Schoenbrun wrote on Thu, Oct 4, 2018 at 3:12 AM: > +1 > > --- > Dustin Schoenbrun > Senior OpenStack Quality Engineer > Red Hat, Inc. > dschoenb at redhat.com > > > On Wed, Oct 3, 2018 at 12:52 PM Goutham Pacha Ravi > wrote: > >> +1 >> >> -- >> Goutham Pacha Ravi >> >> >> On Tue, Oct 2, 2018 at 10:59 AM Tom Barron wrote: >> > >> > Amit Oren has contributed high quality reviews in the last couple of >> > cycles so I would like to nominate him for manila core. >> > >> > Please respond with your +1 or -1 votes. We'll hold voting open for 7 >> > days. >> > >> > Thanks, >> > >> > -- Tom Barron (tbarron) >> > >> > >> >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yikunkero at gmail.com Mon Oct 8 07:09:36 2018 From: yikunkero at gmail.com (Yikun Jiang) Date: Mon, 8 Oct 2018 15:09:36 +0800 Subject: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API? Message-ID: In Denver, we agreed to add a new "re-image" API in cinder to support volume-backed server rebuild with a new image. An initial blueprint has been drafted in [3], welcome to review it, thanks. : ) The API is very simple, something like: URL: POST /v3/{project_id}/volumes/{volume_id}/action Request body: { 'os-reimage': { 'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90" } } The question is do we need a "force" parameter in the request body? Like: { 'os-reimage': { 'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90", * 'force': True* } } The "force" parameter idea comes from [4], and means that 1. we can re-image an "available" volume directly. 2. we can't re-image an "in-use"/"reserved" volume directly. 3. we can only re-image an "in-use"/"reserved" volume with the "force" parameter. And it means nova needs to always call the re-image API with an extra "force" parameter, because the volume status is "in-use" or "reserved" when we rebuild the server. *So, what's your idea?
Do we really want to add this "force" parameter?* [1] https://etherpad.openstack.org/p/nova-ptg-stein L483 [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12 [3] https://review.openstack.org/#/c/605317 [4] https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst at 75 Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Mon Oct 8 07:50:30 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Mon, 8 Oct 2018 10:50:30 +0300 Subject: [openstack-dev] [vitrage] vitrage virtual PTG Message-ID: Hi, We will hold the vitrage virtual PTG on Wednesday-Thursday this week, October 10-11th. The agenda is listed in the etherpad[1], and you can still add new topics for discussion. We will skip the regular IRC meeting this week. [1] https://etherpad.openstack.org/p/vitrage-stein-ptg Thanks, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Mon Oct 8 09:58:24 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 8 Oct 2018 20:58:24 +1100 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> Message-ID: On 05/10/18 21:54, Jim Rollenhagen wrote: > GraphQL has introspection features that allow clients to pull the > schema (types, queries, mutations, etc): > https://graphql.org/learn/introspection/ > > That said, it seems like using this in a client like OpenStackSDK > would get messy quickly. Instead of asking for which versions are > supported, you'd have to fetch the schema, map it to actual features > somehow, and adjust queries based on this info. > A main difference in software architecture when using GraphQL is that a client makes use of a GraphQL client library instead of relying on an SDK. > I guess there might be a middleground where we could fetch the REST > API version, and know from that what GraphQL queries can be made. > > // jim > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Oct 8 10:00:48 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 8 Oct 2018 15:30:48 +0530 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Hi Doug, Should I use something like SimpleHttpServer to upload a file and download the same, or are there other, more efficient ways to handle it? Kindly let me know if you have any suggestions. Thanks & Best Regards, Abhishek Kekane On Fri, Oct 5, 2018 at 4:57 PM Doug Hellmann wrote: > Abhishek Kekane writes: > > > Hi Matt, > > > > Thanks for the input, I guess I should use ' > > http://git.openstack.org/static/openstack.png' which will definitely > work. > > Clark, Matt, Kindly let me know your opinion about the same. > > That URL would not be on the local node running the test, and would > eventually exhibit the same problems. In fact we have seen issues > cloning git repositories as part of the tests in the past. > > You need to use a localhost URL to ensure that the download doesn't have > to go off of the node. That may mean placing something into the directory > where Apache is serving files as part of the test setup. > > Doug >
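P.S.: if the localhost route is the way to go, a small self-contained helper in the test setup might be enough — just a sketch on my side, the port, directory, and file names below are made up, and the directory argument needs Python 3.7+:

import functools
import http.server
import threading


def serve_test_dir(directory, port=8181):
    # Serve `directory` on 127.0.0.1 only, so a download such as
    # http://127.0.0.1:8181/openstack.png never leaves the node.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    httpd = http.server.ThreadingHTTPServer(('127.0.0.1', port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd


# usage in a test fixture:
# httpd = serve_test_dir('/tmp/test-files')
# ... run the download test against http://127.0.0.1:8181/<file> ...
# httpd.shutdown()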
-------------- next part -------------- An HTML attachment was scrubbed... URL: From amal.kammoun.2 at gmail.com Mon Oct 8 10:03:26 2018 From: amal.kammoun.2 at gmail.com (amal kammoun) Date: Mon, 8 Oct 2018 12:03:26 +0200 Subject: [openstack-dev] [Monasca] [monasca-Agent] Monasca agent problem Message-ID: Hello, I have an issue with the monasca Agent. In fact, I installed monasca with Openstack using devstack. I now want to monitor the instances deployed using Openstack. For that I installed on each instance the monasca agent with the following link: https://github.com/openstack/monasca-agent/blob/master/docs/Agent.md The problem is that I cannot define alarms for the concerned instance. Example on my agent: [image: image.png] On my monitoring system I found the alarm definition but it is not activated. Also, the instance where the agent is running is not declared as a server on the monasca servers list. Regards, Amal Kammoun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Mon Oct 8 10:14:40 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Mon, 8 Oct 2018 12:14:40 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints Message-ID: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> Hi, I would like to start a discussion about some changes and additions I would like to see in kolla and kolla-ansible. 1. Keepalived is a problem in layer3 spine leaf networks as any floating IP can only exist in one leaf (and VRRP is a problem in layer3). I would like to use consul and registrar to get rid of the "internal" floating IP and use consul's DNS service discovery to connect all services with each other (see the small lookup sketch after this list). 2. Using "ports" for external API (endpoint) access is a major headache if a firewall is involved. I would like to configure the HAProxy (or fabio) for the external access to use "Host:" like, e.g. "Host: keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. Any customer would just need HTTPS access and not have to open all those ports in his firewall. For some enterprise customers it is not possible to request FW changes like that. 3. HAProxy is not capable of handling a "read/write" split with Galera. I would like to introduce ProxySQL to be able to scale Galera. 4. HAProxy is fine but fabio integrates well with consul, statsd and could be connected to a vault cluster to manage secure certificate access. 5. I would like to add vault as a Barbican backend. 6. I would like to add an option to enable tokenless authentication for all services with each other to get rid of all the openstack service passwords (security issue).
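To illustrate point 1: once consul's DNS interface is wired into the resolvers, any service could find another one with a plain name lookup instead of a VIP. A tiny sketch — the service name and port are just examples, and names like <service>.service.consul assume consul's default DNS scheme:

import socket

# Resolve a service registered in consul; <name>.service.consul is
# consul's default DNS naming for registered services.
addrs = socket.getaddrinfo(
    'keystone.service.consul', 5000, proto=socket.IPPROTO_TCP)
print(sorted({info[4][0] for info in addrs}))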
What do you think about it? All the best, Florian -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From tpb at dyncloud.net Mon Oct 8 11:23:11 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 8 Oct 2018 07:23:11 -0400 Subject: [openstack-dev] [manila] Stein mid-cycle and bug smash dates Message-ID: <20181008112311.xdu24qgascglq6k5@barron.net> In a recent weekly manila community meeting [1] we tentatively agreed to have a virtual mid-cycle Wednesday and Thursday 16-17 January 2019. This would be the week after the Stein-2 milestone and a month before Manila Feature Proposal Freeze. Also, given the success of the China-based bug-smashes in the last few years, we are planning an Americas-timezone-friendly bug-smash as well, aiming for 13-14 March 2019, the week after the Stein-3 milestone and Feature Freeze. Please respond to the list if you have objections or counter-proposals to these proposed dates. Thanks! -- Tom Barron (tbarron) [1] http://eavesdrop.openstack.org/meetings/manila/2018/manila.2018-09-27-15.00.log.txt From witold.bedyk at est.fujitsu.com Mon Oct 8 12:20:35 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 8 Oct 2018 12:20:35 +0000 Subject: [openstack-dev] [Monasca] [monasca-Agent] Monasca agent problem In-Reply-To: References: Message-ID: Hi Amal, I guess the mailing list doesn’t accept attachments. Could you please check if you’re using the right project? Monasca is multi-tenant and stores all the measurements and alarm definitions per project. If an alarm definition is created, let’s say, for project ‘admin’, and the agent is configured to send the measurements for project ‘customer1’, the alarm won’t get triggered in either of them. Best regards Witek From: amal kammoun Sent: Monday, 8 October 2018 12:03 To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Monasca] [monasca-Agent] Monasca agent problem Hello, I have an issue with the monasca Agent. In fact, I installed monasca with Openstack using devstack. I now want to monitor the instances deployed using Openstack. For that I installed on each instance the monasca agent with the following link: https://github.com/openstack/monasca-agent/blob/master/docs/Agent.md The problem is that I cannot define alarms for the concerned instance. Example on my agent: [image.png] On my monitoring system I found the alarm definition but it is not activated. Also, the instance where the agent is running is not declared as a server on the monasca servers list. Regards, Amal Kammoun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Oct 8 13:27:51 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 8 Oct 2018 08:27:51 -0500 Subject: [openstack-dev] [nova] Rocky RC time regression analysis In-Reply-To: References: Message-ID: <4a92262d-88f9-d301-172a-544336ffd99c@fried.cc> Mel- I don't have much of anything useful to add here, but wanted to say thanks for this thorough analysis. It must have taken a lot of time and work. Musings inline. On 10/05/2018 06:59 PM, melanie witt wrote: > Hey everyone, > > During our Rocky retrospective discussion at the PTG [1], we talked > about the spec freeze deadline (milestone 2, historically it had been > milestone 1) and whether or not it was related to the hectic > late-breaking regression RC time we had last cycle. I had an action item > to go through the list of RC time bugs [2] and dig into each one, > examining: when the patch that introduced the bug landed vs when the bug > was reported, why it wasn't caught sooner, and report back so we can > take a look together and determine whether they were related to the spec > freeze deadline. > > I used this etherpad to make notes [3], which I will [mostly] copy-paste > here. These are all after RC1 and I'll paste them in chronological order > of when the bug was reported. > > Milestone 1 r-1 was 2018-04-19. > Spec freeze was milestone 2 r-2 was 2018-06-07. > Feature freeze (FF) was on 2018-07-26. > RC1 was on 2018-08-09.
> > 1) Broken live migration bandwidth minimum => maximum based on neutron > event https://bugs.launchpad.net/nova/+bug/1786346 > > - Bug was reported on 2018-08-09, the day of RC1 > - The patch that caused the regression landed on 2018-03-30 > https://review.openstack.org/497457 > - Unrelated to a blueprint, the regression was part of a bug fix > - Was found because prometheanfire was doing live migrations and noticed > they seemed to be stuck at 1MiB/s for linuxbridge VMs > - The bug was due to a race, so the gate didn't hit it > - Comment on the regression bug from dansmith: "The few hacked up gate > jobs we used to test this feature at merge time likely didn't notice the > race because the migrations finished before the potential timeout and/or > are on systems so loaded that the neutron event came late enough for us > to win the race repeatedly." > > 2) Docs for the zvm driver missing > > - All zvm driver code changes were merged by 2018-07-17 but the > documentation was overlooked but was noticed near RC time > - Blueprint was approved on 2018-02-12 > > 3) Volume status remains "detaching" after a failure to detach a volume > due to DeviceDetachFailed https://bugs.launchpad.net/nova/+bug/1786318 > > - Bug was reported on 2018-08-09, the day of RC1 > - The change that introduced the regression landed on 2018-02-21 > https://review.openstack.org/546423 > - Unrelated to a blueprint, the regression was part of a bug fix > - Question: why wasn't this caught earlier? > - Answer: Unit tests were not asserting the call to the roll_detaching > volume API. Coverage has since been added along with the bug fix > https://review.openstack.org/590439 > > 4) OVB overcloud deploy fails on nova placement errors > https://bugs.launchpad.net/nova/+bug/1787910 > > - Bug was reported on 2018-08-20 > - Change that caused the regression landed on 2018-07-26, FF day > https://review.openstack.org/517921 > - Blueprint was approved on 2018-05-16 > - Was found because of a failure in the > legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master > CI job. The ironic-inspector CI upstream also failed because of this, as > noted by dtantsur. > - Question: why did it take nearly a month for the failure to be > noticed? Is there any way we can cover this in our > ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa job? > > 5) when live migration fails due to a internal error rollback is not > handled correctly https://bugs.launchpad.net/nova/+bug/1788014 > > - Bug was reported on 2018-08-20 > - The change that caused the regression landed on 2018-07-26, FF day > https://review.openstack.org/434870 > - Unrelated to a blueprint, the regression was part of a bug fix > - Was found because sean-k-mooney was doing live migrations and found > that when a LM failed because of a QEMU internal error, the VM remained > ACTIVE but the VM no longer had network connectivity. > - Question: why wasn't this caught earlier? > - Answer: We would need a live migration job scenario that intentionally > initiates and fails a live migration, then verify network connectivity > after the rollback occurs. > - Question: can we add something like that? 
> > 6) nova-manage db online_data_migrations hangs on instances with no host > set https://bugs.launchpad.net/nova/+bug/1788115 > > - Bug was reported on 2018-08-21 > - The patch that introduced the bug landed on 2018-05-30 > https://review.openstack.org/567878 > - Unrelated to a blueprint, the regression was part of a bug fix > - Question: why wasn't this caught earlier? > - Answer: To hit the bug, you had to have had instances with no host set > (that failed to schedule) in your database during an upgrade. This does > not happen during the grenade job > - Question: could we add anything to the grenade job that would leave > some instances with no host set to cover cases like this? > > 7) release notes erroneously say that nova-consoleauth doesn't have to > run in Rocky https://bugs.launchpad.net/nova/+bug/1788470 > > - Bug was reported on 2018-08-22 > - The patches that conveyed the wrong information for the docs landed on > 2018-05-07 https://review.openstack.org/565367 > - Blueprint was approved on 2018-03-12 > - Question: why wasn't this caught earlier? > - Answer: The patches should have been tested by a devstack change that > runs without the nova-consoleauth service, to verify the system can work > without the service. > - Question: can we add test coverage for that? > - Answer: Yes, it's proposed as a WIP https://review.openstack.org/607070 > > 8) libvirt driver rx_queue_size changes break SR-IOV > https://bugs.launchpad.net/nova/+bug/1789074 > > - Bug was reported on 2018-08-25 > - The change that caused the regression landed on 2018-04-23 > https://review.openstack.org/484997 > - Blueprint was approved on 2018-03-23 > - Was found because moshele tried to create a server with an SRIOV > interface (PF or VF) with rx_queue_size and tx_queue_size set in > nova.conf and it failed. > - Question: why wasn't this caught earlier? > - Answer: Exposing the bug required both setting rx/tx queue sizes and > booting a server with a SRIOV interface. We don't have hardware for > testing SRIOV in the gate. > - Question: is there any other way to test this via functional tests? > From what I understand, there isn't. > > Based on all of this, I don't believe the spec freeze at milestone 2 was > related to the late-breaking regressions we had around RC time. Agree. > We > approved 10 additional blueprints between r-1 2018-04-19 and r-2 > 2018-06-07 [4]. Half of the regressions were unrelated to feature work > and were introduced as part of bug fixes. Of the other half, 3 out of 4 > had blueprints approved before r-1. Only one involved a blueprint > approved after r-1. > > In a couple of cases, the patch that introduced the regression landed on > feature freeze day, with the bugs being found about a month later, which > was about two weeks after RC1. In most cases, the regression landed > months before the bug was found, because of lack of test coverage. > > It seems like if we could do anything about this, it would be to move > feature freeze day sooner so we have more time between FF and RC. Yes, I think this would be a thing worth trying. Or perhaps... > But I > have a feeling that some of the bugs get found when people take the RC > and try it out. ...perhaps moving both FF and RC1 earlier before the release date. Not sure it's worth the pain of having a longer RC window. > Based on what I've found here, I think we are fine to stick with using > milestone 2 as the spec freeze date. Agree. 
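On item 6 above, the hang is a property of the runner's loop rather than of any single migration: the loop only exits when a pass finds nothing, so a migration that keeps finding rows it can never process (instances with no host set) spins forever. A stripped-down sketch of the failure mode (illustrative only, not the actual nova-manage code):

    def run_online_migrations(migrations, batch_size=50):
        while True:
            found_total = 0
            for migrate in migrations:
                # Each callable reports (found, done) for one batch.
                found, done = migrate(batch_size)
                found_total += found
            if found_total == 0:
                return  # exits only when nothing at all is found

    def migrate_instances(batch_size):
        # Finds 10 instances with no host set but can migrate none of them,
        # so it reports (10, 0) on every pass and the loop never terminates.
        return 10, 0

    # run_online_migrations([migrate_instances])  # would spin forever

Stopping (or at least warning) when a full pass reports found > 0 but done == 0 is roughly the shape of the eventual fix, and it is also the kind of condition a grenade scenario seeded with never-scheduled instances would catch.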
> And we might want to consider > moving feature freeze day sooner to give more time between feature > freeze and RC. This cycle, even though it is a longer cycle, we still > have only 2 weeks between s-3 and rc1 [5]. > > I'm ambivalent about changing the usual milestone 3 feature freeze date > because I have a feeling that people tend to try things out once RC is > released, but maybe I'm wrong on that. What are your thoughts? > > Finally, please do jump in and reply if you have info to share or Random thoughts, in no particular order: - The number of regressions due to bug fixes may indicate that we were rushing, and/or weren't reviewing as deeply/thoroughly as we should have been. Which may very well be a factor of the code being complex, convoluted, or otherwise incomprehensible to almost everyone. If so, is it worth taking some time away from feature work to catch up on tech debt, refactoring, simplification, etc.? - The bugfix regressions could also be due to improper prioritization. Do we have a sense of whether the cure was worse than the disease in any of these cases? Would we have been better off leaving the original bugs alone? Can we use that to inform our triaging process, especially late in the cycle? - We may want to consider the possibility that this was an anomaly, on the edge of the bell curve of normal, not related to anything we did or didn't do. By the same token, if we change something and Stein goes better, can we really pat ourselves on the back and say we made the difference by whatever we changed? I'm not suggesting we take no action; but that if we do something speculative like moving dates around, I'm not sure we can really measure success or failure. OTOH, if we do something that'll be beneficial regardless (like refactoring/simplification) then we win even if the RC storm is repeated. > questions to ask about the regressions listed above. I'm especially > interested in getting answers to the questions I posed earlier inline > about whether there's anything we can do to cover some of the cases with > our CI. 
> > Cheers, > -melanie > > > [1] https://etherpad.openstack.org/p/nova-rocky-retrospective > [2] https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo > [3] https://etherpad.openstack.org/p/nova-rocky-rc-regression-analysis > [4] > https://docs.google.com/spreadsheets/d/e/2PACX-1vQicKStmnQFcOdnZU56ynJmn8e0__jYsr4FWXs3GrDsDzg1hwHofvJnuSieCH3ExbPngoebmEeY0waH/pubhtml?gid=128173249&single=true > > [5] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Mon Oct 8 13:49:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Oct 2018 08:49:21 -0500 Subject: Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann wrote: > ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad < lbragstad at gmail.com> wrote ---- > > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki > wrote: > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there are no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type. > > > > > > I propose we consider > > > > > > <service-type>:<resource>:[<sub-resource>:]<action> > > > > > > Example for keystone (note, action names below are strictly examples; I am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through overrides / defaults. > > > --Morgan > > +1 -- I think the ordering will be cleaner if `resource` comes before > > `action|subaction`. > > > > ++ > > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead of projects). For example: > > compute:server:list > > compute:server:update > > compute:server:create > > compute:server:delete > > compute:server:action:reboot > > compute:server:action:confirm_resize (or confirm-resize) > > Do we need the "action" word there? I think the action name itself should convey the operation. IMO the notation below without the "action" word looks clear enough. What do you say? > > compute:server:reboot > compute:server:confirm_resize > I agree. I simplified this in the current version up for review. > > -gmann > > > > > Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nit-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list." > > In summary, the new convention based on the most recent feedback should be: > > <service-type>:<resource>:[<sub-resource>:]<action> > > Rules: <service-type> is always defined in the service types authority; > > resources are always singular. > > Thanks to all for sticking through this tedious discussion. I appreciate it.
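In oslo.policy terms the convention only changes the name strings; the HTTP specifics can still ride along on oslo.policy's DocumentedRuleDefault, as mentioned elsewhere in this thread. A sketch of one rule under the proposed scheme (the check string and description are illustrative, not nova's actual defaults):

    from oslo_policy import policy

    server_policies = [
        policy.DocumentedRuleDefault(
            name='compute:server:reboot',
            check_str='rule:admin_or_owner',
            description='Reboot a server.',
            operations=[
                {'method': 'POST',
                 'path': '/servers/{server_id}/action (reboot)'},
            ]),
    ]

    def list_rules():
        # Entry point consumed by oslo.policy's sample-file and
        # documentation generators.
        return server_policies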
> > /R > > > > Harry > > > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad > wrote: > > >> > > >> Bumping this thread again and proposing two conventions based on > the discussion here. I propose we decide on one of the two following > conventions: > > >> > > >> :: > > >> > > >> or > > >> > > >> :_ > > >> > > >> Where is the corresponding service type of the > project [0], and is either create, get, list, update, or delete. I > think decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > >> > > >> I think the plurality of the resource should default to what makes > sense for the operation being carried out (e.g., list:foobars, > create:foobar). > > >> > > >> I don't mind the first one because it's clear about what the > delimiter is and it doesn't look weird when projects have something like: > > >> > > >> ::: > > >> > > >> If folks are ok with this, I can start working on some > documentation that explains the motivation for this. Afterward, we can > figure out how we want to track this work. > > >> > > >> What color do you want the shed to be? > > >> > > >> [0] https://service-types.openstack.org/service-types.json > > >> [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > >> > > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > > >>> > > >>> > > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > > >>>> > > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < > john at johngarbutt.com> wrote ---- > > >>>> > tl;dr+1 consistent names > > >>>> > I would make the names mirror the API... because the Operator > setting them knows the API, not the codeIgnore the crazy names in Nova, I > certainly hate them > > >>>> > > >>>> Big +1 on consistent naming which will help operator as well as > developer to maintain those. > > >>>> > > >>>> > > > >>>> > Lance Bragstad wrote: > > >>>> > > I'm curious if anyone has context on the "os-" part of the > format? > > >>>> > > > >>>> > My memory of the Nova policy mess...* Nova's policy rules > traditionally followed the patterns of the code > > >>>> > ** Yes, horrible, but it happened.* The code used to have the > OpenStack API and the EC2 API, hence the "os"* API used to expand with > extensions, so the policy name is often based on extensions** note most of > the extension code has now gone, including lots of related policies* Policy > in code was focused on getting us to a place where we could rename policy** > Whoop whoop by the way, it feels like we are really close to something > sensible now! > > >>>> > Lance Bragstad wrote: > > >>>> > Thoughts on using create, list, update, and delete as opposed > to post, get, put, patch, and delete in the naming convention? > > >>>> > I could go either way as I think about "list servers" in the > API.But my preference is for the URL stub and POST, GET, etc. > > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < > lbragstad at gmail.com> wrote:If we consider dropping "os", should we > entertain dropping "api", too? Do we have a good reason to keep "api"?I > wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > > >>>> > +1The API is known as "compute" in api-ref, so the policy > should be for "compute", etc. > > >>>> > > >>>> Agree on mapping the policy name with api-ref as much as > possible. 
Other than the policy name having 'os-', we have 'os-' in the resource > name also in the nova API URL, like /os-agents, /os-aggregates etc. (almost every > resource except servers, flavors). As we cannot get rid of those in the API > URL, do we need to keep the same in policy naming too? Or we can have a policy > name like compute:agents:create/post, but that mismatches the api-ref, where > the agents resource URL is os-agents. > > >>> > > >>> Good question. I think this depends on how the service does policy > enforcement. > > >>> > > >>> I know we did something like this in keystone, which required > policy names and method names to be the same: > > >>> > > >>> "identity:list_users": "..." > > >>> > > >>> Because the initial implementation of policy enforcement used a > decorator like this: > > >>> > > >>> from keystone import controller > > >>> > > >>> @controller.protected > > >>> def list_users(self): > > >>> ... > > >>> > > >>> Having the policy name the same as the method name made it easier > for the decorator implementation to resolve the policy needed to protect > the API because it just looked at the name of the wrapped method. The > advantage was that it was easy to implement new APIs because you only > needed to add a policy, implement the method, and make sure you decorate > the implementation. > > >>> > > >>> While this worked, we are moving away from it entirely. The > decorator implementation was ridiculously complicated. Only a handful of > keystone developers understood it. With the addition of system-scope, it > would have only become more convoluted. It also enables a much more > copy-paste pattern (e.g., so long as I wrap my method with this decorator > implementation, things should work right?). Instead, we're calling > enforcement within the controller implementation to ensure things are > easier to understand. It requires developers to be cognizant of how > different token types affect the resources within an API. That said, > coupling the policy name to the method name is no longer a requirement for > keystone. > > >>> > > >>> Hopefully, that helps explain why we needed them to match. > > >>>> > > >>>> Also we have the action API (I know from nova, not sure about other > services), like POST /servers/{server_id}/action {addSecurityGroup}, and > their current policy names are all inconsistent: a few have a policy name > including their resource name, like > "os_compute_api:os-flavor-access:add_tenant_access", a few have 'action' in > the policy name, like "os_compute_api:os-admin-actions:reset_state", and a few have > the direct action name, like "os_compute_api:os-console-output" > > >>> > > >>> Since the actions API relies on the request body and uses a single > HTTP method, does it make sense to have the HTTP method in the policy name? > It feels redundant, and we might be able to establish a convention that's > more meaningful for things like action APIs. It looks like cinder has a > similar pattern [0]. > > >>> > > >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > >>>> > > >>>> Maybe we can make them consistent with > <service-type>:<resource>:<action>, or is there any better opinion? > > >>>> > From: Lance Bragstad > The topic of > having consistent policy names has popped up a few times this week. > > >>>> > > > >>>> > I would love to have this nailed down before we go through all > the policy rules again.
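To make the contrast above concrete, the explicit style keystone is moving to looks roughly like this with stock oslo.policy (a sketch; keystone actually wraps this in its own enforcer helper, so treat the names outside oslo.policy as assumptions):

    from oslo_config import cfg
    from oslo_policy import policy

    ENFORCER = policy.Enforcer(cfg.CONF)
    ENFORCER.register_default(
        policy.RuleDefault('identity:list_users', 'role:admin'))

    def list_users(context):
        # Enforcement is a visible call at the top of the controller body,
        # not behavior hidden behind a decorator, so the policy name no
        # longer has to match the method name.
        ENFORCER.authorize('identity:list_users', {},
                           context.to_policy_values(), do_raise=True)
        # ... fetch and return the users ...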
In my head I hope in Nova we can go through each > policy rule and do the following: > >>>> > * move to new consistent policy name, deprecate existing name > * hardcode scope check to project, system or user > ** (user, yes... keypairs, yuck, but it's how they work) > ** deprecate in-rule scope checks, which are largely bogus in Nova anyway > * make read/write/admin distinction > ** therefore adding the "noop" role, among other things > > >>>> > > >>>> + policy granularity. > > >>>> > > >>>> It is a good idea to make the policy improvements all together and > for all rules, as you mentioned. But my worry is how much load it will put > on the operator side to migrate all policy rules at the same time. What will the > deprecation period etc. be? I think we can discuss that on the proposed spec - > https://review.openstack.org/#/c/547850 > > >>> > > >>> Yeah, that's another valid concern. I know at least one operator > has weighed in already. I'm curious if operators have specific input here. > > >>> > > >>> It ultimately depends on if they override existing policies or > not. If a deployment doesn't have any overrides, it should be a relatively > simple change for operators to consume. > > >>>> > > >>>> -gmann > > >>>> > Thanks, John __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was
scrubbed... URL: From sean.mcginnis at gmx.com Mon Oct 8 13:54:34 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 8 Oct 2018 08:54:34 -0500 Subject: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API? In-Reply-To: References: Message-ID: <20181008135434.GB17162@sm-workstation> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote: > In Denver, we agreed to add a new "re-image" API in cinder to support > volume-backed server rebuild with a new image. > > An initial blueprint has been drafted in [3], welcome to review it, thanks. > : ) > > [snip] > > The "force" parameter idea comes from [4], and means that > 1. we can re-image an "available" volume directly. > 2. we can't re-image an "in-use"/"reserved" volume directly. > 3. we can only re-image an "in-use"/"reserved" volume with the "force" > parameter. > > And it means Nova needs to always call the re-image API with an extra "force" > parameter, > because the volume status is "in-use" or "reserved" when we rebuild the > server. > > *So, what's your idea? Do we really want to add this "force" parameter?* > I would prefer we have the "force" parameter, even if it is something that will always be defaulted to True from Nova. Having this exposed as a REST API means anyone could call it, not just Nova code. So, as protection against someone doing something whose full implications they are not really clear on, having a flag in there to guard volumes that are already attached or reserved for shelved instances is worth the very minor extra overhead. > [1] https://etherpad.openstack.org/p/nova-ptg-stein L483 > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12 > [3] https://review.openstack.org/#/c/605317 > [4] > https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst at 75 > > Regards, > Yikun > ---------------------------------------- > Jiang Yikun(Kero) > Mail: yikunkero at gmail.com > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Oct 8 13:59:27 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 8 Oct 2018 08:59:27 -0500 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: <20181004174753.vruvzggqb4l7cpeg@yuggoth.org> <1664340182c.be42b6fc43129.5782669995674524194@ghanshyammann.com> Message-ID: <20181008135927.GC17162@sm-workstation> > > > > I think we can definitely manage the agenda to minimize the number of > > complex discussions. If that proves to be too hard, I wouldn't mind > > meeting more often, but there does seem to be a lot of support for > > preferring other venues for those conversations. > > > > > +1 I think there is a point where we need to recognize there is a time and > place for everything, and some of those long running complex conversations > might not be well suited for what would essentially be "review business > status" meetings. If we have any clue that something is going to be a very > long and drawn out discussion, then I feel like we should make an effort to > schedule it individually. We could also be very aggressive about ending the meeting early if all process topics are covered, to actively discourage this meeting from becoming a forum for other non-process discussions.
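Returning to the cinder re-image question above: Yikun's three rules reduce to a small status guard in the API layer; roughly (a sketch with hypothetical names, not the actual cinder code):

    ALLOWED_WITHOUT_FORCE = {'available'}
    ALLOWED_WITH_FORCE = {'available', 'in-use', 'reserved'}

    def check_reimage_allowed(volume_status, force=False):
        allowed = ALLOWED_WITH_FORCE if force else ALLOWED_WITHOUT_FORCE
        if volume_status not in allowed:
            raise ValueError('cannot re-image volume in status %s (force=%s)'
                             % (volume_status, force))

    check_reimage_allowed('available')           # always allowed
    check_reimage_allowed('in-use', force=True)  # Nova's normal rebuild path
    try:
        check_reimage_allowed('in-use')          # rejected without force
    except ValueError as exc:
        print(exc)

The guard costs one extra request parameter from Nova but keeps other REST callers from re-imaging an attached or reserved volume by accident, which is the trade-off Sean describes.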
From doug at doughellmann.com Mon Oct 8 14:08:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 10:08:33 -0400 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: Based on the conversation in the other branch of this thread, I have filed [1] to start monthly meetings on November 1 at 1400 UTC. It may take a while before that actually shows up on the calendar, because it required adding a feature to yaml2ical [2]. We talked about using email to add items to the agenda, but I realized that's going to complicate the coordination between chair and vice chair, so I would like for us to use the wiki [3] to suggest agenda items. We will still rely on email to the openstack-dev or openstack-discuss list to set the formal agenda before the actual meeting. Let me know if you foresee any issues with that plan. Doug [1] https://review.openstack.org/608682 [2] https://review.openstack.org/608680 [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee From doug at doughellmann.com Mon Oct 8 14:25:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 10:25:14 -0400 Subject: [openstack-dev] [tc] planned absence Message-ID: TC members, I have some PTO planned, so I will be away from 13 Oct - 28 Oct. I approved the patch appointing Mohammed as vice chair this morning, so he will be serving as chair during that time. Doug From doug at doughellmann.com Mon Oct 8 14:27:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 10:27:06 -0400 Subject: [openstack-dev] [tc] assigning new liaisons to projects Message-ID: TC members, Since we are starting a new term, and have several new members, we need to decide how we want to rotate the liaisons attached to each of our project teams, SIGs, and working groups [1]. Last term we went through a period of volunteer sign-up and then I randomly assigned folks to slots to fill out the roster evenly. During the retrospective we talked a bit about how to ensure we had an objective perspective for each team by not having PTLs sign up for their own teams, but I don't think we settled on that as a hard rule. I think the easiest and fairest (to new members) way to manage the list will be to wipe it and follow the same process we did last time. If you agree, I will update the page this week and we can start collecting volunteers over the next week or so. Doug [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker From lbragstad at gmail.com Mon Oct 8 14:29:30 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Oct 2018 09:29:30 -0500 Subject: [openstack-dev] [tc] bringing back formal TC meetings In-Reply-To: References: Message-ID: On Mon, Oct 8, 2018 at 9:08 AM Doug Hellmann wrote: > Based on the conversation in the other branch of this thread, I have > filed [1] to start monthly meetings on November 1 at 1400 UTC. It may > take a while before that actually shows up on the calendar, because it > required adding a feature to yaml2ical [2]. > > We talked about using email to add items to the agenda, but I realized > that's going to complicate the coordination between chair and vice > chair, so I would like for us to use the wiki [3] to suggest agenda > items. We will still rely on email to the openstack-dev or > openstack-discuss list to set the formal agenda before the actual > meeting. Let me know if you foresee any issues with that plan. > > ++ I think the wiki is a good alternative to using email. Those times also work for me.
> Doug > > [1] https://review.openstack.org/608682 > [2] https://review.openstack.org/608680 > [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Mon Oct 8 14:36:44 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 8 Oct 2018 10:36:44 -0400 Subject: [openstack-dev] [ironic] Tenks In-Reply-To: References: Message-ID: On Tue, Oct 2, 2018 at 8:59 AM Mark Goddard wrote: > Hi, > > In the most recent Ironic meeting we discussed [1] tenks, and the > possibility of adding the project under Ironic governance. We agreed to > move the discussion to the mailing list. I'll introduce the project here > and give everyone a chance to ask questions. If things appear to move in > the right direction, I'll propose a vote for inclusion under Ironic's > governance. > > Tenks is a project for managing 'virtual bare metal clusters'. It aims to > be a drop-in replacement for the various scripts and templates that exist > in the Ironic devstack plugin for creating VMs to act as bare metal nodes > in development and test environments. Similar code exists in Bifrost and > TripleO, and probably other places too. By focusing on one project, we can > ensure that it works well, and provides all the features necessary as > support for bare metal in the cloud evolves. > > That's tenks the concept. Tenks in reality today is a working version 1.0, > written in Ansible, built by Will Miller (w-miller) during his summer > placement. Will has returned to his studies, and Will Szumski (jovial) has > picked it up. You don't have to be called Will to work on Tenks, but it > helps. > > There are various resources available for anyone wishing to find out more: > > * Ironic spec review: https://review.openstack.org/#/c/579583 > * Documentation: https://tenks.readthedocs.io/en/latest/ > * Source code: https://github.com/stackhpc/tenks > * Blog: https://stackhpc.com/tenks.html > * IRC: mgoddard or jovial in #openstack-ironic > > What does everyone think? Is this something that the ironic community > could or should take ownership of? > Makes sense to me, but we should also have an explicit goal of using tenks to kill our devstack code (and the other things mentioned). Consider me +2 on the spec but leaving time for additional discussion. :) Thanks Mark! // jim > > [1] > http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-10-01-15.00.log.html#l-170 > > Thanks, > Mark > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Oct 8 14:55:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 10:55:57 -0400 Subject: [openstack-dev] [tc] Technical Committee status update for 8 October 2018 Message-ID: This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Since the last update email, we have held another TC election in which 3 returning and 3 new members have been elected. Welcome again to our new and returning members, and thank you to the former members for their service! * http://lists.openstack.org/pipermail/openstack-dev/2018-September/135187.html If you missed it, I sent the summary of TC discussions at the PTG a while back. * http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html Between being busy with the PTG and then the TC elections, I did not send a status update during September, so there are quite a few project updates to report. Project updates: * Add os-ken to neutron: https://review.openstack.org/#/c/588358/ * Add cookbook-openstack-bare-metal to Chef: * https://review.openstack.org/#/c/596746/ * Add mistral-extra to Mistral: https://review.openstack.org/#/c/597551/ * Add placement deliverable to Nova: * https://review.openstack.org/#/c/598380/ * Add metalsmith to Ironic: https://review.openstack.org/#/c/602075/ * Add oslo.upgradecheck to Oslo: * https://review.openstack.org/#/c/602483/ * Add puppet-placement to Puppet: * https://review.openstack.org/#/c/602870/ * Add ansible-role-chrony to TripleO: * https://review.openstack.org/#/c/603516/ * Add octavia-lib to Octavia: https://review.openstack.org/#/c/604890/ * Add neutron-interconnection to neutron: * https://review.openstack.org/#/c/599428/ Retired repositories: * Retire the development-proposals repository: https://review.openstack.org/#/c/600649/ * Retire fuxi: https://review.openstack.org/#/c/604527/ * Retire charm-ceph: https://review.openstack.org/#/c/604530/ Community Updates: * Add Jay Faulkner as ATC for Ironic: https://review.openstack.org/#/c/597212/ * Begin the T development cycle naming process: https://review.openstack.org/#/c/600354/ * The Operations Guide team adopted the HA Guide: https://review.openstack.org/#/c/601321/ * The Interop Working Group adopted the refstack tools: https://review.openstack.org/#/c/590179/ == TC Meetings == In order to fulfill our obligations under the OpenStack Foundation bylaws, the TC needs to hold meetings at least once each quarter. We have decided to schedule monthly meetings on IRC, and retain the existing office hours times for less formal discussions. See the thread for details. * http://lists.openstack.org/pipermail/openstack-dev/2018-October/135521.html == Ongoing Discussions == Tony Breeds is coordinating the poll for names for the T development cycle and release series. * http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html == TC member actions/focus/discussions for the coming week(s) == We need to decide how we are going to handle updating the list of liaisons to projects this cycle. Please follow up on that thread. * http://lists.openstack.org/pipermail/openstack-dev/2018-October/135523.html Add the monthly meeting to your calendar. * http://lists.openstack.org/pipermail/openstack-dev/2018-October/135521.html The next Foundation board meeting will be held via webex on 25 October. See the wiki for details. * https://wiki.openstack.org/wiki/Governance/Foundation/25Oct2018BoardMeeting == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication.
We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From doug at doughellmann.com Mon Oct 8 15:07:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 11:07:38 -0400 Subject: [openstack-dev] [goals][python3][telemetry][barbican][monasca][neutron] having a gerrit admin approve the remaining zuul job settings import patches Message-ID: We have about 17 remaining patches to import zuul job settings into a few repositories. Those are mostly in stable branches and the jobs are failing in ways that may take us a long time to fix. Rather than waiting for those, Andreas and I are proposing that we have someone from the infra team approve them, bypassing the test jobs. That will allow us to complete the cleanup work in the project-config repository, and will not leave the affected repositories in a state that is any more (or less) broken than they are today. If you have any objections to the plan, please speak up quickly. I would like to try to proceed before the end of the week. 
Doug

+----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+
| Subject                                      | Repo                            | Team      | Tests  | Workflow | URL                                 | Branch        | Owner         |
+----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+
| import zuul job settings from project-config | openstack/aodh                  | Telemetry | FAILED | NEW      | https://review.openstack.org/598648 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config | openstack/barbican              | barbican  | FAILED | REVIEWED | https://review.openstack.org/599659 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config | openstack/barbican              | barbican  | FAILED | REVIEWED | https://review.openstack.org/599661 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config | openstack/castellan-ui          | barbican  | FAILED | NEW      | https://review.openstack.org/599649 | master        | Doug Hellmann |
| import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | APPROVED | https://review.openstack.org/598634 | master        | Doug Hellmann |
| import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | APPROVED | https://review.openstack.org/598655 | stable/pike   | Doug Hellmann |
| import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | PASS   | NEW      | https://review.openstack.org/598661 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | NEW      | https://review.openstack.org/598667 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config | openstack/monasca-analytics     | monasca   | FAILED | REVIEWED | https://review.openstack.org/595658 | master        | Doug Hellmann |
| import zuul job settings from project-config | openstack/networking-midonet    | neutron   | PASS   | REVIEWED | https://review.openstack.org/597937 | stable/queens | Doug Hellmann |
| import zuul job settings from project-config | openstack/networking-sfc        | neutron   | FAILED | NEW      | https://review.openstack.org/597913 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config | openstack/networking-sfc        | neutron   | FAILED | NEW      | https://review.openstack.org/597925 | stable/pike   | Doug Hellmann |
| import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | NEW      | https://review.openstack.org/598652 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | NEW      | https://review.openstack.org/598657 | stable/pike   | Doug Hellmann |
| import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | APPROVED | https://review.openstack.org/598669 | stable/rocky  | Doug Hellmann |
| import zuul job settings from project-config | openstack/python-barbicanclient | barbican  | FAILED | NEW      | https://review.openstack.org/599656 | stable/ocata  | Doug Hellmann |
| import zuul job settings from project-config | openstack/python-barbicanclient | barbican  | FAILED | NEW      | https://review.openstack.org/599658 | stable/pike   | Doug Hellmann |
+----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+ From markus.hentsch at secustack.com Mon Oct 8 15:16:24 2018 From: markus.hentsch at secustack.com (Markus Hentsch) Date: Mon, 8 Oct 2018 17:16:24 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: Dear OpenStack developers, as you suggested, we have written individual specs for Nova [1] and Cinder [2] so far and will write another spec for Glance soon. We'd appreciate any feedback and reviews on the specs :) Thank you in advance, Markus Hentsch [1] https://review.openstack.org/#/c/608696/ [2] https://review.openstack.org/#/c/608663/ From lbragstad at gmail.com Mon Oct 8 15:21:54 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Oct 2018 10:21:54 -0500 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: On Mon, Oct 8, 2018 at 9:27 AM Doug Hellmann wrote: > TC members, > > Since we are starting a new term, and have several new members, we need > to decide how we want to rotate the liaisons attached to each our > project teams, SIGs, and working groups [1]. > > Last term we went through a period of volunteer sign-up and then I > randomly assigned folks to slots to fill out the roster evenly. During > the retrospective we talked a bit about how to ensure we had an > objective perspective for each team by not having PTLs sign up for their > own teams, but I don't think we settled on that as a hard rule. > > I think the easiest and fairest (to new members) way to manage the list > will be to wipe it and follow the same process we did last time. If you > agree, I will update the page this week and we can start collecting > volunteers over the next week or so. > +1 >From the perspective of someone new, it'll be nice to go through all the motions. > > Doug > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgoncalves at redhat.com Mon Oct 8 16:05:58 2018 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Mon, 8 Oct 2018 18:05:58 +0200 Subject: [openstack-dev] [stable][octavia] Backport patch adding new configuration options Message-ID: Stable team, The Octavia team merged a patch in master [1] that fixed an issue where load balancers could be deleted whenever queue_event_streamer driver is enabled and RabbitMQ goes down [2]. As this is a critical bug, we would like to backport as much back as possible. The question is whether these backports comply with the stable policy because it adds two new configuration options and deprecates one. The patch was prepared so that the deprecated option has precedence if set over the other two. Reading the review guidelines [3], I only see "Incompatible config file changes" as relevant, but the patch doesn't seem to go against that. 
We had a patch that added a new config option backported to Queens that raised some concern, so we'd like to be on the safe side this time ;-) We'd appreciate guidance to whether such backports are acceptable or not. Thanks, Carlos [1] https://review.openstack.org/#/c/581585/ [2] https://storyboard.openstack.org/#!/story/2002937 [3] https://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines [4] https://review.openstack.org/#/c/593954/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Oct 8 16:08:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 8 Oct 2018 11:08:23 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Oslo library status In-Reply-To: <301224F2-16C9-4817-9B14-BC05DE051E5A@redhat.com> References: <301224F2-16C9-4817-9B14-BC05DE051E5A@redhat.com> Message-ID: On 10/7/2018 4:10 AM, Slawomir Kaplonski wrote: > I start working on „neutron-status upgrade check” tool with noop operation for now. Patch is in [1] > I started using this new oslo_upgradecheck library in version 0.0.1.dev15 which is available on pypi.org but I see that in master there are some changes already (like shorted names of base classes). > So my question here is: should I just wait a bit more for kind of „stable” version of this lib and then push neutron patch to review (do You have any eta for that?), or maybe we shouldn’t rely on this oslo library in this release and implement all on our own, like it is done currently in nova? > > [1]https://review.openstack.org/#/c/608444/ I would wait. I think there are just a couple of changes we need to get into the library (one of which changes the interface) and then we can do a release. Sean McGinnis is waiting on the release for Cinder as well. -- Thanks, Matt From mriedemos at gmail.com Mon Oct 8 16:33:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 8 Oct 2018 11:33:47 -0500 Subject: [openstack-dev] [stable][octavia] Backport patch adding new configuration options In-Reply-To: References: Message-ID: <945d58ed-345b-2886-be3f-31a8a57c6187@gmail.com> On 10/8/2018 11:05 AM, Carlos Goncalves wrote: > The Octavia team merged a patch in master [1] that fixed an issue where > load balancers could be deleted whenever queue_event_streamer driver is > enabled and RabbitMQ goes down [2]. > > As this is a critical bug, we would like to backport as much back as > possible. The question is whether these backports comply with the stable > policy because it adds two new configuration options and deprecates one. > The patch was prepared so that the deprecated option has precedence if > set over the other two. > > Reading the review guidelines [3], I only see "Incompatible config file > changes" as relevant, but the patch doesn't seem to go against that. We > had a patch that added a new config option backported to Queens that > raised some concern, so we'd like to be on the safe side this time ;-) > > We'd appreciate guidance to whether such backports are acceptable or not. > Well, a few things: * I would have introduced the new config options as part of the bug fix but *not* deprecated the existing option in the same change but rather as a follow up. Then the new options, which do nothing by default (?), could be backported and the deprecation would remain on master. * The release note mentions the new options as a feature, but that's not really correct is it? They are for fixing a bug, not new feature functionality as much. 
In general, as long as the new options don't introduce new behavior by default for existing configuration (as you said, the existing option takes precedence if set), and don't require configuration then it should be OK to backport those new options. But the sticky parts here are (1) deprecating an option on stable (we shouldn't do that) and (2) the release note mentioning a feature. What I'd probably do is (1) change the 'feature' release note to a 'fixes' release note on master and then (2) backport the change but (a) drop the deprecation and (b) fix the release note in the backport to not call out a feature (since it's not a feature I don't think?) - and just make it clear with a note in the backport commit message why the backport is different from the original change. -- Thanks, Matt From anne at openstack.org Mon Oct 8 16:40:03 2018 From: anne at openstack.org (Anne Bertucio) Date: Mon, 8 Oct 2018 09:40:03 -0700 Subject: [openstack-dev] [all] Stepping down from Release Management team Message-ID: <47811E35-E119-4582-839B-917626D1B087@openstack.org> Hi all, I have had a fantastic time getting to work on the Release Management team and getting to know you all through the release marketing work, however, it is time for me to step down from my role on the Release Management team as I am moving on from my role at the Foundation and will no longer be working on upstream OpenStack. I cannot thank you all enough for how you all welcomed me into the OpenStack community and for how much I have learned about open source development in my time here. If you have questions about cycle-highlights, swing by #openstack-release. If you have questions about release marketing, contact lauren at openstack.org. For other inquiries, contact allison at openstack.org. While I won't be working upstream anymore, I'll only be a Tweet or IRC message away. Thank you again, and remember that cycle-highlights should be submitted by RC1. Best, Anne Bertucio irc: annabelleB twitter: @whyhiannabelle Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From assaf at redhat.com Mon Oct 8 16:50:36 2018 From: assaf at redhat.com (Assaf Muller) Date: Mon, 8 Oct 2018 12:50:36 -0400 Subject: [openstack-dev] [stable][octavia] Backport patch adding new configuration options In-Reply-To: <945d58ed-345b-2886-be3f-31a8a57c6187@gmail.com> References: <945d58ed-345b-2886-be3f-31a8a57c6187@gmail.com> Message-ID: On Mon, Oct 8, 2018 at 12:34 PM Matt Riedemann wrote: > > On 10/8/2018 11:05 AM, Carlos Goncalves wrote: > > The Octavia team merged a patch in master [1] that fixed an issue where > > load balancers could be deleted whenever queue_event_streamer driver is > > enabled and RabbitMQ goes down [2]. > > > > As this is a critical bug, we would like to backport as much back as > > possible. The question is whether these backports comply with the stable > > policy because it adds two new configuration options and deprecates one. > > The patch was prepared so that the deprecated option has precedence if > > set over the other two. > > > > Reading the review guidelines [3], I only see "Incompatible config file > > changes" as relevant, but the patch doesn't seem to go against that. We > > had a patch that added a new config option backported to Queens that > > raised some concern, so we'd like to be on the safe side this time ;-) > > > > We'd appreciate guidance to whether such backports are acceptable or not. 
> > > > Well, a few things: > > * I would have introduced the new config options as part of the bug fix > but *not* deprecated the existing option in the same change but rather > as a follow up. Then the new options, which do nothing by default (?), > could be backported and the deprecation would remain on master. > > * The release note mentions the new options as a feature, but that's not > really correct is it? They are for fixing a bug, not new feature > functionality as much. > > In general, as long as the new options don't introduce new behavior by > default for existing configuration (as you said, the existing option > takes precedence if set), and don't require configuration then it should > be OK to backport those new options. But the sticky parts here are (1) > deprecating an option on stable (we shouldn't do that) and (2) the > release note mentioning a feature. I would classify this as a critical bug fix. I think it's important to fix the bug on stable branches, even for deployments that will get the fix but not change their configuration options. How that's done with respect to configuration options & backports is another matter, but I do think that whatever approach is chosen should end up with the bug fixed on stable branches without requiring operators to use new options or otherwise make changes to their existing configuration files. > > What I'd probably do is (1) change the 'feature' release note to a > 'fixes' release note on master and then (2) backport the change but (a) > drop the deprecation and (b) fix the release note in the backport to not > call out a feature (since it's not a feature I don't think?) - and just > make it clear with a note in the backport commit message why the > backport is different from the original change. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mrhillsman at gmail.com Mon Oct 8 16:51:59 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 8 Oct 2018 16:51:59 +0000 Subject: [openstack-dev] [all] Stepping down from Release Management team In-Reply-To: <47811E35-E119-4582-839B-917626D1B087@openstack.org> References: <47811E35-E119-4582-839B-917626D1B087@openstack.org> Message-ID: Nooooooo! lol Sorry to see you go but do stay in touch and I will do the same. Cheers to going on and continuing to do great things Anne; excited to see what you are up to in the coming days. From: Anne Bertucio Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, October 8, 2018 at 11:40 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [all] Stepping down from Release Management team Hi all, I have had a fantastic time getting to work on the Release Management team and getting to know you all through the release marketing work, however, it is time for me to step down from my role on the Release Management team as I am moving on from my role at the Foundation and will no longer be working on upstream OpenStack. I cannot thank you all enough for how you all welcomed me into the OpenStack community and for how much I have learned about open source development in my time here. If you have questions about cycle-highlights, swing by #openstack-release. 
If you have questions about release marketing, contact lauren at openstack.org. For other inquiries, contact allison at openstack.org. While I won't be working upstream anymore, I'll only be a Tweet or IRC message away. Thank you again, and remember that cycle-highlights should be submitted by RC1. Best, Anne Bertucio irc: annabelleB twitter: @whyhiannabelle Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Oct 8 16:53:25 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 8 Oct 2018 11:53:25 -0500 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: <20181008165324.GA26464@sm-workstation> On Mon, Oct 08, 2018 at 10:27:06AM -0400, Doug Hellmann wrote: > TC members, > > Since we are starting a new term, and have several new members, we need > to decide how we want to rotate the liaisons attached to each of our > project teams, SIGs, and working groups [1]. > > Last term we went through a period of volunteer sign-up and then I > randomly assigned folks to slots to fill out the roster evenly. During > the retrospective we talked a bit about how to ensure we had an > objective perspective for each team by not having PTLs sign up for their > own teams, but I don't think we settled on that as a hard rule. > > I think the easiest and fairest (to new members) way to manage the list > will be to wipe it and follow the same process we did last time. If you > agree, I will update the page this week and we can start collecting > volunteers over the next week or so. > > Doug > Seems fair and a good approach to me. Sean From skaplons at redhat.com Mon Oct 8 17:16:04 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 8 Oct 2018 19:16:04 +0200 Subject: [openstack-dev] [goals][upgrade-checkers] Oslo library status In-Reply-To: References: <301224F2-16C9-4817-9B14-BC05DE051E5A@redhat.com> Message-ID: Ok. Thx for the info, Matt. My patch is now marked as WIP and I will wait for the release of this lib then. > On 08.10.2018 at 18:08, Matt Riedemann wrote: > > On 10/7/2018 4:10 AM, Slawomir Kaplonski wrote: >> I start working on the "neutron-status upgrade check" tool with a noop operation for now. Patch is in [1] >> I started using this new oslo_upgradecheck library in version 0.0.1.dev15 which is available on pypi.org but I see that in master there are some changes already (like shortened names of base classes). >> So my question here is: should I just wait a bit more for a kind of "stable" version of this lib and then push the neutron patch to review (do you have any ETA for that?), or maybe we shouldn't rely on this oslo library in this release and implement it all on our own, like it is done currently in nova? >> [1]https://review.openstack.org/#/c/608444/ > > I would wait. I think there are just a couple of changes we need to get into the library (one of which changes the interface) and then we can do a release. Sean McGinnis is waiting on the release for Cinder as well.
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Slawek Kaplonski Senior software engineer Red Hat From jaypipes at gmail.com Mon Oct 8 17:48:49 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 8 Oct 2018 13:48:49 -0400 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> Message-ID: On 10/08/2018 06:14 AM, Florian Engelmann wrote: > 3. HAProxy is not capable of handling a "read/write" split with Galera. I > would like to introduce ProxySQL to be able to scale Galera. Why not send all read and all write traffic to a single haproxy endpoint and just have haproxy spread all traffic across each Galera node? Galera, after all, is multi-master synchronous replication... so it shouldn't matter which node in the Galera cluster you send traffic to. -jay From doug at doughellmann.com Mon Oct 8 18:07:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 14:07:07 -0400 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Abhishek Kekane writes: > Hi Doug, > > Should I use something like SimpleHttpServer to upload a file and download > the same, or are there other efficient ways to handle it? > Kindly let me know if you have any suggestions. Sure, that would work, especially if your tests are running in the unit test jobs. If you're running a functional test, it seems like it would also be OK to just copy a file into the directory Apache is serving from and then download it from there. Doug > > Thanks & Best Regards, > > Abhishek Kekane > > On Fri, Oct 5, 2018 at 4:57 PM Doug Hellmann wrote: >> Abhishek Kekane writes: >> >> > Hi Matt, >> > >> > Thanks for the input, I guess I should use ' >> > http://git.openstack.org/static/openstack.png' which will definitely work. >> > Clark, Matt, Kindly let me know your opinion about the same. >> >> That URL would not be on the local node running the test, and would >> eventually exhibit the same problems. In fact we have seen issues >> cloning git repositories as part of the tests in the past. >> >> You need to use a localhost URL to ensure that the download doesn't have >> to go off of the node. That may mean placing something into the directory >> where Apache is serving files as part of the test setup. >> >> Doug >> From doug at doughellmann.com Mon Oct 8 18:23:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 14:23:48 -0400 Subject: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs In-Reply-To: References: Message-ID:
Please let me know >>> if you're willing to simply use "openstack-keystone" and >>> "openstack-congress" instead. I will take care of configuring PyPI and >>> proposing the patch to update your setup.cfg (that way you can approve >>> the change). >>> >>> * https://github.com/pypa/warehouse/issues/4770 >>> * https://github.com/pypa/warehouse/issues/4771 > > We haven't heard back about either of these requests, so I filed changes > with congress and keystone to change the dist names. > > * https://review.openstack.org/608332 (congress) > * https://review.openstack.org/608331 (keystone) > > Doug Dan Crosta has very graciously given us the name "keystone" so I abandoned that patch, made openstackci an owner, and uploaded the previous release. The patch to rename congress is approved, but sitting on top of one or two other patches that also need reviews. The patch to rename heat is failing the grenade tests, and we could use some help with fixing the problem. I think we need an upgrade script that removes the old package before installing the new one. Does someone want to learn how grenade scripts work? Doug From doug at doughellmann.com Mon Oct 8 18:34:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 08 Oct 2018 14:34:54 -0400 Subject: [openstack-dev] [all] Stepping down from Release Management team In-Reply-To: <47811E35-E119-4582-839B-917626D1B087@openstack.org> References: <47811E35-E119-4582-839B-917626D1B087@openstack.org> Message-ID: Anne Bertucio writes: > Hi all, > > I have had a fantastic time getting to work on the Release Management > team and getting to know you all through the release marketing work, > however, it is time for me to step down from my role on the Release > Management team as I am moving on from my role at the Foundation and > will no longer be working on upstream OpenStack. I cannot thank you > all enough for how you all welcomed me into the OpenStack community > and for how much I have learned about open source development in my > time here. > > If you have questions about cycle-highlights, swing by #openstack-release. > If you have questions about release marketing, contact lauren at openstack.org. > For other inquiries, contact allison at openstack.org. > While I won't be working upstream anymore, I'll only be a Tweet or IRC message away. > > Thank you again, and remember that cycle-highlights should be > submitted by RC1. Thank you for everything, Anne! The cycle-highlights system you helped us create is a great example of using decentralization and peer review at the same time. I'm sure it's going to continue to be an important communication tool for the community. Doug From juliaashleykreger at gmail.com Mon Oct 8 18:57:34 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 8 Oct 2018 12:57:34 -0600 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: On Mon, Oct 8, 2018 at 8:27 AM Doug Hellmann wrote: > TC members, > > Since we are starting a new term, and have several new members, we need > to decide how we want to rotate the liaisons attached to each our > project teams, SIGs, and working groups [1]. > > Last term we went through a period of volunteer sign-up and then I > randomly assigned folks to slots to fill out the roster evenly. During > the retrospective we talked a bit about how to ensure we had an > objective perspective for each team by not having PTLs sign up for their > own teams, but I don't think we settled on that as a hard rule. 
> I think the easiest and fairest (to new members) way to manage the list
> will be to wipe it and follow the same process we did last time. If you
> agree, I will update the page this week and we can start collecting
> volunteers over the next week or so.
>
> Doug
>
> [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker
>

+1, Sounds good. I would just ask that if a TC member has any expectation on another member to post a status update, that they explicitly reach out and convey that expectation so we minimize confusion.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com  Tue Oct  9 03:56:47 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 09 Oct 2018 12:56:47 +0900
Subject: [openstack-dev] [tc] assigning new liaisons to projects
In-Reply-To: 
References: 
Message-ID: <16656f85682.10db8225c63642.3540336337320113443@ghanshyammann.com>

 ---- On Mon, 08 Oct 2018 23:27:06 +0900 Doug Hellmann wrote ----
 > TC members,
 >
 > Since we are starting a new term, and have several new members, we need
 > to decide how we want to rotate the liaisons attached to each our
 > project teams, SIGs, and working groups [1].
 >
 > Last term we went through a period of volunteer sign-up and then I
 > randomly assigned folks to slots to fill out the roster evenly. During
 > the retrospective we talked a bit about how to ensure we had an
 > objective perspective for each team by not having PTLs sign up for their
 > own teams, but I don't think we settled on that as a hard rule.
 >
 > I think the easiest and fairest (to new members) way to manage the list
 > will be to wipe it and follow the same process we did last time. If you
 > agree, I will update the page this week and we can start collecting
 > volunteers over the next week or so.

+1, sounds good to me.

-gmann

 >
 > Doug
 >
 > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker
 >
 > __________________________________________________________________________
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >

From prometheanfire at gentoo.org  Tue Oct  9 03:59:40 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Mon, 8 Oct 2018 22:59:40 -0500
Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0
Message-ID: <20181009035940.tlg4mx4j2vbahybl@gentoo.org>

Several projects have had problems with the new release; some have ways of working around it, and some do not. I'm sending this just to raise the issue and allow a place to discuss solutions.

Currently there is a review proposed to blacklist 9.0.0, but if this is going to still be an issue somehow in further releases we may need another solution.

https://review.openstack.org/#/c/608835/

-- 
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From tengqim at cn.ibm.com  Tue Oct  9 06:21:44 2018
From: tengqim at cn.ibm.com (Qiming Teng)
Date: Tue, 9 Oct 2018 06:21:44 +0000
Subject: [openstack-dev] [heat][senlin] Action Required.
 Idea to propose for a forum for autoscaling features integration
In-Reply-To: <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com>
References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com>
Message-ID: <20181009062143.GA32130@rcp.sl.cloud9.ibm.com>

> >One approach would be to switch the underlying Heat AutoScalingGroup
> >implementation to use Senlin and then deprecate the AutoScalingGroup
> >resource type in favor of the Senlin resource type over several
> >cycles.
>
> The hard part (or one hard part, at least) of that is migrating the
> existing data.

Agreed. In an ideal world, we can transparently transplant the "scaling group" resource implementation onto something (e.g. a library or an interface). This sounds like an option for both teams to brainstorm together.

- Qiming

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From rico.lin.guanyu at gmail.com  Tue Oct  9 07:07:11 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Tue, 9 Oct 2018 15:07:11 +0800
Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration
In-Reply-To: <20181009062143.GA32130@rcp.sl.cloud9.ibm.com>
References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com> <20181009062143.GA32130@rcp.sl.cloud9.ibm.com>
Message-ID: 

A reminder for all: please put your ideas/thoughts/suggested actions in our etherpad [1], which we are going to use for further discussion at the Forum, or at the PTG if we get no forum for it. So we won't be missing anything.

[1] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback

On Tue, Oct 9, 2018 at 2:22 PM Qiming Teng wrote:

> > >One approach would be to switch the underlying Heat AutoScalingGroup
> > >implementation to use Senlin and then deprecate the AutoScalingGroup
> > >resource type in favor of the Senlin resource type over several
> > >cycles.
> >
> > The hard part (or one hard part, at least) of that is migrating the
> existing
> > data.
>
> Agreed. In an ideal world, we can transparently transplant the "scaling
> group" resource implementation onto something (e.g. a library or an
> interface). This sounds like an option for both teams to brainstorm
> together.
>
> - Qiming
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From majopela at redhat.com  Tue Oct  9 07:17:35 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Tue, 9 Oct 2018 09:17:35 +0200
Subject: [openstack-dev] [SIGS] Ops Tools SIG
Message-ID: 

Hello

Yesterday, during the Oslo meeting we discussed [6] the possibility of creating a new Special Interest Group [1][2] to provide a home and release means for operator-related tools [3] [4] [5].

I continued the discussion with M.Hillsman later, and he made me aware of the operator working group and mailing list, which existed even before the SIGs.

I believe it could be a very good idea to give life and more visibility to all those very useful tools (for example, I didn't know some of them existed ...).

Given this, I have two questions:

1) Do you know of more tools which could find a home under an Ops Tools SIG umbrella?

2) Do you want to join us?
Best regards and have a great day.

[1] https://governance.openstack.org/sigs/
[2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
[3] https://wiki.openstack.org/wiki/Osops
[4] http://git.openstack.org/cgit/openstack/ospurge/tree/
[5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
[6] http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130

-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mark at stackhpc.com  Tue Oct  9 08:32:57 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 9 Oct 2018 09:32:57 +0100
Subject: [openstack-dev] [kolla-ansible][zun] Request to make another release for stable/queens
In-Reply-To: 
References: 
Message-ID: 

Hi Hongbin,

I'll add this to our meeting agenda for tomorrow, but I see no reason why we should not make another queens series release.

Cheers,
Mark

On Sun, 7 Oct 2018 at 18:50, Hongbin Lu wrote:

> Hi Kolla team,
>
> I have a fixup on the configuration of Zun service:
> https://review.openstack.org/#/c/591256/ . The fix has been backported to
> stable/queens and I wonder if it is possible to release kolla-ansible 6.2.0
> that contains this patch. The deployment of Zun service needs this patch to
> work properly.
>
> Best regards,
> Hongbin
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berendt at betacloud-solutions.de  Tue Oct  9 08:34:13 2018
From: berendt at betacloud-solutions.de (Christian Berendt)
Date: Tue, 9 Oct 2018 10:34:13 +0200
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: 
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch>
Message-ID: 

> On 8. Oct 2018, at 19:48, Jay Pipes wrote:
>
> Why not send all read and all write traffic to a single haproxy endpoint and just have haproxy spread all traffic across each Galera node?
>
> Galera, after all, is multi-master synchronous replication... so it shouldn't matter which node in the Galera cluster you send traffic to.

Probably because of MySQL deadlocks in Galera:

—snip—
Galera cluster has known limitations, one of them is that it uses cluster-wide optimistic locking. This may cause some transactions to rollback. With an increasing number of writeable masters, the transaction rollback rate may increase, especially if there is write contention on the same dataset. It is of course possible to retry the transaction and perhaps it will COMMIT in the retries, but this will add to the transaction latency. However, some designs are deadlock prone, e.g sequence tables.
—snap—

Source: https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial

Christian.
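To make the trade-off concrete: the usual way to keep Galera behind HAProxy while avoiding the multi-writer certification conflicts quoted above is an active/backup backend, so only one node receives writes at any given time. A rough sketch (listener name and addresses are illustrative; the mysql-check option mirrors what kolla-ansible already uses):

    listen mariadb
        bind 10.0.0.10:3306
        mode tcp
        option mysql-check user haproxy post-41
        # Only the first node receives traffic; the backups take over on
        # failure, so all writes hit a single Galera node at a time.
        server galera1 10.0.0.11:3306 check
        server galera2 10.0.0.12:3306 check backup
        server galera3 10.0.0.13:3306 check backup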
-- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From mark at stackhpc.com Tue Oct 9 09:04:51 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Oct 2018 10:04:51 +0100 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> Message-ID: Thanks for these suggestions Florian, there are some interesting ideas in here. I'm a little concerned about the maintenance overhead of adding support for all of these things, and wonder if some of them could be done without explicit support in kolla and kolla-ansible. The kolla projects have been able to move quickly by providing a flexible configuration mechanism that avoids the need to maintain support for every OpenStack feature. Other thoughts inline. Regards, Mark On Mon, 8 Oct 2018 at 11:15, Florian Engelmann < florian.engelmann at everyware.ch> wrote: > Hi, > > I would like to start a discussion about some changes and additions I > would like to see in in kolla and kolla-ansible. > > 1. Keepalived is a problem in layer3 spine leaf networks as any floating > IP can only exist in one leaf (and VRRP is a problem in layer3). I would > like to use consul and registrar to get rid of the "internal" floating > IP and use consuls DNS service discovery to connect all services with > each other. > Without reading up, I'm not sure exactly how this fits together. If kolla-ansible made the API host configurable for each service rather than globally, would that be a step in the right direction? > > 2. Using "ports" for external API (endpoint) access is a major headache > if a firewall is involved. I would like to configure the HAProxy (or > fabio) for the external access to use "Host:" like, eg. "Host: > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. > Any customer would just need HTTPS access and not have to open all those > ports in his firewall. For some enterprise customers it is not possible > to request FW changes like that. > > 3. HAProxy is not capable to handle "read/write" split with Galera. I > would like to introduce ProxySQL to be able to scale Galera. > It's now possible to use an external database server with kolla-ansible, instead of deploying a mariadb/galera cluster. This could be implemented how you like, see https://docs.openstack.org/kolla-ansible/latest/reference/external-mariadb-guide.html . 4. HAProxy is fine but fabio integrates well with consul, statsd and > could be connected to a vault cluster to manage secure certificate access. > > As above. > 5. I would like to add vault as Barbican backend. > > Does this need explicit support in kolla and kolla-ansible, or could it be done through configuration of barbican.conf? Are there additional packages required in the barbican container? If so, see https://docs.openstack.org/kolla/latest/admin/image-building.html#package-customisation . > 6. I would like to add an option to enable tokenless authentication for > all services with each other to get rid of all the openstack service > passwords (security issue). > > Again, could this be done without explicit support? > What do you think about it? 
> > All the best,
> Florian
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balazs.gibizer at ericsson.com  Tue Oct  9 09:40:24 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 9 Oct 2018 09:40:24 +0000
Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations
Message-ID: <1539078021.11166.5@smtp.office365.com>

Hi,

Setup
-----

nested allocation: an allocation that contains resources from one or more nested RPs. (If you have a better term for this then please suggest it.)

If an instance has a nested allocation it means that the compute it allocates from has a nested RP tree. BUT if a compute has a nested RP tree it does not automatically mean that the instance, allocating from that compute, has a nested allocation (e.g. bandwidth inventory will be on nested RPs but not every instance will require bandwidth).

Afaiu, as soon as we have NUMA modelling in place the most trivial servers will have nested allocations, as CPU and MEMORY inventory will be moved to the nested NUMA RPs. But NUMA is still in the future.

Sidenote: there is an edge case reported by bauzas when an instance allocates _only_ from nested RPs. This was discussed last Friday and it resulted in a new patch[0] but I would like to keep that discussion separate from this if possible.

Sidenote: the current problem is somewhat related not just to nested RPs but to sharing RPs as well. However I'm not aiming to implement sharing support in Nova right now, so I also try to keep the sharing discussion separated if possible.

There was already some discussion at Monday's scheduler meeting but I could not attend.
http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20

The meat
--------

Both live-migrate[1] and evacuate[2] have an optional force flag on the nova REST API. The documentation says: "Force by not verifying the provided destination host by the scheduler."

Nova implements this statement by not calling the scheduler if force=True BUT still tries to manage allocations in placement.

To have an allocation on the destination host Nova blindly copies the instance allocation from the source host to the destination host during these operations. Nova can do that as 1) the whole allocation is against a single RP (the compute RP) and 2) Nova knows both the source compute RP and the destination compute RP.

However as soon as we bring nested allocations into the picture that blind copy will not be feasible. Possible cases:
0) The instance has a non-nested allocation on the source and would need a non-nested allocation on the destination. This works with the blind copy today.
1) The instance has a nested allocation on the source and would need a nested allocation on the destination as well.
2) The instance has a non-nested allocation on the source and would need a nested allocation on the destination.
3) The instance has a nested allocation on the source and would need a non-nested allocation on the destination.

Nova cannot generate nested allocations easily without reimplementing some of the placement allocation candidate (a_c) code. However I don't like the idea of duplicating some of the a_c code in Nova.

Nova cannot detect what kind of allocation (nested or non-nested) an instance would need on the destination without calling placement a_c. So knowing when to call placement is a chicken and egg problem.

Possible solutions:

A) fail fast
------------
0) Nova can detect that the source allocation is non-nested and try the blind copy and it will succeed.
1) Nova can detect that the source allocation is nested and fail the operation.
2) Nova only sees a non-nested source allocation. Even if the dest RP tree is nested it does not mean that the allocation will be nested. We cannot fail fast. Nova can try the blind copy and allocate every resource from the root RP of the destination. If the instance requires a nested allocation instead, the claim will fail in placement. So nova can fail the operation a bit later than in 1).
3) Nova can detect that the source allocation is nested and fail the operation. However, an enhanced blind copy that tries to allocate everything from the root RP on the destination would have worked.

B) Guess when to ignore the force flag and call the scheduler
-------------------------------------------------------------
0) keep the blind copy as it works
1) Nova detects that the source allocation is nested. Ignores the force flag and calls the scheduler that will call placement a_c. Move operation can succeed.
2) Nova only sees a non-nested source allocation so it will fall back to the blind copy and fail at the claim on the destination.
3) Nova detects that the source allocation is nested. Ignores the force flag and calls the scheduler that will call placement a_c. Move operation can succeed.

This solution would be against the API doc that states nova does not call the scheduler if the operation is forced. However in case of force live-migration Nova already verifies the target host from a couple of perspectives in [3].
This solution is already proposed for live-migrate in [4] and for evacuate in [5] so the complexity of the solution can be seen in the reviews.

C) Remove the force flag from the API in a new microversion
-----------------------------------------------------------
0)-3): all cases would call the scheduler to verify the target host and generate the nested (or non-nested) allocation.
We would still need an agreed behavior (from A), B), D)) for the old microversions as today's code creates inconsistent allocations in #1) and #3) by ignoring the resources from the nested RP.

D) Do not manage allocations in placement for forced operation
--------------------------------------------------------------
The force flag is considered a last-resort tool for the admin to move VMs around. The API doc has a fat warning about the danger of it. So Nova can simply skip the resource allocation task if force=True. Nova would delete the source allocation and not create any allocation on the destination host.

This is a simple but dangerous solution, but it is what the force flag is all about: move the server against all the built-in safeties. (If the admin needs the safeties she can set force=False and still specify the destination host.)

I'm open to any suggestions.

Cheers,
gibi

[0] https://review.openstack.org/#/c/608298/
[1] https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action
[2] https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action
[3] https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97
[4] https://review.openstack.org/#/c/605785
[5] https://review.openstack.org/#/c/606111
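To make the "blind copy" limitation above concrete, here is an illustrative comparison of the two allocation shapes (provider UUIDs abbreviated; the bandwidth resource class is purely an example). The non-nested allocation references a single compute RP, so copying it to the destination only means swapping one provider UUID; the nested one also references a child provider, and Nova has no way to know the matching child on the destination without asking placement:

A non-nested allocation (blind copy works):

    {"allocations": {
        "<compute RP uuid>": {"resources": {"VCPU": 2, "MEMORY_MB": 4096}}}}

A nested allocation (blind copy breaks):

    {"allocations": {
        "<compute RP uuid>": {"resources": {"VCPU": 2, "MEMORY_MB": 4096}},
        "<child RP uuid, e.g. a bandwidth-providing agent RP>": {
            "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}}}}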
From jaypipes at gmail.com  Tue Oct  9 09:41:55 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 9 Oct 2018 05:41:55 -0400
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: 
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch>
Message-ID: 

On 10/09/2018 04:34 AM, Christian Berendt wrote:
>
>
>> On 8. Oct 2018, at 19:48, Jay Pipes wrote:
>>
>> Why not send all read and all write traffic to a single haproxy endpoint and just have haproxy spread all traffic across each Galera node?
>>
>> Galera, after all, is multi-master synchronous replication... so it shouldn't matter which node in the Galera cluster you send traffic to.
>
> Probably because of MySQL deadlocks in Galera:
>
> —snip—
> Galera cluster has known limitations, one of them is that it uses cluster-wide optimistic locking. This may cause some transactions to rollback. With an increasing number of writeable masters, the transaction rollback rate may increase, especially if there is write contention on the same dataset. It is of course possible to retry the transaction and perhaps it will COMMIT in the retries, but this will add to the transaction latency. However, some designs are deadlock prone, e.g sequence tables.
> —snap—
>
> Source: https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial

Have you seen the above in production?

-jay

From davanum at gmail.com  Tue Oct  9 10:23:22 2018
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 9 Oct 2018 06:23:22 -0400
Subject: [openstack-dev] [tc] assigning new liaisons to projects
In-Reply-To: <16656f85682.10db8225c63642.3540336337320113443@ghanshyammann.com>
References: <16656f85682.10db8225c63642.3540336337320113443@ghanshyammann.com>
Message-ID: 

On Mon, Oct 8, 2018 at 11:57 PM Ghanshyam Mann wrote:

> ---- On Mon, 08 Oct 2018 23:27:06 +0900 Doug Hellmann wrote ----
> > TC members,
> >
> > Since we are starting a new term, and have several new members, we need
> > to decide how we want to rotate the liaisons attached to each our
> > project teams, SIGs, and working groups [1].
> >
> > Last term we went through a period of volunteer sign-up and then I
> > randomly assigned folks to slots to fill out the roster evenly. During
> > the retrospective we talked a bit about how to ensure we had an
> > objective perspective for each team by not having PTLs sign up for their
> > own teams, but I don't think we settled on that as a hard rule.
> >
> > I think the easiest and fairest (to new members) way to manage the list
> > will be to wipe it and follow the same process we did last time. If you
> > agree, I will update the page this week and we can start collecting
> > volunteers over the next week or so.
>
> +1, sounds good to me.
>
> -gmann
>
> > > > Doug > > > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Tue Oct 9 10:34:51 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Tue, 9 Oct 2018 12:34:51 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> Message-ID: <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> Am 10/9/18 um 11:41 AM schrieb Jay Pipes: > On 10/09/2018 04:34 AM, Christian Berendt wrote: >> >> >>> On 8. Oct 2018, at 19:48, Jay Pipes wrote: >>> >>> Why not send all read and all write traffic to a single haproxy >>> endpoint and just have haproxy spread all traffic across each Galera >>> node? >>> >>> Galera, after all, is multi-master synchronous replication... so it >>> shouldn't matter which node in the Galera cluster you send traffic to. >> >> Probably because of MySQL deadlocks in Galera: >> >> —snip— >> Galera cluster has known limitations, one of them is that it uses >> cluster-wide optimistic locking. This may cause some transactions to >> rollback. With an increasing number of writeable masters, the >> transaction rollback rate may increase, especially if there is write >> contention on the same dataset. It is of course possible to retry the >> transaction and perhaps it will COMMIT in the retries, but this will >> add to the transaction latency. However, some designs are deadlock >> prone, e.g sequence tables. >> —snap— >> >> Source: >> https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial >> > > Have you seen the above in production? > Yes of course. Just depends on the application and how high the workload gets. Please read about deadloks and nova in the following report by Intel: http://galeracluster.com/wp-content/uploads/2017/06/performance_analysis_and_tuning_in_china_mobiles_openstack_production_cloud_2.pdf If just Nova is affected we could also create an additional HAProxy listener using all Galera nodes with round-robin for all other services? Anyway - proxySQL would be a great extension. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From florian.engelmann at everyware.ch  Tue Oct  9 11:02:20 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Tue, 9 Oct 2018 13:02:20 +0200
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: 
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch>
Message-ID: <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch>

On 10/9/18 at 11:04 AM, Mark Goddard wrote:
> Thanks for these suggestions Florian, there are some interesting ideas
> in here. I'm a little concerned about the maintenance overhead of adding
> support for all of these things, and wonder if some of them could be
> done without explicit support in kolla and kolla-ansible. The kolla
> projects have been able to move quickly by providing a flexible
> configuration mechanism that avoids the need to maintain support for
> every OpenStack feature. Other thoughts inline.
>

I do understand your apprehensions Mark. For some of the suggested changes/additions I agree. But adding those components without kolla/kolla-ansible integration does not feel right.

> > On Mon, 8 Oct 2018 at 11:15, Florian Engelmann wrote:
> >
> > Hi,
> >
> > I would like to start a discussion about some changes and additions I
> > would like to see in in kolla and kolla-ansible.
> >
> > 1. Keepalived is a problem in layer3 spine leaf networks as any
> > floating
> > IP can only exist in one leaf (and VRRP is a problem in layer3). I
> > would
> > like to use consul and registrar to get rid of the "internal" floating
> > IP and use consuls DNS service discovery to connect all services with
> > each other.
>
> Without reading up, I'm not sure exactly how this fits together. If
> kolla-ansible made the API host configurable for each service rather
> than globally, would that be a step in the right direction?

No that would not help. The problem is HA. Right now there is a "central" floating IP (kolla_internal_vip_address) that is used for all services to connect to (each other). Keepalived fails that IP over if the "active" host fails. In a layer3 (CLOS/Spine-Leaf) network this IP is only available in one leaf/rack. So that rack is a "SPOF".
Using service discovery fits perfectly in a CLOS network and scales much better as an HA solution.

> > 2. Using "ports" for external API (endpoint) access is a major headache
> > if a firewall is involved. I would like to configure the HAProxy (or
> > fabio) for the external access to use "Host:" like, eg. "Host:
> > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
> > Any customer would just need HTTPS access and not have to open all
> > those
> > ports in his firewall. For some enterprise customers it is not possible
> > to request FW changes like that.
> >
> > 3. HAProxy is not capable to handle "read/write" split with Galera. I
> > would like to introduce ProxySQL to be able to scale Galera.
>
> It's now possible to use an external database server with kolla-ansible,
> instead of deploying a mariadb/galera cluster. This could be implemented
> how you like, see
> https://docs.openstack.org/kolla-ansible/latest/reference/external-mariadb-guide.html.

Yes I agree. And this is what we will do in our first production deployment. But I would love to see ProxySQL in Kolla as well.

As a side note: Kolla-ansible does use:

option mysql-check user haproxy post-41

to check Galera, but that check does not fail if the node is out of sync with the other nodes!

http://galeracluster.com/documentation-webpages/monitoringthecluster.html
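In other words, a health check along the lines of that monitoring guide would have to look at the node's wsrep state rather than only at MySQL reachability; something like the following (illustrative, roughly what clustercheck-style scripts do):

    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
    -- Expect 'Synced'. A node that is still joining or acting as a donor
    -- answers the plain mysql-check login probe just fine, but should not
    -- receive traffic until it is back in sync with the cluster.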
> > 4. HAProxy is fine but fabio integrates well with consul, statsd and
> > could be connected to a vault cluster to manage secure certificate
> > access.
>
> As above.
>
> > 5. I would like to add vault as Barbican backend.
>
> Does this need explicit support in kolla and kolla-ansible, or could it
> be done through configuration of barbican.conf? Are there additional
> packages required in the barbican container? If so, see
> https://docs.openstack.org/kolla/latest/admin/image-building.html#package-customisation.

True but the vault (and consul) containers could be deployed and managed by kolla-ansible.

> > 6. I would like to add an option to enable tokenless authentication for
> > all services with each other to get rid of all the openstack service
> > passwords (security issue).
>
> Again, could this be done without explicit support?

We did not investigate here. Changes to the apache configuration are needed. I guess we will have to change the kolla container itself to do so? Is it possible to "inject" files in a container using kolla-ansible?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL: 

From james.slagle at gmail.com  Tue Oct  9 11:17:13 2018
From: james.slagle at gmail.com (James Slagle)
Date: Tue, 9 Oct 2018 07:17:13 -0400
Subject: [openstack-dev] [TripleO] Chair for Edge squad meeting this week
Message-ID: 

I won't be able to chair the edge squad meeting this week. Can anyone take it over? If not, we'll pick it back up next week.

-- 
-- James Slagle
--

From jaypipes at gmail.com  Tue Oct  9 11:23:03 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 9 Oct 2018 07:23:03 -0400
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch>
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch>
Message-ID: <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com>

On 10/09/2018 06:34 AM, Florian Engelmann wrote:
> On 10/9/18 at 11:41 AM, Jay Pipes wrote:
>> On 10/09/2018 04:34 AM, Christian Berendt wrote:
>>>
>>>
>>>> On 8. Oct 2018, at 19:48, Jay Pipes wrote:
>>>>
>>>> Why not send all read and all write traffic to a single haproxy
>>>> endpoint and just have haproxy spread all traffic across each Galera
>>>> node?
>>>>
>>>> Galera, after all, is multi-master synchronous replication... so it
>>>> shouldn't matter which node in the Galera cluster you send traffic to.
>>>
>>> Probably because of MySQL deadlocks in Galera:
>>>
>>> —snip—
>>> Galera cluster has known limitations, one of them is that it uses
>>> cluster-wide optimistic locking. This may cause some transactions to
>>> rollback. With an increasing number of writeable masters, the
>>> transaction rollback rate may increase, especially if there is write
>>> contention on the same dataset. It is of course possible to retry the
>>> transaction and perhaps it will COMMIT in the retries, but this will
>>> add to the transaction latency. However, some designs are deadlock
>>> prone, e.g sequence tables.
>>> —snap—
>>>
>>> Source:
>>> https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial
>>
>> Have you seen the above in production?
>
> Yes of course. Just depends on the application and how high the workload
> gets.
> Please read about deadlocks and nova in the following report by Intel:
>
> http://galeracluster.com/wp-content/uploads/2017/06/performance_analysis_and_tuning_in_china_mobiles_openstack_production_cloud_2.pdf

I have read the above. It's a synthetic workload analysis, which is why I asked if you'd seen this in production.

For the record, we addressed much of the contention/races mentioned in the above around scheduler resource consumption in the Ocata and Pike releases of Nova. I'm aware that the report above identifies the quota handling code in Nova as the primary culprit of the deadlock issues but again, it's a synthetic workload that is designed to find breaking points. It doesn't represent a realistic production workload.

You can read about the deadlock issue in depth on my blog here:

http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/

That explains where the source of the problem comes from (it's the use of SELECT FOR UPDATE, which has been removed from Nova's quota-handling code in the Rocky release).

> If just Nova is affected we could also create an additional HAProxy
> listener using all Galera nodes with round-robin for all other services?

I fail to see the point of using Galera with a single writer. At that point, why bother with Galera at all? Just use a single database node with a single slave for backup purposes.

> Anyway - proxySQL would be a great extension.

I don't disagree that proxySQL is a good extension. However, it adds yet another service to the mesh that needs to be deployed, configured and maintained.

Best,
-jay

From mark at stackhpc.com  Tue Oct  9 11:47:07 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 9 Oct 2018 12:47:07 +0100
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch>
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch>
Message-ID: 

On Tue, 9 Oct 2018 at 12:03, Florian Engelmann wrote:

> On 10/9/18 at 11:04 AM, Mark Goddard wrote:
> > Thanks for these suggestions Florian, there are some interesting ideas
> > in here. I'm a little concerned about the maintenance overhead of adding
> > support for all of these things, and wonder if some of them could be
> > done without explicit support in kolla and kolla-ansible. The kolla
> > projects have been able to move quickly by providing a flexible
> > configuration mechanism that avoids the need to maintain support for
> > every OpenStack feature. Other thoughts inline.
>
> I do understand your apprehensions Mark. For some of the suggested
> changes/additions I agree. But adding those components without
> kolla/kolla-ansible integration does not feel right.

I'm not entirely against adding some of these things, if enough people in the community want them. I'd just like to make sure that if they can be done in a sane way without changes, then we do that and document how to do it instead.

> > 1. Keepalived is a problem in layer3 spine leaf networks as any
> > floating
> > IP can only exist in one leaf (and VRRP is a problem in layer3).
I > > would > > like to use consul and registrar to get rid of the "internal" > floating > > IP and use consuls DNS service discovery to connect all services with > > each other. > > > > > > Without reading up, I'm not sure exactly how this fits together. If > > kolla-ansible made the API host configurable for each service rather > > than globally, would that be a step in the right direction? > > No that would not help. The problem is HA. Right now there is a > "central" floating IP (kolla_internal_vip_address) that is used for all > services to connect to (each other). Keepalived is failing that IP over > if the "active" host fails. In a layer3 (CLOS/Spine-Leaf) network this > IP is only available in one leaf/rack. So that rack is a "SPOF". > Using service discovery fits perfect in a CLOS network and scales much > better as a HA solution. > > Right, but what I'm saying as a thought experiment is, if we gave you the required variables in kolla-ansible (e.g. nova_internal_fqdn) to make this possible with an externally managed consul service, could that work? > > > > > 2. Using "ports" for external API (endpoint) access is a major > headache > > if a firewall is involved. I would like to configure the HAProxy (or > > fabio) for the external access to use "Host:" like, eg. "Host: > > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with > HTTPS. > > Any customer would just need HTTPS access and not have to open all > > those > > ports in his firewall. For some enterprise customers it is not > possible > > to request FW changes like that. > > > > 3. HAProxy is not capable to handle "read/write" split with Galera. I > > would like to introduce ProxySQL to be able to scale Galera. > > > > > > It's now possible to use an external database server with kolla-ansible, > > instead of deploying a mariadb/galera cluster. This could be implemented > > how you like, see > > > https://docs.openstack.org/kolla-ansible/latest/reference/external-mariadb-guide.html > . > > Yes I agree. And this is what we will do in our first production > deployment. But I would love to see ProxySQL in Kolla as well. > As a side note: Kolla-ansible does use: > > option mysql-check user haproxy post-41 > > to check Galera, but that check does not fail if the node is out of sync > with the other nodes! > > http://galeracluster.com/documentation-webpages/monitoringthecluster.html > > That's good to know. Could you raise a bug in kolla-ansible on launchpad, and offer advice on how to improve this check if you have any? > > > > > 4. HAProxy is fine but fabio integrates well with consul, statsd and > > could be connected to a vault cluster to manage secure certificate > > access. > > > > As above. > > > > 5. I would like to add vault as Barbican backend. > > > > Does this need explicit support in kolla and kolla-ansible, or could it > > be done through configuration of barbican.conf? Are there additional > > packages required in the barbican container? If so, see > > > https://docs.openstack.org/kolla/latest/admin/image-building.html#package-customisation > . > > True but the vault (and consul) containers could be deployed and managed > by kolla-ansible. > > I'd like to see if anyone else is interested in this. Kolla ansible already deploys a large number of services, which is great. As with many other projects I'm seeing the resources of core contributors fall off a little, and I think we need to consider how to ensure the project is maintainable long term. 
In my view a good way of doing that is to enable integration with existing services, rather than deploying them. We need to decide where the line is as a community. We have an IRC meeting at 3pm UTC if you'd like to bring it up then.

> > 6. I would like to add an option to enable tokenless authentication for
> > all services with each other to get rid of all the openstack service
> > passwords (security issue).
> >
> > Again, could this be done without explicit support?
>
> We did not investigate here. Changes to the apache configuration are
> needed. I guess we will have to change the kolla container itself to do
> so? Is it possible to "inject" files in a container using kolla-ansible?
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jim at jimrollenhagen.com  Tue Oct  9 12:52:52 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 9 Oct 2018 08:52:52 -0400
Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API
In-Reply-To: 
References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com>
Message-ID: 

On Mon, Oct 8, 2018 at 5:58 AM Gilles Dubreuil wrote:

>
> On 05/10/18 21:54, Jim Rollenhagen wrote:
>
> GraphQL has introspection features that allow clients to pull the schema
> (types, queries, mutations, etc): https://graphql.org/learn/introspection/
>
> That said, it seems like using this in a client like OpenStackSDK would
> get messy quickly. Instead of asking for which versions are supported,
> you'd have to fetch the schema, map it to actual features somehow, and
> adjust queries based on this info.
>
>
> A main difference in software architecture when using GraphQL is that a
> client makes use of a GraphQL client library instead of relying on an SDK.
>

It seems to me that a major goal of openstacksdk is to hide differences between clouds from the user. If the user is meant to use a GraphQL library themselves, we lose this and the user needs to figure it out themselves. Did I understand that correctly?

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Tue Oct  9 12:58:50 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 9 Oct 2018 12:58:50 +0000
Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API
In-Reply-To: 
References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com>
Message-ID: <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org>

On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote:
[...]
> It seems to me that a major goal of openstacksdk is to hide differences
> between clouds from the user. If the user is meant to use a GraphQL library
> themselves, we lose this and the user needs to figure it out themselves.
> Did I understand that correctly?

This is especially useful where the SDK implements business logic for common operations like "if the user requested A and the cloud supports features B+C+D then use those to fulfil the request, otherwise fall back to using features E+F".
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 
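As a concrete reference for the introspection mentioned in this thread: a GraphQL client can ask the server for its own schema with a standard introspection query, for example (trimmed to the basics):

    {
      __schema {
        queryType { name }
        types {
          name
          kind
        }
      }
    }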
From sombrafam at gmail.com  Tue Oct  9 13:04:26 2018
From: sombrafam at gmail.com (Erlon Cruz)
Date: Tue, 9 Oct 2018 10:04:26 -0300
Subject: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?
In-Reply-To: <20181008135434.GB17162@sm-workstation>
References: <20181008135434.GB17162@sm-workstation>
Message-ID: 

If you are planning to re-image a bootable volume then yes, you should use a force parameter. I missed the discussion about this at the PTG. What are the main use cases? This seems to me something that could be leveraged with the current revert-to-snapshot API, which would be even better. The flow would be:

1 - create a volume from image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your use cases?

On Mon, Oct 8, 2018 at 10:54, Sean McGinnis wrote:

> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> > In Denver, we agreed to add a new "re-image" API in cinder to support
> > volume-backed server rebuild with a new image.
> >
> > An initial blueprint has been drafted in [3], welcome to review it, thanks.
> > : )
> >
> [snip]
> >
> > The "force" parameter idea comes from [4], means that
> > 1. we can re-image an "available" volume directly.
> > 2. we can't re-image "in-use"/"reserved" volume directly.
> > 3. we can only re-image an "in-use"/"reserved" volume with "force"
> > parameter.
> >
> > And it means nova needs to always call the re-image API with an extra "force"
> > parameter,
> > because the volume status is "in-use" or "reserve" when we rebuild the
> > server.
> >
> > *So, what's your idea? Do we really want to add this "force" parameter?*
> >
>
> I would prefer we have the "force" parameter, even if it is something that will
> always be defaulted to True from Nova.
>
> Having this exposed as a REST API means anyone could call it, not just Nova
> code. So as protection from someone doing something that they are not really
> clear on the full implications of, having a flag in there to guard volumes that
> are already attached or reserved for shelved instances is worth the very minor
> extra overhead.
>
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
> > [3] https://review.openstack.org/#/c/605317
> > [4] https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst at 75
> >
> > Regards,
> > Yikun
> > ----------------------------------------
> > Jiang Yikun(Kero)
> > Mail: yikunkero at gmail.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
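For reference, the flow Erlon sketches maps onto existing commands roughly like this (names are illustrative; revert-to-snapshot needs volume API microversion 3.40 or later, and as far as I know it only supports reverting to the volume's most recent snapshot):

    openstack volume create --image <image> --size 10 vol1
    openstack volume snapshot create --volume vol1 snap1
    # ... do whatever you want with the volume ...
    cinder --os-volume-api-version 3.40 revert-to-snapshot snap1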
From michel at redhat.com  Tue Oct  9 13:11:23 2018
From: michel at redhat.com (Michel Peterson)
Date: Tue, 9 Oct 2018 16:11:23 +0300
Subject: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue
In-Reply-To: 
References: 
Message-ID: 

On Sun, Sep 30, 2018 at 3:43 AM Miguel Lavalle wrote:

> The next step is for each project to propose the jobs that they want to
> run against Neutron patches.

This is fantastic. Do you plan to have all patches under a single topic for easier tracking? I'll be handling the proposal of these jobs for networking-odl and would like to know this before sending them for review.

In addition, since these will be non-voting for Stadium projects, what will the mechanics be to avoid breakage of such projects?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rosmaita.fossdev at gmail.com  Tue Oct  9 13:22:19 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Tue, 9 Oct 2018 09:22:19 -0400
Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0
In-Reply-To: <20181009035940.tlg4mx4j2vbahybl@gentoo.org>
References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org>
Message-ID: <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com>

On 10/8/18 11:59 PM, Matthew Thode wrote:
> several projects have had problems with the new release, some have ways
> of working around it, and some do not. I'm sending this just to raise
> the issue and allow a place to discuss solutions.
>
> Currently there is a review proposed to blacklist 9.0.0, but if this is
> going to still be an issue somehow in further releases we may need
> another solution.
>
> https://review.openstack.org/#/c/608835/

As indicated in the commit message on the above patch, 9.0.0 contains a bug that's been fixed in oslo.messaging master, so I don't think there's any question that 9.0.0 has to be blacklisted. As far as the timing/content of 9.0.1, however, that may require further discussion. (In other words, I'm saying that when you say 'another solution', my position is that we should take 'another' to mean 'additional', not 'different'.)

cheers,
brian

From lbragstad at gmail.com  Tue Oct  9 14:06:57 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 9 Oct 2018 09:06:57 -0500
Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0
In-Reply-To: <20181009035940.tlg4mx4j2vbahybl@gentoo.org>
References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org>
Message-ID: 

Keystone is failing because it's missing a fix from oslo.messaging [0]. That said, keystone is also relying on an internal implementation detail in oslo.messaging by mocking it in tests [1]. The notification work has been around in keystone for a *long* time, but it's apparent that we should revisit these tests to make sure we aren't testing something that is already tested by oslo.messaging if we're mocking internal implementation details of a library.

Regardless, blacklisting version 9.0.0 will work for keystone, but we can work around it another way by either rewriting the tests to not care about oslo.messaging specifics, or removing them if they're obsolete.
[0] https://review.openstack.org/#/c/608196/ [1] https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343 On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode wrote: > several projects have had problems with the new release, some have ways > of working around it, and some do not. I'm sending this just to raise > the issue and allow a place to discuss solutions. > > Currently there is a review proposed to blacklist 9.0.0, but if this is > going to still be an issue somehow in further releases we may need > another solution. > > https://review.openstack.org/#/c/608835/ > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Oct 9 14:16:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 9 Oct 2018 09:16:16 -0500 Subject: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API? In-Reply-To: References: <20181008135434.GB17162@sm-workstation> Message-ID: On 10/9/2018 8:04 AM, Erlon Cruz wrote: > If you are planning to re-image an image on a bootable volume then yes > you should use a force parameter. I have lost the discussion about this > on PTG. What is the main use cases? This seems to me something that > could be leveraged with the current revert-to-snapshot API, which would > be even better. The flow would be: > > 1 - create a volume from image > 2 - create an snapshot > 3 - do whatever you wan't > 4 - revert the snapshot > > Would that help in your the use cases? As the spec mentions, this is for enabling re-imaging the root volume on a server when nova rebuilds the server. That is not allowed today because the compute service can't re-image the root volume. We don't want to jump through a bunch of gross alternative hoops to create a new root volume with the new image and swap them out (the reasons why are in the spec, and have been discussed previously in the ML). So nova is asking cinder to provide an API to change the image in a volume which the nova rebuild operation will use to re-image the root volume on a volume-backed server. I don't know if revert-to-snapshot solves that use case, but it doesn't sound like it. With the nova rebuild API, the user provides an image reference and that is used to re-image the root disk on the server. So it might not be a snapshot, it could be something new. -- Thanks, Matt From openstack at fried.cc Tue Oct 9 14:39:05 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 9 Oct 2018 09:39:05 -0500 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539078021.11166.5@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> Message-ID: <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> IIUC, the primary thing the force flag was intended to do - allow an instance to land on the requested destination even if that means oversubscription of the host's resources - doesn't happen anymore since we started making the destination claim in placement. IOW, since pike, you don't actually see a difference in behavior by using the force flag or not. (If you do, it's more likely a bug than what you were expecting.) 
So there's no reason to keep it around. We can remove it in a new microversion (or not); but even in the current microversion we need not continue making convoluted attempts to observe it.
What that means is that we should simplify everything down to ignore the force flag and always call GET /a_c. Problem solved - for nested and/or sharing, NUMA or not, root resources or no, on the source and/or destination.
-efried
On 10/09/2018 04:40 AM, Balázs Gibizer wrote:
> Hi,
>
> Setup
> -----
>
> nested allocation: an allocation that contains resources from one or
> more nested RPs. (if you have a better term for this then please suggest).
>
> If an instance has a nested allocation it means that the compute, it
> allocates from, has a nested RP tree. BUT if a compute has a nested RP
> tree it does not automatically mean that the instance, allocating from
> that compute, has a nested allocation (e.g. bandwidth inventory will be
> on nested RPs but not every instance will require bandwidth)
>
> Afaiu, as soon as we have NUMA modelling in place the most trivial
> servers will have nested allocations as CPU and MEMORY inventory will
> be moved to the nested NUMA RPs. But NUMA is still in the future.
>
> Sidenote: there is an edge case reported by bauzas when an instance
> allocates _only_ from nested RPs. This was discussed last Friday and
> it resulted in a new patch[0] but I would like to keep that discussion
> separate from this if possible.
>
> Sidenote: the current problem is somewhat related not just to nested RPs
> but to sharing RPs as well. However I'm not aiming to implement sharing
> support in Nova right now so I also try to keep the sharing discussion
> separated if possible.
>
> There was already some discussion at Monday's scheduler meeting but
> I could not attend.
> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
>
>
> The meat
> --------
>
> Both live-migrate[1] and evacuate[2] have an optional force flag on the
> nova REST API. The documentation says: "Force by not
> verifying the provided destination host by the scheduler."
>
> Nova implements this statement by not calling the scheduler if
> force=True BUT still tries to manage allocations in placement.
>
> To have an allocation on the destination host Nova blindly copies the
> instance allocation from the source host to the destination host during
> these operations. Nova can do that as 1) the whole allocation is
> against a single RP (the compute RP) and 2) Nova knows both the source
> compute RP and the destination compute RP.
>
> However as soon as we bring nested allocations into the picture that
> blind copy will not be feasible. Possible cases:
> 0) The instance has a non-nested allocation on the source and would need
> a non-nested allocation on the destination. This works with the blind copy
> today.
> 1) The instance has a nested allocation on the source and would need a
> nested allocation on the destination as well.
> 2) The instance has a non-nested allocation on the source and would
> need a nested allocation on the destination.
> 3) The instance has a nested allocation on the source and would need a
> non-nested allocation on the destination.
>
> Nova cannot generate nested allocations easily without reimplementing
> some of the placement allocation candidate (a_c) code. However I don't
> like the idea of duplicating some of the a_c code in Nova.
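To picture why the blind copy stops working, compare the two allocation shapes below -- a rough sketch of placement's allocation structure, where the UUID placeholders and the bandwidth resource class are illustrative:

    # Flat: everything on the root (compute) RP. Retargeting the copy is
    # just swapping one RP UUID for the destination's root RP.
    flat = {"allocations": {
        "<compute-rp-uuid>": {
            "resources": {"VCPU": 2, "MEMORY_MB": 4096, "DISK_GB": 20}}}}

    # Nested: part of the allocation lives on a child provider. The
    # destination's child RP UUIDs are different, and there is no 1:1
    # mapping Nova can derive without asking placement.
    nested = {"allocations": {
        "<compute-rp-uuid>": {
            "resources": {"VCPU": 2, "MEMORY_MB": 4096}},
        "<nic-child-rp-uuid>": {
            "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 10000}}}}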
> 
> Nova cannot detect what kind of allocation (nested or non-nested) an
> instance would need on the destination without calling placement a_c.
> So knowing when to call placement is a chicken and egg problem.
>
> Possible solutions:
> A) fail fast
> ------------
> 0) Nova can detect that the source allocation is non-nested and try
> the blind copy and it will succeed.
> 1) Nova can detect that the source allocation is nested and fail the
> operation
> 2) Nova only sees a non-nested source allocation. Even if the dest RP
> tree is nested it does not mean that the allocation will be nested. We
> cannot fail fast. Nova can try the blind copy and allocate every
> resource from the root RP of the destination. If the instance requires
> a nested allocation instead, the claim will fail in placement. So nova can
> fail the operation a bit later than in 1).
> 3) Nova can detect that the source allocation is nested and fail the
> operation. However an enhanced blind copy that tries to allocate
> everything from the root RP on the destination would have worked.
>
> B) Guess when to ignore the force flag and call the scheduler
> -------------------------------------------------------------
> 0) keep the blind copy as it works
> 1) Nova detects that the source allocation is nested. Ignores the force
> flag and calls the scheduler that will call placement a_c. Move
> operation can succeed.
> 2) Nova only sees a non-nested source allocation so it will fall back
> to the blind copy and fails at the claim on the destination.
> 3) Nova detects that the source allocation is nested. Ignores the force
> flag and calls the scheduler that will call placement a_c. Move
> operation can succeed.
>
> This solution would be against the API doc that states nova does not
> call the scheduler if the operation is forced. However in the case of forced
> live-migration Nova already verifies the target host from a couple of
> perspectives in [3].
> This solution is already proposed for live-migrate in [4] and for
> evacuate in [5] so the complexity of the solution can be seen in the
> reviews.
>
> C) Remove the force flag from the API in a new microversion
> -----------------------------------------------------------
> 0)-3): all cases would call the scheduler to verify the target host and
> generate the nested (or non-nested) allocation.
> We would still need an agreed behavior (from A), B), D)) for the old
> microversions as today's code creates inconsistent allocations in #1)
> and #3) by ignoring the resources from the nested RP.
>
> D) Do not manage allocations in placement for forced operations
> --------------------------------------------------------------
> The force flag is considered a last-resort tool for the admin to move
> VMs around. The API doc has a fat warning about the danger of it. So
> Nova can simply ignore the resource allocation task if force=True. Nova
> would delete the source allocation and not create any allocation
> on the destination host.
>
> This is a simple but dangerous solution but it is what the force flag
> is all about, moving the server against all the built-in safeties. (If
> the admin needs the safeties she can set force=False and still specify
> the destination host)
>
> I'm open to any suggestions.
>
> Cheers,
> gibi
>
> [0] https://review.openstack.org/#/c/608298/
> [1]
> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action
> [2]
> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action
> [3]
> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97
> [4] https://review.openstack.org/#/c/605785
> [5] https://review.openstack.org/#/c/606111
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
From jungleboyj at gmail.com Tue Oct 9 14:40:13 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Tue, 9 Oct 2018 09:40:13 -0500
Subject: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?
In-Reply-To: <20181008135434.GB17162@sm-workstation>
References: <20181008135434.GB17162@sm-workstation>
Message-ID: <82b7e7d9-0115-e2f0-8c50-2ef0588b7459@gmail.com>
On 10/8/2018 8:54 AM, Sean McGinnis wrote:
> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
>> In Denver, we agreed to add a new "re-image" API in cinder to support
>> volume-backed server rebuild with a new image.
>>
>> An initial blueprint has been drafted in [3], welcome to review it, thanks.
>> : )
>>
>> [snip]
>>
>> The "force" parameter idea comes from [4], and means that
>> 1. we can re-image an "available" volume directly.
>> 2. we can't re-image an "in-use"/"reserved" volume directly.
>> 3. we can only re-image an "in-use"/"reserved" volume with the "force"
>> parameter.
>>
>> And it means nova needs to always call the re-image API with an extra "force"
>> parameter,
>> because the volume status is "in-use" or "reserved" when we rebuild the
>> server.
>>
>> *So, what's your idea? Do we really want to add this "force" parameter?*
>>
> I would prefer we have the "force" parameter, even if it is something that will
> always be defaulted to True from Nova.
>
> Having this exposed as a REST API means anyone could call it, not just Nova
> code. So as protection from someone doing something that they are not really
> clear on the full implications of, having a flag in there to guard volumes that
> are already attached or reserved for shelved instances is worth the very minor
> extra overhead.
I concur with Sean's assessment. I think putting a safety switch in place in this design is important to ensure that people using the API directly are less likely to do something that they may not actually want to do.
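To make the shape of that safety switch concrete, the call from Nova's side might look roughly like this. This is a hypothetical sketch only; the re-image API is still under review in the spec, so this method and signature do not exist yet in cinderclient:

    def rebuild_root_volume(cinderclient, volume_id, new_image_id):
        # Hypothetical client call. Nova would effectively always pass
        # force=True, because the root volume is 'in-use' (or 'reserved'
        # for a shelved server) while the rebuild happens. The flag is
        # there to protect direct API callers, not Nova.
        cinderclient.volumes.reimage(volume_id, new_image_id, force=True)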
Jay >> [1] https://etherpad.openstack.org/p/nova-ptg-stein L483 >> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12 >> [3] https://review.openstack.org/#/c/605317 >> [4] >> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst at 75 >> >> Regards, >> Yikun >> ---------------------------------------- >> Jiang Yikun(Kero) >> Mail: yikunkero at gmail.com >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sylvain.bauza at gmail.com Tue Oct 9 14:56:15 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 9 Oct 2018 16:56:15 +0200 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> Message-ID: Le mar. 9 oct. 2018 à 16:39, Eric Fried a écrit : > IIUC, the primary thing the force flag was intended to do - allow an > instance to land on the requested destination even if that means > oversubscription of the host's resources - doesn't happen anymore since > we started making the destination claim in placement. > > IOW, since pike, you don't actually see a difference in behavior by > using the force flag or not. (If you do, it's more likely a bug than > what you were expecting.) > > So there's no reason to keep it around. We can remove it in a new > microversion (or not); but even in the current microversion we need not > continue making convoluted attempts to observe it. > > What that means is that we should simplify everything down to ignore the > force flag and always call GET /a_c. Problem solved - for nested and/or > sharing, NUMA or not, root resources or no, on the source and/or > destination. > > While I tend to agree with Eric here (and I commented on the review accordingly by saying we should signal the new behaviour by a microversion), I still think we need to properly advertise this, adding openstack-operators@ accordingly. Disclaimer : since we have gaps on OSC, the current OSC behaviour when you "openstack server live-migrate " is to *force* the destination by not calling the scheduler. Yeah, it sucks. Operators, what are the exact cases (for those running clouds newer than Mitaka, ie. Newton and above) when you make use of the --force option for live migration with a microversion newer or equal 2.29 ? In general, even in the case of an emergency, you still want to make sure you don't throw your compute under the bus by massively migrating instances that would create an undetected snowball effect by having this compute refusing new instances. Or are you disabling the target compute service first and throw your pet instances up there ? -Sylvain -efried > > On 10/09/2018 04:40 AM, Balázs Gibizer wrote: > > Hi, > > > > Setup > > ----- > > > > nested allocation: an allocation that contains resources from one or > > more nested RPs. (if you have better term for this then please suggest). 
> > > > If an instance has nested allocation it means that the compute, it > > allocates from, has a nested RP tree. BUT if a compute has a nested RP > > tree it does not automatically means that the instance, allocating from > > that compute, has a nested allocation (e.g. bandwidth inventory will be > > on a nested RPs but not every instance will require bandwidth) > > > > Afaiu, as soon as we have NUMA modelling in place the most trivial > > servers will have nested allocations as CPU and MEMORY inverntory will > > be moved to the nested NUMA RPs. But NUMA is still in the future. > > > > Sidenote: there is an edge case reported by bauzas when an instance > > allocates _only_ from nested RPs. This was discussed on last Friday and > > it resulted in a new patch[0] but I would like to keep that discussion > > separate from this if possible. > > > > Sidenote: the current problem somewhat related to not just nested PRs > > but to sharing RPs as well. However I'm not aiming to implement sharing > > support in Nova right now so I also try to keep the sharing disscussion > > separated if possible. > > > > There was already some discussion on the Monday's scheduler meeting but > > I could not attend. > > > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 > > > > > > The meat > > -------- > > > > Both live-migrate[1] and evacuate[2] has an optional force flag on the > > nova REST API. The documentation says: "Force by not > > verifying the provided destination host by the scheduler." > > > > Nova implements this statement by not calling the scheduler if > > force=True BUT still try to manage allocations in placement. > > > > To have allocation on the destination host Nova blindly copies the > > instance allocation from the source host to the destination host during > > these operations. Nova can do that as 1) the whole allocation is > > against a single RP (the compute RP) and 2) Nova knows both the source > > compute RP and the destination compute RP. > > > > However as soon as we bring nested allocations into the picture that > > blind copy will not be feasible. Possible cases > > 0) The instance has non-nested allocation on the source and would need > > non nested allocation on the destination. This works with blindy copy > > today. > > 1) The instance has a nested allocation on the source and would need a > > nested allocation on the destination as well. > > 2) The instance has a non-nested allocation on the source and would > > need a nested allocation on the destination. > > 3) The instance has a nested allocation on the source and would need a > > non nested allocation on the destination. > > > > Nova cannot generate nested allocations easily without reimplementing > > some of the placement allocation candidate (a_c) code. However I don't > > like the idea of duplicating some of the a_c code in Nova. > > > > Nova cannot detect what kind of allocation (nested or non-nested) an > > instance would need on the destination without calling placement a_c. > > So knowing when to call placement is a chicken and egg problem. > > > > Possible solutions: > > A) fail fast > > ------------ > > 0) Nova can detect that the source allocatioin is non-nested and try > > the blindy copy and it will succeed. > > 1) Nova can detect that the source allocaton is nested and fail the > > operation > > 2) Nova only sees a non nested source allocation. Even if the dest RP > > tree is nested it does not mean that the allocation will be nested. 
We > > cannot fail fast. Nova can try the blind copy and allocate every > > resources from the root RP of the destination. If the instance require > > nested allocation instead the claim will fail in placement. So nova can > > fail the operation a bit later than in 1). > > 3) Nova can detect that the source allocation is nested and fail the > > operation. However and enhanced blind copy that tries to allocation > > everything from the root RP on the destinaton would have worked. > > > > B) Guess when to ignore the force flag and call the scheduler > > ------------------------------------------------------------- > > 0) keep the blind copy as it works > > 1) Nova detect that the source allocation is nested. Ignores the force > > flag and calls the scheduler that will call placement a_c. Move > > operation can succeed. > > 2) Nova only sees a non nested source allocation so it will fall back > > to blind copy and fails at the claim on destination. > > 3) Nova detect that the source allocation is nested. Ignores the force > > flag and calls the scheduler that will call placement a_c. Move > > operation can succeed. > > > > This solution would be against the API doc that states nova does not > > call the scheduler if the operation is forced. However in case of force > > live-migration Nova already verifies the target host from couple of > > perspective in [3]. > > This solution is alreay proposed for live-migrate in [4] and for > > evacuate in [5] so the complexity of the solution can be seen in the > > reviews. > > > > C) Remove the force flag from the API in a new microversion > > ----------------------------------------------------------- > > 0)-3): all cases would call the scheduler to verify the target host and > > generate the nested (or non-nested) allocation. > > We would still need an agreed behavior (from A), B), D)) for the old > > microversions as the todays code creates inconsistent allocation in #1) > > and #3) by ignoring the resource from the nested RP. > > > > D) Do not manage allocations in placement for forced operation > > -------------------------------------------------------------- > > Force flag is considered as a last resort tool for the admin to move > > VMs around. The API doc has a fat warning about the danger of it. So > > Nova can simply ignore resource allocation task if force=True. Nova > > would delete the source allocation and does not create any allocation > > on the destination host. > > > > This is a simple but dangerous solution but it is what the force flag > > is all about, move the server against all the built in safeties. (If > > the admin needs the safeties she can set force=False and still specify > > the destination host) > > > > I'm open to any suggestions. 
> > > > Cheers, > > gibi > > > > [0] https://review.openstack.org/#/c/608298/ > > [1] > > > https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action > > [2] > > > https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action > > [3] > > > https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 > > [4] https://review.openstack.org/#/c/605785 > > [5] https://review.openstack.org/#/c/606111 > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Tue Oct 9 15:04:23 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Tue, 9 Oct 2018 15:04:23 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> Message-ID: <1539097453.11166.7@smtp.office365.com> On Tue, Oct 9, 2018 at 4:39 PM, Eric Fried wrote: > IIUC, the primary thing the force flag was intended to do - allow an > instance to land on the requested destination even if that means > oversubscription of the host's resources - doesn't happen anymore > since > we started making the destination claim in placement. Can we simply do that still by not creating allocation in placement during the move? (see option #D)) > > IOW, since pike, you don't actually see a difference in behavior by > using the force flag or not. (If you do, it's more likely a bug than > what you were expecting.) There is still difference between force=True and force=False today. When you say force=False nova calls placement a_c and placement try to satisfy requested resource, required traits, and aggregate membership. When you say force=True nova conductor takes the resource allocation from the source host and copies that blindly to the destination but does not check any traits or aggregate membership. So force=True is still ignores a lot of rules and safeties. > > So there's no reason to keep it around. We can remove it in a new > microversion (or not); but even in the current microversion we need > not > continue making convoluted attempts to observe it. If we remove it in a new microversion (option #C)) then we still need to define how to behave in the old microversions when nested allocation would be needed. I don't fully get what you mean by 'not continue making convoluted attempts to observe it.' > > What that means is that we should simplify everything down to ignore > the > force flag and always call GET /a_c. Problem solved - for nested > and/or > sharing, NUMA or not, root resources or no, on the source and/or > destination. 
If you do the force flag removal in a nw microversion that also means (at least to me) that you should not change the behavior of the force flag in the old microversions. Cheers, gibi > > -efried > > On 10/09/2018 04:40 AM, Balázs Gibizer wrote: >> Hi, >> >> Setup >> ----- >> >> nested allocation: an allocation that contains resources from one or >> more nested RPs. (if you have better term for this then please >> suggest). >> >> If an instance has nested allocation it means that the compute, it >> allocates from, has a nested RP tree. BUT if a compute has a nested >> RP >> tree it does not automatically means that the instance, allocating >> from >> that compute, has a nested allocation (e.g. bandwidth inventory >> will be >> on a nested RPs but not every instance will require bandwidth) >> >> Afaiu, as soon as we have NUMA modelling in place the most trivial >> servers will have nested allocations as CPU and MEMORY inverntory >> will >> be moved to the nested NUMA RPs. But NUMA is still in the future. >> >> Sidenote: there is an edge case reported by bauzas when an instance >> allocates _only_ from nested RPs. This was discussed on last Friday >> and >> it resulted in a new patch[0] but I would like to keep that >> discussion >> separate from this if possible. >> >> Sidenote: the current problem somewhat related to not just nested >> PRs >> but to sharing RPs as well. However I'm not aiming to implement >> sharing >> support in Nova right now so I also try to keep the sharing >> disscussion >> separated if possible. >> >> There was already some discussion on the Monday's scheduler meeting >> but >> I could not attend. >> >> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >> >> >> The meat >> -------- >> >> Both live-migrate[1] and evacuate[2] has an optional force flag on >> the >> nova REST API. The documentation says: "Force by not >> verifying the provided destination host by the scheduler." >> >> Nova implements this statement by not calling the scheduler if >> force=True BUT still try to manage allocations in placement. >> >> To have allocation on the destination host Nova blindly copies the >> instance allocation from the source host to the destination host >> during >> these operations. Nova can do that as 1) the whole allocation is >> against a single RP (the compute RP) and 2) Nova knows both the >> source >> compute RP and the destination compute RP. >> >> However as soon as we bring nested allocations into the picture that >> blind copy will not be feasible. Possible cases >> 0) The instance has non-nested allocation on the source and would >> need >> non nested allocation on the destination. This works with blindy >> copy >> today. >> 1) The instance has a nested allocation on the source and would >> need a >> nested allocation on the destination as well. >> 2) The instance has a non-nested allocation on the source and would >> need a nested allocation on the destination. >> 3) The instance has a nested allocation on the source and would >> need a >> non nested allocation on the destination. >> >> Nova cannot generate nested allocations easily without >> reimplementing >> some of the placement allocation candidate (a_c) code. However I >> don't >> like the idea of duplicating some of the a_c code in Nova. >> >> Nova cannot detect what kind of allocation (nested or non-nested) an >> instance would need on the destination without calling placement >> a_c. >> So knowing when to call placement is a chicken and egg problem. 
>> >> Possible solutions: >> A) fail fast >> ------------ >> 0) Nova can detect that the source allocatioin is non-nested and try >> the blindy copy and it will succeed. >> 1) Nova can detect that the source allocaton is nested and fail the >> operation >> 2) Nova only sees a non nested source allocation. Even if the dest >> RP >> tree is nested it does not mean that the allocation will be nested. >> We >> cannot fail fast. Nova can try the blind copy and allocate every >> resources from the root RP of the destination. If the instance >> require >> nested allocation instead the claim will fail in placement. So nova >> can >> fail the operation a bit later than in 1). >> 3) Nova can detect that the source allocation is nested and fail the >> operation. However and enhanced blind copy that tries to allocation >> everything from the root RP on the destinaton would have worked. >> >> B) Guess when to ignore the force flag and call the scheduler >> ------------------------------------------------------------- >> 0) keep the blind copy as it works >> 1) Nova detect that the source allocation is nested. Ignores the >> force >> flag and calls the scheduler that will call placement a_c. Move >> operation can succeed. >> 2) Nova only sees a non nested source allocation so it will fall >> back >> to blind copy and fails at the claim on destination. >> 3) Nova detect that the source allocation is nested. Ignores the >> force >> flag and calls the scheduler that will call placement a_c. Move >> operation can succeed. >> >> This solution would be against the API doc that states nova does not >> call the scheduler if the operation is forced. However in case of >> force >> live-migration Nova already verifies the target host from couple of >> perspective in [3]. >> This solution is alreay proposed for live-migrate in [4] and for >> evacuate in [5] so the complexity of the solution can be seen in the >> reviews. >> >> C) Remove the force flag from the API in a new microversion >> ----------------------------------------------------------- >> 0)-3): all cases would call the scheduler to verify the target host >> and >> generate the nested (or non-nested) allocation. >> We would still need an agreed behavior (from A), B), D)) for the old >> microversions as the todays code creates inconsistent allocation in >> #1) >> and #3) by ignoring the resource from the nested RP. >> >> D) Do not manage allocations in placement for forced operation >> -------------------------------------------------------------- >> Force flag is considered as a last resort tool for the admin to move >> VMs around. The API doc has a fat warning about the danger of it. So >> Nova can simply ignore resource allocation task if force=True. Nova >> would delete the source allocation and does not create any >> allocation >> on the destination host. >> >> This is a simple but dangerous solution but it is what the force >> flag >> is all about, move the server against all the built in safeties. (If >> the admin needs the safeties she can set force=False and still >> specify >> the destination host) >> >> I'm open to any suggestions. 
>> >> Cheers, >> gibi >> >> [0] https://review.openstack.org/#/c/608298/ >> [1] >> >> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >> [2] >> >> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >> [3] >> >> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >> [4] https://review.openstack.org/#/c/605785 >> [5] https://review.openstack.org/#/c/606111 >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Tue Oct 9 15:08:57 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Tue, 9 Oct 2018 15:08:57 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> Message-ID: <1539097728.11166.8@smtp.office365.com> On Tue, Oct 9, 2018 at 4:56 PM, Sylvain Bauza wrote: > > > Le mar. 9 oct. 2018 à 16:39, Eric Fried a > écrit : >> IIUC, the primary thing the force flag was intended to do - allow an >> instance to land on the requested destination even if that means >> oversubscription of the host's resources - doesn't happen anymore >> since >> we started making the destination claim in placement. >> >> IOW, since pike, you don't actually see a difference in behavior by >> using the force flag or not. (If you do, it's more likely a bug than >> what you were expecting.) >> >> So there's no reason to keep it around. We can remove it in a new >> microversion (or not); but even in the current microversion we need >> not >> continue making convoluted attempts to observe it. >> >> What that means is that we should simplify everything down to ignore >> the >> force flag and always call GET /a_c. Problem solved - for nested >> and/or >> sharing, NUMA or not, root resources or no, on the source and/or >> destination. >> > > > While I tend to agree with Eric here (and I commented on the review > accordingly by saying we should signal the new behaviour by a > microversion), I still think we need to properly advertise this, > adding openstack-operators@ accordingly. Question for you as well: if we remove (or change) the force flag in a new microversion then how should the old microversions behave when nested allocations would be required? Cheers, gibi > Disclaimer : since we have gaps on OSC, the current OSC behaviour > when you "openstack server live-migrate " is to *force* the > destination by not calling the scheduler. Yeah, it sucks. > > Operators, what are the exact cases (for those running clouds newer > than Mitaka, ie. Newton and above) when you make use of the --force > option for live migration with a microversion newer or equal 2.29 ? 
> In general, even in the case of an emergency, you still want to make > sure you don't throw your compute under the bus by massively > migrating instances that would create an undetected snowball effect > by having this compute refusing new instances. Or are you disabling > the target compute service first and throw your pet instances up > there ? > > -Sylvain > > > >> -efried >> >> On 10/09/2018 04:40 AM, Balázs Gibizer wrote: >> > Hi, >> > >> > Setup >> > ----- >> > >> > nested allocation: an allocation that contains resources from one >> or >> > more nested RPs. (if you have better term for this then please >> suggest). >> > >> > If an instance has nested allocation it means that the compute, it >> > allocates from, has a nested RP tree. BUT if a compute has a >> nested RP >> > tree it does not automatically means that the instance, allocating >> from >> > that compute, has a nested allocation (e.g. bandwidth inventory >> will be >> > on a nested RPs but not every instance will require bandwidth) >> > >> > Afaiu, as soon as we have NUMA modelling in place the most trivial >> > servers will have nested allocations as CPU and MEMORY inverntory >> will >> > be moved to the nested NUMA RPs. But NUMA is still in the future. >> > >> > Sidenote: there is an edge case reported by bauzas when an instance >> > allocates _only_ from nested RPs. This was discussed on last >> Friday and >> > it resulted in a new patch[0] but I would like to keep that >> discussion >> > separate from this if possible. >> > >> > Sidenote: the current problem somewhat related to not just nested >> PRs >> > but to sharing RPs as well. However I'm not aiming to implement >> sharing >> > support in Nova right now so I also try to keep the sharing >> disscussion >> > separated if possible. >> > >> > There was already some discussion on the Monday's scheduler >> meeting but >> > I could not attend. >> > >> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >> > >> > >> > The meat >> > -------- >> > >> > Both live-migrate[1] and evacuate[2] has an optional force flag on >> the >> > nova REST API. The documentation says: "Force by not >> > verifying the provided destination host by the scheduler." >> > >> > Nova implements this statement by not calling the scheduler if >> > force=True BUT still try to manage allocations in placement. >> > >> > To have allocation on the destination host Nova blindly copies the >> > instance allocation from the source host to the destination host >> during >> > these operations. Nova can do that as 1) the whole allocation is >> > against a single RP (the compute RP) and 2) Nova knows both the >> source >> > compute RP and the destination compute RP. >> > >> > However as soon as we bring nested allocations into the picture >> that >> > blind copy will not be feasible. Possible cases >> > 0) The instance has non-nested allocation on the source and would >> need >> > non nested allocation on the destination. This works with blindy >> copy >> > today. >> > 1) The instance has a nested allocation on the source and would >> need a >> > nested allocation on the destination as well. >> > 2) The instance has a non-nested allocation on the source and would >> > need a nested allocation on the destination. >> > 3) The instance has a nested allocation on the source and would >> need a >> > non nested allocation on the destination. 
>> > >> > Nova cannot generate nested allocations easily without >> reimplementing >> > some of the placement allocation candidate (a_c) code. However I >> don't >> > like the idea of duplicating some of the a_c code in Nova. >> > >> > Nova cannot detect what kind of allocation (nested or non-nested) >> an >> > instance would need on the destination without calling placement >> a_c. >> > So knowing when to call placement is a chicken and egg problem. >> > >> > Possible solutions: >> > A) fail fast >> > ------------ >> > 0) Nova can detect that the source allocatioin is non-nested and >> try >> > the blindy copy and it will succeed. >> > 1) Nova can detect that the source allocaton is nested and fail the >> > operation >> > 2) Nova only sees a non nested source allocation. Even if the dest >> RP >> > tree is nested it does not mean that the allocation will be >> nested. We >> > cannot fail fast. Nova can try the blind copy and allocate every >> > resources from the root RP of the destination. If the instance >> require >> > nested allocation instead the claim will fail in placement. So >> nova can >> > fail the operation a bit later than in 1). >> > 3) Nova can detect that the source allocation is nested and fail >> the >> > operation. However and enhanced blind copy that tries to allocation >> > everything from the root RP on the destinaton would have worked. >> > >> > B) Guess when to ignore the force flag and call the scheduler >> > ------------------------------------------------------------- >> > 0) keep the blind copy as it works >> > 1) Nova detect that the source allocation is nested. Ignores the >> force >> > flag and calls the scheduler that will call placement a_c. Move >> > operation can succeed. >> > 2) Nova only sees a non nested source allocation so it will fall >> back >> > to blind copy and fails at the claim on destination. >> > 3) Nova detect that the source allocation is nested. Ignores the >> force >> > flag and calls the scheduler that will call placement a_c. Move >> > operation can succeed. >> > >> > This solution would be against the API doc that states nova does >> not >> > call the scheduler if the operation is forced. However in case of >> force >> > live-migration Nova already verifies the target host from couple of >> > perspective in [3]. >> > This solution is alreay proposed for live-migrate in [4] and for >> > evacuate in [5] so the complexity of the solution can be seen in >> the >> > reviews. >> > >> > C) Remove the force flag from the API in a new microversion >> > ----------------------------------------------------------- >> > 0)-3): all cases would call the scheduler to verify the target >> host and >> > generate the nested (or non-nested) allocation. >> > We would still need an agreed behavior (from A), B), D)) for the >> old >> > microversions as the todays code creates inconsistent allocation >> in #1) >> > and #3) by ignoring the resource from the nested RP. >> > >> > D) Do not manage allocations in placement for forced operation >> > -------------------------------------------------------------- >> > Force flag is considered as a last resort tool for the admin to >> move >> > VMs around. The API doc has a fat warning about the danger of it. >> So >> > Nova can simply ignore resource allocation task if force=True. Nova >> > would delete the source allocation and does not create any >> allocation >> > on the destination host. 
>> > >> > This is a simple but dangerous solution but it is what the force >> flag >> > is all about, move the server against all the built in safeties. >> (If >> > the admin needs the safeties she can set force=False and still >> specify >> > the destination host) >> > >> > I'm open to any suggestions. >> > >> > Cheers, >> > gibi >> > >> > [0] https://review.openstack.org/#/c/608298/ >> > [1] >> > >> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >> > [2] >> > >> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >> > [3] >> > >> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >> > [4] https://review.openstack.org/#/c/605785 >> > [5] https://review.openstack.org/#/c/606111 >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Tue Oct 9 15:11:02 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 9 Oct 2018 10:11:02 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> Message-ID: <916c8bf6-8860-6983-8951-9460439a0242@nemebean.com> On 10/9/18 8:22 AM, Brian Rosmaita wrote: > On 10/8/18 11:59 PM, Matthew Thode wrote: >> several projects have had problems with the new release, some have ways >> of working around it, and some do not. I'm sending this just to raise >> the issue and allow a place to discuss solutions. >> >> Currently there is a review proposed to blacklist 9.0.0, but if this is >> going to still be an issue somehow in further releases we may need >> another solution. >> >> https://review.openstack.org/#/c/608835/ > > As indicated in the commit message on the above patch, 9.0.0 contains a > bug that's been fixed in oslo.messaging master, so I don't think there's > any question that 9.0.0 has to be blacklisted. Agreed. > > As far as the timing/content of 9.0.1, however, that may require further > discussion. I'll get the release request for 9.0.1 up today. That should fix everyone but Keystone. I'm not sure yet what is going to be needed to get that working. They are mocking a private function from oslo.messaging, but we didn't remove it so I'm not sure why those tests started failing. > > (In other words, I'm saying that when you say 'another solution', my > position is that we should take 'another' to mean 'additional', not > 'different'.) 
> > cheers, > brian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Oct 9 15:12:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 11:12:30 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> Message-ID: Matthew Thode writes: > several projects have had problems with the new release, some have ways > of working around it, and some do not. I'm sending this just to raise > the issue and allow a place to discuss solutions. > > Currently there is a review proposed to blacklist 9.0.0, but if this is > going to still be an issue somehow in further releases we may need > another solution. > > https://review.openstack.org/#/c/608835/ > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Do you have links to the failure logs or bug reports or something? If I wanted to help I wouldn't even know where to start. Doug From doug at doughellmann.com Tue Oct 9 15:19:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 11:19:58 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> Message-ID: Brian Rosmaita writes: > On 10/8/18 11:59 PM, Matthew Thode wrote: >> several projects have had problems with the new release, some have ways >> of working around it, and some do not. I'm sending this just to raise >> the issue and allow a place to discuss solutions. >> >> Currently there is a review proposed to blacklist 9.0.0, but if this is >> going to still be an issue somehow in further releases we may need >> another solution. >> >> https://review.openstack.org/#/c/608835/ > > As indicated in the commit message on the above patch, 9.0.0 contains a > bug that's been fixed in oslo.messaging master, so I don't think there's > any question that 9.0.0 has to be blacklisted. I've proposed releasing oslo.messaging 9.0.1 in https://review.openstack.org/609030 If we don't land the constraint update to allow 9.0.1 in, then there's no rush to blacklist anything, is there? > As far as the timing/content of 9.0.1, however, that may require further > discussion. > > (In other words, I'm saying that when you say 'another solution', my > position is that we should take 'another' to mean 'additional', not > 'different'.) I'm not sure what that means. 
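For anyone following the mechanics here, the two knobs in openstack/requirements being discussed look roughly like this -- illustrative lines only, not the contents of the actual reviews:

    # upper-constraints.txt pins the exact version CI installs; the
    # "constraint update" is bumping this pin:
    oslo.messaging===9.0.1

    # global-requirements.txt is where a blacklist entry would live,
    # alongside whatever minimum version already applies:
    oslo.messaging!=9.0.0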
Doug From doug at doughellmann.com Tue Oct 9 15:21:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 11:21:54 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> Message-ID: Lance Bragstad writes: > Keystone is failing because it's missing a fix from oslo.messaging [0]. > That said, keystone is also relying on an internal implementation detail in > oslo.messaging by mocking it in tests [1]. The notification work has been > around in keystone for a *long* time, but it's apparent that we should > revisit these tests to make sure we aren't testing something that is > already tested by oslo.messaging if we're mocking internal implementation > details of a library. > > Regardless, blacklisting version 9.0.0 will work for keystone, but we can > work around it another way by either rewriting the tests to not care about > oslo.messaging specifics, or removing them if they're obsolete. Yeah, we keep running into these sorts of problems when folks mock past the public API boundary of a library, so let's eliminate them as we find them. If there's a way to add a fixture to oslo.messaging to support the tests we can do that, too. Doug > > [0] https://review.openstack.org/#/c/608196/ > [1] > https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343 > > On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode > wrote: > >> several projects have had problems with the new release, some have ways >> of working around it, and some do not. I'm sending this just to raise >> the issue and allow a place to discuss solutions. >> >> Currently there is a review proposed to blacklist 9.0.0, but if this is >> going to still be an issue somehow in further releases we may need >> another solution. >> >> https://review.openstack.org/#/c/608835/ >> >> -- >> Matthew Thode (prometheanfire) >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Tue Oct 9 15:21:57 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 9 Oct 2018 10:21:57 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> Message-ID: <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> On 18-10-09 11:12:30, Doug Hellmann wrote: > Matthew Thode writes: > > > several projects have had problems with the new release, some have ways > > of working around it, and some do not. I'm sending this just to raise > > the issue and allow a place to discuss solutions. > > > > Currently there is a review proposed to blacklist 9.0.0, but if this is > > going to still be an issue somehow in further releases we may need > > another solution. 
> > > > https://review.openstack.org/#/c/608835/ > > > > -- > > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Do you have links to the failure logs or bug reports or something? If I > wanted to help I wouldn't even know where to start. > http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack at nemebean.com Tue Oct 9 15:30:30 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 9 Oct 2018 10:30:30 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> Message-ID: On 10/9/18 9:06 AM, Lance Bragstad wrote: > Keystone is failing because it's missing a fix from oslo.messaging [0]. > That said, keystone is also relying on an internal implementation detail > in oslo.messaging by mocking it in tests [1]. The notification work has > been around in keystone for a *long* time, but it's apparent that we > should revisit these tests to make sure we aren't testing something that > is already tested by oslo.messaging if we're mocking internal > implementation details of a library. This is actually the same problem Cinder and Glance had, it's just being hidden because there is an exception handler in Keystone that buried the original exception message in log output. 9.0.1 will get Keystone working too. But mocking library internals is still naughty and you should stop that. :-P > > Regardless, blacklisting version 9.0.0 will work for keystone, but we > can work around it another way by either rewriting the tests to not care > about oslo.messaging specifics, or removing them if they're obsolete. > > [0] https://review.openstack.org/#/c/608196/ > [1] > https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343 > > On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode > wrote: > > several projects have had problems with the new release, some have ways > of working around it, and some do not.  I'm sending this just to raise > the issue and allow a place to discuss solutions. > > Currently there is a review proposed to blacklist 9.0.0, but if this is > going to still be an issue somehow in further releases we may need > another solution. 
> > https://review.openstack.org/#/c/608835/ > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From florian.engelmann at everyware.ch Tue Oct 9 15:32:02 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Tue, 9 Oct 2018 17:32:02 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> Message-ID: <0ebcd9a9-5df0-0038-5ad8-ede6e0dc4387@everyware.ch> Am 10/9/18 um 1:23 PM schrieb Jay Pipes: > On 10/09/2018 06:34 AM, Florian Engelmann wrote: >> Am 10/9/18 um 11:41 AM schrieb Jay Pipes: >>> On 10/09/2018 04:34 AM, Christian Berendt wrote: >>>> >>>> >>>>> On 8. Oct 2018, at 19:48, Jay Pipes wrote: >>>>> >>>>> Why not send all read and all write traffic to a single haproxy >>>>> endpoint and just have haproxy spread all traffic across each >>>>> Galera node? >>>>> >>>>> Galera, after all, is multi-master synchronous replication... so it >>>>> shouldn't matter which node in the Galera cluster you send traffic to. >>>> >>>> Probably because of MySQL deadlocks in Galera: >>>> >>>> —snip— >>>> Galera cluster has known limitations, one of them is that it uses >>>> cluster-wide optimistic locking. This may cause some transactions to >>>> rollback. With an increasing number of writeable masters, the >>>> transaction rollback rate may increase, especially if there is write >>>> contention on the same dataset. It is of course possible to retry >>>> the transaction and perhaps it will COMMIT in the retries, but this >>>> will add to the transaction latency. However, some designs are >>>> deadlock prone, e.g sequence tables. >>>> —snap— >>>> >>>> Source: >>>> https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial >>>> >>> >>> Have you seen the above in production? >> >> Yes of course. Just depends on the application and how high the >> workload gets. >> >> Please read about deadloks and nova in the following report by Intel: >> >> http://galeracluster.com/wp-content/uploads/2017/06/performance_analysis_and_tuning_in_china_mobiles_openstack_production_cloud_2.pdf > > > I have read the above. It's a synthetic workload analysis, which is why > I asked if you'd seen this in production. > > For the record, we addressed much of the contention/races mentioned in > the above around scheduler resource consumption in the Ocata and Pike > releases of Nova. > > I'm aware that the report above identifies the quota handling code in > Nova as the primary culprit of the deadlock issues but again, it's a > synthetic workload that is designed to find breaking points. It doesn't > represent a realistic production workload. 
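For context on the pattern at issue -- a rough pseudo-DB-API sketch, not the actual Nova code -- the difference between the deadlock-prone locking read and the compare-and-swap rewrite looks like this:

    def consume_locking(conn, project_id, requested):
        # Pessimistic locking: SELECT ... FOR UPDATE takes row locks that
        # Galera's cluster-wide optimistic certification turns into
        # deadlock errors under write contention.
        row = conn.execute(
            "SELECT in_use FROM quota_usages "
            "WHERE project_id = %s FOR UPDATE", (project_id,)).fetchone()
        conn.execute(
            "UPDATE quota_usages SET in_use = %s WHERE project_id = %s",
            (row.in_use + requested, project_id))

    def consume_cas(conn, project_id, requested):
        # Compare-and-swap: read without locking, then make the UPDATE
        # conditional on the value being unchanged; zero rows matched
        # means we raced another writer and should re-read and retry.
        row = conn.execute(
            "SELECT in_use FROM quota_usages WHERE project_id = %s",
            (project_id,)).fetchone()
        result = conn.execute(
            "UPDATE quota_usages SET in_use = %s "
            "WHERE project_id = %s AND in_use = %s",
            (row.in_use + requested, project_id, row.in_use))
        return result.rowcount == 1  # False -> caller retries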
> > You can read about the deadlock issue in depth on my blog here: > > http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/ > > > That explains where the source of the problem comes from (it's the use > of SELECT FOR UPDATE, which has been removed from Nova's quota-handling > code in the Rocky release). Thank you very much for your link. Great article!!! Took my a while to read it and understand everything but helped a lot!!! > >> If just Nova is affected we could also create an additional HAProxy >> listener using all Galera nodes with round-robin for all other services? > > I fail to see the point of using Galera with a single writer. At that > point, why bother with Galera at all? Just use a single database node > with a single slave for backup purposes. From my point of view Galera is easy to manage and great for HA. Having to handle a manual failover in production with mysql master/slave never was fun... Indeed writing to a single node and not using the other nodes (even for read, like it is done in kolla-ansible) is not the best solution. Galera is slower than a standalone MySQL. Using ProxySQL would enable us to use caching and read/write split to speed up database queries while HA and management are still good. > >> Anyway - proxySQL would be a great extension. > > I don't disagree that proxySQL is a good extension. However, it adds yet > another services to the mesh that needs to be deployed, configured and > maintained. True. I guess we will start with an external MySQL installation to collect some experience. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From sylvain.bauza at gmail.com Tue Oct 9 15:32:12 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 9 Oct 2018 17:32:12 +0200 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539097728.11166.8@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097728.11166.8@smtp.office365.com> Message-ID: Le mar. 9 oct. 2018 à 17:09, Balázs Gibizer a écrit : > > > On Tue, Oct 9, 2018 at 4:56 PM, Sylvain Bauza > wrote: > > > > > > Le mar. 9 oct. 2018 à 16:39, Eric Fried a > > écrit : > >> IIUC, the primary thing the force flag was intended to do - allow an > >> instance to land on the requested destination even if that means > >> oversubscription of the host's resources - doesn't happen anymore > >> since > >> we started making the destination claim in placement. > >> > >> IOW, since pike, you don't actually see a difference in behavior by > >> using the force flag or not. (If you do, it's more likely a bug than > >> what you were expecting.) > >> > >> So there's no reason to keep it around. We can remove it in a new > >> microversion (or not); but even in the current microversion we need > >> not > >> continue making convoluted attempts to observe it. > >> > >> What that means is that we should simplify everything down to ignore > >> the > >> force flag and always call GET /a_c. Problem solved - for nested > >> and/or > >> sharing, NUMA or not, root resources or no, on the source and/or > >> destination. 
> >> > > > > > > While I tend to agree with Eric here (and I commented on the review > > accordingly by saying we should signal the new behaviour by a > > microversion), I still think we need to properly advertise this, > > adding openstack-operators@ accordingly. > > Question for you as well: if we remove (or change) the force flag in a > new microversion then how should the old microversions behave when > nested allocations would be required? > > In that case (ie. old microversions with either "force=None and target" or 'force=True', we should IMHO not allocate any migration. Thoughts ? > Cheers, > gibi > > > Disclaimer : since we have gaps on OSC, the current OSC behaviour > > when you "openstack server live-migrate " is to *force* the > > destination by not calling the scheduler. Yeah, it sucks. > > > > Operators, what are the exact cases (for those running clouds newer > > than Mitaka, ie. Newton and above) when you make use of the --force > > option for live migration with a microversion newer or equal 2.29 ? > > In general, even in the case of an emergency, you still want to make > > sure you don't throw your compute under the bus by massively > > migrating instances that would create an undetected snowball effect > > by having this compute refusing new instances. Or are you disabling > > the target compute service first and throw your pet instances up > > there ? > > > > -Sylvain > > > > > > > >> -efried > >> > >> On 10/09/2018 04:40 AM, Balázs Gibizer wrote: > >> > Hi, > >> > > >> > Setup > >> > ----- > >> > > >> > nested allocation: an allocation that contains resources from one > >> or > >> > more nested RPs. (if you have better term for this then please > >> suggest). > >> > > >> > If an instance has nested allocation it means that the compute, it > >> > allocates from, has a nested RP tree. BUT if a compute has a > >> nested RP > >> > tree it does not automatically means that the instance, allocating > >> from > >> > that compute, has a nested allocation (e.g. bandwidth inventory > >> will be > >> > on a nested RPs but not every instance will require bandwidth) > >> > > >> > Afaiu, as soon as we have NUMA modelling in place the most trivial > >> > servers will have nested allocations as CPU and MEMORY inverntory > >> will > >> > be moved to the nested NUMA RPs. But NUMA is still in the future. > >> > > >> > Sidenote: there is an edge case reported by bauzas when an instance > >> > allocates _only_ from nested RPs. This was discussed on last > >> Friday and > >> > it resulted in a new patch[0] but I would like to keep that > >> discussion > >> > separate from this if possible. > >> > > >> > Sidenote: the current problem somewhat related to not just nested > >> PRs > >> > but to sharing RPs as well. However I'm not aiming to implement > >> sharing > >> > support in Nova right now so I also try to keep the sharing > >> disscussion > >> > separated if possible. > >> > > >> > There was already some discussion on the Monday's scheduler > >> meeting but > >> > I could not attend. > >> > > >> > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 > >> > > >> > > >> > The meat > >> > -------- > >> > > >> > Both live-migrate[1] and evacuate[2] has an optional force flag on > >> the > >> > nova REST API. The documentation says: "Force by not > >> > verifying the provided destination host by the scheduler." 
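For reference, this is the request shape under discussion. A forced live
migration is a POST like the following; the host name is a placeholder,
and the force field is only accepted from microversion 2.30 on for live
migration (2.29 for evacuate):

    POST /servers/{server_id}/action
    {
        "os-migrateLive": {
            "host": "dest-compute-01",
            "block_migration": "auto",
            "force": true
        }
    }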
> >> > > >> > Nova implements this statement by not calling the scheduler if > >> > force=True BUT still try to manage allocations in placement. > >> > > >> > To have allocation on the destination host Nova blindly copies the > >> > instance allocation from the source host to the destination host > >> during > >> > these operations. Nova can do that as 1) the whole allocation is > >> > against a single RP (the compute RP) and 2) Nova knows both the > >> source > >> > compute RP and the destination compute RP. > >> > > >> > However as soon as we bring nested allocations into the picture > >> that > >> > blind copy will not be feasible. Possible cases > >> > 0) The instance has non-nested allocation on the source and would > >> need > >> > non nested allocation on the destination. This works with blindy > >> copy > >> > today. > >> > 1) The instance has a nested allocation on the source and would > >> need a > >> > nested allocation on the destination as well. > >> > 2) The instance has a non-nested allocation on the source and would > >> > need a nested allocation on the destination. > >> > 3) The instance has a nested allocation on the source and would > >> need a > >> > non nested allocation on the destination. > >> > > >> > Nova cannot generate nested allocations easily without > >> reimplementing > >> > some of the placement allocation candidate (a_c) code. However I > >> don't > >> > like the idea of duplicating some of the a_c code in Nova. > >> > > >> > Nova cannot detect what kind of allocation (nested or non-nested) > >> an > >> > instance would need on the destination without calling placement > >> a_c. > >> > So knowing when to call placement is a chicken and egg problem. > >> > > >> > Possible solutions: > >> > A) fail fast > >> > ------------ > >> > 0) Nova can detect that the source allocatioin is non-nested and > >> try > >> > the blindy copy and it will succeed. > >> > 1) Nova can detect that the source allocaton is nested and fail the > >> > operation > >> > 2) Nova only sees a non nested source allocation. Even if the dest > >> RP > >> > tree is nested it does not mean that the allocation will be > >> nested. We > >> > cannot fail fast. Nova can try the blind copy and allocate every > >> > resources from the root RP of the destination. If the instance > >> require > >> > nested allocation instead the claim will fail in placement. So > >> nova can > >> > fail the operation a bit later than in 1). > >> > 3) Nova can detect that the source allocation is nested and fail > >> the > >> > operation. However and enhanced blind copy that tries to allocation > >> > everything from the root RP on the destinaton would have worked. > >> > > >> > B) Guess when to ignore the force flag and call the scheduler > >> > ------------------------------------------------------------- > >> > 0) keep the blind copy as it works > >> > 1) Nova detect that the source allocation is nested. Ignores the > >> force > >> > flag and calls the scheduler that will call placement a_c. Move > >> > operation can succeed. > >> > 2) Nova only sees a non nested source allocation so it will fall > >> back > >> > to blind copy and fails at the claim on destination. > >> > 3) Nova detect that the source allocation is nested. Ignores the > >> force > >> > flag and calls the scheduler that will call placement a_c. Move > >> > operation can succeed. > >> > > >> > This solution would be against the API doc that states nova does > >> not > >> > call the scheduler if the operation is forced. 
However in case of > >> force > >> > live-migration Nova already verifies the target host from couple of > >> > perspective in [3]. > >> > This solution is alreay proposed for live-migrate in [4] and for > >> > evacuate in [5] so the complexity of the solution can be seen in > >> the > >> > reviews. > >> > > >> > C) Remove the force flag from the API in a new microversion > >> > ----------------------------------------------------------- > >> > 0)-3): all cases would call the scheduler to verify the target > >> host and > >> > generate the nested (or non-nested) allocation. > >> > We would still need an agreed behavior (from A), B), D)) for the > >> old > >> > microversions as the todays code creates inconsistent allocation > >> in #1) > >> > and #3) by ignoring the resource from the nested RP. > >> > > >> > D) Do not manage allocations in placement for forced operation > >> > -------------------------------------------------------------- > >> > Force flag is considered as a last resort tool for the admin to > >> move > >> > VMs around. The API doc has a fat warning about the danger of it. > >> So > >> > Nova can simply ignore resource allocation task if force=True. Nova > >> > would delete the source allocation and does not create any > >> allocation > >> > on the destination host. > >> > > >> > This is a simple but dangerous solution but it is what the force > >> flag > >> > is all about, move the server against all the built in safeties. > >> (If > >> > the admin needs the safeties she can set force=False and still > >> specify > >> > the destination host) > >> > > >> > I'm open to any suggestions. > >> > > >> > Cheers, > >> > gibi > >> > > >> > [0] https://review.openstack.org/#/c/608298/ > >> > [1] > >> > > >> > https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action > >> > [2] > >> > > >> > https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action > >> > [3] > >> > > >> > https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 > >> > [4] https://review.openstack.org/#/c/605785 > >> > [5] https://review.openstack.org/#/c/606111 > >> > > >> > > >> > > >> > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sylvain.bauza at gmail.com Tue Oct 9 15:33:19 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 9 Oct 2018 17:33:19 +0200 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> Message-ID: Shit, I forgot to add openstack-operators at ... Operators, see my question for you here : > Le mar. 9 oct. 2018 à 16:39, Eric Fried a écrit : > >> IIUC, the primary thing the force flag was intended to do - allow an >> instance to land on the requested destination even if that means >> oversubscription of the host's resources - doesn't happen anymore since >> we started making the destination claim in placement. >> >> IOW, since pike, you don't actually see a difference in behavior by >> using the force flag or not. (If you do, it's more likely a bug than >> what you were expecting.) >> >> So there's no reason to keep it around. We can remove it in a new >> microversion (or not); but even in the current microversion we need not >> continue making convoluted attempts to observe it. >> >> What that means is that we should simplify everything down to ignore the >> force flag and always call GET /a_c. Problem solved - for nested and/or >> sharing, NUMA or not, root resources or no, on the source and/or >> destination. >> >> > > While I tend to agree with Eric here (and I commented on the review > accordingly by saying we should signal the new behaviour by a > microversion), I still think we need to properly advertise this, adding > openstack-operators@ accordingly. > Disclaimer : since we have gaps on OSC, the current OSC behaviour when you > "openstack server live-migrate " is to *force* the destination by > not calling the scheduler. Yeah, it sucks. > > Operators, what are the exact cases (for those running clouds newer than > Mitaka, ie. Newton and above) when you make use of the --force option for > live migration with a microversion newer or equal 2.29 ? > In general, even in the case of an emergency, you still want to make sure > you don't throw your compute under the bus by massively migrating instances > that would create an undetected snowball effect by having this compute > refusing new instances. Or are you disabling the target compute service > first and throw your pet instances up there ? > > -Sylvain > > > > -efried >> >> On 10/09/2018 04:40 AM, Balázs Gibizer wrote: >> > Hi, >> > >> > Setup >> > ----- >> > >> > nested allocation: an allocation that contains resources from one or >> > more nested RPs. (if you have better term for this then please suggest). >> > >> > If an instance has nested allocation it means that the compute, it >> > allocates from, has a nested RP tree. BUT if a compute has a nested RP >> > tree it does not automatically means that the instance, allocating from >> > that compute, has a nested allocation (e.g. bandwidth inventory will be >> > on a nested RPs but not every instance will require bandwidth) >> > >> > Afaiu, as soon as we have NUMA modelling in place the most trivial >> > servers will have nested allocations as CPU and MEMORY inverntory will >> > be moved to the nested NUMA RPs. But NUMA is still in the future. >> > >> > Sidenote: there is an edge case reported by bauzas when an instance >> > allocates _only_ from nested RPs. This was discussed on last Friday and >> > it resulted in a new patch[0] but I would like to keep that discussion >> > separate from this if possible. 
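To make "nested allocation" concrete: once bandwidth (or NUMA) inventories
live on child providers, an instance's allocation in placement looks
roughly like the structure below. The UUID placeholders and the bandwidth
resource class name are illustrative, since the bandwidth modelling was
still in flight at this point:

    {
        "allocations": {
            "<compute node root RP uuid>": {
                "resources": {"VCPU": 1, "MEMORY_MB": 2048}
            },
            "<NIC child RP uuid>": {
                "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}
            }
        }
    }

The blind copy described below breaks on exactly this shape, because the
second entry has no obvious counterpart on the destination host.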
>> > >> > Sidenote: the current problem somewhat related to not just nested PRs >> > but to sharing RPs as well. However I'm not aiming to implement sharing >> > support in Nova right now so I also try to keep the sharing disscussion >> > separated if possible. >> > >> > There was already some discussion on the Monday's scheduler meeting but >> > I could not attend. >> > >> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >> > >> > >> > The meat >> > -------- >> > >> > Both live-migrate[1] and evacuate[2] has an optional force flag on the >> > nova REST API. The documentation says: "Force by not >> > verifying the provided destination host by the scheduler." >> > >> > Nova implements this statement by not calling the scheduler if >> > force=True BUT still try to manage allocations in placement. >> > >> > To have allocation on the destination host Nova blindly copies the >> > instance allocation from the source host to the destination host during >> > these operations. Nova can do that as 1) the whole allocation is >> > against a single RP (the compute RP) and 2) Nova knows both the source >> > compute RP and the destination compute RP. >> > >> > However as soon as we bring nested allocations into the picture that >> > blind copy will not be feasible. Possible cases >> > 0) The instance has non-nested allocation on the source and would need >> > non nested allocation on the destination. This works with blindy copy >> > today. >> > 1) The instance has a nested allocation on the source and would need a >> > nested allocation on the destination as well. >> > 2) The instance has a non-nested allocation on the source and would >> > need a nested allocation on the destination. >> > 3) The instance has a nested allocation on the source and would need a >> > non nested allocation on the destination. >> > >> > Nova cannot generate nested allocations easily without reimplementing >> > some of the placement allocation candidate (a_c) code. However I don't >> > like the idea of duplicating some of the a_c code in Nova. >> > >> > Nova cannot detect what kind of allocation (nested or non-nested) an >> > instance would need on the destination without calling placement a_c. >> > So knowing when to call placement is a chicken and egg problem. >> > >> > Possible solutions: >> > A) fail fast >> > ------------ >> > 0) Nova can detect that the source allocatioin is non-nested and try >> > the blindy copy and it will succeed. >> > 1) Nova can detect that the source allocaton is nested and fail the >> > operation >> > 2) Nova only sees a non nested source allocation. Even if the dest RP >> > tree is nested it does not mean that the allocation will be nested. We >> > cannot fail fast. Nova can try the blind copy and allocate every >> > resources from the root RP of the destination. If the instance require >> > nested allocation instead the claim will fail in placement. So nova can >> > fail the operation a bit later than in 1). >> > 3) Nova can detect that the source allocation is nested and fail the >> > operation. However and enhanced blind copy that tries to allocation >> > everything from the root RP on the destinaton would have worked. >> > >> > B) Guess when to ignore the force flag and call the scheduler >> > ------------------------------------------------------------- >> > 0) keep the blind copy as it works >> > 1) Nova detect that the source allocation is nested. Ignores the force >> > flag and calls the scheduler that will call placement a_c. 
Move >> > operation can succeed. >> > 2) Nova only sees a non nested source allocation so it will fall back >> > to blind copy and fails at the claim on destination. >> > 3) Nova detect that the source allocation is nested. Ignores the force >> > flag and calls the scheduler that will call placement a_c. Move >> > operation can succeed. >> > >> > This solution would be against the API doc that states nova does not >> > call the scheduler if the operation is forced. However in case of force >> > live-migration Nova already verifies the target host from couple of >> > perspective in [3]. >> > This solution is alreay proposed for live-migrate in [4] and for >> > evacuate in [5] so the complexity of the solution can be seen in the >> > reviews. >> > >> > C) Remove the force flag from the API in a new microversion >> > ----------------------------------------------------------- >> > 0)-3): all cases would call the scheduler to verify the target host and >> > generate the nested (or non-nested) allocation. >> > We would still need an agreed behavior (from A), B), D)) for the old >> > microversions as the todays code creates inconsistent allocation in #1) >> > and #3) by ignoring the resource from the nested RP. >> > >> > D) Do not manage allocations in placement for forced operation >> > -------------------------------------------------------------- >> > Force flag is considered as a last resort tool for the admin to move >> > VMs around. The API doc has a fat warning about the danger of it. So >> > Nova can simply ignore resource allocation task if force=True. Nova >> > would delete the source allocation and does not create any allocation >> > on the destination host. >> > >> > This is a simple but dangerous solution but it is what the force flag >> > is all about, move the server against all the built in safeties. (If >> > the admin needs the safeties she can set force=False and still specify >> > the destination host) >> > >> > I'm open to any suggestions. >> > >> > Cheers, >> > gibi >> > >> > [0] https://review.openstack.org/#/c/608298/ >> > [1] >> > >> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >> > [2] >> > >> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >> > [3] >> > >> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >> > [4] https://review.openstack.org/#/c/605785 >> > [5] https://review.openstack.org/#/c/606111 >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Tue Oct 9 15:35:09 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 9 Oct 2018 17:35:09 +0200 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> Message-ID: > Shit, I forgot to add openstack-operators at ... > Operators, see my question for you here : > > >> Le mar. 9 oct. 2018 à 16:39, Eric Fried a écrit : >> >>> IIUC, the primary thing the force flag was intended to do - allow an >>> instance to land on the requested destination even if that means >>> oversubscription of the host's resources - doesn't happen anymore since >>> we started making the destination claim in placement. >>> >>> IOW, since pike, you don't actually see a difference in behavior by >>> using the force flag or not. (If you do, it's more likely a bug than >>> what you were expecting.) >>> >>> So there's no reason to keep it around. We can remove it in a new >>> microversion (or not); but even in the current microversion we need not >>> continue making convoluted attempts to observe it. >>> >>> What that means is that we should simplify everything down to ignore the >>> force flag and always call GET /a_c. Problem solved - for nested and/or >>> sharing, NUMA or not, root resources or no, on the source and/or >>> destination. >>> >>> >> >> While I tend to agree with Eric here (and I commented on the review >> accordingly by saying we should signal the new behaviour by a >> microversion), I still think we need to properly advertise this, adding >> openstack-operators@ accordingly. >> Disclaimer : since we have gaps on OSC, the current OSC behaviour when >> you "openstack server live-migrate " is to *force* the destination >> by not calling the scheduler. Yeah, it sucks. >> >> Operators, what are the exact cases (for those running clouds newer than >> Mitaka, ie. Newton and above) when you make use of the --force option for >> live migration with a microversion newer or equal 2.29 ? >> In general, even in the case of an emergency, you still want to make sure >> you don't throw your compute under the bus by massively migrating instances >> that would create an undetected snowball effect by having this compute >> refusing new instances. Or are you disabling the target compute service >> first and throw your pet instances up there ? >> >> -Sylvain >> >> >> >> -efried >>> >>> On 10/09/2018 04:40 AM, Balázs Gibizer wrote: >>> > Hi, >>> > >>> > Setup >>> > ----- >>> > >>> > nested allocation: an allocation that contains resources from one or >>> > more nested RPs. (if you have better term for this then please >>> suggest). >>> > >>> > If an instance has nested allocation it means that the compute, it >>> > allocates from, has a nested RP tree. BUT if a compute has a nested RP >>> > tree it does not automatically means that the instance, allocating >>> from >>> > that compute, has a nested allocation (e.g. bandwidth inventory will >>> be >>> > on a nested RPs but not every instance will require bandwidth) >>> > >>> > Afaiu, as soon as we have NUMA modelling in place the most trivial >>> > servers will have nested allocations as CPU and MEMORY inverntory will >>> > be moved to the nested NUMA RPs. But NUMA is still in the future. >>> > >>> > Sidenote: there is an edge case reported by bauzas when an instance >>> > allocates _only_ from nested RPs. 
This was discussed on last Friday >>> and >>> > it resulted in a new patch[0] but I would like to keep that discussion >>> > separate from this if possible. >>> > >>> > Sidenote: the current problem somewhat related to not just nested PRs >>> > but to sharing RPs as well. However I'm not aiming to implement >>> sharing >>> > support in Nova right now so I also try to keep the sharing >>> disscussion >>> > separated if possible. >>> > >>> > There was already some discussion on the Monday's scheduler meeting >>> but >>> > I could not attend. >>> > >>> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >>> > >>> > >>> > The meat >>> > -------- >>> > >>> > Both live-migrate[1] and evacuate[2] has an optional force flag on the >>> > nova REST API. The documentation says: "Force by not >>> > verifying the provided destination host by the scheduler." >>> > >>> > Nova implements this statement by not calling the scheduler if >>> > force=True BUT still try to manage allocations in placement. >>> > >>> > To have allocation on the destination host Nova blindly copies the >>> > instance allocation from the source host to the destination host >>> during >>> > these operations. Nova can do that as 1) the whole allocation is >>> > against a single RP (the compute RP) and 2) Nova knows both the source >>> > compute RP and the destination compute RP. >>> > >>> > However as soon as we bring nested allocations into the picture that >>> > blind copy will not be feasible. Possible cases >>> > 0) The instance has non-nested allocation on the source and would need >>> > non nested allocation on the destination. This works with blindy copy >>> > today. >>> > 1) The instance has a nested allocation on the source and would need a >>> > nested allocation on the destination as well. >>> > 2) The instance has a non-nested allocation on the source and would >>> > need a nested allocation on the destination. >>> > 3) The instance has a nested allocation on the source and would need a >>> > non nested allocation on the destination. >>> > >>> > Nova cannot generate nested allocations easily without reimplementing >>> > some of the placement allocation candidate (a_c) code. However I don't >>> > like the idea of duplicating some of the a_c code in Nova. >>> > >>> > Nova cannot detect what kind of allocation (nested or non-nested) an >>> > instance would need on the destination without calling placement a_c. >>> > So knowing when to call placement is a chicken and egg problem. >>> > >>> > Possible solutions: >>> > A) fail fast >>> > ------------ >>> > 0) Nova can detect that the source allocatioin is non-nested and try >>> > the blindy copy and it will succeed. >>> > 1) Nova can detect that the source allocaton is nested and fail the >>> > operation >>> > 2) Nova only sees a non nested source allocation. Even if the dest RP >>> > tree is nested it does not mean that the allocation will be nested. We >>> > cannot fail fast. Nova can try the blind copy and allocate every >>> > resources from the root RP of the destination. If the instance require >>> > nested allocation instead the claim will fail in placement. So nova >>> can >>> > fail the operation a bit later than in 1). >>> > 3) Nova can detect that the source allocation is nested and fail the >>> > operation. However and enhanced blind copy that tries to allocation >>> > everything from the root RP on the destinaton would have worked. 
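For what it's worth, the "Nova can detect" checks above are cheap: an
allocation is nested exactly when it names any provider besides the
compute node's root RP. A minimal Python sketch -- placement_client and
compute_node_rp_uuid are illustrative names, not actual Nova internals:

    # GET /allocations/{consumer_uuid} returns one entry per resource provider
    resp = placement_client.get('/allocations/%s' % instance.uuid)
    allocations = resp.json()['allocations']
    # nested iff any resources live on a provider other than the root compute RP
    is_nested = set(allocations) != {compute_node_rp_uuid}

(A sharing provider would trip the same check, which is one more reason the
sharing discussion keeps leaking into this one.)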
>>> > >>> > B) Guess when to ignore the force flag and call the scheduler >>> > ------------------------------------------------------------- >>> > 0) keep the blind copy as it works >>> > 1) Nova detect that the source allocation is nested. Ignores the force >>> > flag and calls the scheduler that will call placement a_c. Move >>> > operation can succeed. >>> > 2) Nova only sees a non nested source allocation so it will fall back >>> > to blind copy and fails at the claim on destination. >>> > 3) Nova detect that the source allocation is nested. Ignores the force >>> > flag and calls the scheduler that will call placement a_c. Move >>> > operation can succeed. >>> > >>> > This solution would be against the API doc that states nova does not >>> > call the scheduler if the operation is forced. However in case of >>> force >>> > live-migration Nova already verifies the target host from couple of >>> > perspective in [3]. >>> > This solution is alreay proposed for live-migrate in [4] and for >>> > evacuate in [5] so the complexity of the solution can be seen in the >>> > reviews. >>> > >>> > C) Remove the force flag from the API in a new microversion >>> > ----------------------------------------------------------- >>> > 0)-3): all cases would call the scheduler to verify the target host >>> and >>> > generate the nested (or non-nested) allocation. >>> > We would still need an agreed behavior (from A), B), D)) for the old >>> > microversions as the todays code creates inconsistent allocation in >>> #1) >>> > and #3) by ignoring the resource from the nested RP. >>> > >>> > D) Do not manage allocations in placement for forced operation >>> > -------------------------------------------------------------- >>> > Force flag is considered as a last resort tool for the admin to move >>> > VMs around. The API doc has a fat warning about the danger of it. So >>> > Nova can simply ignore resource allocation task if force=True. Nova >>> > would delete the source allocation and does not create any allocation >>> > on the destination host. >>> > >>> > This is a simple but dangerous solution but it is what the force flag >>> > is all about, move the server against all the built in safeties. (If >>> > the admin needs the safeties she can set force=False and still specify >>> > the destination host) >>> > >>> > I'm open to any suggestions. 
>>> > >>> > Cheers, >>> > gibi >>> > >>> > [0] https://review.openstack.org/#/c/608298/ >>> > [1] >>> > >>> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >>> > [2] >>> > >>> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >>> > [3] >>> > >>> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >>> > [4] https://review.openstack.org/#/c/605785 >>> > [5] https://review.openstack.org/#/c/606111 >>> > >>> > >>> > >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Oct 9 15:37:26 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 9 Oct 2018 10:37:26 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> Message-ID: On 10/9/18 10:19 AM, Doug Hellmann wrote: > Brian Rosmaita writes: > >> On 10/8/18 11:59 PM, Matthew Thode wrote: >>> several projects have had problems with the new release, some have ways >>> of working around it, and some do not. I'm sending this just to raise >>> the issue and allow a place to discuss solutions. >>> >>> Currently there is a review proposed to blacklist 9.0.0, but if this is >>> going to still be an issue somehow in further releases we may need >>> another solution. >>> >>> https://review.openstack.org/#/c/608835/ >> >> As indicated in the commit message on the above patch, 9.0.0 contains a >> bug that's been fixed in oslo.messaging master, so I don't think there's >> any question that 9.0.0 has to be blacklisted. > > I've proposed releasing oslo.messaging 9.0.1 in > https://review.openstack.org/609030 I also included it in https://review.openstack.org/#/c/609031/ (which I see you found). > > If we don't land the constraint update to allow 9.0.1 in, then there's > no rush to blacklist anything, is there? Probably not. We'll want to blacklist it before we allow 9.0.1, but I suspect this is mostly a test problem since in production the transport would have to be set explicitly. > >> As far as the timing/content of 9.0.1, however, that may require further >> discussion. >> >> (In other words, I'm saying that when you say 'another solution', my >> position is that we should take 'another' to mean 'additional', not >> 'different'.) > > I'm not sure what that means. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From balazs.gibizer at ericsson.com Tue Oct 9 15:44:40 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Tue, 9 Oct 2018 15:44:40 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097728.11166.8@smtp.office365.com> Message-ID: <1539099876.11166.10@smtp.office365.com> On Tue, Oct 9, 2018 at 5:32 PM, Sylvain Bauza wrote: > > > Le mar. 9 oct. 2018 à 17:09, Balázs Gibizer > a écrit : >> >> >> On Tue, Oct 9, 2018 at 4:56 PM, Sylvain Bauza >> >> wrote: >> > >> > >> > Le mar. 9 oct. 2018 à 16:39, Eric Fried a >> > écrit : >> >> IIUC, the primary thing the force flag was intended to do - allow >> an >> >> instance to land on the requested destination even if that means >> >> oversubscription of the host's resources - doesn't happen anymore >> >> since >> >> we started making the destination claim in placement. >> >> >> >> IOW, since pike, you don't actually see a difference in behavior >> by >> >> using the force flag or not. (If you do, it's more likely a bug >> than >> >> what you were expecting.) >> >> >> >> So there's no reason to keep it around. We can remove it in a new >> >> microversion (or not); but even in the current microversion we >> need >> >> not >> >> continue making convoluted attempts to observe it. >> >> >> >> What that means is that we should simplify everything down to >> ignore >> >> the >> >> force flag and always call GET /a_c. Problem solved - for nested >> >> and/or >> >> sharing, NUMA or not, root resources or no, on the source and/or >> >> destination. >> >> >> > >> > >> > While I tend to agree with Eric here (and I commented on the review >> > accordingly by saying we should signal the new behaviour by a >> > microversion), I still think we need to properly advertise this, >> > adding openstack-operators@ accordingly. >> >> Question for you as well: if we remove (or change) the force flag in >> a >> new microversion then how should the old microversions behave when >> nested allocations would be required? >> > > In that case (ie. old microversions with either "force=None and > target" or 'force=True', we should IMHO not allocate any migration. > Thoughts ? Do you mean on old microversions implement option #D) ? Cheers, gibi > >> Cheers, >> gibi >> >> > Disclaimer : since we have gaps on OSC, the current OSC behaviour >> > when you "openstack server live-migrate " is to *force* the >> > destination by not calling the scheduler. Yeah, it sucks. >> > >> > Operators, what are the exact cases (for those running clouds newer >> > than Mitaka, ie. Newton and above) when you make use of the --force >> > option for live migration with a microversion newer or equal 2.29 ? >> > In general, even in the case of an emergency, you still want to >> make >> > sure you don't throw your compute under the bus by massively >> > migrating instances that would create an undetected snowball effect >> > by having this compute refusing new instances. Or are you disabling >> > the target compute service first and throw your pet instances up >> > there ? 
>> > >> > -Sylvain >> > >> > >> > >> >> -efried >> >> >> >> On 10/09/2018 04:40 AM, Balázs Gibizer wrote: >> >> > Hi, >> >> > >> >> > Setup >> >> > ----- >> >> > >> >> > nested allocation: an allocation that contains resources from >> one >> >> or >> >> > more nested RPs. (if you have better term for this then please >> >> suggest). >> >> > >> >> > If an instance has nested allocation it means that the compute, >> it >> >> > allocates from, has a nested RP tree. BUT if a compute has a >> >> nested RP >> >> > tree it does not automatically means that the instance, >> allocating >> >> from >> >> > that compute, has a nested allocation (e.g. bandwidth inventory >> >> will be >> >> > on a nested RPs but not every instance will require bandwidth) >> >> > >> >> > Afaiu, as soon as we have NUMA modelling in place the most >> trivial >> >> > servers will have nested allocations as CPU and MEMORY >> inverntory >> >> will >> >> > be moved to the nested NUMA RPs. But NUMA is still in the >> future. >> >> > >> >> > Sidenote: there is an edge case reported by bauzas when an >> instance >> >> > allocates _only_ from nested RPs. This was discussed on last >> >> Friday and >> >> > it resulted in a new patch[0] but I would like to keep that >> >> discussion >> >> > separate from this if possible. >> >> > >> >> > Sidenote: the current problem somewhat related to not just >> nested >> >> PRs >> >> > but to sharing RPs as well. However I'm not aiming to implement >> >> sharing >> >> > support in Nova right now so I also try to keep the sharing >> >> disscussion >> >> > separated if possible. >> >> > >> >> > There was already some discussion on the Monday's scheduler >> >> meeting but >> >> > I could not attend. >> >> > >> >> >> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >> >> > >> >> > >> >> > The meat >> >> > -------- >> >> > >> >> > Both live-migrate[1] and evacuate[2] has an optional force flag >> on >> >> the >> >> > nova REST API. The documentation says: "Force by >> not >> >> > verifying the provided destination host by the scheduler." >> >> > >> >> > Nova implements this statement by not calling the scheduler if >> >> > force=True BUT still try to manage allocations in placement. >> >> > >> >> > To have allocation on the destination host Nova blindly copies >> the >> >> > instance allocation from the source host to the destination host >> >> during >> >> > these operations. Nova can do that as 1) the whole allocation is >> >> > against a single RP (the compute RP) and 2) Nova knows both the >> >> source >> >> > compute RP and the destination compute RP. >> >> > >> >> > However as soon as we bring nested allocations into the picture >> >> that >> >> > blind copy will not be feasible. Possible cases >> >> > 0) The instance has non-nested allocation on the source and >> would >> >> need >> >> > non nested allocation on the destination. This works with blindy >> >> copy >> >> > today. >> >> > 1) The instance has a nested allocation on the source and would >> >> need a >> >> > nested allocation on the destination as well. >> >> > 2) The instance has a non-nested allocation on the source and >> would >> >> > need a nested allocation on the destination. >> >> > 3) The instance has a nested allocation on the source and would >> >> need a >> >> > non nested allocation on the destination. >> >> > >> >> > Nova cannot generate nested allocations easily without >> >> reimplementing >> >> > some of the placement allocation candidate (a_c) code. 
However I >> >> don't >> >> > like the idea of duplicating some of the a_c code in Nova. >> >> > >> >> > Nova cannot detect what kind of allocation (nested or >> non-nested) >> >> an >> >> > instance would need on the destination without calling placement >> >> a_c. >> >> > So knowing when to call placement is a chicken and egg problem. >> >> > >> >> > Possible solutions: >> >> > A) fail fast >> >> > ------------ >> >> > 0) Nova can detect that the source allocatioin is non-nested and >> >> try >> >> > the blindy copy and it will succeed. >> >> > 1) Nova can detect that the source allocaton is nested and fail >> the >> >> > operation >> >> > 2) Nova only sees a non nested source allocation. Even if the >> dest >> >> RP >> >> > tree is nested it does not mean that the allocation will be >> >> nested. We >> >> > cannot fail fast. Nova can try the blind copy and allocate every >> >> > resources from the root RP of the destination. If the instance >> >> require >> >> > nested allocation instead the claim will fail in placement. So >> >> nova can >> >> > fail the operation a bit later than in 1). >> >> > 3) Nova can detect that the source allocation is nested and fail >> >> the >> >> > operation. However and enhanced blind copy that tries to >> allocation >> >> > everything from the root RP on the destinaton would have worked. >> >> > >> >> > B) Guess when to ignore the force flag and call the scheduler >> >> > ------------------------------------------------------------- >> >> > 0) keep the blind copy as it works >> >> > 1) Nova detect that the source allocation is nested. Ignores the >> >> force >> >> > flag and calls the scheduler that will call placement a_c. Move >> >> > operation can succeed. >> >> > 2) Nova only sees a non nested source allocation so it will fall >> >> back >> >> > to blind copy and fails at the claim on destination. >> >> > 3) Nova detect that the source allocation is nested. Ignores the >> >> force >> >> > flag and calls the scheduler that will call placement a_c. Move >> >> > operation can succeed. >> >> > >> >> > This solution would be against the API doc that states nova does >> >> not >> >> > call the scheduler if the operation is forced. However in case >> of >> >> force >> >> > live-migration Nova already verifies the target host from >> couple of >> >> > perspective in [3]. >> >> > This solution is alreay proposed for live-migrate in [4] and for >> >> > evacuate in [5] so the complexity of the solution can be seen in >> >> the >> >> > reviews. >> >> > >> >> > C) Remove the force flag from the API in a new microversion >> >> > ----------------------------------------------------------- >> >> > 0)-3): all cases would call the scheduler to verify the target >> >> host and >> >> > generate the nested (or non-nested) allocation. >> >> > We would still need an agreed behavior (from A), B), D)) for the >> >> old >> >> > microversions as the todays code creates inconsistent allocation >> >> in #1) >> >> > and #3) by ignoring the resource from the nested RP. >> >> > >> >> > D) Do not manage allocations in placement for forced operation >> >> > -------------------------------------------------------------- >> >> > Force flag is considered as a last resort tool for the admin to >> >> move >> >> > VMs around. The API doc has a fat warning about the danger of >> it. >> >> So >> >> > Nova can simply ignore resource allocation task if force=True. >> Nova >> >> > would delete the source allocation and does not create any >> >> allocation >> >> > on the destination host. 
>> >> > >> >> > This is a simple but dangerous solution but it is what the force >> >> flag >> >> > is all about, move the server against all the built in safeties. >> >> (If >> >> > the admin needs the safeties she can set force=False and still >> >> specify >> >> > the destination host) >> >> > >> >> > I'm open to any suggestions. >> >> > >> >> > Cheers, >> >> > gibi >> >> > >> >> > [0] https://review.openstack.org/#/c/608298/ >> >> > [1] >> >> > >> >> >> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >> >> > [2] >> >> > >> >> >> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >> >> > [3] >> >> > >> >> >> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >> >> > [4] https://review.openstack.org/#/c/605785 >> >> > [5] https://review.openstack.org/#/c/606111 >> >> > >> >> > >> >> > >> >> >> __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Oct 9 15:49:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 11:49:12 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <2b81862f-bedd-74c9-b10e-b51913eb8809@gmail.com> Message-ID: Ben Nemec writes: > On 10/9/18 10:19 AM, Doug Hellmann wrote: >> Brian Rosmaita writes: >> >>> On 10/8/18 11:59 PM, Matthew Thode wrote: >>>> several projects have had problems with the new release, some have ways >>>> of working around it, and some do not. I'm sending this just to raise >>>> the issue and allow a place to discuss solutions. >>>> >>>> Currently there is a review proposed to blacklist 9.0.0, but if this is >>>> going to still be an issue somehow in further releases we may need >>>> another solution. >>>> >>>> https://review.openstack.org/#/c/608835/ >>> >>> As indicated in the commit message on the above patch, 9.0.0 contains a >>> bug that's been fixed in oslo.messaging master, so I don't think there's >>> any question that 9.0.0 has to be blacklisted. >> >> I've proposed releasing oslo.messaging 9.0.1 in >> https://review.openstack.org/609030 > > I also included it in https://review.openstack.org/#/c/609031/ (which I > see you found). Yeah, I abandoned my separate patch to do the same in favor of your omnibus patch. >> If we don't land the constraint update to allow 9.0.1 in, then there's >> no rush to blacklist anything, is there? > > Probably not. 
We'll want to blacklist it before we allow 9.0.1, but I > suspect this is mostly a test problem since in production the transport > would have to be set explicitly. > >> >>> As far as the timing/content of 9.0.1, however, that may require further >>> discussion. >>> >>> (In other words, I'm saying that when you say 'another solution', my >>> position is that we should take 'another' to mean 'additional', not >>> 'different'.) >> >> I'm not sure what that means. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> From lbragstad at gmail.com Tue Oct 9 15:50:26 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 9 Oct 2018 10:50:26 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> Message-ID: On Tue, Oct 9, 2018 at 10:31 AM Ben Nemec wrote: > > > On 10/9/18 9:06 AM, Lance Bragstad wrote: > > Keystone is failing because it's missing a fix from oslo.messaging [0]. > > That said, keystone is also relying on an internal implementation detail > > in oslo.messaging by mocking it in tests [1]. The notification work has > > been around in keystone for a *long* time, but it's apparent that we > > should revisit these tests to make sure we aren't testing something that > > is already tested by oslo.messaging if we're mocking internal > > implementation details of a library. > > This is actually the same problem Cinder and Glance had, it's just being > hidden because there is an exception handler in Keystone that buried the > original exception message in log output. 9.0.1 will get Keystone > working too. > > But mocking library internals is still naughty and you should stop that. > :-P > Agreed. I have a note to investigate and see if I can rip those bits out or rewrite them. > > > > > Regardless, blacklisting version 9.0.0 will work for keystone, but we > > can work around it another way by either rewriting the tests to not care > > about oslo.messaging specifics, or removing them if they're obsolete. > > > > [0] https://review.openstack.org/#/c/608196/ > > [1] > > > https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/common/test_notifications.py#n1343 > > > > On Mon, Oct 8, 2018 at 10:59 PM Matthew Thode > > wrote: > > > > several projects have had problems with the new release, some have > ways > > of working around it, and some do not. I'm sending this just to > raise > > the issue and allow a place to discuss solutions. > > > > Currently there is a review proposed to blacklist 9.0.0, but if this > is > > going to still be an issue somehow in further releases we may need > > another solution. 
> > > > https://review.openstack.org/#/c/608835/ > > > > -- > > Matthew Thode (prometheanfire) > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Oct 9 15:56:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 11:56:47 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: Matthew Thode writes: > On 18-10-09 11:12:30, Doug Hellmann wrote: >> Matthew Thode writes: >> >> > several projects have had problems with the new release, some have ways >> > of working around it, and some do not. I'm sending this just to raise >> > the issue and allow a place to discuss solutions. >> > >> > Currently there is a review proposed to blacklist 9.0.0, but if this is >> > going to still be an issue somehow in further releases we may need >> > another solution. >> > >> > https://review.openstack.org/#/c/608835/ >> > >> > -- >> > Matthew Thode (prometheanfire) >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> Do you have links to the failure logs or bug reports or something? If I >> wanted to help I wouldn't even know where to start. >> > > http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz These failures look like we should add a proper API to oslo.messaging to set the notification and rpc backends for testing. The configuration options are *not* part of the API of the library. There is already an oslo_messaging.conffixture module with a fixture class, but it looks like it defaults to rabbit. Maybe someone wants to propose a patch to make that a parameter to the constructor? > http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz These failures should be fixed by releasing the patch that Mehdi provided that ensures there is a valid default transport configured. > http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz Lance has already described these as mocking implementation details of the library. I expect we'll need someone with keystone experience to work out what the best solution is to do there. 
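To illustrate the fixture discussion above: a consuming project can already
stay off the configuration options by using the fixture and overriding its
transport_url attribute, roughly as below (the surrounding test base class
is illustrative; the constructor parameter shown last is the proposal, not
an existing signature):

    import testtools

    from oslo_config import cfg
    from oslo_messaging import conffixture

    class BaseTestCase(testtools.TestCase):
        def setUp(self):
            super(BaseTestCase, self).setUp()
            # today: construct, then override to use the in-memory fake driver
            self.messaging_conf = self.useFixture(
                conffixture.ConfFixture(cfg.CONF))
            self.messaging_conf.transport_url = 'fake:/'

    # proposed: conffixture.ConfFixture(cfg.CONF, transport_url='fake:/')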
> > --
> > Matthew Thode (prometheanfire)
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From miguel at mlavalle.com Tue Oct 9 16:08:12 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Tue, 9 Oct 2018 11:08:12 -0500
Subject: [openstack-dev] [neutron][stable] Stable Core Team Update
In-Reply-To: <20181003133122.tqrn5b7sbbviutbu@bishop>
References: <20181003133122.tqrn5b7sbbviutbu@bishop>
Message-ID:

Hi Stable Team,

Since it has been more than a week since this nomination was posted and we
have received only positive feedback, can we move ahead and add Bernard
Cafarelli to the Neutron stable core team?

Thanks and regards

Miguel

On Wed, Oct 3, 2018 at 8:32 AM Nate Johnston wrote:

> On Tue, Oct 02, 2018 at 10:41:58AM -0500, Miguel Lavalle wrote:
>
> > I want to nominate Bernard Cafarelli as a stable core reviewer for Neutron
> > and related projects. Bernard has been increasing the number of stable
> > reviews he is doing for the project [1]. Besides that, he is a stable
> > maintainer downstream for his employer (Red Hat), so he can bring that
> > valuable experience to the Neutron stable team.
>
> I'm not on the stable team, but an enthusiastic +1 from me!
>
> Nate
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Kevin.Fox at pnnl.gov Tue Oct 9 16:04:39 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 9 Oct 2018 16:04:39 +0000
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault,
	fabio and FQDN endpoints
In-Reply-To:
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch>,
Message-ID: <1A3C52DFCD06494D8528644858247BF01C1F62C7@EX10MBOX03.pnnl.gov>

There are specific cases where Galera expects the client to retry, and not
all code tests for that case. It's safe to funnel all traffic to one
server; it can be unsafe to do so otherwise.

Thanks,
Kevin
________________________________________
From: Jay Pipes [jaypipes at gmail.com]
Sent: Monday, October 08, 2018 10:48 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

On 10/08/2018 06:14 AM, Florian Engelmann wrote:
> 3. HAProxy is not capable to handle "read/write" split with Galera. I
> would like to introduce ProxySQL to be able to scale Galera.

Why not send all read and all write traffic to a single haproxy endpoint
and just have haproxy spread all traffic across each Galera node?

Galera, after all, is multi-master synchronous replication... so it
shouldn't matter which node in the Galera cluster you send traffic to.
-jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Tue Oct 9 16:09:01 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 9 Oct 2018 16:09:01 +0000 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> etcd is an already approved openstack dependency. Could that be used instead of consul so as to not add yet another storage system? coredns with the https://coredns.io/plugins/etcd/ plugin would maybe do what you need? Thanks, Kevin ________________________________________ From: Florian Engelmann [florian.engelmann at everyware.ch] Sent: Monday, October 08, 2018 3:14 AM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints Hi, I would like to start a discussion about some changes and additions I would like to see in in kolla and kolla-ansible. 1. Keepalived is a problem in layer3 spine leaf networks as any floating IP can only exist in one leaf (and VRRP is a problem in layer3). I would like to use consul and registrar to get rid of the "internal" floating IP and use consuls DNS service discovery to connect all services with each other. 2. Using "ports" for external API (endpoint) access is a major headache if a firewall is involved. I would like to configure the HAProxy (or fabio) for the external access to use "Host:" like, eg. "Host: keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. Any customer would just need HTTPS access and not have to open all those ports in his firewall. For some enterprise customers it is not possible to request FW changes like that. 3. HAProxy is not capable to handle "read/write" split with Galera. I would like to introduce ProxySQL to be able to scale Galera. 4. HAProxy is fine but fabio integrates well with consul, statsd and could be connected to a vault cluster to manage secure certificate access. 5. I would like to add vault as Barbican backend. 6. I would like to add an option to enable tokenless authentication for all services with each other to get rid of all the openstack service passwords (security issue). What do you think about it? All the best, Florian From kgiusti at gmail.com Tue Oct 9 16:19:59 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 9 Oct 2018 12:19:59 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: On Tue, Oct 9, 2018 at 11:56 AM Doug Hellmann wrote: > Matthew Thode writes: > > > On 18-10-09 11:12:30, Doug Hellmann wrote: > >> Matthew Thode writes: > >> > >> > several projects have had problems with the new release, some have > ways > >> > of working around it, and some do not. I'm sending this just to raise > >> > the issue and allow a place to discuss solutions. 
> >> > > >> > Currently there is a review proposed to blacklist 9.0.0, but if this > is > >> > going to still be an issue somehow in further releases we may need > >> > another solution. > >> > > >> > https://review.openstack.org/#/c/608835/ > >> > > >> > -- > >> > Matthew Thode (prometheanfire) > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> Do you have links to the failure logs or bug reports or something? If I > >> wanted to help I wouldn't even know where to start. > >> > > > > > http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz > > These failures look like we should add a proper API to oslo.messaging to > set the notification and rpc backends for testing. The configuration > options are *not* part of the API of the library. > > There is already an oslo_messaging.conffixture module with a fixture > class, but it looks like it defaults to rabbit. Maybe someone wants to > propose a patch to make that a parameter to the constructor? > oslo.messaging's conffixture uses whatever the config default for transport_url is unless the test specifically overrides it by setting the transport_url attribute. The o.m. unit tests's base test class sets conffixture.transport_url to "fake:/" to use the fake in memory driver. That's the existing practice (I believe it's used like that outside of o.m.) > > > > http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz > > These failures should be fixed by releasing the patch that Mehdi > provided that ensures there is a valid default transport configured. > > > > http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz > > Lance has already described these as mocking implementation details of > the library. I expect we'll need someone with keystone experience to > work out what the best solution is to do there. > > > > > -- > > Matthew Thode (prometheanfire) > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Oct 9 16:30:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 09 Oct 2018 12:30:07 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: Ken Giusti writes: > On Tue, Oct 9, 2018 at 11:56 AM Doug Hellmann wrote: > >> Matthew Thode writes: >> >> > On 18-10-09 11:12:30, Doug Hellmann wrote: >> >> Matthew Thode writes: >> >> >> >> > several projects have had problems with the new release, some have >> ways >> >> > of working around it, and some do not. I'm sending this just to raise >> >> > the issue and allow a place to discuss solutions. >> >> > >> >> > Currently there is a review proposed to blacklist 9.0.0, but if this >> is >> >> > going to still be an issue somehow in further releases we may need >> >> > another solution. >> >> > >> >> > https://review.openstack.org/#/c/608835/ >> >> > >> >> > -- >> >> > Matthew Thode (prometheanfire) >> >> > >> __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> Do you have links to the failure logs or bug reports or something? If I >> >> wanted to help I wouldn't even know where to start. >> >> >> > >> > >> http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz >> >> These failures look like we should add a proper API to oslo.messaging to >> set the notification and rpc backends for testing. The configuration >> options are *not* part of the API of the library. >> >> There is already an oslo_messaging.conffixture module with a fixture >> class, but it looks like it defaults to rabbit. Maybe someone wants to >> propose a patch to make that a parameter to the constructor? >> > > oslo.messaging's conffixture uses whatever the config default for > transport_url is unless the test > specifically overrides it by setting the transport_url attribute. > The o.m. unit tests's base test class sets conffixture.transport_url to > "fake:/" to use the fake in memory driver. > That's the existing practice (I believe it's used like that outside of o.m.) OK, so it sounds like the fixture is relying on the configuration to be set up in advance, and that's the thing we need to change. We don't want users outside of the library to set up tests by using the configuration options, right? Doug From melwittt at gmail.com Tue Oct 9 17:35:23 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 9 Oct 2018 10:35:23 -0700 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> Message-ID: On Tue, 9 Oct 2018 07:23:03 -0400, Jay Pipes wrote: > That explains where the source of the problem comes from (it's the use > of SELECT FOR UPDATE, which has been removed from Nova's quota-handling > code in the Rocky release). Small correction, the SELECT FOR UPDATE was removed from Nova's quota-handling code in the Pike release. 
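For readers following the thread, the pattern at issue is the pessimistic row lock that multi-writer Galera cannot enforce cluster-wide. A simplified sketch of the old reservation flow — table and column names are illustrative, not Nova's exact schema:

    BEGIN;
    SELECT in_use, reserved FROM quota_usages
        WHERE project_id = 'p1' AND resource = 'instances' FOR UPDATE;
    -- check headroom, then:
    UPDATE quota_usages SET reserved = reserved + 1
        WHERE project_id = 'p1' AND resource = 'instances';
    COMMIT;

The FOR UPDATE lock is only taken on the node that receives the transaction; a conflicting write on another node surfaces at commit time as a certification failure (reported to the client as a deadlock) that the application is expected to retry — the retry case Kevin mentions above.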
-melanie From kgiusti at gmail.com Tue Oct 9 17:48:52 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 9 Oct 2018 13:48:52 -0400 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: On Tue, Oct 9, 2018 at 12:30 PM Doug Hellmann wrote: > Ken Giusti writes: > > > On Tue, Oct 9, 2018 at 11:56 AM Doug Hellmann > wrote: > > > >> Matthew Thode writes: > >> > >> > On 18-10-09 11:12:30, Doug Hellmann wrote: > >> >> Matthew Thode writes: > >> >> > >> >> > several projects have had problems with the new release, some have > >> ways > >> >> > of working around it, and some do not. I'm sending this just to > raise > >> >> > the issue and allow a place to discuss solutions. > >> >> > > >> >> > Currently there is a review proposed to blacklist 9.0.0, but if > this > >> is > >> >> > going to still be an issue somehow in further releases we may need > >> >> > another solution. > >> >> > > >> >> > https://review.openstack.org/#/c/608835/ > >> >> > > >> >> > -- > >> >> > Matthew Thode (prometheanfire) > >> >> > > >> > __________________________________________________________________________ > >> >> > OpenStack Development Mailing List (not for usage questions) > >> >> > Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > >> >> Do you have links to the failure logs or bug reports or something? > If I > >> >> wanted to help I wouldn't even know where to start. > >> >> > >> > > >> > > >> > http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz > >> > >> These failures look like we should add a proper API to oslo.messaging to > >> set the notification and rpc backends for testing. The configuration > >> options are *not* part of the API of the library. > >> > >> There is already an oslo_messaging.conffixture module with a fixture > >> class, but it looks like it defaults to rabbit. Maybe someone wants to > >> propose a patch to make that a parameter to the constructor? > >> > > > > oslo.messaging's conffixture uses whatever the config default for > > transport_url is unless the test > > specifically overrides it by setting the transport_url attribute. > > The o.m. unit tests's base test class sets conffixture.transport_url to > > "fake:/" to use the fake in memory driver. > > That's the existing practice (I believe it's used like that outside of > o.m.) > > OK, so it sounds like the fixture is relying on the configuration to be > set up in advance, and that's the thing we need to change. We don't want > users outside of the library to set up tests by using the configuration > options, right? > That's the intent of ConfFixture it seems - provide a wrapper API so tests don't have to monkey directly with the config. How about this: https://review.openstack.org/609063 > > Doug > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Tue Oct 9 18:00:01 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 09 Oct 2018 11:00:01 -0700 Subject: [openstack-dev] [masakari][congress] what is a host? In-Reply-To: References: Message-ID: Got it thanks very much, Sampath! 
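For reference, the tagging step Sampath describes in point (2) of the quoted reply below is plain nova instance metadata, which can be set with python-openstackclient — the server name here is illustrative:

    openstack server set --property HA_Enabled=True my-instance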
From: Sam P Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, October 6, 2018 at 8:29 PM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [masakari][congress] what is a host? > Hi Eric, > > (1) "virtual machine" is a bug. This needs to be corrected to physical machine or > hypervisor. > A Masakari host is a physical host/hypervisor. I will correct this. > > (2) Not through masakari APIs. You have to add the metadata key 'HA_Enabled=True' > to each VM by using the nova API. > Masakari monitors check for failures and send a notification to the masakari > API if they detect any failure (i.e. host, VM or process failures). > In a host failure (hypervisor down) scenario, the Masakari engine gets the VM > list on that hypervisor and starts evacuating VMs. > An operator can configure masakari to evacuate all VMs or only the VMs with > the metadata key 'HA_Enabled=True'. > Please see the config file [1] section [host_failure] for more details. > > Let me know if you need more info on this. > > [1] https://docs.openstack.org/masakari/latest/sample_config.html > > --- Regards, > Sampath > > > > On Sun, Oct 7, 2018 at 8:55 AM Eric K wrote: >> Hi all, I'm working on a potential integration between masakari and >> congress. But I am stuck on some basic usage questions I could not >> answer in my search of docs and demos. Any clarification or references >> would be much appreciated! >> >> 1. What does a host refer to in masakari API? Here's the explanation in API >> doc: >> "Host can be any kind of virtual machine which can have compute >> service running on it." >> (https://developer.openstack.org/api-ref/instance-ha/#hosts-hosts) >> >> So is a masakari host usually a nova instance/server instead of a >> host/hypervisor? >> >> 2. Through the masakari api, how does one go about configuring a VM to >> be managed by masakari instance HA? >> >> Thanks so much! >> >> Eric Kao >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Oct 9 18:12:38 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 9 Oct 2018 11:12:38 -0700 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> Message-ID: On Tue, 9 Oct 2018 10:35:23 -0700, Melanie Witt wrote: > On Tue, 9 Oct 2018 07:23:03 -0400, Jay Pipes wrote: >> That explains where the source of the problem comes from (it's the use >> of SELECT FOR UPDATE, which has been removed from Nova's quota-handling >> code in the Rocky release). > Small correction, the SELECT FOR UPDATE was removed from Nova's > quota-handling code in the Pike release.
Elaboration: the calls to quota reserve/commit/rollback were removed in the Pike release, so with_lockmode('update') is not called for quota operations, even though the reserve/commit/rollback methods are still there for use by old (Ocata) computes during an Ocata => Pike upgrade. Then, the reserve/commit/rollback methods were removed in Queens once no old computes could be calling them. -melanie From mriedemos at gmail.com Tue Oct 9 18:37:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 9 Oct 2018 13:37:51 -0500 Subject: [openstack-dev] [neutron][stable] Stable Core Team Update In-Reply-To: References: <20181003133122.tqrn5b7sbbviutbu@bishop> Message-ID: <54273ff1-5f40-65f3-9fed-e776519ed488@gmail.com> On 10/9/2018 11:08 AM, Miguel Lavalle wrote: > Since it has been more than a week since this nomination was posted and > we have received only positive feedback, can we move ahead and add > Bernard Cafarelli to Neutron Stable core team? Done: https://review.openstack.org/#/admin/groups/539,members -- Thanks, Matt From liliueecg at gmail.com Tue Oct 9 18:50:48 2018 From: liliueecg at gmail.com (Li Liu) Date: Tue, 9 Oct 2018 14:50:48 -0400 Subject: [openstack-dev] [cyborg] Weekly Meeting this week is happening on ZOOM Message-ID: Hi Team, This week's Cyborg meeting will be held in ZOOM. Sundar will give a presentation. LI LIU is inviting you to a scheduled Zoom meeting. Topic: Cyborg Meeting Time: Oct 10, 2018 10:00 AM Eastern Time (US and Canada) Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/6637468814 Or iPhone one-tap : US: +16465588665,,6637468814# or +16699006833,,6637468814# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 646 558 8665 or +1 669 900 6833 Meeting ID: 663 746 8814 International numbers available: https://zoom.us/u/bXRmjYQX -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Tue Oct 9 18:55:36 2018 From: liliueecg at gmail.com (Li Liu) Date: Tue, 9 Oct 2018 14:55:36 -0400 Subject: [openstack-dev] [Cyborg] Core Team Update Message-ID: Hi Cyborg Team, I want to nominate Xinran Wang as a new core reviewer for Cyborg project. Xinran has been working hard and kept contributing to the project[1][2]. Keep Calm and Carry On :) [1] https://review.openstack.org/#/q/owner:xin-ran.wang%2540intel.com+status:open [2] http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky&user_id=xinran -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Tue Oct 9 19:10:43 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 9 Oct 2018 19:10:43 +0000 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> Oh, this does raise an interesting question... Should such information be reported by the projects up to users through labels? Something like, "percona_multimaster=safe" It's really difficult for folks to know which projects can and cannot be used that way currently. Is this a TC question?
Thanks, Kevin ________________________________________ From: melanie witt [melwittt at gmail.com] Sent: Tuesday, October 09, 2018 10:35 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints On Tue, 9 Oct 2018 07:23:03 -0400, Jay Pipes wrote: > That explains where the source of the problem comes from (it's the use > of SELECT FOR UPDATE, which has been removed from Nova's quota-handling > code in the Rocky release). Small correction, the SELECT FOR UPDATE was removed from Nova's quota-handling code in the Pike release. -melanie __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Tue Oct 9 19:19:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 9 Oct 2018 14:19:52 -0500 Subject: [openstack-dev] [nova] Rocky RC time regression analysis In-Reply-To: References: Message-ID: On 10/5/2018 6:59 PM, melanie witt wrote: > 5) when live migration fails due to a internal error rollback is not > handled correctly https://bugs.launchpad.net/nova/+bug/1788014 > > - Bug was reported on 2018-08-20 > - The change that caused the regression landed on 2018-07-26, FF day > https://review.openstack.org/434870 > - Unrelated to a blueprint, the regression was part of a bug fix > - Was found because sean-k-mooney was doing live migrations and found > that when a LM failed because of a QEMU internal error, the VM remained > ACTIVE but the VM no longer had network connectivity. > - Question: why wasn't this caught earlier? > - Answer: We would need a live migration job scenario that intentionally > initiates and fails a live migration, then verify network connectivity > after the rollback occurs. > - Question: can we add something like that? Not in Tempest, no, but we could run something in the nova-live-migration job since that executes via its own script. We could hack something in like what we have proposed for testing evacuate: https://review.openstack.org/#/c/602174/ The trick is figuring out how to introduce a fault in the destination host without taking down the service, because if the compute service is down we won't schedule to it. > > 6) nova-manage db online_data_migrations hangs on instances with no host > set https://bugs.launchpad.net/nova/+bug/1788115 > > - Bug was reported on 2018-08-21 > - The patch that introduced the bug landed on 2018-05-30 > https://review.openstack.org/567878 > - Unrelated to a blueprint, the regression was part of a bug fix > - Question: why wasn't this caught earlier? > - Answer: To hit the bug, you had to have had instances with no host set > (that failed to schedule) in your database during an upgrade. This does > not happen during the grenade job > - Question: could we add anything to the grenade job that would leave > some instances with no host set to cover cases like this? Probably - I'd think creating a server on the old side with some parameters that we know won't schedule would do it, maybe requesting an AZ that doesn't exist, or some other kind of scheduler hint that we know won't work so we get a NoValidHost. However, online_data_migrations in grenade probably don't run on the cell0 database, so I'm not sure we would have caught that case. 
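A hedged sketch of how such an instance could be seeded on the old side — image and flavor names are illustrative, and depending on how strictly the API validates availability zones, an unsatisfiable scheduler hint may be needed instead of a bogus AZ:

    openstack server create --image cirros-0.3.5-x86_64-disk \
        --flavor m1.tiny --availability-zone no-such-az \
        unschedulable-server

The build should fail with NoValidHost, leaving an instance with no host set for the upgrade's online_data_migrations to exercise — modulo the cell0 caveat above.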
-- Thanks, Matt From jaypipes at gmail.com Tue Oct 9 19:20:05 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 9 Oct 2018 15:20:05 -0400 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539097453.11166.7@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097453.11166.7@smtp.office365.com> Message-ID: <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> On 10/09/2018 11:04 AM, Balázs Gibizer wrote: > If you do the force flag removal in a nw microversion that also means > (at least to me) that you should not change the behavior of the force > flag in the old microversions. Agreed. Keep the old, buggy and unsafe behaviour for the old microversion and in a new microversion remove the --force flag entirely and always call GET /a_c, followed by a claim_resources() on the destination host. For the old microversion behaviour, continue to do the "blind copy" of allocations from the source compute node provider to the destination compute node provider. That "blind copy" will still fail if there isn't capacity for the new allocations on the destination host anyway, because the blind copy is just issuing a POST /allocations, and that code path still checks capacity on the target resource providers. There isn't a code path in the placement API that allows a provider's inventory capacity to be exceeded by new allocations. Best, -jay From jaypipes at gmail.com Tue Oct 9 19:22:10 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 9 Oct 2018 15:22:10 -0400 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> Message-ID: On 10/09/2018 03:10 PM, Fox, Kevin M wrote: > Oh, this does raise an interesting question... Should such information be reported by the projects up to users through labels? Something like, "percona_multimaster=safe" Its really difficult for folks to know which projects can and can not be used that way currently. Are you referring to k8s labels/selectors? or are you referring to project tags (you know, part of that whole Big Tent thing...)? -jay From lbragstad at gmail.com Tue Oct 9 19:48:58 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 9 Oct 2018 14:48:58 -0500 Subject: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0 In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: On Tue, Oct 9, 2018 at 10:56 AM Doug Hellmann wrote: > Matthew Thode writes: > > > On 18-10-09 11:12:30, Doug Hellmann wrote: > >> Matthew Thode writes: > >> > >> > several projects have had problems with the new release, some have > ways > >> > of working around it, and some do not. I'm sending this just to raise > >> > the issue and allow a place to discuss solutions. > >> > > >> > Currently there is a review proposed to blacklist 9.0.0, but if this > is > >> > going to still be an issue somehow in further releases we may need > >> > another solution. 
> >> > > >> > https://review.openstack.org/#/c/608835/ > >> > > >> > -- > >> > Matthew Thode (prometheanfire) > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> Do you have links to the failure logs or bug reports or something? If I > >> wanted to help I wouldn't even know where to start. > >> > > > > > http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz > > These failures look like we should add a proper API to oslo.messaging to > set the notification and rpc backends for testing. The configuration > options are *not* part of the API of the library. > > There is already an oslo_messaging.conffixture module with a fixture > class, but it looks like it defaults to rabbit. Maybe someone wants to > propose a patch to make that a parameter to the constructor? > > > > http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz > > These failures should be fixed by releasing the patch that Mehdi > provided that ensures there is a valid default transport configured. > > > > http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz > > Lance has already described these as mocking implementation details of > the library. I expect we'll need someone with keystone experience to > work out what the best solution is to do there. > So - I think it's apparent there are two things to do to fix this for keystone, which could be true for other projects as well. To recap, keystone has tests to assert the plumbing to send a notification was called, or not called, depending on configuration options in keystone (we allow operators to opt out of noisy notifications, like authenticate). As noted earlier, we shouldn't be making these assertions using an internal method of oslo.messaging. I have a patch up to refactor that to use the public API instead [0]. Even with that fix [0], the tests mentioned by Matt still fail because there isn't a sane default. I have a separate patch up to make keystone's tests work by supplying the default introduced in version 9.0.1 [1], overriding the configuration option for transport_url. This got a bit hairy in a circular-dependency kind of way because get_notification_transport() [2] is what registers the default options, which is broken. I have a patch to keystone [3] showing how I worked around this, which might not be needed if we allow the constructor to accept an override for transport_url. 
[0] https://review.openstack.org/#/c/609072/ [1] https://review.openstack.org/#/c/608196/3/oslo_messaging/transport.py [2] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n167 [3] https://review.openstack.org/#/c/609106/ > > > > > -- > > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Oct 9 19:52:03 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 9 Oct 2018 15:52:03 -0400 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: <20181002175843.ik5mhqqz3hwqb42m@barron.net> References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> Message-ID: <20181009195203.ykls2ymgdnawis76@barron.net> On 02/10/18 13:58 -0400, Tom Barron wrote: >Amit Oren has contributed high quality reviews in the last >couple of cycles so I would like to nominate him for manila >core. > >Please respond with your +1 or -1 votes. We'll hold voting >open for 7 days. > >Thanks, > >-- Tom Barron (tbarron) > We've had lots of +1s for Amit Oren as manila core and no -1s so I've added him. Welcome, Amit! -- Tom From sombrafam at gmail.com Tue Oct 9 20:14:43 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Tue, 9 Oct 2018 17:14:43 -0300 Subject: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default In-Reply-To: <1664e68a792.cea6625622063.3163129957725641398@ghanshyammann.com> References: <1664e68a792.cea6625622063.3163129957725641398@ghanshyammann.com> Message-ID: Hi Ghanshyam, Though I have concerns over running those tests by default (making config > options True by default), because it is not confirmed that all cinder backends > implement this functionality and it only works for the nova libvirt driver. We > need to keep config options default as False and Devstack/CI can make it > True to run the tests. > > The discussion at the PTG was about whether we should run this in the gate and actually break the CIs. Once that happens, vendors will have 3 options: #1: fix their drivers by properly implementing volume_extend and run the positive tests #2: fix their drivers by reporting that they do not support volume_extend and run the negative tests #3: disable volume extend tests entirely (not recommended), but this still gives us a hint on whether the vendor supports this or not > If this feature becomes mandatory functionality (or cinder say standard > feature I think) to implement for every backends and it work with all nova > driver also(in term of instance action events) then, we can enable this > feature tests by default. But until then, we should keep them disabled by > default in Tempest but we can enable them on gate via Devstack (patch you > mentioned) and test them daily on integrated-gate. > It's not mandatory that a driver implement online_extend, but if the driver does not support it, it should report that.
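For completeness, the knob this thread is about is tempest's volume feature flag — a sketch of the CI-side setting, assuming the option keeps the group and name proposed in the tempest patch referenced later in this mail ([1]):

    [volume-feature-enabled]
    # False for backends that cannot extend an in-use (attached) volume
    extend_attached_volume = True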
> Overall, I am ok with the Devstack change to make these tests enabled for every > Cinder backend but we need to keep the config options false in Tempest. > So, the outcome from the PTG was that we would first merge the tempest test and give time for vendors to get the drivers fixed. Then we would change it in devstack so we push vendors to fix their drivers in case they hadn't done that. Erlon > > I will review those patches and leave comments on gerrit (I saw those patches > introduce a new config option rather than using the existing one) > > -gmann > > > Please let us know if you have any question or concerns about it. > > Kind regards, Erlon > > _________________ > > [1] https://review.openstack.org/#/c/572188/ > > [2] https://review.openstack.org/#/c/578463/ __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Tue Oct 9 20:22:33 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Tue, 9 Oct 2018 17:22:33 -0300 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: <20181009195203.ykls2ymgdnawis76@barron.net> References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> <20181009195203.ykls2ymgdnawis76@barron.net> Message-ID: Hey Amit, Welcome! Em ter, 9 de out de 2018 às 16:52, Tom Barron escreveu: > On 02/10/18 13:58 -0400, Tom Barron wrote: > >Amit Oren has contributed high quality reviews in the last > >couple of cycles so I would like to nominate him for manila > >core. > > > >Please respond with your +1 or -1 votes. We'll hold voting > >open for 7 days. > > > >Thanks, > > > >-- Tom Barron (tbarron) > > > > We've had lots of +1s for Amit Oren as manila core and no -1s so I've > added him. > > Welcome, Amit! > > -- Tom > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aoren at infinidat.com Tue Oct 9 20:36:17 2018 From: aoren at infinidat.com (Amit Oren) Date: Tue, 9 Oct 2018 23:36:17 +0300 Subject: [openstack-dev] [manila] nominating Amit Oren for manila core In-Reply-To: References: <20181002175843.ik5mhqqz3hwqb42m@barron.net> <20181009195203.ykls2ymgdnawis76@barron.net> Message-ID: Thank you Tom for nominating me and thank you all for your votes :) - Amit On Tue, Oct 9, 2018 at 11:24 PM Erlon Cruz wrote: > Hey Amit, > > Welcome! > > Em ter, 9 de out de 2018 às 16:52, Tom Barron escreveu: > >> On 02/10/18 13:58 -0400, Tom Barron wrote: >> >Amit Oren has contributed high quality reviews in the last >> >couple of cycles so I would like to nominate him for manila >> >core.
>> > >> >Thanks, >> > >> >-- Tom Barron (tbarron) >> > >> >> We've had lots of +1s for Amit Oren as manila core and no -1s so I've >> added him. >> >> Welcome, Amit! >> >> -- Tom >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Tue Oct 9 21:01:24 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 9 Oct 2018 16:01:24 -0500 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097453.11166.7@smtp.office365.com> <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> Message-ID: <582207c4-9c92-a08a-ea11-5696115dc67f@fried.cc> On 10/09/2018 02:20 PM, Jay Pipes wrote: > On 10/09/2018 11:04 AM, Balázs Gibizer wrote: >> If you do the force flag removal in a nw microversion that also means >> (at least to me) that you should not change the behavior of the force >> flag in the old microversions. > > Agreed. > > Keep the old, buggy and unsafe behaviour for the old microversion and in > a new microversion remove the --force flag entirely and always call GET > /a_c, followed by a claim_resources() on the destination host. > > For the old microversion behaviour, continue to do the "blind copy" of > allocations from the source compute node provider to the destination > compute node provider. TBC, for nested/sharing source, we should consolidate all the resources into a single allocation against the destination's root provider? > That "blind copy" will still fail if there isn't > capacity for the new allocations on the destination host anyway, because > the blind copy is just issuing a POST /allocations, and that code path > still checks capacity on the target resource providers. What happens when the migration fails, either because of that POST /allocations, or afterwards? Do we still have the old allocation around to restore? Cause we can't re-figure it from the now-monolithic destination allocation. > There isn't a > code path in the placement API that allows a provider's inventory > capacity to be exceeded by new allocations. 
> > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.friesen at windriver.com Wed Oct 10 00:17:48 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 9 Oct 2018 18:17:48 -0600 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097453.11166.7@smtp.office365.com> <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> Message-ID: <5acfbba0-9bc6-6c03-4ff1-a42182452041@windriver.com> On 10/9/2018 1:20 PM, Jay Pipes wrote: > On 10/09/2018 11:04 AM, Balázs Gibizer wrote: >> If you do the force flag removal in a new microversion that also means >> (at least to me) that you should not change the behavior of the force >> flag in the old microversions. > > Agreed. > > Keep the old, buggy and unsafe behaviour for the old microversion and in > a new microversion remove the --force flag entirely and always call GET > /a_c, followed by a claim_resources() on the destination host. Agreed. Once you start looking at more complicated resource topologies, you pretty much need to handle allocations properly. Chris From gdubreui at redhat.com Wed Oct 10 02:24:28 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Wed, 10 Oct 2018 13:24:28 +1100 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> Message-ID: <16397789-98b5-a011-0367-dd5023260870@redhat.com> On 09/10/18 23:58, Jeremy Stanley wrote: > On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote: > [...] >> It seems to me that a major goal of openstacksdk is to hide differences >> between clouds from the user. If the user is meant to use a GraphQL library >> themselves, we lose this and the user needs to figure it out themselves. >> Did I understand that correctly? > This is especially useful where the SDK implements business logic > for common operations like "if the user requested A and the cloud > supports features B+C+D then use those to fulfil the request, > otherwise fall back to using features E+F". > The features offered to the user don't have to change, it's just a different architecture. The user doesn't have to deal with a GraphQL library, only the client applications (consuming OpenStack APIs). And there are also UI tools such as GraphiQL which allow you to interact directly with GraphQL servers. From mnaser at vexxhost.com Wed Oct 10 04:50:50 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 10 Oct 2018 06:50:50 +0200 Subject: [openstack-dev] [openstack-ansible] dropping xenial jobs Message-ID: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> Hi everyone! So I’ve been thinking of dropping the Xenial jobs to reduce our overall impact in terms of gate usage in master because we don’t support it. However, I was a bit torn on this because I realize that it’s possible for us to write things and backport them only to find out that they’d break under xenial which can be deployed with Rocky. Thoughts? Ideas? I was thinking maybe like an experimental job..
not really sure on specifics but I’d like to bring in more feedback. Thanks, Mohammed From zhipengh512 at gmail.com Wed Oct 10 06:58:49 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 10 Oct 2018 07:58:49 +0100 Subject: [openstack-dev] [Cyborg] Core Team Update In-Reply-To: References: Message-ID: Big +1, xinran has been tremendously helpful in the development. On Tue, Oct 9, 2018, 7:55 PM Li Liu wrote: > Hi Cyborg Team, > > I want to nominate Xinran Wang as a new core reviewer for Cyborg project. > Xiran has been working hard and kept contributing to the project[1][2]. > Keep Claim and Carry on :) > > [1] > https://review.openstack.org/#/q/owner:xin-ran.wang%2540intel.com+status:open > [2] > http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky&user_id=xinran > -- > Thank you > > Regards > > Li > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Wed Oct 10 07:06:50 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 10 Oct 2018 09:06:50 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> Message-ID: <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> Am 10/9/18 um 1:47 PM schrieb Mark Goddard: > > > On Tue, 9 Oct 2018 at 12:03, Florian Engelmann > > > wrote: > > Am 10/9/18 um 11:04 AM schrieb Mark Goddard: > > Thanks for these suggestions Florian, there are some interesting > ideas > > in here. I'm a little concerned about the maintenance overhead of > adding > > support for all of these things, and wonder if some of them could be > > done without explicit support in kolla and kolla-ansible. The kolla > > projects have been able to move quickly by providing a flexible > > configuration mechanism that avoids the need to maintain support for > > every OpenStack feature. Other thoughts inline. > > > > I do understand your apprehensions Mark. For some of the suggested > changes/additions I agree. But adding those components without > kolla/kolla-ansible integration feels not right. > > > I'm not entirely against adding some of these things, if enough people > in the community want them. I'd just like to make sure that if they can > be done in a sane way without changes, then we do that and document how > to do it instead. > Yes I agree and that is a very important role/job. > > > > > On Mon, 8 Oct 2018 at 11:15, Florian Engelmann > > > >> > > wrote: > > > >     Hi, > > > >     I would like to start a discussion about some changes and > additions I > >     would like to see in in kolla and kolla-ansible. > > > >     1. Keepalived is a problem in layer3 spine leaf networks as any > >     floating > >     IP can only exist in one leaf (and VRRP is a problem in > layer3). I > >     would > >     like to use consul and registrar to get rid of the "internal" > floating > >     IP and use consuls DNS service discovery to connect all > services with > >     each other. > > > > > > Without reading up, I'm not sure exactly how this fits together. 
If > > kolla-ansible made the API host configurable for each service rather > > than globally, would that be a step in the right direction? > > No that would not help. The problem is HA. Right now there is a > "central" floating IP (kolla_internal_vip_address) that is used for all > services to connect to (each other). Keepalived is failing that IP over > if the "active" host fails. In a layer3 (CLOS/Spine-Leaf) network this > IP is only available in one leaf/rack. So that rack is a "SPOF". > Using service discovery fits perfect in a CLOS network and scales much > better as a HA solution. > > Right, but what I'm saying as a thought experiment is, if we gave you > the required variables in kolla-ansible (e.g. nova_internal_fqdn) to > make this possible with an externally managed consul service, could that > work? Now I get you. I would say all configuration templates need to be changed to allow, eg. $ grep http /etc/kolla/cinder-volume/cinder.conf glance_api_servers = http://10.10.10.5:9292 auth_url = http://internal.somedomain.tld:35357 www_authenticate_uri = http://internal.somedomain.tld:5000 auth_url = http://internal.somedomain.tld:35357 auth_endpoint = http://internal.somedomain.tld:5000 to look like: glance_api_servers = http://glance.service.somedomain.consul:9292 auth_url = http://keystone.service.somedomain.consul:35357 www_authenticate_uri = http://keystone.service.somedomain.consul:5000 auth_url = http://keystone.service.somedomain.consul:35357 auth_endpoint = http://keystone.service.somedomain.consul:5000 > > > > > > >     2. Using "ports" for external API (endpoint) access is a > major headache > >     if a firewall is involved. I would like to configure the > HAProxy (or > >     fabio) for the external access to use "Host:" like, eg. "Host: > >     keystone.somedomain.tld", "Host: nova.somedomain.tld", ... > with HTTPS. > >     Any customer would just need HTTPS access and not have to > open all > >     those > >     ports in his firewall. For some enterprise customers it is > not possible > >     to request FW changes like that. > > > >     3. HAProxy is not capable to handle "read/write" split with > Galera. I > >     would like to introduce ProxySQL to be able to scale Galera. > > > > > > It's now possible to use an external database server with > kolla-ansible, > > instead of deploying a mariadb/galera cluster. This could be > implemented > > how you like, see > > > https://docs.openstack.org/kolla-ansible/latest/reference/external-mariadb-guide.html. > > Yes I agree. And this is what we will do in our first production > deployment. But I would love to see ProxySQL in Kolla as well. > As a side note: Kolla-ansible does use: > > option mysql-check user haproxy post-41 > > to check Galera, but that check does not fail if the node is out of > sync > with the other nodes! > > http://galeracluster.com/documentation-webpages/monitoringthecluster.html > > That's good to know. Could you raise a bug in kolla-ansible on > launchpad, and offer advice on how to improve this check if you have any? > done: https://bugs.launchpad.net/kolla-ansible/+bug/1796930 > > > > >     4. HAProxy is fine but fabio integrates well with consul, > statsd and > >     could be connected to a vault cluster to manage secure > certificate > >     access. > > > > As above. > > > >     5. I would like to add vault as Barbican backend. > > > > Does this need explicit support in kolla and kolla-ansible, or > could it > > be done through configuration of barbican.conf? 
Are there additional > > packages required in the barbican container? If so, see > > > https://docs.openstack.org/kolla/latest/admin/image-building.html#package-customisation. > > True but the vault (and consul) containers could be deployed and > managed > by kolla-ansible. > > I'd like to see if anyone else is interested in this. Kolla ansible > already deploys a large number of services, which is great. As with many > other projects I'm seeing the resources of core contributors fall off a > little, and I think we need to consider how to ensure the project is > maintainable long term. In my view a good way of doing that is to enable > integration with existing services, rather than deploying them. We need > to decide where the line is as a community. We have an IRC meeting at > 3pm UTC if you'd like to bring it up then. Sorry I didn't manage to attend the IRC meeting. But again, I agree, it would be great to extend the (already great) flexibility of kolla-ansible to "add" "external" services. > > > >     6. I would like to add an option to enable tokenless > authentication for > >     all services with each other to get rid of all the openstack > service > >     passwords (security issue). > > > > Again, could this be done without explicit support? > > We did not investigate here. Changes to the apache configuration are > needed. I guess we will have to change the kolla container itself to do > so? Is it possible to "inject" files in a container using kolla-ansible? > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From florian.engelmann at everyware.ch Wed Oct 10 07:18:32 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 10 Oct 2018 09:18:32 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> Message-ID: <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> by "another storage system" you mean the KV store of consul? That's just something consul brings with it... consul is very strong in doing health checks Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M: > etcd is an already approved openstack dependency. Could that be used instead of consul so as to not add yet another storage system? coredns with the https://coredns.io/plugins/etcd/ plugin would maybe do what you need?
> > Thanks, > Kevin > ________________________________________ > From: Florian Engelmann [florian.engelmann at everyware.ch] > Sent: Monday, October 08, 2018 3:14 AM > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > > Hi, > > I would like to start a discussion about some changes and additions I > would like to see in in kolla and kolla-ansible. > > 1. Keepalived is a problem in layer3 spine leaf networks as any floating > IP can only exist in one leaf (and VRRP is a problem in layer3). I would > like to use consul and registrar to get rid of the "internal" floating > IP and use consuls DNS service discovery to connect all services with > each other. > > 2. Using "ports" for external API (endpoint) access is a major headache > if a firewall is involved. I would like to configure the HAProxy (or > fabio) for the external access to use "Host:" like, eg. "Host: > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. > Any customer would just need HTTPS access and not have to open all those > ports in his firewall. For some enterprise customers it is not possible > to request FW changes like that. > > 3. HAProxy is not capable to handle "read/write" split with Galera. I > would like to introduce ProxySQL to be able to scale Galera. > > 4. HAProxy is fine but fabio integrates well with consul, statsd and > could be connected to a vault cluster to manage secure certificate access. > > 5. I would like to add vault as Barbican backend. > > 6. I would like to add an option to enable tokenless authentication for > all services with each other to get rid of all the openstack service > passwords (security issue). > > What do you think about it? > > All the best, > Florian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From jean-philippe at evrard.me Wed Oct 10 07:46:48 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 10 Oct 2018 09:46:48 +0200 Subject: [openstack-dev] [openstack-ansible] dropping xenial jobs In-Reply-To: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> References: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> Message-ID: <62eb93ac4415d1cb4f6e1bb3a7b676d0128d811c.camel@evrard.me> On Wed, 2018-10-10 at 06:50 +0200, Mohammed Naser wrote: > Hi everyone! > > So I’ve been thinking of dropping the Xenial jobs to reduce our > overall impact in terms of gate usage in master because we don’t > support it. > > However, I was a bit torn on this because i realize that it’s > possible for us to write things and backport them only to find out > that they’d break under xenial which can be deployed with Rocky. > > Thoughts? Ideas? I was thinking maybe Lee an experimental job.. not > really sure on specifics but I’d like to bring in more feedback. 
> > Thanks, > Mohammed > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hello, In the past we removed the jobs (ocata didn't have trusty jobs, IIRC), and made sure backports were still passing the gates of newton with trusty jobs voting. It may force a re-implementation in lower branches, but it is fine for me, as ansible versions also differ and might require re-implementation anyway. That said, we didn't have the flexibility we have now with Zuul v3 for making jobs voting/nv/experimental. The middle ground would be to have a non-voting check job. It is fine, but it consumes resources for patches that aren't supposed to be backported, and therefore I think it's not the greatest solution. I like the "check experimental" part. It has one drawback: it relies on our good behaviour. When a bug is raised for a backport (so, first thing: we should not forget the bug link!), we should all keep in mind to run "check experimental" before attempting the merge on the higher branch. That said, I still prefer "check experimental" to nothing. You have my vote! Best regards, JP From mark at stackhpc.com Wed Oct 10 07:53:01 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 10 Oct 2018 08:53:01 +0100 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> Message-ID: On Wed, 10 Oct 2018 at 08:08, Florian Engelmann < florian.engelmann at everyware.ch> wrote: > Am 10/9/18 um 1:47 PM schrieb Mark Goddard: > > > > > > On Tue, 9 Oct 2018 at 12:03, Florian Engelmann > > > > > > wrote: > > > > Am 10/9/18 um 11:04 AM schrieb Mark Goddard: > > > Thanks for these suggestions Florian, there are some interesting > > ideas > > > in here. I'm a little concerned about the maintenance overhead of > > adding > > > support for all of these things, and wonder if some of them could be > > > done without explicit support in kolla and kolla-ansible. The kolla > > > projects have been able to move quickly by providing a flexible > > > configuration mechanism that avoids the need to maintain support for > > > every OpenStack feature. Other thoughts inline. > > > > > > > I do understand your apprehensions Mark. For some of the suggested > > changes/additions I agree. But adding those components without > > kolla/kolla-ansible integration feels not right. > > > > > > I'm not entirely against adding some of these things, if enough people > > in the community want them. I'd just like to make sure that if they can > > be done in a sane way without changes, then we do that and document how > > to do it instead. > > > > Yes I agree and that is a very important role/job. > > > > > > > > > On Mon, 8 Oct 2018 at 11:15, Florian Engelmann > > > > > > > >> > > > wrote: > > > > > > Hi, > > > > > > I would like to start a discussion about some changes and > > additions I > > > would like to see in in kolla and kolla-ansible. > > > > > > 1. Keepalived is a problem in layer3 spine leaf networks as any > > > floating > > > IP can only exist in one leaf (and VRRP is a problem in > > layer3).
I > > > would > > > like to use consul and registrar to get rid of the "internal" > > floating > > > IP and use consuls DNS service discovery to connect all > > services with > > > each other. > > > > > > > > > Without reading up, I'm not sure exactly how this fits together. > If > > > kolla-ansible made the API host configurable for each service > rather > > > than globally, would that be a step in the right direction? > > > > No that would not help. The problem is HA. Right now there is a > > "central" floating IP (kolla_internal_vip_address) that is used for > all > > services to connect to (each other). Keepalived is failing that IP > over > > if the "active" host fails. In a layer3 (CLOS/Spine-Leaf) network > this > > IP is only available in one leaf/rack. So that rack is a "SPOF". > > Using service discovery fits perfect in a CLOS network and scales > much > > better as a HA solution. > > > > Right, but what I'm saying as a thought experiment is, if we gave you > > the required variables in kolla-ansible (e.g. nova_internal_fqdn) to > > make this possible with an externally managed consul service, could that > > work? > > Now I get you. I would say all configuration templates need to be > changed to allow, eg. > > $ grep http /etc/kolla/cinder-volume/cinder.conf > glance_api_servers = http://10.10.10.5:9292 > auth_url = http://internal.somedomain.tld:35357 > www_authenticate_uri = http://internal.somedomain.tld:5000 > auth_url = http://internal.somedomain.tld:35357 > auth_endpoint = http://internal.somedomain.tld:5000 > > to look like: > > glance_api_servers = http://glance.service.somedomain.consul:9292 > auth_url = http://keystone.service.somedomain.consul:35357 > www_authenticate_uri = http://keystone.service.somedomain.consul:5000 > auth_url = http://keystone.service.somedomain.consul:35357 > auth_endpoint = http://keystone.service.somedomain.consul:5000 Those values are generally formed using the internal FQDN (kolla_internal_fqdn), that is supposed to resolve to the internal VIP. If we had something like this in group_vars/all.yml, we could make the host configurable on a per-service basis, while still retaining the default. This is a common pattern in kolla-ansible. For example: nova_external_fqdn: "{{ kolla_external_fqdn }}" nova_internal_fqdn: "{{ kolla_internal_fqdn }}" > > > > > > > > > > > 2. Using "ports" for external API (endpoint) access is a > > major headache > > > if a firewall is involved. I would like to configure the > > HAProxy (or > > > fabio) for the external access to use "Host:" like, eg. "Host: > > > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... > > with HTTPS. > > > Any customer would just need HTTPS access and not have to > > open all > > > those > > > ports in his firewall. For some enterprise customers it is > > not possible > > > to request FW changes like that. > > > > > > 3. HAProxy is not capable to handle "read/write" split with > > Galera. I > > > would like to introduce ProxySQL to be able to scale Galera. > > > > > > > > > It's now possible to use an external database server with > > kolla-ansible, > > > instead of deploying a mariadb/galera cluster. This could be > > implemented > > > how you like, see > > > > > > https://docs.openstack.org/kolla-ansible/latest/reference/external-mariadb-guide.html > . > > > > Yes I agree. And this is what we will do in our first production > > deployment. But I would love to see ProxySQL in Kolla as well. 
> > As a side note: Kolla-ansible does use: > > > > option mysql-check user haproxy post-41 > > > > to check Galera, but that check does not fail if the node is out of > > sync > > with the other nodes! > > > > http://galeracluster.com/documentation-webpages/monitoringthecluster.html > > > > That's good to know. Could you raise a bug in kolla-ansible on > > launchpad, and offer advice on how to improve this check if you have any? (One possible stricter check is sketched further below.) > > > > done: https://bugs.launchpad.net/kolla-ansible/+bug/1796930 > > Thanks > > > > > > > > 4. HAProxy is fine but fabio > > integrates well with consul, > > > statsd and > > > could be connected to a vault cluster to manage secure > > certificate > > > access. > > > > > > As above. > > > > > > 5. I would like to add vault as Barbican backend. > > > > > > Does this need explicit support in kolla and kolla-ansible, or > > could it > > > be done through configuration of barbican.conf? Are there additional > > > packages required in the barbican container? If so, see > > > > > > https://docs.openstack.org/kolla/latest/admin/image-building.html#package-customisation . > > > > True but the vault (and consul) containers could be deployed and > > managed > > by kolla-ansible. > > > > I'd like to see if anyone else is interested in this. Kolla ansible > > already deploys a large number of services, which is great. As with many > > other projects I'm seeing the resources of core contributors fall off a > > little, and I think we need to consider how to ensure the project is > > maintainable long term. In my view a good way of doing that is to enable > > integration with existing services, rather than deploying them. We need > > to decide where the line is as a community. We have an IRC meeting at > > 3pm UTC if you'd like to bring it up then. > > > Sorry I didn't manage to attend the IRC meeting. But again, I agree, it > would be great to extend the (already great) flexibility of > kolla-ansible to "add" "external" services. > > Sorry, I missed a key word here - tomorrow. It is now today, i.e. Wednesdays at 3pm UTC.
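Coming back to the Galera check from the side note above: one possible stricter check (a sketch only, not what kolla-ansible configures today) is to let HAProxy do an HTTP health check against a clustercheck-style script, for example Percona's clustercheck served by xinetd on port 9200, which only returns 200 OK when wsrep_local_state is 4 (Synced):

    listen mariadb
        bind 10.10.10.254:3306       # illustrative VIP
        option httpchk               # query the HTTP check port instead of speaking MySQL
        server galera-1 galera-1:3306 check port 9200 inter 2000 rise 2 fall 5
        server galera-2 galera-2:3306 check port 9200 backup inter 2000 rise 2 fall 5
        server galera-3 galera-3:3306 check port 9200 backup inter 2000 rise 2 fall 5

That would catch the out-of-sync case the current mysql-check misses, at the cost of running the check script on every Galera node.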
> > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Oct 10 07:54:55 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 10 Oct 2018 09:54:55 +0200 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: <647ccc28e200ca947ce694cd0a3ac4703f6d9099.camel@evrard.me> On Mon, 2018-10-08 at 10:27 -0400, Doug Hellmann wrote: > TC members, > > Since we are starting a new term, and have several new members, we > need > to decide how we want to rotate the liaisons attached to each our > project teams, SIGs, and working groups [1]. > > Last term we went through a period of volunteer sign-up and then I > randomly assigned folks to slots to fill out the roster evenly. > During > the retrospective we talked a bit about how to ensure we had an > objective perspective for each team by not having PTLs sign up for > their > own teams, but I don't think we settled on that as a hard rule. > > I think the easiest and fairest (to new members) way to manage the > list > will be to wipe it and follow the same process we did last time. If > you > agree, I will update the page this week and we can start collecting > volunteers over the next week or so. > > Doug > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev +1 From Jesse.Pretorius at rackspace.co.uk Wed Oct 10 07:55:39 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Wed, 10 Oct 2018 07:55:39 +0000 Subject: [openstack-dev] [openstack-ansible] dropping xenial jobs In-Reply-To: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> References: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> Message-ID: On 10/10/18, 5:54 AM, "Mohammed Naser" wrote: > So I’ve been thinking of dropping the Xenial jobs to reduce our overall impact in terms of gate usage in master because we don’t support it. I think we can start dropping it given our intended supported platform for Stein is Bionic, not Xenial. We'll have to carry Xenial & Bionic for Rocky as voting jobs. Anything ported back and found not to work for both can be fixed through either another patch to master which is back ported, or a re-implementation, as necessary. 
________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. From aschadin at sbcloud.ru Wed Oct 10 08:13:10 2018 From: aschadin at sbcloud.ru (=?koi8-r?B?/sHEyc4g4czFy9PBzsTSIPPF0sfFxdfJ3g==?=) Date: Wed, 10 Oct 2018 08:13:10 +0000 Subject: [openstack-dev] [watcher] [monasca] Bare metal node N+1 redundancy and Proactive HA Message-ID: <2ed868b74dfa4007a0dd350b369a94e0@sbcloud.ru> Greetings Fujitsu team, During the last PTG we discussed two new blueprints[1] and how they can be implemented on the Watcher and Monasca sides. What is the status of these BPs? Do you need my help with them? Witek, should we submit these BPs on Monasca's Storyboard? [1]: https://etherpad.openstack.org/p/stein-watcher-ptg -- Alexander Chadin From wjstk16 at gmail.com Wed Oct 10 08:58:03 2018 From: wjstk16 at gmail.com (Won) Date: Wed, 10 Oct 2018 17:58:03 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi. I'm sorry for the late reply. My Prometheus version is 2.3.2 and my Alertmanager version is 0.15.2, and I attached files (the vitrage collector and graph logs, the apache log, prometheus.yml, alertmanager.yml, the alarm rule file, etc.). I think the reason a resolved alarm does not disappear is a problem with the alarm's timestamp. [image: alarm list.JPG] [image: vitrage-entity_graph.jpg] -gray alarm info severity:PAGE vitrage id: c6a94386-3879-499e-9da0-2a5b9d3294b8 , e2c5eae9-dba9-4f64-960b-b964f1c01dfe , 3d3c903e-fe09-4a6f-941f-1a2adb09feca , 8c6e7906-9e66-404f-967f-40037a6afc83 , e291662b-115d-42b5-8863-da8243dd06b4 , 8abd2a2f-c830-453c-a9d0-55db2bf72d46 ---------- The alarms marked with the blue circle are already resolved. However, they do not disappear from the entity graph or the alarm list. In the top screenshot there were seven more gray alarms in the active alarm list, just like in the entity graph. They only disappeared when I deleted the gray alarms from the vitrage-alarms table in the DB, or changed their end timestamp to a time earlier than the current time. From the logs, the first problem seems to be that the timestamp value recorded by Vitrage comes out as 2001-01-01, even though the start value in the Prometheus alarm information is correct. When the alarm is resolved, the end timestamp value is not updated, so the alarm does not disappear from the alarm list. The second problem is that even if the timestamp issue is fixed, the entity graph problem remains. The gray alarm information is not in the vitrage-collector log, but it is in the vitrage-graph and apache logs. I want to know how to forcefully delete an entity from the Vitrage entity graph. Regarding the multi-node setup: I mean 1 control node (pc1) and 1 compute node (pc2), so a single OpenStack deployment. [image: image.png]
This was not the occur in the queens version; in the rocky version, multinode environment, there seems to be a bug in VM creation on multi node. The same situation occurred in multi-node environments that were configured with different PCs. thanks, Won 2018년 10월 4일 (목) 오후 10:46, Ifat Afek 님이 작성: > Hi, > > Can you please give us some more details about your scenario with > Prometheus? Please try and give as many details as possible, so we can try > to reproduce the bug. > > > What do you mean by “if the alarm is resolved, the alarm manager makes a > silence, or removes the alarm rule from Prometheus”? these are different > cases. None of them works in your environment? > > Which Prometheus and Alertmanager versions are you using? > > Please try to change the Vitrage loglevel to DEBUG (set “debug = true” > in /etc/vitrage/vitrage.conf) and send me the Vitrage collector, graph and > api logs. > > Regarding the multi nodes, I'm not sure I understand your configuration. > Do you mean there is more than one OpenStack and Nova? more than one host? > more than one vm? > > Basically, vms are deleted from Vitrage in two cases: > 1. After each periodic call to get_all of nova.instance datasource. By > default this is done once in 10 minutes. > 2. Immediately, if you have the following configuration in > /etc/nova/nova.conf: > notification_topics = notifications,vitrage_notifications > > So, please check your nova.conf and also whether the vms are deleted after > 10 minutes. > > Thanks, > Ifat > > > On Thu, Oct 4, 2018 at 7:12 AM Won wrote: > >> Thank you for your reply Ifat. >> >> The alertmanager.yml file already contains 'send_resolved:true'. >> However, the alarm does not disappear from the alarm list and the entity >> graph even if the alarm is resolved, the alarm manager makes a silence, or >> removes the alarm rule from Prometheus. >> The only way to remove alarms is to manually remove them from the db. Is >> there any other way to remove the alarm? >> Entities(vm) that run on multi nodes in the rocky version have similar >> symptoms. There was a symptom that the Entities created on the multi-node >> would not disappear from the Entity Graph even after deletion. >> Is this a bug in rocky version? >> >> Best Regards, >> Won >> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: alarm list.JPG Type: image/jpeg Size: 57609 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage-entity_graph.jpg Type: image/jpeg Size: 74797 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35018 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: environment.zip Type: application/x-zip-compressed Size: 232194 bytes Desc: not available URL: From balazs.gibizer at ericsson.com Wed Oct 10 09:04:22 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 10 Oct 2018 09:04:22 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <582207c4-9c92-a08a-ea11-5696115dc67f@fried.cc> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097453.11166.7@smtp.office365.com> <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> <582207c4-9c92-a08a-ea11-5696115dc67f@fried.cc> Message-ID: <1539162251.7850.1@smtp.office365.com> On Tue, Oct 9, 2018 at 11:01 PM, Eric Fried wrote: > > > On 10/09/2018 02:20 PM, Jay Pipes wrote: >> On 10/09/2018 11:04 AM, Balázs Gibizer wrote: >>> If you do the force flag removal in a nw microversion that also >>> means >>> (at least to me) that you should not change the behavior of the >>> force >>> flag in the old microversions. >> >> Agreed. >> >> Keep the old, buggy and unsafe behaviour for the old microversion >> and in >> a new microversion remove the --force flag entirely and always call >> GET >> /a_c, followed by a claim_resources() on the destination host. >> >> For the old microversion behaviour, continue to do the "blind copy" >> of >> allocations from the source compute node provider to the destination >> compute node provider. > > TBC, for nested/sharing source, we should consolidate all the > resources > into a single allocation against the destination's root provider? Yes, we need to do that not to miss resources allocated from a child RP on the source host and succeed without a complete allocation on the destination host. > >> That "blind copy" will still fail if there isn't >> capacity for the new allocations on the destination host anyway, >> because >> the blind copy is just issuing a POST /allocations, and that code >> path >> still checks capacity on the target resource providers. > > What happens when the migration fails, either because of that POST > /allocations, or afterwards? Do we still have the old allocation > around > to restore? Cause we can't re-figure it from the now-monolithic > destination allocation. For live-migrate we have the source allocation held by the migration_uuid so we can simply move that back to the instance_uuid when the allocation fails on the destination host. For evacuate the source host allocation is also held by the instance_uuid (no migration_uuid is used) but it is not a real problem here as nova failed to change that allocation so the original source allocation is intact in placement. Cheers, gibi > >> There isn't a >> code path in the placement API that allows a provider's inventory >> capacity to be exceeded by new allocations.
>> >> Best, >> -jay >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tuanvc at vn.fujitsu.com Wed Oct 10 09:40:33 2018 From: tuanvc at vn.fujitsu.com (Vu Cong, Tuan) Date: Wed, 10 Oct 2018 09:40:33 +0000 Subject: [openstack-dev] [watcher] [monasca] Bare metal node N+1 redundancy and Proactive HA Message-ID: Hi Alex, Have a nice day. Regarding "Proactive HA", we decided to follow 3 steps: 1. Test Live Migration feature of Watcher (done for basic test) Thanks to your great help via the Watcher IRC channel, I have configured Live Migration successfully. I have already performed several tests for the "Host Maintenance" feature and it works! I will make more tests for other strategies. 2. Check the list of metrics that Monasca supports for sending to Watcher (in progress) At this moment, I'm reading this document: https://github.com/openstack/monasca-agent/blob/master/docs/MonascaMetrics.md Then, I will fill the gap for communication between Monasca and Watcher by uploading a new patch to Monasca. 3. Integration Test To make sure Proactive HA works well with other OpenStack components as expected. This will be performed later, after finishing 1 and 2. Thanks again for your kind help, we really appreciate it, Alex. Vu Cong Tuan From witold.bedyk at est.fujitsu.com Wed Oct 10 09:42:38 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 10 Oct 2018 09:42:38 +0000 Subject: [openstack-dev] [goals][python3][telemetry][barbican][monasca][neutron] having a gerrit admin approve the remaining zuul job settings import patches In-Reply-To: References: Message-ID: <69e4ee18dbab41fbbf8f0387c7ccda1b@R01UKEXCASM126.r01.fujitsu.local> No objections from me for monasca-analytics repo. Witek > -----Original Message----- > From: Doug Hellmann > Sent: Monday, 8 October 2018 17:08 > To: openstack-dev > Subject: [openstack-dev] > [goals][python3][telemetry][barbican][monasca][neutron] having a gerrit > admin approve the remaining zuul job settings import patches > > We have about 17 remaining patches to import zuul job settings into a few > repositories. Those are mostly in stable branches and the jobs are failing in > ways that may take us a long time to fix. > > Rather than waiting for those, Andreas and I are proposing that we have > someone from the infra team approve them, bypassing the test jobs. That > will allow us to complete the cleanup work in the project-config repository, > and will not leave the affected repositories in a state that is any more (or > less) broken than they are today. > > If you have any objections to the plan, please speak up quickly. I would like > to try to proceed before the end of the week.
> > Doug
>
> +-----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+
> | Subject                                       | Repo                            | Team      | Tests  | Workflow | URL                                 | Branch        | Owner         |
> +-----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+
> | import zuul job settings from project-config | openstack/aodh                  | Telemetry | FAILED | NEW      | https://review.openstack.org/598648 | stable/ocata  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/barbican              | barbican  | FAILED | REVIEWED | https://review.openstack.org/599659 | stable/queens | Doug Hellmann |
> | import zuul job settings from project-config | openstack/barbican              | barbican  | FAILED | REVIEWED | https://review.openstack.org/599661 | stable/rocky  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/castellan-ui          | barbican  | FAILED | NEW      | https://review.openstack.org/599649 | master        | Doug Hellmann |
> | import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | APPROVED | https://review.openstack.org/598634 | master        | Doug Hellmann |
> | import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | APPROVED | https://review.openstack.org/598655 | stable/pike   | Doug Hellmann |
> | import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | PASS   | NEW      | https://review.openstack.org/598661 | stable/queens | Doug Hellmann |
> | import zuul job settings from project-config | openstack/ceilometermiddleware  | Telemetry | FAILED | NEW      | https://review.openstack.org/598667 | stable/rocky  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/monasca-analytics     | monasca   | FAILED | REVIEWED | https://review.openstack.org/595658 | master        | Doug Hellmann |
> | import zuul job settings from project-config | openstack/networking-midonet    | neutron   | PASS   | REVIEWED | https://review.openstack.org/597937 | stable/queens | Doug Hellmann |
> | import zuul job settings from project-config | openstack/networking-sfc        | neutron   | FAILED | NEW      | https://review.openstack.org/597913 | stable/ocata  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/networking-sfc        | neutron   | FAILED | NEW      | https://review.openstack.org/597925 | stable/pike   | Doug Hellmann |
> | import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | NEW      | https://review.openstack.org/598652 | stable/ocata  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | NEW      | https://review.openstack.org/598657 | stable/pike   | Doug Hellmann |
> | import zuul job settings from project-config | openstack/python-aodhclient     | Telemetry | FAILED | APPROVED | https://review.openstack.org/598669 | stable/rocky  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/python-barbicanclient | barbican  | FAILED | NEW      | https://review.openstack.org/599656 | stable/ocata  | Doug Hellmann |
> | import zuul job settings from project-config | openstack/python-barbicanclient | barbican  | FAILED | NEW      | https://review.openstack.org/599658 | stable/pike   | Doug Hellmann |
> +-----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+
> > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Wed Oct 10 10:32:37 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 10 Oct 2018 10:32:37 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539078021.11166.5@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> Message-ID: <1539167549.7850.2@smtp.office365.com> Hi, Thanks for all the feedback. I feel the following consensus is forming: 1) remove the force flag in a new microversion. I've proposed a spec about that API change [1] 2) in the old microversions change the blind allocation copy to gather every resource from a nested source RPs too and try to allocate that from the destination root RP. In nested allocation cases putting this allocation to placement will fail and nova will fail the migration / evacuation. However it will succeed if the server does not need nested allocation neither on the source nor on the destination host (a.k.a the legacy case). Or if the server has nested allocation on the source host but does not need nested allocation on the destination host (for example the dest host does not have nested RP tree yet). I will start implementing #2) as part of the use-nested-allocation-candidate bp soon and will continue with #1) later in the cycle. Nothing is set in stone yet so feedback is still very appreciated. Cheers, gibi [1] https://review.openstack.org/#/c/609330/ On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer wrote: > Hi, > > Setup > ----- > > nested allocation: an allocation that contains resources from one or > more nested RPs. (if you have better term for this then please > suggest). > > If an instance has nested allocation it means that the compute, it > allocates from, has a nested RP tree. BUT if a compute has a nested > RP tree it does not automatically means that the instance, allocating > from that compute, has a nested allocation (e.g. bandwidth inventory > will be on a nested RPs but not every instance will require bandwidth) > > Afaiu, as soon as we have NUMA modelling in place the most trivial > servers will have nested allocations as CPU and MEMORY inverntory > will be moved to the nested NUMA RPs. But NUMA is still in the future. > > Sidenote: there is an edge case reported by bauzas when an instance > allocates _only_ from nested RPs. This was discussed on last Friday > and it resulted in a new patch[0] but I would like to keep that > discussion separate from this if possible. > > Sidenote: the current problem somewhat related to not just nested PRs > but to sharing RPs as well. However I'm not aiming to implement > sharing support in Nova right now so I also try to keep the sharing > disscussion separated if possible. > > There was already some discussion on the Monday's scheduler meeting > but I could not attend.
> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 > > > The meat > -------- > > Both live-migrate[1] and evacuate[2] has an optional force flag on > the nova REST API. The documentation says: "Force by not > verifying the provided destination host by the scheduler." > > Nova implements this statement by not calling the scheduler if > force=True BUT still try to manage allocations in placement. > > To have allocation on the destination host Nova blindly copies the > instance allocation from the source host to the destination host > during these operations. Nova can do that as 1) the whole allocation > is against a single RP (the compute RP) and 2) Nova knows both the > source compute RP and the destination compute RP. > > However as soon as we bring nested allocations into the picture that > blind copy will not be feasible. Possible cases > 0) The instance has non-nested allocation on the source and would > need non nested allocation on the destination. This works with blindy > copy today. > 1) The instance has a nested allocation on the source and would need > a nested allocation on the destination as well. > 2) The instance has a non-nested allocation on the source and would > need a nested allocation on the destination. > 3) The instance has a nested allocation on the source and would need > a non nested allocation on the destination. > > Nova cannot generate nested allocations easily without reimplementing > some of the placement allocation candidate (a_c) code. However I > don't like the idea of duplicating some of the a_c code in Nova. > > Nova cannot detect what kind of allocation (nested or non-nested) an > instance would need on the destination without calling placement a_c. > So knowing when to call placement is a chicken and egg problem. > > Possible solutions: > A) fail fast > ------------ > 0) Nova can detect that the source allocatioin is non-nested and try > the blindy copy and it will succeed. > 1) Nova can detect that the source allocaton is nested and fail the > operation > 2) Nova only sees a non nested source allocation. Even if the dest RP > tree is nested it does not mean that the allocation will be nested. > We cannot fail fast. Nova can try the blind copy and allocate every > resources from the root RP of the destination. If the instance > require nested allocation instead the claim will fail in placement. > So nova can fail the operation a bit later than in 1). > 3) Nova can detect that the source allocation is nested and fail the > operation. However and enhanced blind copy that tries to allocation > everything from the root RP on the destinaton would have worked. > > B) Guess when to ignore the force flag and call the scheduler > ------------------------------------------------------------- > 0) keep the blind copy as it works > 1) Nova detect that the source allocation is nested. Ignores the > force flag and calls the scheduler that will call placement a_c. Move > operation can succeed. > 2) Nova only sees a non nested source allocation so it will fall back > to blind copy and fails at the claim on destination. > 3) Nova detect that the source allocation is nested. Ignores the > force flag and calls the scheduler that will call placement a_c. Move > operation can succeed. > > This solution would be against the API doc that states nova does not > call the scheduler if the operation is forced. However in case of > force live-migration Nova already verifies the target host from > couple of perspective in [3]. 
> This solution is alreay proposed for live-migrate in [4] and for > evacuate in [5] so the complexity of the solution can be seen in the > reviews. > > C) Remove the force flag from the API in a new microversion > ----------------------------------------------------------- > 0)-3): all cases would call the scheduler to verify the target host > and generate the nested (or non-nested) allocation. > We would still need an agreed behavior (from A), B), D)) for the old > microversions as the todays code creates inconsistent allocation in > #1) and #3) by ignoring the resource from the nested RP. > > D) Do not manage allocations in placement for forced operation > -------------------------------------------------------------- > Force flag is considered as a last resort tool for the admin to move > VMs around. The API doc has a fat warning about the danger of it. So > Nova can simply ignore resource allocation task if force=True. Nova > would delete the source allocation and does not create any allocation > on the destination host. > > This is a simple but dangerous solution but it is what the force flag > is all about, move the server against all the built in safeties. (If > the admin needs the safeties she can set force=False and still > specify the destination host) > > I'm open to any suggestions. > > Cheers, > gibi > > [0] https://review.openstack.org/#/c/608298/ > [1] > https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action > [2] > https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action > [3] > https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 > [4] https://review.openstack.org/#/c/605785 > [5] https://review.openstack.org/#/c/606111 > From sylvain.bauza at gmail.com Wed Oct 10 11:57:32 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Wed, 10 Oct 2018 13:57:32 +0200 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539167549.7850.2@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> Message-ID: On Wed, Oct 10, 2018 at 12:32, Balázs Gibizer wrote: > Hi, > > Thanks for all the feedback. I feel the following consensus is forming: > > 1) remove the force flag in a new microversion. I've proposed a spec > about that API change [1] > > Thanks, will look at it. > 2) in the old microversions change the blind allocation copy to gather > every resource from a nested source RPs too and try to allocate that > from the destination root RP. In nested allocation cases putting this > allocation to placement will fail and nova will fail the migration / > evacuation. However it will succeed if the server does not need nested > allocation neither on the source nor on the destination host (a.k.a the > legacy case). Or if the server has nested allocation on the source host > but does not need nested allocation on the destination host (for > example the dest host does not have nested RP tree yet). > > Cool with me. > I will start implementing #2) as part of the > use-nested-allocation-candidate bp soon and will continue with #1) > later in the cycle. > > Nothing is set in stone yet so feedback is still very appreciated.
> > Cheers, > gibi > > [1] https://review.openstack.org/#/c/609330/ > > On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer > wrote: > > Hi, > > > > Setup > > ----- > > > > nested allocation: an allocation that contains resources from one or > > more nested RPs. (if you have better term for this then please > > suggest). > > > > If an instance has nested allocation it means that the compute, it > > allocates from, has a nested RP tree. BUT if a compute has a nested > > RP tree it does not automatically means that the instance, allocating > > from that compute, has a nested allocation (e.g. bandwidth inventory > > will be on a nested RPs but not every instance will require bandwidth) > > > > Afaiu, as soon as we have NUMA modelling in place the most trivial > > servers will have nested allocations as CPU and MEMORY inverntory > > will be moved to the nested NUMA RPs. But NUMA is still in the future. > > > > Sidenote: there is an edge case reported by bauzas when an instance > > allocates _only_ from nested RPs. This was discussed on last Friday > > and it resulted in a new patch[0] but I would like to keep that > > discussion separate from this if possible. > > > > Sidenote: the current problem somewhat related to not just nested PRs > > but to sharing RPs as well. However I'm not aiming to implement > > sharing support in Nova right now so I also try to keep the sharing > > disscussion separated if possible. > > > > There was already some discussion on the Monday's scheduler meeting > > but I could not attend. > > > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 > > > > > > The meat > > -------- > > > > Both live-migrate[1] and evacuate[2] has an optional force flag on > > the nova REST API. The documentation says: "Force by not > > verifying the provided destination host by the scheduler." > > > > Nova implements this statement by not calling the scheduler if > > force=True BUT still try to manage allocations in placement. > > > > To have allocation on the destination host Nova blindly copies the > > instance allocation from the source host to the destination host > > during these operations. Nova can do that as 1) the whole allocation > > is against a single RP (the compute RP) and 2) Nova knows both the > > source compute RP and the destination compute RP. > > > > However as soon as we bring nested allocations into the picture that > > blind copy will not be feasible. Possible cases > > 0) The instance has non-nested allocation on the source and would > > need non nested allocation on the destination. This works with blindy > > copy today. > > 1) The instance has a nested allocation on the source and would need > > a nested allocation on the destination as well. > > 2) The instance has a non-nested allocation on the source and would > > need a nested allocation on the destination. > > 3) The instance has a nested allocation on the source and would need > > a non nested allocation on the destination. > > > > Nova cannot generate nested allocations easily without reimplementing > > some of the placement allocation candidate (a_c) code. However I > > don't like the idea of duplicating some of the a_c code in Nova. > > > > Nova cannot detect what kind of allocation (nested or non-nested) an > > instance would need on the destination without calling placement a_c. > > So knowing when to call placement is a chicken and egg problem. 
> > > > Possible solutions: > > A) fail fast > > ------------ > > 0) Nova can detect that the source allocatioin is non-nested and try > > the blindy copy and it will succeed. > > 1) Nova can detect that the source allocaton is nested and fail the > > operation > > 2) Nova only sees a non nested source allocation. Even if the dest RP > > tree is nested it does not mean that the allocation will be nested. > > We cannot fail fast. Nova can try the blind copy and allocate every > > resources from the root RP of the destination. If the instance > > require nested allocation instead the claim will fail in placement. > > So nova can fail the operation a bit later than in 1). > > 3) Nova can detect that the source allocation is nested and fail the > > operation. However and enhanced blind copy that tries to allocation > > everything from the root RP on the destinaton would have worked. > > > > B) Guess when to ignore the force flag and call the scheduler > > ------------------------------------------------------------- > > 0) keep the blind copy as it works > > 1) Nova detect that the source allocation is nested. Ignores the > > force flag and calls the scheduler that will call placement a_c. Move > > operation can succeed. > > 2) Nova only sees a non nested source allocation so it will fall back > > to blind copy and fails at the claim on destination. > > 3) Nova detect that the source allocation is nested. Ignores the > > force flag and calls the scheduler that will call placement a_c. Move > > operation can succeed. > > > > This solution would be against the API doc that states nova does not > > call the scheduler if the operation is forced. However in case of > > force live-migration Nova already verifies the target host from > > couple of perspective in [3]. > > This solution is alreay proposed for live-migrate in [4] and for > > evacuate in [5] so the complexity of the solution can be seen in the > > reviews. > > > > C) Remove the force flag from the API in a new microversion > > ----------------------------------------------------------- > > 0)-3): all cases would call the scheduler to verify the target host > > and generate the nested (or non-nested) allocation. > > We would still need an agreed behavior (from A), B), D)) for the old > > microversions as the todays code creates inconsistent allocation in > > #1) and #3) by ignoring the resource from the nested RP. > > > > D) Do not manage allocations in placement for forced operation > > -------------------------------------------------------------- > > Force flag is considered as a last resort tool for the admin to move > > VMs around. The API doc has a fat warning about the danger of it. So > > Nova can simply ignore resource allocation task if force=True. Nova > > would delete the source allocation and does not create any allocation > > on the destination host. > > > > This is a simple but dangerous solution but it is what the force flag > > is all about, move the server against all the built in safeties. (If > > the admin needs the safeties she can set force=False and still > > specify the destination host) > > > > I'm open to any suggestions. 
> > > > Cheers, > > gibi > > > > [0] https://review.openstack.org/#/c/608298/ > > [1] > > > https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action > > [2] > > > https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action > > [3] > > > https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 > > [4] https://review.openstack.org/#/c/605785 > > [5] https://review.openstack.org/#/c/606111 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Oct 10 12:42:46 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Oct 2018 08:42:46 -0400 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <582207c4-9c92-a08a-ea11-5696115dc67f@fried.cc> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097453.11166.7@smtp.office365.com> <0a2a4cc5-b696-8087-1757-efd6ca958889@gmail.com> <582207c4-9c92-a08a-ea11-5696115dc67f@fried.cc> Message-ID: On 10/09/2018 05:01 PM, Eric Fried wrote: > On 10/09/2018 02:20 PM, Jay Pipes wrote: >> On 10/09/2018 11:04 AM, Balázs Gibizer wrote: >>> If you do the force flag removal in a nw microversion that also means >>> (at least to me) that you should not change the behavior of the force >>> flag in the old microversions. >> >> Agreed. >> >> Keep the old, buggy and unsafe behaviour for the old microversion and in >> a new microversion remove the --force flag entirely and always call GET >> /a_c, followed by a claim_resources() on the destination host. >> >> For the old microversion behaviour, continue to do the "blind copy" of >> allocations from the source compute node provider to the destination >> compute node provider. > > TBC, for nested/sharing source, we should consolidate all the resources > into a single allocation against the destination's root provider? No. If there's >1 provider in the allocation for the source, just fail. >> That "blind copy" will still fail if there isn't >> capacity for the new allocations on the destination host anyway, because >> the blind copy is just issuing a POST /allocations, and that code path >> still checks capacity on the target resource providers. > > What happens when the migration fails, either because of that POST > /allocations, or afterwards? Do we still have the old allocation around > to restore? Cause we can't re-figure it from the now-monolithic > destination allocation. Again, just hard fail if there's >1 provider in the allocation on the source. >> There isn't a >> code path in the placement API that allows a provider's inventory >> capacity to be exceeded by new allocations. 
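To spell out the shape of that check (an illustrative Python sketch only, not actual nova code; the dict layout follows placement's GET /allocations/{consumer_uuid} response):

    # Sketch: blind copy of a move allocation, hard failing on nested
    # (or sharing) sources instead of silently dropping child-RP resources.
    def blind_copy_allocation(source_allocs, dest_rp_uuid):
        providers = list(source_allocs['allocations'])
        if len(providers) > 1:
            # more than one provider on the source means nested or sharing;
            # there is no safe way to guess the destination tree, so fail
            raise ValueError('forced move with a nested source allocation '
                             'is not supported')
        # legacy case: a single (root) provider; reuse the same resource
        # classes and amounts against the destination root provider
        resources = source_allocs['allocations'][providers[0]]['resources']
        return {'allocations': {dest_rp_uuid: {'resources': resources}}}

The capacity check placement does on the resulting POST /allocations then stays as the final safety net on the destination.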
>> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From corey.bryant at canonical.com Wed Oct 10 12:45:39 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Oct 2018 08:45:39 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests Message-ID: Hi All, I'd like to enable py37 unit tests in the gate. == Background == I work on OpenStack packaging for Ubuntu. During the Rocky release (Ubuntu Cosmic) I tried to fix py37 bugs upstream as I came across them. There ended up being a lot of py37 issues and after a while, due to time constraints, I resorted to just opening bugs and disabling py37 unit tests that were failing in our package builds. Luckily enough, even though Cosmic ships with python3.6 and python3.7, python3.6 ended up being chosen as the default for Cosmic. == Defaulting to python3.7 == The next release of Ubuntu opens in just a few weeks. It will default to python3.7 and will not include python3.6. My hope is that if I can help enable py37 unit tests upstream now, we can get a wider view at fixing issues soon. == Enabling py37 unit tests == Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have reviews up to define the py37 zuul job and templates here: https://review.openstack.org/#/c/609066 I'd like to start submitting reviews to projects to enable openstack-python37-jobs (or variant) for projects that already have openstack-python36-jobs in their .zuul.yaml, zuul.yaml, .zuul.d/project.yaml. == Coinciding work == There is python3-first work going on now and I completely understand that this is going to cause more work for some projects. It seems that now is as good of a time as ever to catch up and test with a recent python3 version. I'm sure python3.8 and beyond will be here before we know it. Any thoughts or concerns? Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Oct 10 12:46:23 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Oct 2018 08:46:23 -0400 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539167549.7850.2@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> Message-ID: <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> On 10/10/2018 06:32 AM, Balázs Gibizer wrote: > Hi, > > Thanks for all the feedback. I feel the following consensus is forming: > > 1) remove the force flag in a new microversion. I've proposed a spec > about that API change [1] +1 > 2) in the old microversions change the blind allocation copy to gather > every resource from a nested source RPs too and try to allocate that > from the destination root RP. In nested allocation cases putting this > allocation to placement will fail and nova will fail the migration / > evacuation. 
However it will succeed if the server does not need nested > allocation neither on the source nor on the destination host (a.k.a the > legacy case). Or if the server has nested allocation on the source host > but does not need nested allocation on the destination host (for > example the dest host does not have nested RP tree yet). I disagree on this. I'd rather just do a simple check for >1 provider in the allocations on the source and if True, fail hard. The reverse (going from a non-nested source to a nested destination) will hard fail anyway on the destination because the POST /allocations won't work due to capacity exceeded (or failure to have any inventory at all for certain resource classes on the destination's root compute node). -jay > I will start implementing #2) as part of the > use-nested-allocation-candidate bp soon and will continue with #1) > later in the cycle. > > Nothing is set in stone yet so feedback is still very appreciated. > > Cheers, > gibi > > [1] https://review.openstack.org/#/c/609330/ > > On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer > wrote: >> Hi, >> >> Setup >> ----- >> >> nested allocation: an allocation that contains resources from one or >> more nested RPs. (if you have better term for this then please >> suggest). >> >> If an instance has nested allocation it means that the compute, it >> allocates from, has a nested RP tree. BUT if a compute has a nested >> RP tree it does not automatically means that the instance, allocating >> from that compute, has a nested allocation (e.g. bandwidth inventory >> will be on a nested RPs but not every instance will require bandwidth) >> >> Afaiu, as soon as we have NUMA modelling in place the most trivial >> servers will have nested allocations as CPU and MEMORY inverntory >> will be moved to the nested NUMA RPs. But NUMA is still in the future. >> >> Sidenote: there is an edge case reported by bauzas when an instance >> allocates _only_ from nested RPs. This was discussed on last Friday >> and it resulted in a new patch[0] but I would like to keep that >> discussion separate from this if possible. >> >> Sidenote: the current problem somewhat related to not just nested PRs >> but to sharing RPs as well. However I'm not aiming to implement >> sharing support in Nova right now so I also try to keep the sharing >> disscussion separated if possible. >> >> There was already some discussion on the Monday's scheduler meeting >> but I could not attend. >> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >> >> >> The meat >> -------- >> >> Both live-migrate[1] and evacuate[2] has an optional force flag on >> the nova REST API. The documentation says: "Force by not >> verifying the provided destination host by the scheduler." >> >> Nova implements this statement by not calling the scheduler if >> force=True BUT still try to manage allocations in placement. >> >> To have allocation on the destination host Nova blindly copies the >> instance allocation from the source host to the destination host >> during these operations. Nova can do that as 1) the whole allocation >> is against a single RP (the compute RP) and 2) Nova knows both the >> source compute RP and the destination compute RP. >> >> However as soon as we bring nested allocations into the picture that >> blind copy will not be feasible. Possible cases >> 0) The instance has non-nested allocation on the source and would >> need non nested allocation on the destination. This works with blindy >> copy today. 
>> 1) The instance has a nested allocation on the source and would need >> a nested allocation on the destination as well. >> 2) The instance has a non-nested allocation on the source and would >> need a nested allocation on the destination. >> 3) The instance has a nested allocation on the source and would need >> a non nested allocation on the destination. >> >> Nova cannot generate nested allocations easily without reimplementing >> some of the placement allocation candidate (a_c) code. However I >> don't like the idea of duplicating some of the a_c code in Nova. >> >> Nova cannot detect what kind of allocation (nested or non-nested) an >> instance would need on the destination without calling placement a_c. >> So knowing when to call placement is a chicken and egg problem. >> >> Possible solutions: >> A) fail fast >> ------------ >> 0) Nova can detect that the source allocatioin is non-nested and try >> the blindy copy and it will succeed. >> 1) Nova can detect that the source allocaton is nested and fail the >> operation >> 2) Nova only sees a non nested source allocation. Even if the dest RP >> tree is nested it does not mean that the allocation will be nested. >> We cannot fail fast. Nova can try the blind copy and allocate every >> resources from the root RP of the destination. If the instance >> require nested allocation instead the claim will fail in placement. >> So nova can fail the operation a bit later than in 1). >> 3) Nova can detect that the source allocation is nested and fail the >> operation. However and enhanced blind copy that tries to allocation >> everything from the root RP on the destinaton would have worked. >> >> B) Guess when to ignore the force flag and call the scheduler >> ------------------------------------------------------------- >> 0) keep the blind copy as it works >> 1) Nova detect that the source allocation is nested. Ignores the >> force flag and calls the scheduler that will call placement a_c. Move >> operation can succeed. >> 2) Nova only sees a non nested source allocation so it will fall back >> to blind copy and fails at the claim on destination. >> 3) Nova detect that the source allocation is nested. Ignores the >> force flag and calls the scheduler that will call placement a_c. Move >> operation can succeed. >> >> This solution would be against the API doc that states nova does not >> call the scheduler if the operation is forced. However in case of >> force live-migration Nova already verifies the target host from >> couple of perspective in [3]. >> This solution is alreay proposed for live-migrate in [4] and for >> evacuate in [5] so the complexity of the solution can be seen in the >> reviews. >> >> C) Remove the force flag from the API in a new microversion >> ----------------------------------------------------------- >> 0)-3): all cases would call the scheduler to verify the target host >> and generate the nested (or non-nested) allocation. >> We would still need an agreed behavior (from A), B), D)) for the old >> microversions as the todays code creates inconsistent allocation in >> #1) and #3) by ignoring the resource from the nested RP. >> >> D) Do not manage allocations in placement for forced operation >> -------------------------------------------------------------- >> Force flag is considered as a last resort tool for the admin to move >> VMs around. The API doc has a fat warning about the danger of it. So >> Nova can simply ignore resource allocation task if force=True. 
Nova >> would delete the source allocation and does not create any allocation >> on the destination host. >> >> This is a simple but dangerous solution but it is what the force flag >> is all about, move the server against all the built in safeties. (If >> the admin needs the safeties she can set force=False and still >> specify the destination host) >> >> I'm open to any suggestions. >> >> Cheers, >> gibi >> >> [0] https://review.openstack.org/#/c/608298/ >> [1] >> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >> [2] >> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >> [3] >> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >> [4] https://review.openstack.org/#/c/605785 >> [5] https://review.openstack.org/#/c/606111 >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jim at jimrollenhagen.com Wed Oct 10 12:58:55 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 10 Oct 2018 08:58:55 -0400 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <16397789-98b5-a011-0367-dd5023260870@redhat.com> References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> <16397789-98b5-a011-0367-dd5023260870@redhat.com> Message-ID: On Tue, Oct 9, 2018 at 10:25 PM Gilles Dubreuil wrote: > > > On 09/10/18 23:58, Jeremy Stanley wrote: > > On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote: > > [...] > >> It seems to me that a major goal of openstacksdk is to hide differences > >> between clouds from the user. If the user is meant to use a GraphQL > library > >> themselves, we lose this and the user needs to figure it out themselves. > >> Did I understand that correctly? > > This is especially useful where the SDK implements business logic > > for common operations like "if the user requested A and the cloud > > supports features B+C+D then use those to fulfil the request, > > otherwise fall back to using features E+F". > > > > The features offered to the user don't have to change, it's just a > different architecture. > > The user doesn't have to deal with a GraphQL library, only the client > applications (consuming OpenStack APIs). > And there are also UI tools such as GraphiQL which allow to interact > directly with GraphQL servers. > Right, this comes back to what I said earlier: > That said, it seems like using this in a client like OpenStackSDK would get messy quickly. Instead of asking for which versions are supported, you'd have to fetch the schema, map it to actual features somehow, and adjust queries based on this info. > > I guess there might be a middleground where we could fetch the REST API version, and know from that what GraphQL queries can be made. This isn't unsolvable, but it does sound like quite a bit of work. This isn't to say "let's not do graphql at all", but it's important to understand the work involved. FWIW, I originally mentioned the SDK (as opposed to the clients speaking graphql directly), as the client applications are currently transitioning to use openstacksdk instead of their own API calls. 
// jim > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Wed Oct 10 13:17:38 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 10 Oct 2018 13:17:38 +0000 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> Message-ID: <1539177453.13734.0@smtp.office365.com> On Wed, Oct 10, 2018 at 2:46 PM, Jay Pipes wrote: > On 10/10/2018 06:32 AM, Balázs Gibizer wrote: >> Hi, >> >> Thanks for all the feedback. I feel the following consensus is >> forming: >> >> 1) remove the force flag in a new microversion. I've proposed a spec >> about that API change [1] > > +1 > >> 2) in the old microversions change the blind allocation copy to >> gather >> every resource from a nested source RPs too and try to allocate that >> from the destination root RP. In nested allocation cases putting this >> allocation to placement will fail and nova will fail the migration / >> evacuation. However it will succeed if the server does not need >> nested >> allocation neither on the source nor on the destination host (a.k.a >> the >> legacy case). Or if the server has nested allocation on the source >> host >> but does not need nested allocation on the destination host (for >> example the dest host does not have nested RP tree yet). > > I disagree on this. I'd rather just do a simple check for >1 provider > in the allocations on the source and if True, fail hard. > > The reverse (going from a non-nested source to a nested destination) > will hard fail anyway on the destination because the POST > /allocations won't work due to capacity exceeded (or failure to have > any inventory at all for certain resource classes on the > destination's root compute node). If we hard fail on >1 provider in an allocation on the source then we lose the (not really common) case when the source allocation is nested but the destination node does not have a nested RP tree yet and it would support the summarized allocation on the root RP. But sure simply failing would be a simpler solution. gibi > > -jay > >> I will start implementing #2) as part of the >> use-nested-allocation-candidate bp soon and will continue with #1) >> later in the cycle. >> >> Nothing is set in stone yet so feedback is still very appreciated. >> >> Cheers, >> gibi >> >> [1] https://review.openstack.org/#/c/609330/ >> >> On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer >> wrote: >>> Hi, >>> >>> Setup >>> ----- >>> >>> nested allocation: an allocation that contains resources from one or >>> more nested RPs. (if you have better term for this then please >>> suggest). >>> >>> If an instance has nested allocation it means that the compute, it >>> allocates from, has a nested RP tree. BUT if a compute has a nested >>> RP tree it does not automatically means that the instance, >>> allocating >>> from that compute, has a nested allocation (e.g. 
bandwidth inventory >>> will be on nested RPs but not every instance will require >>> bandwidth) >>> >>> Afaiu, as soon as we have NUMA modelling in place the most trivial >>> servers will have nested allocations as CPU and MEMORY inventory >>> will be moved to the nested NUMA RPs. But NUMA is still in the >>> future. >>> >>> Sidenote: there is an edge case reported by bauzas when an instance >>> allocates _only_ from nested RPs. This was discussed last Friday >>> and it resulted in a new patch[0] but I would like to keep that >>> discussion separate from this if possible. >>> >>> Sidenote: the current problem is somewhat related not just to nested >>> RPs >>> but to sharing RPs as well. However I'm not aiming to implement >>> sharing support in Nova right now so I also try to keep the sharing >>> discussion separated if possible. >>> >>> There was already some discussion at Monday's scheduler meeting >>> but I could not attend. >>> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20 >>> >>> >>> The meat >>> -------- >>> >>> Both live-migrate[1] and evacuate[2] have an optional force flag on >>> the nova REST API. The documentation says: "Force by >>> not >>> verifying the provided destination host by the scheduler." >>> >>> Nova implements this statement by not calling the scheduler if >>> force=True BUT still tries to manage allocations in placement. >>> >>> To have an allocation on the destination host Nova blindly copies the >>> instance allocation from the source host to the destination host >>> during these operations. Nova can do that as 1) the whole allocation >>> is against a single RP (the compute RP) and 2) Nova knows both the >>> source compute RP and the destination compute RP. >>> >>> However as soon as we bring nested allocations into the picture that >>> blind copy will not be feasible. Possible cases: >>> 0) The instance has a non-nested allocation on the source and would >>> need a non-nested allocation on the destination. This works with >>> the blind >>> copy today. >>> 1) The instance has a nested allocation on the source and would need >>> a nested allocation on the destination as well. >>> 2) The instance has a non-nested allocation on the source and would >>> need a nested allocation on the destination. >>> 3) The instance has a nested allocation on the source and would need >>> a non-nested allocation on the destination. >>> >>> Nova cannot generate nested allocations easily without >>> reimplementing >>> some of the placement allocation candidate (a_c) code. However I >>> don't like the idea of duplicating some of the a_c code in Nova. >>> >>> Nova cannot detect what kind of allocation (nested or non-nested) an >>> instance would need on the destination without calling placement >>> a_c. >>> So knowing when to call placement is a chicken-and-egg problem. >>> >>> Possible solutions: >>> A) fail fast >>> ------------ >>> 0) Nova can detect that the source allocation is non-nested, try >>> the blind copy, and it will succeed. >>> 1) Nova can detect that the source allocation is nested and fail the >>> operation. >>> 2) Nova only sees a non-nested source allocation. Even if the dest >>> RP >>> tree is nested it does not mean that the allocation will be nested. >>> We cannot fail fast. Nova can try the blind copy and allocate every >>> resource from the root RP of the destination. If the instance >>> requires a nested allocation instead, the claim will fail in placement. 
>>> So nova can fail the operation a bit later than in 1). >>> 3) Nova can detect that the source allocation is nested and fail the >>> operation. However, an enhanced blind copy that tries to allocate >>> everything from the root RP on the destination would have worked. >>> >>> B) Guess when to ignore the force flag and call the scheduler >>> ------------------------------------------------------------- >>> 0) keep the blind copy as it works >>> 1) Nova detects that the source allocation is nested. It ignores the >>> force flag and calls the scheduler, which will call placement a_c. >>> The move >>> operation can succeed. >>> 2) Nova only sees a non-nested source allocation so it will fall >>> back >>> to the blind copy and fail at the claim on the destination. >>> 3) Nova detects that the source allocation is nested. It ignores the >>> force flag and calls the scheduler, which will call placement a_c. >>> The move >>> operation can succeed. >>> >>> This solution would be against the API doc that states nova does not >>> call the scheduler if the operation is forced. However, in the case of >>> force live-migration Nova already verifies the target host from a >>> couple of perspectives in [3]. >>> This solution is already proposed for live-migrate in [4] and for >>> evacuate in [5] so the complexity of the solution can be seen in the >>> reviews. >>> >>> C) Remove the force flag from the API in a new microversion >>> ----------------------------------------------------------- >>> 0)-3): all cases would call the scheduler to verify the target host >>> and generate the nested (or non-nested) allocation. >>> We would still need an agreed behavior (from A), B), D)) for the old >>> microversions as today's code creates inconsistent allocations in >>> #1) and #3) by ignoring the resources from the nested RP. >>> >>> D) Do not manage allocations in placement for forced operation >>> -------------------------------------------------------------- >>> The force flag is considered a last-resort tool for the admin to move >>> VMs around. The API doc has a fat warning about the danger of it. So >>> Nova can simply skip the resource allocation task if force=True. Nova >>> would delete the source allocation and does not create any >>> allocation >>> on the destination host. >>> >>> This is a simple but dangerous solution, but that is what the force >>> flag >>> is all about: move the server against all the built-in safeties. (If >>> the admin needs the safeties she can set force=False and still >>> specify the destination host.) >>> >>> I'm open to any suggestions. 
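>>> For illustration, a rough sketch of the two allocation shapes as
>>> placement returns them from GET /allocations/{consumer_uuid}, plus
>>> the trivial ">1 provider" detection for the fail-fast option (the RP
>>> keys and the bandwidth resource class are made up for the example):
>>>
>>>   # non-nested: every resource comes from the single root compute RP
>>>   non_nested = {"allocations": {
>>>       "ROOT_RP_UUID": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}}}}
>>>
>>>   # nested: part of the resources come from a nested RP
>>>   nested = {"allocations": {
>>>       "ROOT_RP_UUID": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}},
>>>       "NESTED_RP_UUID": {"resources": {"CUSTOM_NET_BW_KB": 1000}}}}
>>>
>>>   def is_nested(body):
>>>       # note: misses the edge case of an instance allocating
>>>       # _only_ from a single nested RP, see the sidenote above
>>>       return len(body["allocations"]) > 1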
>>> >>> Cheers, >>> gibi >>> >>> [0] https://review.openstack.org/#/c/608298/ >>> [1] >>> https://developer.openstack.org/api-ref/compute/#live-migrate-server-os-migratelive-action >>> [2] >>> https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action >>> [3] >>> https://github.com/openstack/nova/blob/c5a7002bd571379818c0108296041d12bc171728/nova/conductor/tasks/live_migrate.py#L97 >>> [4] https://review.openstack.org/#/c/605785 >>> [5] https://review.openstack.org/#/c/606111 >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Wed Oct 10 13:18:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 13:18:49 +0000 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <16397789-98b5-a011-0367-dd5023260870@redhat.com> References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> <16397789-98b5-a011-0367-dd5023260870@redhat.com> Message-ID: <20181010131849.ey5yf3zxjtevtnae@yuggoth.org> On 2018-10-10 13:24:28 +1100 (+1100), Gilles Dubreuil wrote: > On 09/10/18 23:58, Jeremy Stanley wrote: > > On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote: > > [...] > > > It seems to me that a major goal of openstacksdk is to hide > > > differences between clouds from the user. If the user is meant > > > to use a GraphQL library themselves, we lose this and the user > > > needs to figure it out themselves. Did I understand that > > > correctly? > > This is especially useful where the SDK implements business > > logic for common operations like "if the user requested A and > > the cloud supports features B+C+D then use those to fulfil the > > request, otherwise fall back to using features E+F". > > The features offered to the user don't have to change, it's just a > different architecture. > > The user doesn't have to deal with a GraphQL library, only the > client applications (consuming OpenStack APIs). And there are also > UI tools such as GraphiQL which allow to interact directly with > GraphQL servers. My point was simply that SDKs provide more than a simple translation of network API calls and feature discovery. There can also be rather a lot of "business logic" orchestrating multiple primitive API calls to reach some more complex outcome. The services don't want to embed this orchestrated business logic themselves, and it makes little sense to replicate the same algorithms in every single application which wants to make use of such composite functionality. There are common actions an application might wish to take which involve speaking to multiple APIs for different services to make specific calls in a particular order, perhaps feeding the results of one into the next. Can you explain how GraphQL eliminates the above reasons for an SDK? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Oct 10 13:26:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 13:26:14 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: <20181010132614.ie5uchdp7aqg4wn4@yuggoth.org> On 2018-10-10 08:45:39 -0400 (-0400), Corey Bryant wrote: [...] > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter [...] Thanks for the heads up! Last time I looked it was still a pre-3.7.0 beta package, but looks like that has finally been updated to a proper release of the interpreter for Bionic in the last few weeks? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aj at suse.com Wed Oct 10 13:26:49 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 10 Oct 2018 15:26:49 +0200 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> On 10/10/2018 14.45, Corey Bryant wrote: > [...] > == Enabling py37 unit tests == > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have reviews > up to define the py37 zuul job and templates here: > https://review.openstack.org/#/c/609066 > > I'd like to start submitting reviews to projects to enable > openstack-python37-jobs (or variant) for projects that already have > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > .zuul.d/project.yaml. We have projects testing python 3.5 and 3.6 already. Adding 3.7 to it is a lot of wasted VMs. Can we limit testing and not test all three, please? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From corey.bryant at canonical.com Wed Oct 10 13:38:14 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Oct 2018 09:38:14 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20181010132614.ie5uchdp7aqg4wn4@yuggoth.org> References: <20181010132614.ie5uchdp7aqg4wn4@yuggoth.org> Message-ID: On Wed, Oct 10, 2018 at 9:27 AM Jeremy Stanley wrote: > On 2018-10-10 08:45:39 -0400 (-0400), Corey Bryant wrote: > [...] > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter > [...] > > Thanks for the heads up! Last time I looked it was still a pre-3.7.0 > beta package, but looks like that has finally been updated to a > proper release of the interpreter for Bionic in the last few weeks? > Yes, it was recently updated. It was originally a sync from Debian and we got what we got in Bionic but the foundations folks were kind enough to update it for us. This is a universe package for Bionic so it's not officially supported but it should be good enough for unit testing. Another option could be to use a non-LTS image to use a supported release. Corey -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From corey.bryant at canonical.com Wed Oct 10 13:42:28 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Oct 2018 09:42:28 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> Message-ID: On Wed, Oct 10, 2018 at 9:26 AM Andreas Jaeger wrote: > On 10/10/2018 14.45, Corey Bryant wrote: > > [...] > > == Enabling py37 unit tests == > > > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have reviews > > up to define the py37 zuul job and templates here: > > https://review.openstack.org/#/c/609066 > > > > I'd like to start submitting reviews to projects to enable > > openstack-python37-jobs (or variant) for projects that already have > > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > > .zuul.d/project.yaml. > > We have projects testing python 3.5 and 3.6 already. Adding 3.7 to it is > a lot of wasted VMs. Can we limit testing and not test all three, please? > > Well, I wouldn't call any of them wasted if they're testing against a supported Python version. Corey Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Oct 10 13:45:51 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 13:45:51 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <20181010132614.ie5uchdp7aqg4wn4@yuggoth.org> Message-ID: <20181010134551.srvqaygqqlugvvvy@yuggoth.org> On 2018-10-10 09:38:14 -0400 (-0400), Corey Bryant wrote: [...] > Another option could be to use a non-LTS image to use a supported > release. Let's avoid creating additional images unless there is a strong reason (every additional image means more load on our image builders, more space consumed in our providers, et cetera). Bionic seems like it will serve fine for this purpose now that it's got more than a pre-release of 3.7. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaypipes at gmail.com Wed Oct 10 13:55:53 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Oct 2018 09:55:53 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> Message-ID: <16b7a86a-8518-5c07-d059-a2be49d32ae5@gmail.com> On 10/10/2018 09:42 AM, Corey Bryant wrote: > On Wed, Oct 10, 2018 at 9:26 AM Andreas Jaeger > wrote: > > On 10/10/2018 14.45, Corey Bryant wrote: > > [...] > > == Enabling py37 unit tests == > > > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have > reviews > > up to define the py37 zuul job and templates here: > > https://review.openstack.org/#/c/609066 > > > > I'd like to start submitting reviews to projects to enable > > openstack-python37-jobs (or variant) for projects that already have > > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > > .zuul.d/project.yaml. > > We have projects testing python 3.5 and 3.6 already. Adding 3.7 to > it is > a lot of wasted VMs. Can we limit testing and not test all three, > please? 
> > Well, I wouldn't call any of them wasted if they're testing against a > supported Python version. ++ -jay From mriedemos at gmail.com Wed Oct 10 14:07:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 10 Oct 2018 09:07:58 -0500 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539097728.11166.8@smtp.office365.com> References: <1539078021.11166.5@smtp.office365.com> <0798743f-d0f0-5d33-ca91-886e2d080d92@fried.cc> <1539097728.11166.8@smtp.office365.com> Message-ID: <9d8fb467-71d9-74ce-2d55-5bbc0137a26f@gmail.com> On 10/9/2018 10:08 AM, Balázs Gibizer wrote: > Question for you as well: if we remove (or change) the force flag in a > new microversion then how should the old microversions behave when > nested allocations would be required? Fail fast if we can detect we have nested. We don't support forcing those types of servers. -- Thanks, Matt From aj at suse.com Wed Oct 10 14:09:21 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 10 Oct 2018 16:09:21 +0200 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> Message-ID: On 10/10/2018 15.42, Corey Bryant wrote: > > > On Wed, Oct 10, 2018 at 9:26 AM Andreas Jaeger > > wrote: > > On 10/10/2018 14.45, Corey Bryant wrote: > > [...] > > == Enabling py37 unit tests == > > > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have > reviews > > up to define the py37 zuul job and templates here: > > https://review.openstack.org/#/c/609066 > > > > I'd like to start submitting reviews to projects to enable > > openstack-python37-jobs (or variant) for projects that already have > > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > > .zuul.d/project.yaml. > > We have projects testing python 3.5 and 3.6 already. Adding 3.7 to > it is > a lot of wasted VMs. Can we limit testing and not test all three, > please? > > > Well, I wouldn't call any of them wasted if they're testing against a > supported Python version. What I mean is that we also run into situations where we have a large backlog of CI jobs since we have too many changes and jobs in flight. So, I'm asking whether there is a good way to avoid duplicating all jobs to run on all three interpreters. Do we really need testing of all three versions? Or is testing with a subset a manageable risk? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mriedemos at gmail.com Wed Oct 10 14:14:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 10 Oct 2018 09:14:11 -0500 Subject: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> Message-ID: <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com> On 10/10/2018 7:46 AM, Jay Pipes wrote: >> 2) in the old microversions change the blind allocation copy to gather >> every resource from nested source RPs too and try to allocate that >> from the destination root RP. In nested allocation cases putting this >> allocation to placement will fail and nova will fail the migration / >> evacuation. 
However it will succeed if the server does not need a nested >> allocation on either the source or the destination host (a.k.a. the >> legacy case). Or if the server has a nested allocation on the source host >> but does not need a nested allocation on the destination host (for >> example the dest host does not have a nested RP tree yet). > > I disagree on this. I'd rather just do a simple check for >1 provider in > the allocations on the source and if True, fail hard. > > The reverse (going from a non-nested source to a nested destination) > will hard fail anyway on the destination because the POST /allocations > won't work due to capacity exceeded (or failure to have any inventory at > all for certain resource classes on the destination's root compute node). I agree with Jay here. If we know the source has allocations on >1 provider, just fail fast. Why even walk the tree and try to claim those against the destination? The nested providers aren't going to be the same UUIDs on the destination, *and* trying to squash all of the source nested allocations into the single destination root provider and hope it works is super hacky and I don't think we should attempt that. Just fail if being forced and nested allocations exist on the source. -- Thanks, Matt From corey.bryant at canonical.com Wed Oct 10 14:16:16 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Oct 2018 10:16:16 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> Message-ID: On Wed, Oct 10, 2018 at 10:09 AM Andreas Jaeger wrote: > On 10/10/2018 15.42, Corey Bryant wrote: > > > > > > On Wed, Oct 10, 2018 at 9:26 AM Andreas Jaeger > > wrote: > > > > On 10/10/2018 14.45, Corey Bryant wrote: > > > [...] > > > == Enabling py37 unit tests == > > > > > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have > > reviews > > > up to define the py37 zuul job and templates here: > > > https://review.openstack.org/#/c/609066 > > > > > > I'd like to start submitting reviews to projects to enable > > > openstack-python37-jobs (or variant) for projects that already have > > > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > > > .zuul.d/project.yaml. > > > > We have projects testing python 3.5 and 3.6 already. Adding 3.7 to > > it is > > a lot of wasted VMs. Can we limit testing and not test all three, > > please? > > > > > > Well, I wouldn't call any of them wasted if they're testing against a > > supported Python version. > > > What I mean is that we also run into situations where we have a large > backlog of CI jobs since we have too many changes and jobs in flight. > > So, I'm asking whether there is a good way to avoid duplicating all jobs > to run on all three interpreters. Do we really need testing of all three > versions? Or is testing with a subset a manageable risk? > Fair enough. I'm probably not the right person to answer so perhaps someone else can chime in. One thing worth pointing out is that it seems the jump from 3.5 to 3.6 wasn't nearly as painful as the jump from 3.6 to 3.7, at least in my experience. Corey > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed Oct 10 14:17:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 14:17:41 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> Message-ID: <20181010141740.psu34xvv33kay7ll@yuggoth.org> On 2018-10-10 16:09:21 +0200 (+0200), Andreas Jaeger wrote: [...] > So, I'm asking whether there is a good way to avoid duplicating all > jobs to run on all three interpreters. Do we really need testing > of all three versions? Or is testing with a subset a manageable > risk? OpenStack projects are hopefully switching to testing on Bionic instead of Xenial during the Stein cycle, so will stop testing with Python 3.5 on master when that happens (since Bionic provides 3.6/3.7 and no 3.5). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corey.bryant at canonical.com Wed Oct 10 14:29:17 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Oct 2018 10:29:17 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20181010141740.psu34xvv33kay7ll@yuggoth.org> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010141740.psu34xvv33kay7ll@yuggoth.org> Message-ID: On Wed, Oct 10, 2018 at 10:18 AM Jeremy Stanley wrote: > On 2018-10-10 16:09:21 +0200 (+0200), Andreas Jaeger wrote: > [...] > > So, I'm asking whether there is a good way to avoid duplicating all > > jobs to run on all three interpreters. Do we really need testing > > of all three versions? Or is testing with a subset a manageable > > risk? > > OpenStack projects are hopefully switching to testing on Bionic > instead of Xenial during the Stein cycle, so will stop testing with > Python 3.5 on master when that happens (since Bionic provides > 3.6/3.7 and no 3.5). > That would be ideal, in which case dropping py35 and adding py37 for master in Stein shouldn't require any more resources. Corey -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Wed Oct 10 14:52:08 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Wed, 10 Oct 2018 17:52:08 +0300 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi Won, On Wed, Oct 10, 2018 at 11:58 AM Won wrote: > > My Prometheus version is 2.3.2 and my Alertmanager version is 0.15.2, and I > attached files (vitrage-collector and vitrage-graph logs, the apache log, > prometheus.yml, alertmanager.yml, the alarm rule file, etc.). > I think the reason a resolved alarm does not disappear is a timestamp > problem with the alarm. > > -gray alarm info > severity:PAGE > vitrage id: c6a94386-3879-499e-9da0-2a5b9d3294b8 , > e2c5eae9-dba9-4f64-960b-b964f1c01dfe , 3d3c903e-fe09-4a6f-941f-1a2adb09feca > , 8c6e7906-9e66-404f-967f-40037a6afc83 , > e291662b-115d-42b5-8863-da8243dd06b4 , 8abd2a2f-c830-453c-a9d0-55db2bf72d46 > ---------- > > The alarms marked with the blue circle are already resolved. However, they > do not disappear from the entity graph and alarm list. 
> There were seven more gray alarms in the active alarms list in the top > screenshot, just like in the entity graph. They disappeared only after > deleting the gray alarms from the > vitrage-alarms table in the DB or changing the end timestamp value to an > earlier time than the current time. > I checked the files that you sent, and it appears that the connection between Prometheus and Vitrage works well. I see in the vitrage-graph log that Prometheus notified both on alert firing and on alert resolved statuses. I still don't understand why the alarms were not removed from Vitrage, though. Can you please send me the output of 'vitrage topology show' CLI command? Also, did you happen to restart vitrage-graph or vitrage-collector during your tests? > In the log, it seems that the first problem is that the timestamp value > from Vitrage comes out as 2001-01-01, even though the starting value in the > Prometheus alarm information has the correct value. > When the alarm is resolved, the end timestamp value is not updated, so the > alarm does not disappear from the alarm list. > Can you please show me where you saw the 2001 timestamp? I didn't find it in the log. > The second problem is that even if the timestamp problem is solved, the > entity graph problem will not be solved. Gray alarm information is not in > the vitrage-collector log but in the vitrage-graph and apache logs. > I want to know how to forcefully delete an entity from the vitrage graph. > You shouldn't do it :-) There is no API for deleting entities, and messing with the database may cause unexpected results. The only thing that you can safely do is to stop all Vitrage services, execute the 'vitrage-purge-data' command, and start the services again. This will cause rebuilding of the entity graph. > Regarding the multiple nodes, I mean 1 control node (pc1) & 1 compute > node (pc2). So, one OpenStack. > > The test VM in the picture is an instance on the compute node that has already > been deleted. I waited for hours and checked nova.conf but it was not > removed. > This did not occur in the Queens version; in the Rocky version, in a > multi-node environment, there seems to be a bug in VM creation. > The same situation occurred in multi-node environments that were > configured with different PCs. > Let me make sure I understand the problem. When you create a new vm in Nova, does it immediately appear in the entity graph? When you delete a vm, does it remain? Does it remain in a multi-node environment but get deleted in a single-node environment? Br, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Wed Oct 10 14:52:48 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 10 Oct 2018 16:52:48 +0200 Subject: [openstack-dev] [puppet][tripleo][all] Zuul job backlog Message-ID: <6e6c023b-a320-4b46-2c88-33c008589a17@redhat.com> Wesley Hayutin writes: [snip] > The TripleO project has created a single node container based composable > OpenStack deployment [2]. It is the project's intention to replace most of > the TripleO upstream jobs with the Standalone deployment. We would like to > reduce our multi-node usage to a total of two or three multinode jobs to > handle a basic overcloud deployment, updates and upgrades[a]. Currently in > master we are relying on multiple multi-node scenario jobs to test many of > the OpenStack services in a single job. Our intention is to move these > multinode scenario jobs to single node job(s) that test a smaller subset > of services. The goal of this would be to target the specific areas of the > TripleO code base that affect these services and only run those there. This > would replace the existing 2-3 hour two node job(s) with single node job(s) > for specific services that complete in about half the time. This 
The goal of this would be target the specific areas of the > TripleO code base that affect these services and only run those there. This > would replace the existing 2-3 hour two node job(s) with single node job(s) > for specific services that completes in about half the time. This > unfortunately will reduce the overall coverage upstream but still allows us > a basic smoke test of the supported OpenStack services and their deployment > upstream. > > Ideally projects other than TripleO would make use of the Standalone > deployment to test their particular service with containers, upgrades or > for various other reasons. Additional projects using this deployment would > help ensure bugs are found quickly and resolved providing additional > resilience to the upstream gate jobs. The TripleO team will begin review to > scope out and create estimates for the above work starting on October 18 > 2018. One should expect to see updates on our progress posted to the > list. Below are some details on the proposed changes. > > Thank you all for your time and patience! > > Performance improvements: > * Standalone jobs use half the nodes of multinode jobs > * The standalone job has an average run time of 60-80 minutes, about half > the run time of our multinode jobs > > Base TripleO Job Definitions (Stein onwards): > Multi-node jobs > * containers-multinode > * containers-multinode-updates > * containers-multinode-upgrades > Single node jobs > * undercloud > * undercloud-upgrade > * standalone > > Jobs to be removed (Stein onwards): > Multi-node jobs[b] > * scenario001-multinode > * scenario002-multinode > * scenario003-multinode > * scenario004-multinode > * scenario006-mulitinode > * scenario007-multinode > * scenario008-multinode > * scenario009-multinode > * scenario010-multinode > * scenario011-multinode > > Jobs that may need to be created to cover additional services[4] (Stein > onwards): > Single node jobs[c] > * standalone-barbican > * standalone-ceph[d] > * standalone-designate > * standalone-manila > * standalone-octavia > * standalone-openshift > * standalone-sahara > * standalone-telemetry > > [1] https://gist.github.com/notmyname/8bf3dbcb7195250eb76f2a1a8996fb00 > [2] > https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html > [3] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134867.html > [4] > https://github.com/openstack/tripleo-heat-templates/blob/master/README.rst#service-testing-matrix I wanted to follow-up that original thread [0] wrt running a default standalone tripleo deployment integration job for openstack-puppet modules to see if it breaks tripleo. There is a topic [1] to review please. The issue (IMO) is that the default standalone setup deploys a fixed set of openstack services, some are disabled [2] and some go by default [3], which may be either an excessive or lacking coverage (like Ironic) for some of the puppet openstack modules. My take is it only makes sense to deploy that standalone setup for the puppet-openstack-integration perhaps (and tripleo itself obviously, as that involves a majority of openstack-puppet modules), but not for each particular puppet-foo module. Why wasting CI resources for that default job clonned for the modules and see, for example, puppet-keystone (and all other modules) standalone jobs failing because of an unrelated puppet-nova's libvirt issue [4]? That's pointless and inefficient. And to cover Ironic deployments, we'd have to keep the undercloud job as a separate. 
Although that probably is acceptable as a first iteration... But ideally I'd like to see that standalone job composable and adapted to deploy only the wanted components for puppet-foo modules under check/gate. And it perhaps also makes sense to disable tempest for the standalone job(s), as it is already covered by neighbouring jobs. [0] https://goo.gl/UFNtcC [1] https://goo.gl/dPkgCH [2] https://goo.gl/eZ1wuC [3] https://goo.gl/H8ZnAJ [4] https://review.openstack.org/609289 -- Best regards, Bogdan Dobrelya, Irc #bogdando From Kevin.Fox at pnnl.gov Wed Oct 10 15:49:56 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 10 Oct 2018 15:49:56 +0000 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> Sorry. Couldn't quite think of the name. I was meaning, openstack project tags. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, October 09, 2018 12:22 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints On 10/09/2018 03:10 PM, Fox, Kevin M wrote: > Oh, this does raise an interesting question... Should such information be reported by the projects up to users through labels? Something like, "percona_multimaster=safe" Its really difficult for folks to know which projects can and can not be used that way currently. Are you referring to k8s labels/selectors? or are you referring to project tags (you know, part of that whole Big Tent thing...)? -jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dms at danplanet.com Wed Oct 10 15:58:20 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 10 Oct 2018 08:58:20 -0700 Subject: [openstack-dev] [Openstack-operators] [nova] Supporting force live-migrate and force evacuate with nested allocations In-Reply-To: <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com> (Matt Riedemann's message of "Wed, 10 Oct 2018 09:14:11 -0500") References: <1539078021.11166.5@smtp.office365.com> <1539167549.7850.2@smtp.office365.com> <3757e85e-87f2-662c-8bbc-d24ed4b88299@gmail.com> <10626f80-278b-2cce-bee1-76a738e482c9@gmail.com> Message-ID: >> I disagree on this. I'd rather just do a simple check for >1 >> provider in the allocations on the source and if True, fail hard. >> >> The reverse (going from a non-nested source to a nested destination) >> will hard fail anyway on the destination because the POST >> /allocations won't work due to capacity exceeded (or failure to have >> any inventory at all for certain resource classes on the >> destination's root compute node). > > I agree with Jay here. 
If we know the source has allocations on >1 > provider, just fail fast, why even walk the tree and try to claim > those against the destination - the nested providers aren't going to > be the same UUIDs on the destination, *and* trying to squash all of > the source nested allocations into the single destination root > provider and hope it works is super hacky and I don't think we should > attempt that. Just fail if being forced and nested allocations exist > on the source. Same, yeah. --Dan From jaypipes at gmail.com Wed Oct 10 17:35:57 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Oct 2018 13:35:57 -0400 Subject: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> Message-ID: <6b63c2b5-2881-7a01-f586-7700a62bc16d@gmail.com> +tc topic On 10/10/2018 11:49 AM, Fox, Kevin M wrote: > Sorry. Couldn't quite think of the name. I was meaning, openstack project tags. I think having a tag that indicates the project is no longer using SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an excellent idea, Kevin. ++ -jay > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Tuesday, October 09, 2018 12:22 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > > On 10/09/2018 03:10 PM, Fox, Kevin M wrote: >> Oh, this does raise an interesting question... Should such information be reported by the projects up to users through labels? Something like, "percona_multimaster=safe" Its really difficult for folks to know which projects can and can not be used that way currently. > > Are you referring to k8s labels/selectors? or are you referring to > project tags (you know, part of that whole Big Tent thing...)? > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gnhill at liquidweb.com Wed Oct 10 17:41:41 2018 From: gnhill at liquidweb.com (Greg Hill) Date: Wed, 10 Oct 2018 12:41:41 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo Message-ID: I've been out of the openstack loop for a few years, so I hope this reaches the right folks. Josh Harlow (original author of taskflow and related libraries) and I have been discussing the option of moving taskflow out of the openstack umbrella recently. This move would likely also include the futurist and automaton libraries that are primarily used by taskflow. The idea would be to just host them on github and use the regular Github features for Issues, PRs, wiki, etc, in the hopes that this would spur more development. 
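(For anyone who hasn't looked at the library in a while, the surface area it exposes is small and deliberately general purpose. A minimal flow looks roughly like this, which is standard documented taskflow usage rather than anything project-specific:

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class Greet(task.Task):
        def execute(self):
            return "hello"

    # A one-task linear flow; engines.run() executes it and returns the
    # named results, here {'greeting': 'hello'}.
    flow = linear_flow.Flow("demo").add(Greet(provides="greeting"))
    print(engines.run(flow))

Nothing in that API ties a consumer to OpenStack itself.)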
Taskflow hasn't had any substantial contributions in several years and it doesn't really seem that the current openstack devs have a vested interest in moving it forward. I would like to move it forward, but I don't have an interest in being bound by the openstack workflow (this is why the project stagnated as core reviewers were pulled on to other projects and couldn't keep up with the review backlog, so contributions ground to a halt). I guess I'm putting it forward to the larger community. Does anyone have any objections to us doing this? Are there any non-obvious technicalities that might make such a transition difficult? Who would need to be made aware so they could adjust their own workflows? Or would it be preferable to just fork and rename the project so openstack can continue to use the current taskflow version without worry of us breaking features? Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Oct 10 18:16:28 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Oct 2018 14:16:28 -0400 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: Message-ID: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> On 10/10/2018 01:41 PM, Greg Hill wrote: > I've been out of the openstack loop for a few years, so I hope this > reaches the right folks. > > Josh Harlow (original author of taskflow and related libraries) and I > have been discussing the option of moving taskflow out of the openstack > umbrella recently. This move would likely also include the futurist and > automaton libraries that are primarily used by taskflow. The idea would > be to just host them on github and use the regular Github features for > Issues, PRs, wiki, etc, in the hopes that this would spur more > development. Taskflow hasn't had any substantial contributions in > several years and it doesn't really seem that the current openstack devs > have a vested interest in moving it forward. I would like to move it > forward, but I don't have an interest in being bound by the openstack > workflow (this is why the project stagnated as core reviewers were > pulled on to other projects and couldn't keep up with the review > backlog, so contributions ground to a halt). I'm not sure how using pull requests instead of Gerrit changesets would help "core reviewers being pulled on to other projects"? Is this just about preferring not having a non-human gatekeeper like Gerrit+Zuul and being able to just have a couple people merge whatever they want to the master HEAD without needing to talk about +2/+W rights? If it's just about preferring the pull request workflow versus the Gerrit rebase workflow, just say so. Same for just preferring the Github UI versus Gerrit's UI (which I agree is awful). Anyway, it's cool with me to "free" taskflow from the onerous yoke of OpenStack development if that's what the contributors to it want. Best, -jay From gnhill at liquidweb.com Wed Oct 10 18:35:00 2018 From: gnhill at liquidweb.com (Greg Hill) Date: Wed, 10 Oct 2018 13:35:00 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: > I'm not sure how using pull requests instead of Gerrit changesets would > help "core reviewers being pulled on to other projects"? > The 2 +2 requirement works for larger projects with a lot of contributors. 
When you have only 3 regular contributors and 1 of them gets pulled on to a project and can no longer actively contribute, you have 2 developers who can +2 each other but nothing can get merged without that 3rd dev finding time to add another +2. This is what happened with Taskflow a few years back. Eventually the other 2 gave up and moved on also. > Is this just about preferring not having a non-human gatekeeper like > Gerrit+Zuul and being able to just have a couple people merge whatever > they want to the master HEAD without needing to talk about +2/+W rights? > We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs pass muster before being merged, so it's not like we're wanting to circumvent good contribution practices by committing whatever to HEAD. But the +2/+W rights thing was a huge PITA to deal with when there are so few contributors, for sure. If it's just about preferring the pull request workflow versus the > Gerrit rebase workflow, just say so. Same for just preferring the Github > UI versus Gerrit's UI (which I agree is awful). > I mean, yes, I personally prefer the Github UI and workflow, but that was not a primary consideration. I got used to using gerrit well enough. There's also a sense that if a project is in the Openstack umbrella, it's not useful outside Openstack, and Taskflow is designed to be a general purpose library. The hope is that just making it a regular open source project might attract more users and contributors. This may or may not bear out, but as it is, there's no real benefit to staying an openstack project on this front since nobody is actively working on it within the community. Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Oct 10 18:41:51 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 10 Oct 2018 14:41:51 -0400 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: On Wed, Oct 10, 2018 at 2:35 PM Greg Hill wrote: > > I'm not sure how using pull requests instead of Gerrit changesets would >> help "core reviewers being pulled on to other projects"? >> > > The 2 +2 requirement works for larger projects with a lot of contributors. > When you have only 3 regular contributors and 1 of them gets pulled on to a > project and can no longer actively contribute, you have 2 developers who > can +2 each other but nothing can get merged without that 3rd dev finding > time to add another +2. This is what happened with Taskflow a few years > back. Eventually the other 2 gave up and moved on also. > As a note, the 2+2 requirement is only a convention, not a rule. Swift has moved to 1+2 already, and other projects have considered it. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Oct 10 18:43:59 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 10 Oct 2018 11:43:59 -0700 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: <1539197039.1579166.1537600328.5D9DE566@webmail.messagingengine.com> On Wed, Oct 10, 2018, at 11:35 AM, Greg Hill wrote: > > I'm not sure how using pull requests instead of Gerrit changesets would > > help "core reviewers being pulled on to other projects"? 
> > > > The 2 +2 requirement works for larger projects with a lot of contributors. > When you have only 3 regular contributors and 1 of them gets pulled on to a > project and can no longer actively contribute, you have 2 developers who > can +2 each other but nothing can get merged without that 3rd dev finding > time to add another +2. This is what happened with Taskflow a few years > back. Eventually the other 2 gave up and moved on also. > To be clear this isn't enforced by anything but your reviewer practices. What is enforced is that you have +2 verified, a +2 code review, and +1 Workflow (this is a Gerrit submit requirements function that is also configurable per project). OpenStack requiring multiple +2 code reviews is enforced by humans and maybe the discussion could be "should taskflow and related tools allow single code review approval (and possibly self approval by any remaining cores)?" It might be a worthwhile discussion to reevaluate whether or not the humans should continue to enforce this rule on all code bases independent of what happens with taskflow. > > > Is this just about preferring not having a non-human gatekeeper like > > Gerrit+Zuul and being able to just have a couple people merge whatever > > they want to the master HEAD without needing to talk about +2/+W rights? > > > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs > pass muster before being merged, so it's not like we're wanting to > circumvent good contribution practices by committing whatever to HEAD. But > the +2/+W rights thing was a huge PITA to deal with when there are so few > contributors, for sure. > > If it's just about preferring the pull request workflow versus the > > Gerrit rebase workflow, just say so. Same for just preferring the Github > > UI versus Gerrit's UI (which I agree is awful). > > > > I mean, yes, I personally prefer the Github UI and workflow, but that was > not a primary consideration. I got used to using gerrit well enough. > There's also a sense that if a project is in the Openstack > umbrella, it's not useful outside Openstack, and Taskflow is designed to be > a general purpose library. The hope is that just making it a regular open > source project might attract more users and contributors. This may or may > not bear out, but as it is, there's no real benefit to staying an openstack > project on this front since nobody is actively working on it within the > community. > > Greg From fungi at yuggoth.org Wed Oct 10 18:51:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 18:51:27 +0000 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: <20181010185126.agh5d2msk2aut62d@yuggoth.org> On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote: [...] > We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs > pass muster before being merged, so it's not like we're wanting to > circumvent good contribution practices by committing whatever to HEAD. Travis CI has gained the ability to prevent you from merging changes which fail testing? Or do you mean something else when you refer to it as a "gatekeeper" here? > But the +2/+W rights thing was a huge PITA to deal with when there are so > few contributors, for sure. [...] Note that this is not a technical limitation at all, merely social convention. 
There are plenty of projects hosted on our infrastructure, some even official OpenStack projects, where contributor count is low enough that authors who are also core reviewers just review and approve their own changes for expediency. > There's also a sense that if a project is in the Openstack > umbrella, it's not useful outside Openstack, and Taskflow is > designed to be a general purpose library. [...] Be aware that the "OpenStack Infrastructure" is in the process of rebranding itself as "OpenDev" and we're working to eliminate mention of OpenStack on things that don't need it. This includes moving services to a new domain name, switching to other repository namespaces, putting mirroring to services like GitHub and Bitbucket under the direct control of teams who are interested in handling that with their own unique organizations in those platforms, and so on. It's progressing, though perhaps too slowly to solve your immediate concerns. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From piotrmisiak1984 at gmail.com Wed Oct 10 19:56:28 2018 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Wed, 10 Oct 2018 21:56:28 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> Message-ID: <200b392c-1a41-f53a-6822-01c809aa22fd@gmail.com> On 10.10.2018 09:06, Florian Engelmann wrote: > Now I get you. I would say all configuration templates need to be > changed to allow, eg. > > $ grep http /etc/kolla/cinder-volume/cinder.conf > glance_api_servers = http://10.10.10.5:9292 > auth_url = http://internal.somedomain.tld:35357 > www_authenticate_uri = http://internal.somedomain.tld:5000 > auth_url = http://internal.somedomain.tld:35357 > auth_endpoint = http://internal.somedomain.tld:5000 > > to look like: > > glance_api_servers = http://glance.service.somedomain.consul:9292 > auth_url = http://keystone.service.somedomain.consul:35357 > www_authenticate_uri = http://keystone.service.somedomain.consul:5000 > auth_url = http://keystone.service.somedomain.consul:35357 > auth_endpoint = http://keystone.service.somedomain.consul:5000 > The idea with Consul looks interesting. But I don't get your issue with VIP address and spine-leaf network. What we have: - controller1 behind leaf1 A/B pair with MLAG - controller2 behind leaf2 A/B pair with MLAG - controller3 behind leaf3 A/B pair with MLAG The VIP address is active on one controller server. When the server fails, the VIP will move to another controller server. Where do you see a SPOF in this configuration? Thanks From openstack at fried.cc Wed Oct 10 20:07:10 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 10 Oct 2018 15:07:10 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: Message-ID: On 10/10/2018 12:41 PM, Greg Hill wrote: > I've been out of the openstack loop for a few years, so I hope this > reaches the right folks. > > Josh Harlow (original author of taskflow and related libraries) and I > have been discussing the option of moving taskflow out of the openstack > 
This move would likely also include the futurist and > automaton libraries that are primarily used by taskflow. The idea would > be to just host them on github and use the regular Github features for > Issues, PRs, wiki, etc, in the hopes that this would spur more > development. Taskflow hasn't had any substantial contributions in > several years and it doesn't really seem that the current openstack devs > have a vested interest in moving it forward. I would like to move it > forward, but I don't have an interest in being bound by the openstack > workflow (this is why the project stagnated as core reviewers were > pulled on to other projects and couldn't keep up with the review > backlog, so contributions ground to a halt). > > I guess I'm putting it forward to the larger community. Does anyone have > any objections to us doing this? Are there any non-obvious > technicalities that might make such a transition difficult? Who would > need to be made aware so they could adjust their own workflows? The PowerVM nova virt driver uses taskflow (and we love it, btw). So we do need to be kept apprised of any movement in this area, and will need to be able to continue tracking it as a requirement. If it does move, I assume the maintainers will still be available and accessible. Josh has been helpful a number of times in the past. Other than that, I have no opinion on whether such a move is good or bad, right or wrong, or what it should look like. -efried > > Or would it be preferable to just fork and rename the project so > openstack can continue to use the current taskflow version without worry > of us breaking features? > > Greg > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Wed Oct 10 20:17:46 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 10 Oct 2018 15:17:46 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: <2ce69a0a-56b9-e912-2ae2-7389394c0035@nemebean.com> On 10/10/18 1:35 PM, Greg Hill wrote: > > I'm not sure how using pull requests instead of Gerrit changesets would > help "core reviewers being pulled on to other projects"? > > > The 2 +2 requirement works for larger projects with a lot of > contributors. When you have only 3 regular contributors and 1 of them > gets pulled on to a project and can no longer actively contribute, you > have 2 developers who can +2 each other but nothing can get merged > without that 3rd dev finding time to add another +2. This is what > happened with Taskflow a few years back. Eventually the other 2 gave up > and moved on also. As the others have mentioned, this doesn't need to continue to be a blocker. If the alternative is nobody working on the project at all, a single approver policy is far better. In practice it's probably not much different from having a general oslo core rubber stamp +2 a patch that was already reviewed by a taskflow expert. > > Is this just about preferring not having a non-human gatekeeper like > Gerrit+Zuul and being able to just have a couple people merge whatever > they want to the master HEAD without needing to talk about +2/+W rights? 
> > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure
> PRs pass muster before being merged, so it's not like we're wanting to
> circumvent good contribution practices by committing whatever to HEAD.
> But the +2/+W rights thing was a huge PITA to deal with, with so few
> contributors, for sure.

I guess this would be the one concern I'd have about moving it out. We
still have a fair number of OpenStack projects depending on taskflow[1]
to one degree or another, and having taskflow fully integrated into the
OpenStack CI system is nice for catching problems with proposed changes
early. I think there was some work recently to get OpenStack CI voting
on Github, but it seems inefficient to do work to move it out of
OpenStack and then do more work to partially bring it back.

I suppose the other option is to just stop CI'ing on OpenStack and rely
on the upper-constraints gating we do for our other dependencies. That
would be unfortunate, but again if the alternative is no development at
all then it might be a necessary compromise.

1: http://codesearch.openstack.org/?q=taskflow&i=nope&files=requirements.txt&repos=

>
>     If it's just about preferring the pull request workflow versus the
>     Gerrit rebase workflow, just say so. Same for just preferring the
>     Github
>     UI versus Gerrit's UI (which I agree is awful).
>
>
> I mean, yes, I personally prefer the Github UI and workflow, but that
> was not a primary consideration. I got used to using gerrit well enough.
> It was mostly the review bottleneck. There's also a sense that if a project is in the
> Openstack umbrella, it's not useful outside Openstack, and Taskflow is
> designed to be a general purpose library. The hope is that just making
> it a regular open source project might attract more users and
> contributors. This may or may not bear out, but as it is, there's no
> real benefit to staying an openstack project on this front since nobody
> is actively working on it within the community.
>
> Greg
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From tenobreg at redhat.com  Wed Oct 10 20:42:03 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Wed, 10 Oct 2018 17:42:03 -0300
Subject: [openstack-dev] [sahara] PTL out for a week
Message-ID: 

Hi all,

I'm taking PTO from tomorrow until Monday Oct 22nd. I won't cancel the
meeting yet but let me know if you want me to.

See you all in a couple weeks.

Thanks,
--

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil
pelo Great Place to Work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com  Wed Oct 10 21:00:40 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 10 Oct 2018 16:00:40 -0500
Subject: [openstack-dev] [python3] Enabling py37 unit tests
In-Reply-To: 
References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com>
Message-ID: <20181010210039.GA15538@sm-workstation>

> >
> >
> > What I mean is that we too run into a situation where we have a large
> > backlog of CI jobs since we have too many changes and jobs in flight.
> > So, I'm asking whether there is a good way to not duplicate all jobs
> > to run on all three interpreters. Do we really need testing of all three
> > versions? Or is testing with a subset a manageable risk?
> >
>
> Fair enough. I'm probably not the right person to answer so perhaps someone
> else can chime in. One thing worth pointing out is that it seems the jump
> from 3.5 to 3.6 wasn't nearly as painful as the jump from 3.6 to 3.7, at
> least in my experience.
>
> Corey
>

I share Andreas's concerns. I would rather see us testing 3.5 and 3.7 versus
3.5, 3.6, and 3.7. I would expect anything that passes on 3.7 to be fairly
safe when it comes to 3.6 runtimes.

Maybe a periodic job that exercises 3.6?

Sean

From fungi at yuggoth.org  Wed Oct 10 21:10:33 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 10 Oct 2018 21:10:33 +0000
Subject: [openstack-dev] [python3] Enabling py37 unit tests
In-Reply-To: <20181010210039.GA15538@sm-workstation>
References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation>
Message-ID: <20181010211033.tatylo4fakiymvtq@yuggoth.org>

On 2018-10-10 16:00:40 -0500 (-0500), Sean McGinnis wrote:
[...]
> I would rather see us testing 3.5 and 3.7 versus 3.5, 3.6, and
> 3.7.
[...]

I might have only pointed this out on IRC so far, but the
expectation is that testing 3.5 and 3.6 at the same time was merely
transitional since official OpenStack projects should be moving
their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu
Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein
cycle and so will drop 3.5 testing on master in the process.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From gouthampravi at gmail.com  Wed Oct 10 23:35:38 2018
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Wed, 10 Oct 2018 16:35:38 -0700
Subject: [openstack-dev] [python3] Enabling py37 unit tests
In-Reply-To: <20181010211033.tatylo4fakiymvtq@yuggoth.org>
References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org>
Message-ID: 

On Wed, Oct 10, 2018 at 2:10 PM Jeremy Stanley  wrote:
>
> On 2018-10-10 16:00:40 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > I would rather see us testing 3.5 and 3.7 versus 3.5, 3.6, and
> > 3.7.
> [...]
>
> I might have only pointed this out on IRC so far, but the
> expectation is that testing 3.5 and 3.6 at the same time was merely
> transitional since official OpenStack projects should be moving
> their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu
> Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein
> cycle and so will drop 3.5 testing on master in the process.

++ on switching python3.5 jobs to testing with python3.7 on Bionic.
python3.5 wasn't supported on all distros [1][2][3][4][5]. Xenial had it,
so it was nice to test with it when developing Queens and Rocky.


Thanks Corey for starting this effort. I proposed changes to
manila repos to use your template [6] [7], but the interpreter's not being installed,
do you need to make any bindep changes to enable the "universe" ppa and install
python3.7 and python3.7-dev?
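(For reference, a version-specific bindep profile of the kind a couple of
projects carry — and which Jeremy's reply below argues against relying on —
might look like the following sketch; the profile name is hypothetical and
this is not a recommendation:

    # bindep.txt -- hypothetical profile that only installs the interpreter
    python3.7 [platform:dpkg py37]
    python3.7-dev [platform:dpkg py37]

A job would then run bindep with that profile before invoking tox.)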
[1] OpenSuse https://software.opensuse.org/package/python3 [2] Ubuntu https://packages.ubuntu.com/search?keywords=python3 [3] Fedora https://apps.fedoraproject.org/packages/python3 [4] Arch https://www.archlinux.org/packages/extra/x86_64/python/ [5] Gentoo https://wiki.gentoo.org/wiki/Project:Python/Implementations [6] manila https://review.openstack.org/#/c/609558 [7] python-manilaclient https://review.openstack.org/609557 -- Goutham > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Wed Oct 10 23:50:29 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Oct 2018 23:50:29 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: <20181010235029.oyhvdgmlycul6fux@yuggoth.org> On 2018-10-10 16:35:38 -0700 (-0700), Goutham Pacha Ravi wrote: [...] > Thanks Corey for starting this effort. I proposed changes to > manila repos to use your template [1] [2], but the interpreter's > not being installed, do you need to make any bindep changes to > enable the "universe" ppa and install python3.7 and python3.7-dev? [...] I think we need to just make sure that the standard Python jobs install the intended version of the interpreter. Using bindep for that particular purpose is mildly silly. The bindep.txt file is, first and foremost, a local developer convenience to let people know what unexpected packages they might need to install on their systems to run certain kinds of local tests. I really doubt any reasonable developer will be surprised that they need to install python3.7 before being able to successfully run `tox -e py37` nor is the error message confusing if they forget to do so. A couple projects have added python-version-specific bindep profiles which do nothing but install the corresponding interpreter, but adding things to bindep.txt purely to satisfy the CI system is backwards. Our CI jobs should do what we expect them to do by default. If the job says it's going to run unit tests under Python 3.7 then the job should make sure a suitable interpreter is installed to do so. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From itzshamail at gmail.com Thu Oct 11 03:26:53 2018 From: itzshamail at gmail.com (Shamail Tahir) Date: Wed, 10 Oct 2018 23:26:53 -0400 Subject: [openstack-dev] [all] Stepping down from Release Management team In-Reply-To: References: <47811E35-E119-4582-839B-917626D1B087@openstack.org> Message-ID: > On Oct 8, 2018, at 2:34 PM, Doug Hellmann wrote: > > Anne Bertucio writes: > >> Hi all, >> >> I have had a fantastic time getting to work on the Release Management >> team and getting to know you all through the release marketing work, >> however, it is time for me to step down from my role on the Release >> Management team as I am moving on from my role at the Foundation and >> will no longer be working on upstream OpenStack. 
I cannot thank you >> all enough for how you all welcomed me into the OpenStack community >> and for how much I have learned about open source development in my >> time here. >> >> If you have questions about cycle-highlights, swing by #openstack-release. >> If you have questions about release marketing, contact lauren at openstack.org. >> For other inquiries, contact allison at openstack.org. >> While I won't be working upstream anymore, I'll only be a Tweet or IRC message away. >> >> Thank you again, and remember that cycle-highlights should be >> submitted by RC1. > > Thank you for everything, Anne! The cycle-highlights system you helped > us create is a great example of using decentralization and peer review > at the same time. I'm sure it's going to continue to be an important > communication tool for the community. +1 It was a pleasure working with you Anne! Thanks for everything you helped accomplish through your contributions to the community. I wish you success in your next adventure. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Thu Oct 11 04:42:42 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 11 Oct 2018 13:42:42 +0900 Subject: [openstack-dev] [Searchlight] Team meeting today at 1200 Message-ID: Hi team, This is just a reminder that we will have a team meeting at 12:00 UTC today on #openstack-meeting-4 Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Thu Oct 11 05:08:28 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 11 Oct 2018 01:08:28 -0400 Subject: [openstack-dev] [sahara] PTL out for a week In-Reply-To: References: Message-ID: Enjoy the PTO, Telles! I don't really have much to share at a meeting. My vote is for no meeting. Our other active core and other active community members can agree to cancel or not, and I'll heed that decision. On Wed, Oct 10, 2018 at 4:42 PM Telles Nobrega wrote: > Hi all, > > I'm taking PTO from tomorrow until Monday Oct 22nd. I won't cancel the > meeting yet but let me know if you want me to. > > See you all in a couple weeks. > > Thanks, > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil > pelo Great Place to Work. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From tony at bakeyournoodle.com  Thu Oct 11 05:20:09 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 11 Oct 2018 16:20:09 +1100
Subject: [openstack-dev] [os-upstream-institute] Find a slot for a meeting to discuss - ACTION NEEDED
In-Reply-To: <948BBE83-6631-4CCC-A558-DEFDA6149C41@gmail.com>
References: <313CAE1B-CCBB-426F-976B-0320B2273BA1@gmail.com> <948BBE83-6631-4CCC-A558-DEFDA6149C41@gmail.com>
Message-ID: <20181011052007.GA18592@thor.bakeyournoodle.com>

On Sat, Sep 29, 2018 at 02:50:31PM +0200, Ildiko Vancsa wrote:
> Hi Training Team,
>
> Based on the votes on the Doodle poll below we will have our ad-hoc meeting __next Friday (October 5) 1600 UTC__.
>
> Hangouts link for the call: https://hangouts.google.com/call/BKnvu7e72uB_Z-QDHDF2AAEI

I don't suppose it was recorded? I was lucky enough to be on vacation
for 3 weeks, which means I couldn't make the call.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From ellorent at redhat.com  Thu Oct 11 08:00:45 2018
From: ellorent at redhat.com (Felix Enrique Llorente Pastora)
Date: Thu, 11 Oct 2018 10:00:45 +0200
Subject: [openstack-dev] [tripleo][ci] Having more that one queue for gate pipeline at tripleo
Message-ID: 

Hello there,

   After suffering a lot from zuul's tripleo gate pipeline queue
resetting after failures on patches, I have asked myself what would
happen if we had more than one queue for gating tripleo.

   After a quick read here https://zuul-ci.org/docs/zuul/user/gating.html,
I have found the following:

"If changes with cross-project dependencies do not share a change queue
then Zuul is unable to enqueue them together, and the first will be
required to merge before the second is enqueued."

   So it makes sense to share a zuul queue, but maybe only one queue for
all tripleo projects is too much, for example sharing a queue between
tripleo-ui and tripleo-quickstart. Maybe we need, for example, two
queues: one for product stuff and one for CI, so product does not get
reset if CI fails in a patch.

   What do you think ?

--
Quique Llorente

Openstack TripleO CI
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tobias.rydberg at citynetwork.eu  Thu Oct 11 10:56:27 2018
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Thu, 11 Oct 2018 12:56:27 +0200
Subject: [openstack-dev] [publiccloud-wg] Todays meeting for Public Cloud WG CANCELLED
Message-ID: <52c2f8d3-b456-2f45-7967-6bfe207df469@citynetwork.eu>

Hi folks,

Unfortunately we need to cancel today's meeting! Talk to you next
Wednesday at 0700 UTC.

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

From tenobreg at redhat.com  Thu Oct 11 11:06:38 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Thu, 11 Oct 2018 08:06:38 -0300
Subject: [openstack-dev] [sahara] PTL out for a week
In-Reply-To: 
References: 
Message-ID: 

Thanks Jeremy.

On Thu, 11 Oct 2018 at 02:09 Jeremy Freudberg 
wrote:

> Enjoy the PTO, Telles!
>
> I don't really have much to share at a meeting. My vote is for no meeting.
> Our other active core and other active community members can agree to
> cancel or not, and I'll heed that decision.
>
> On Wed, Oct 10, 2018 at 4:42 PM Telles Nobrega 
> wrote:
>
>> Hi all,
>>
>> I'm taking PTO from tomorrow until Monday Oct 22nd.
I won't cancel the >> meeting yet but let me know if you want me to. >> >> See you all in a couple weeks. >> >> Thanks, >> -- >> >> TELLES NOBREGA >> >> SOFTWARE ENGINEER >> >> Red Hat Brasil >> >> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo >> >> tenobreg at redhat.com >> >> TRIED. TESTED. TRUSTED. >> Red Hat é reconhecida entre as melhores empresas para trabalhar no >> Brasil pelo Great Place to Work. >> > __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Oct 11 11:16:35 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 11 Oct 2018 13:16:35 +0200 Subject: [openstack-dev] [tripleo][ci] Having more that one queue for gate pipeline at tripleo In-Reply-To: References: Message-ID: On Thu, Oct 11, 2018 at 10:01 AM Felix Enrique Llorente Pastora < ellorent at redhat.com> wrote: > Hello there, > > After suffering a lot from zuul's tripleo gate piepeline queue reseting > after failures on patches I have ask myself what would happend if we have > more than one queue for gating tripleo. > > After a quick read here https://zuul-ci.org/docs/zuul/user/gating.html, > I have found the following: > > "If changes with cross-project dependencies do not share a change queue > then Zuul is unable to enqueue them together, and the first will be > required to merge before the second is enqueued." > > So it make sense to share zuul queue, but maybe only one queue for all > tripleo projects is too much, for example sharing queue between tripleo-ui > and tripleo-quickstart, maybe we need for example to queues for product > stuff and one for CI, so product does not get resetted if CI fails in a > patch. > > What do you think ? > Probably a wrong example, as TripleO UI gate is using CI jobs running tripleo-quickstart scenarios. We could create more queues for projects which are really independent from each other but we need to be very careful about it. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Thu Oct 11 11:24:34 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 11 Oct 2018 13:24:34 +0200 Subject: [openstack-dev] [sahara] PTL out for a week In-Reply-To: References: Message-ID: <2206744.lJyfNjY2gu@whitebase.usersys.redhat.com> On Wednesday, 10 October 2018 22:42:03 CEST Telles Nobrega wrote: > Hi all, > > I'm taking PTO from tomorrow until Monday Oct 22nd. I won't cancel the > meeting yet but let me know if you want me to. No strong opinions about it. I will be around; if there are people asking for it, I can start it, or just (most likely) discuss on the channel :) > > See you all in a couple weeks. 
See you, enjoy the vacation!

Ciao
--
Luigi

From sambetts at cisco.com  Thu Oct 11 11:40:54 2018
From: sambetts at cisco.com (Sam Betts (sambetts))
Date: Thu, 11 Oct 2018 11:40:54 +0000
Subject: [openstack-dev] [ironic] Stepping down as core
Message-ID: 

As many of you will have seen on IRC, I've mostly been appearing AFK for
the last couple of development cycles. Due to other tasks downstream most
of my attention has been drawn away from upstream Ironic development.
Going forward I'm unlikely to be as heavily involved with the Ironic
project as I have been in the past, so I am stepping down as a core
contributor to make way for those more involved and with more time to
contribute.

That said, I do not intend to disappear. My colleagues and I plan to
continue to support the Cisco Ironic drivers, we just won't be so heavily
involved in core ironic development, and it's worth noting that although I
might appear AFK on IRC because my main focus is on other things, I always
have an ear to the ground and direct pings will generally reach me.

I will be in Berlin for the OpenStack summit, so to those that are
attending I hope to see you there.

The Ironic project has been (and I hope continues to be) an awesome place
to contribute to. Thank you

Sam Betts
sambetts
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com  Thu Oct 11 11:48:33 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Thu, 11 Oct 2018 22:48:33 +1100
Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API
In-Reply-To: <20181010131849.ey5yf3zxjtevtnae@yuggoth.org>
References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> <16397789-98b5-a011-0367-dd5023260870@redhat.com> <20181010131849.ey5yf3zxjtevtnae@yuggoth.org>
Message-ID: <5c1a3f98-2aad-bfe1-45e3-c4ddd8c52615@redhat.com>

On 11/10/18 00:18, Jeremy Stanley wrote:
> On 2018-10-10 13:24:28 +1100 (+1100), Gilles Dubreuil wrote:
>> On 09/10/18 23:58, Jeremy Stanley wrote:
>>> On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote:
>>> [...]
>>>> It seems to me that a major goal of openstacksdk is to hide
>>>> differences between clouds from the user. If the user is meant
>>>> to use a GraphQL library themselves, we lose this and the user
>>>> needs to figure it out themselves. Did I understand that
>>>> correctly?
>>> This is especially useful where the SDK implements business
>>> logic for common operations like "if the user requested A and
>>> the cloud supports features B+C+D then use those to fulfil the
>>> request, otherwise fall back to using features E+F".
>> The features offered to the user don't have to change, it's just a
>> different architecture.
>>
>> The user doesn't have to deal with a GraphQL library, only the
>> client applications (consuming OpenStack APIs). And there are also
>> UI tools such as GraphiQL which allow to interact directly with
>> GraphQL servers.
> My point was simply that SDKs provide more than a simple translation
> of network API calls and feature discovery. There can also be rather
> a lot of "business logic" orchestrating multiple primitive API calls
> to reach some more complex outcome. The services don't want to embed
> this orchestrated business logic themselves, and it makes little
> sense to replicate the same algorithms in every single application
> which wants to make use of such composite functionality. There are
> common actions an application might wish to take which involve
> speaking to multiple APIs for different services to make specific
> calls in a particular order, perhaps feeding the results of one into
> the next.
>
> Can you explain how GraphQL eliminates the above reasons for an SDK?

What I meant is the communication part of any SDK interfacing between
clients and API services can be handled by GraphQL client libraries.
So instead of having to rely on modules (imported or native) to carry
the REST communications, we're dealing with data provided by GraphQL
libraries (which are also modules but standardized as GraphQL is a
specification).

So as you mentioned there is still a need to provide the data wrapped in
objects or an adequate struct to present to the consumers.

Having a Schema helps both API and client developers because the data is
clearly typed and graphed. Backend devs can focus on resolving the data
for each node/leaf while the clients can focus on what they need and not
how to get it.

To relate to $subject, by building the data model (graph) we obtain a
schema and introspection. That's a big saver in terms of resources.

>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com  Thu Oct 11 12:19:05 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 11 Oct 2018 21:19:05 +0900
Subject: [openstack-dev] [nova] API updates week 18-41
Message-ID: <1666310ee7a.120308160149382.7211510799035704630@ghanshyammann.com>

Hi All,

Please find the Nova API highlights of this week.

Weekly Office Hour:
===============

What we discussed this week:
- Discussed the API cleanup spec.
- Discussed API extensions work and pending things on this work. Proposed all the pending items for this BP.
- Discussed 2 new bugs which need more logs for further debugging. Added bug comments.

Planned Features :
==============
Below are the API related features for Stein. Ref - https://etherpad.openstack.org/p/stein-nova-subteam-tracking (feel free to add API item there if you are working or found any). NOTE: sequence order is not by priority; items are listed by their start date.

1. API Extensions merge work
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein
- https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein+status:open
- Weekly Progress: Pushed all the remaining patches. This is in runway also.

2. Handling a down cell
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell
- https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
- Weekly Progress: No progress. Need to open for stein

3. Servers Ips non-unique network names :
- https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
- Spec Merged
- https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
- Weekly Progress: No progress. I will push code after API extensions work is merged.
4. Volume multiattach enhancements:
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements
- https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
- Weekly Progress: No progress.

5. Boot instance specific storage backend
- https://blueprints.launchpad.net/nova/+spec/boot-instance-specific-storage-backend
- https://review.openstack.org/#/q/topic:bp/boot-instance-specific-storage-backend+(status:open+OR+status:merged)
- Weekly Progress: Code is up and it is in runway. I am adding this to my review list for tomorrow.

6. Add API ref guideline for body text (takashin)
- https://review.openstack.org/#/c/605628/
- Weekly Progress: patch is up for review. I have reviewed it to map it in a more structural way.

Specs:

7. Detach and attach boot volumes
- https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged)
- Weekly Progress: under review. Kevin has updated the spec with review comment fixes.

8. Nova API policy updates
https://blueprints.launchpad.net/nova/+spec/granular-api-policy
Spec: https://review.openstack.org/#/c/547850/
- Weekly Progress: No progress on this; first concentrating on its dependency, 'consistent policy name' - https://review.openstack.org/#/c/606214/

9. Nova API cleanup
https://blueprints.launchpad.net/nova/+spec/api-consistency-cleanup
Spec: https://review.openstack.org/#/c/603969/
- Weekly Progress: No progress on this. I am thinking of keeping it open till the T cycle so we keep adding more and more API cleanups to it and then discuss which of them we can fix. This way we can avoid re-iterating on API cleanup fixes. Obviously we cannot find all API cleanups by T, but it is good to cover most of them together. Thoughts?

10. Support deleting data volume when destroying instance (Brin Zhang)
- https://review.openstack.org/#/c/580336/
- Weekly Progress: No Progress.

Bugs:
====
This week Bug Progress:
https://etherpad.openstack.org/p/nova-api-weekly-bug-report

Critical: 0->0
High importance: 1->2
By Status:
New: 1->4
Confirmed/Triage: 30-> 32
In-progress: 31->35
Incomplete: 3->3
=====
Total: 65->74

NOTE- There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep our eyes on them.

-gmann

From mnaser at vexxhost.com  Thu Oct 11 12:20:29 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Thu, 11 Oct 2018 14:20:29 +0200
Subject: [openstack-dev] [tc] biweekly vs monthly meetings
Message-ID: 

Hi everyone:

We've discussed bringing meetings back to the TC and there are
different opinions on having biweekly vs monthly meetings. Therefore,
I have added a patch similar to Doug's that instead lists biweekly
meetings instead of monthly meetings.

I would really appreciate it if other community members could vote
on what they feel would be best. The agenda of those meetings would be
published in advance, topics could be requested from chair/vice-chairs
in advance and the notes would be available for the community to
consume, which should be easier to parse.

Besides the community, I'd invite TC members to vote on the change that
they prefer! :)

- Biweekly: https://review.openstack.org/#/c/609562/
- Monthly: https://review.openstack.org/#/c/608751/

Thank you!
Mohammed

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W.
http://vexxhost.com From corey.bryant at canonical.com Thu Oct 11 13:05:18 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 11 Oct 2018 09:05:18 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: On Wed, Oct 10, 2018 at 7:36 PM Goutham Pacha Ravi wrote: > On Wed, Oct 10, 2018 at 2:10 PM Jeremy Stanley wrote: > > > > On 2018-10-10 16:00:40 -0500 (-0500), Sean McGinnis wrote: > > [...] > > > I would rather see us testing 3.5 and 3.7 versus 3.5, 3.6, and > > > 3.7. > > [...] > > > > I might have only pointed this out on IRC so far, but the > > expectation is that testing 3.5 and 3.6 at the same time was merely > > transitional since official OpenStack projects should be moving > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > > cycle and so will drop 3.5 testing on master in the process. > > ++ on switching python3.5 jobs to testing with python3.7 on Bionic. > python3.5 wasn't supported on all distros [1][2][3][4][5]. Xenial had it, > so it was nice to test with it when developing Queens and Rocky. > > > Thanks Corey for starting this effort. I proposed changes to > manila repos to use your template [1] [2], but the interpreter's not > being installed, > do you need to make any bindep changes to enable the "universe" ppa and > install > python3.7 and python3.7-dev? > > Great, thanks for doing that! I'll look into what's needed to get python3.7 installed by the CI job. Corey > [1] OpenSuse https://software.opensuse.org/package/python3 > [2] Ubuntu https://packages.ubuntu.com/search?keywords=python3 > [3] Fedora https://apps.fedoraproject.org/packages/python3 > [4] Arch https://www.archlinux.org/packages/extra/x86_64/python/ > [5] Gentoo https://wiki.gentoo.org/wiki/Project:Python/Implementations > [6] manila https://review.openstack.org/#/c/609558 > [7] python-manilaclient https://review.openstack.org/609557 > > -- > Goutham > > > -- > > Jeremy Stanley > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Thu Oct 11 13:10:47 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Thu, 11 Oct 2018 15:10:47 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> Message-ID: <12290bd5-bce6-1f8f-abbd-29f2263487e8@secustack.com> Am 08.10.2018 um 17:16 schrieb Markus Hentsch: > Dear OpenStack developers, > > as you suggested, we have written individual specs for Nova [1] and > Cinder [2] so far and will write another spec for Glance soon. 
We'd
> appreciate any feedback and reviews on the specs :)
>
> Thank you in advance,
> Markus Hentsch
>
> [1] https://review.openstack.org/#/c/608696/
> [2] https://review.openstack.org/#/c/608663/
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The spec for Glance is also on gerrit now:
https://review.openstack.org/#/c/609667/

From ellorent at redhat.com  Thu Oct 11 13:53:49 2018
From: ellorent at redhat.com (Felix Enrique Llorente Pastora)
Date: Thu, 11 Oct 2018 15:53:49 +0200
Subject: [openstack-dev] [tripleo][ci] Having more that one queue for gate pipeline at tripleo
In-Reply-To: 
References: 
Message-ID: 

So for example, I don't see why changes at tripleo-quickstart can be
reset if tripleo-ui fails; this is the kind of thing that maybe can be
optimized.

On Thu, Oct 11, 2018 at 1:17 PM Emilien Macchi  wrote:

>
>
> On Thu, Oct 11, 2018 at 10:01 AM Felix Enrique Llorente Pastora <
> ellorent at redhat.com> wrote:
>
>> Hello there,
>>
>> After suffering a lot from zuul's tripleo gate pipeline queue
>> resetting after failures on patches, I have asked myself what would happen if
>> we have more than one queue for gating tripleo.
>>
>> After a quick read here https://zuul-ci.org/docs/zuul/user/gating.html,
>> I have found the following:
>>
>> "If changes with cross-project dependencies do not share a change queue
>> then Zuul is unable to enqueue them together, and the first will be
>> required to merge before the second is enqueued."
>>
>> So it makes sense to share a zuul queue, but maybe only one queue for all
>> tripleo projects is too much, for example sharing a queue between tripleo-ui
>> and tripleo-quickstart. Maybe we need, for example, two queues: one for
>> product stuff and one for CI, so product does not get reset if CI fails in a
>> patch.
>>
>> What do you think ?
>>
>
> Probably a wrong example, as TripleO UI gate is using CI jobs running
> tripleo-quickstart scenarios.
> We could create more queues for projects which are really independent from
> each other but we need to be very careful about it.
> --
> Emilien Macchi
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Quique Llorente

Openstack TripleO CI
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openstack at nemebean.com  Thu Oct 11 14:17:32 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 11 Oct 2018 09:17:32 -0500
Subject: [openstack-dev] [tripleo][ci] Having more that one queue for gate pipeline at tripleo
In-Reply-To: 
References: 
Message-ID: 

On 10/11/18 8:53 AM, Felix Enrique Llorente Pastora wrote:
> So for example, I don't see why changes at tripleo-quickstart can be
> reset if tripleo-ui fails; this is the kind of thing that maybe can be
> optimized.

Because if two incompatible changes are proposed to tripleo-quickstart
and tripleo-ui and both end up in parallel gate queues at the same time,
it's possible both queues could get wedged. Quickstart and the UI are
not completely independent projects. Quickstart has roles for deploying
the UI, which means there is a connection there.
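(As a sketch of how that coupling is expressed: each project names the
same queue in the gate pipeline section of its Zuul configuration, e.g.
in its .zuul.yaml — the job name here is illustrative:

    - project:
        gate:
          queue: tripleo
          jobs:
            - tripleo-ci-centos-7-undercloud-containers

Projects naming the same queue are enqueued and tested together;
splitting the queue would mean Zuul no longer tests their changes
against each other.)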
I think the only way you could have independent gate queues is if you had two disjoint sets of projects that could be gated without any use of projects from the other set. I don't think it's possible to divide TripleO in that way, but if I'm wrong then maybe you could do multiple queues. > > On Thu, Oct 11, 2018 at 1:17 PM Emilien Macchi > wrote: > > > > On Thu, Oct 11, 2018 at 10:01 AM Felix Enrique Llorente Pastora > > wrote: > > Hello there, > >    After suffering a lot from zuul's tripleo gate piepeline > queue reseting after failures on patches I have ask myself what > would happend if we have more than one queue for gating tripleo. > >    After a quick read here > https://zuul-ci.org/docs/zuul/user/gating.html, I have found the > following: > > "If changes with cross-project dependencies do not share a > change queue then Zuul is unable to enqueue them together, and > the first will be required to merge before the second is enqueued." > >    So it make sense to share zuul queue, but maybe only one > queue for all tripleo projects is too  much, for example sharing > queue between tripleo-ui and tripleo-quickstart, maybe we need > for example to queues for product stuff and one for CI, so > product does not get resetted if CI fails in a patch. > >    What do you think ? > > Probably a wrong example, as TripleO UI gate is using CI jobs > running tripleo-quickstart scenarios. > We could create more queues for projects which are really > independent from each other but we need to be very careful about it. > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Quique Llorente > > Openstack TripleO CI > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aj at suse.com Thu Oct 11 14:19:21 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 11 Oct 2018 16:19:21 +0200 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20181010211033.tatylo4fakiymvtq@yuggoth.org> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: On 10/10/2018 23.10, Jeremy Stanley wrote: > I might have only pointed this out on IRC so far, but the > expectation is that testing 3.5 and 3.6 at the same time was merely > transitional since official OpenStack projects should be moving > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > cycle and so will drop 3.5 testing on master in the process. Agreed, this needs some larger communication and explanation on what to do, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From majopela at redhat.com  Thu Oct 11 14:19:38 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Thu, 11 Oct 2018 16:19:38 +0200
Subject: [openstack-dev] [Openstack-operators] [SIGS] Ops Tools SIG
In-Reply-To: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com>
References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com>
Message-ID: 

Adding the mailing lists back to your reply, thank you :)

I guess that +melvin.hillsman at huawei.com can help us a little bit with
organizing the SIG, but I guess the first thing would be collecting a list
of tools which could be published under the umbrella of the SIG, starting
with the ones already in Osops.

Publishing documentation for those tools, and the catalog under
docs.openstack.org is possibly the next step (or a parallel step).

On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister  wrote:

> Hi Miguel,
>
> I would love to join this. What do I need to do?
>
> Sent from my iPhone
>
> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo 
> wrote:
>
> Hello
>
> Yesterday, during the Oslo meeting we discussed [6] the possibility of
> creating a new Special Interest Group [1][2] to provide a home and release
> means for operator related tools [3] [4] [5]
>
> I continued the discussion with M.Hillsman later, and he made me aware
> of the operator working group and mailing list, which existed even before
> the SIGs.
>
> I believe it could be a very good idea, to give life and more
> visibility to all those very useful tools (for example, I didn't know some
> of them existed ...).
>
> Given this, I have two questions:
>
> 1) Do you know of more tools which could find a home under an Ops Tools
> SIG umbrella?
>
> 2) Do you want to join us?
>
>
> Best regards and have a great day.
>
>
> [1] https://governance.openstack.org/sigs/
> [2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
> [3] https://wiki.openstack.org/wiki/Osops
> [4] http://git.openstack.org/cgit/openstack/ospurge/tree/
> [5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
> [6]
> http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130
>
>
>
> --
> Miguel Ángel Ajo
> OSP / Networking DFG, OVN Squad Engineering
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

--
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dmellado at redhat.com  Thu Oct 11 14:22:15 2018
From: dmellado at redhat.com (Daniel Mellado)
Date: Thu, 11 Oct 2018 16:22:15 +0200
Subject: [openstack-dev] [kuryr-kubernetes] PTL out for two weeks
In-Reply-To: <99754baa-3ea0-f625-3cbd-c5d89596a7b7@redhat.com>
References: <99754baa-3ea0-f625-3cbd-c5d89596a7b7@redhat.com>
Message-ID: <2e130552-95f8-2391-8c30-8beb2a9f7c45@redhat.com>

Hi all!

I'll be out for two weeks, coming back on Oct 30th. I won't cancel the
upstream meeting, which will be led by apuimedo in the meanwhile.

I also do plan to organize a vPTG, around my return date, so stay tuned
for an email with a document for topics.

See you in some days!

Best!

Daniel
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0x13DDF774E05F5B85.asc
Type: application/pgp-keys
Size: 2209 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0x13DDF774E05F5B85.asc
Type: application/pgp-keys
Size: 2208 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: OpenPGP digital signature
URL:

From dmellado at redhat.com  Thu Oct 11 14:24:57 2018
From: dmellado at redhat.com (Daniel Mellado)
Date: Thu, 11 Oct 2018 16:24:57 +0200
Subject: [openstack-dev] [kuryr-kubernetes] Kuryr vPTG
Message-ID: <697b0e6a-c811-0d5c-b061-97378982f3a1@redhat.com>

Hi Kuryrs!

As I promised in my earlier email, I'd like to organize a virtual team
gathering to sync on the upcoming and next release.

I've put up an etherpad [1] for gathering topics for the sessions, which
will be open until October 22nd, where I'll put up a tentative schedule
for this.

Thanks and looking forward to seeing you!

[1] https://etherpad.openstack.org/p/kuryr-vtg-stein
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0x13DDF774E05F5B85.asc
Type: application/pgp-keys
Size: 2208 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: OpenPGP digital signature
URL:

From jaosorior at redhat.com  Thu Oct 11 14:30:38 2018
From: jaosorior at redhat.com (Juan Antonio Osorio Robles)
Date: Thu, 11 Oct 2018 17:30:38 +0300
Subject: [openstack-dev] [tripleo] PTL out of office
Message-ID: <02785b3f-c912-9fb0-a6b2-1b99222adf3c@redhat.com>

Hi all!

I'll be out starting from Oct 15th, coming back on Oct 19th.
The upstream meeting will be led by Alex Schultz (mwhahaha).

Best Regards

From doug at doughellmann.com  Thu Oct 11 16:08:44 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 11 Oct 2018 12:08:44 -0400
Subject: [openstack-dev] [goals][python3][telemetry][barbican][monasca][neutron] having a gerrit admin approve the remaining zuul job settings import patches
In-Reply-To: 
References: 
Message-ID: 

Doug Hellmann  writes:

> We have about 17 remaining patches to import zuul job settings into a
> few repositories. Those are mostly in stable branches and the jobs are
> failing in ways that may take us a long time to fix.
>
> Rather than waiting for those, Andreas and I are proposing that we have
> someone from the infra team approve them, bypassing the test jobs. That
> will allow us to complete the cleanup work in the project-config
> repository, and will not leave the affected repositories in a state that
> is any more (or less) broken than they are today.
>
> If you have any objections to the plan, please speak up quickly. I
> would like to try to proceed before the end of the week.
> > Doug > > +----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+ > | Subject | Repo | Team | Tests | Workflow | URL | Branch | Owner | > +----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+ > | import zuul job settings from project-config | openstack/aodh | Telemetry | FAILED | NEW | https://review.openstack.org/598648 | stable/ocata | Doug Hellmann | > | import zuul job settings from project-config | openstack/barbican | barbican | FAILED | REVIEWED | https://review.openstack.org/599659 | stable/queens | Doug Hellmann | > | import zuul job settings from project-config | openstack/barbican | barbican | FAILED | REVIEWED | https://review.openstack.org/599661 | stable/rocky | Doug Hellmann | > | import zuul job settings from project-config | openstack/castellan-ui | barbican | FAILED | NEW | https://review.openstack.org/599649 | master | Doug Hellmann | > | import zuul job settings from project-config | openstack/ceilometermiddleware | Telemetry | FAILED | APPROVED | https://review.openstack.org/598634 | master | Doug Hellmann | > | import zuul job settings from project-config | openstack/ceilometermiddleware | Telemetry | FAILED | APPROVED | https://review.openstack.org/598655 | stable/pike | Doug Hellmann | > | import zuul job settings from project-config | openstack/ceilometermiddleware | Telemetry | PASS | NEW | https://review.openstack.org/598661 | stable/queens | Doug Hellmann | > | import zuul job settings from project-config | openstack/ceilometermiddleware | Telemetry | FAILED | NEW | https://review.openstack.org/598667 | stable/rocky | Doug Hellmann | > | import zuul job settings from project-config | openstack/monasca-analytics | monasca | FAILED | REVIEWED | https://review.openstack.org/595658 | master | Doug Hellmann | > | import zuul job settings from project-config | openstack/networking-midonet | neutron | PASS | REVIEWED | https://review.openstack.org/597937 | stable/queens | Doug Hellmann | > | import zuul job settings from project-config | openstack/networking-sfc | neutron | FAILED | NEW | https://review.openstack.org/597913 | stable/ocata | Doug Hellmann | > | import zuul job settings from project-config | openstack/networking-sfc | neutron | FAILED | NEW | https://review.openstack.org/597925 | stable/pike | Doug Hellmann | > | import zuul job settings from project-config | openstack/python-aodhclient | Telemetry | FAILED | NEW | https://review.openstack.org/598652 | stable/ocata | Doug Hellmann | > | import zuul job settings from project-config | openstack/python-aodhclient | Telemetry | FAILED | NEW | https://review.openstack.org/598657 | stable/pike | Doug Hellmann | > | import zuul job settings from project-config | openstack/python-aodhclient | Telemetry | FAILED | APPROVED | https://review.openstack.org/598669 | stable/rocky | Doug Hellmann | > | import zuul job settings from project-config | openstack/python-barbicanclient | barbican | FAILED | NEW | https://review.openstack.org/599656 | stable/ocata | Doug Hellmann | > | import zuul job settings from project-config | openstack/python-barbicanclient | barbican | FAILED | NEW | https://review.openstack.org/599658 | stable/pike | Doug Hellmann | > 
+----------------------------------------------+---------------------------------+-----------+--------+----------+-------------------------------------+---------------+---------------+ Clark went ahead and merged all of these today. We have 3 clean-up patches in project-config to approve now, and then the zuul migration phase of the goal is completed. Thank you, Clark, Andreas, and everyone who worked on these patches! Doug From zbitter at redhat.com Thu Oct 11 17:08:44 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 11 Oct 2018 13:08:44 -0400 Subject: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <6b63c2b5-2881-7a01-f586-7700a62bc16d@gmail.com> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> <6b63c2b5-2881-7a01-f586-7700a62bc16d@gmail.com> Message-ID: <56ce550f-f3a7-0d35-0abd-f4d853c48cc0@redhat.com> On 10/10/18 1:35 PM, Jay Pipes wrote: > +tc topic > > On 10/10/2018 11:49 AM, Fox, Kevin M wrote: >> Sorry. Couldn't quite think of the name. I was meaning, openstack >> project tags. > > I think having a tag that indicates the project is no longer using > SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an > excellent idea, Kevin. ++ I would support such a tag, especially if it came with detailed instructions on how to audit your code to make sure you are not doing this with sqlalchemy. (Bonus points for a flake8 plugin that can be enabled in the gate.) (One question for clarification: is this actually _required_ to use multi-writer Galera? My previous recollection was that it was possible, but inefficient, to use SELECT FOR UPDATE safely as long as you wrote a lot of boilerplate to restart the transaction if it failed.) > -jay > >> ________________________________________ >> From: Jay Pipes [jaypipes at gmail.com] >> Sent: Tuesday, October 09, 2018 12:22 PM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, >> vault, fabio and FQDN endpoints >> >> On 10/09/2018 03:10 PM, Fox, Kevin M wrote: >>> Oh, this does raise an interesting question... Should such >>> information be reported by the projects up to users through labels? >>> Something like, "percona_multimaster=safe" Its really difficult for >>> folks to know which projects can and can not be used that way currently. >> >> Are you referring to k8s labels/selectors? or are you referring to >> project tags (you know, part of that whole Big Tent thing...)? 
>> >> -jay
>> >>
>> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cboylan at sapwetik.org  Thu Oct 11 17:09:30 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 11 Oct 2018 10:09:30 -0700
Subject: [openstack-dev] [tripleo][ci] Having more that one queue for gate pipeline at tripleo
In-Reply-To: 
References: 
Message-ID: <1539277770.700580.1538812536.5A7101CD@webmail.messagingengine.com>

On Thu, Oct 11, 2018, at 7:17 AM, Ben Nemec wrote:
>
>
> On 10/11/18 8:53 AM, Felix Enrique Llorente Pastora wrote:
> > So for example, I don't see why changes at tripleo-quickstart can be
> > reset if tripleo-ui fails; this is the kind of thing that maybe can be
> > optimized.
>
> Because if two incompatible changes are proposed to tripleo-quickstart
> and tripleo-ui and both end up in parallel gate queues at the same time,
> it's possible both queues could get wedged. Quickstart and the UI are
> not completely independent projects. Quickstart has roles for deploying
> the UI, which means there is a connection there.

To follow up on this, the Gate pipeline queue that your projects belong to
is how you indicate to Zuul that there is coupling between these projects.
Having things set up in this way allows you to ensure (through the Gate
and Zuul's speculative future states) that a change to one project in the
queue can't break another because they are tested together.

If your concern is "time to merge", splitting queues won't help all that
much unless you put all of the unreliable broken code with broken tests in
one queue and have the reliable code in another queue. Zuul tests
everything in parallel within a queue. This means that if your code base
and its tests are reliable you can merge 20 changes all at once and the
time to merge for all 20 changes is the same as a single change. Problems
arise when tests fail and these future states have to be updated and
retested. This will affect one or many queues.

The fix here is to work on making reliable test jobs so that you can merge
all 20 changes in the span of time it takes to merge a single change. This
isn't necessarily easy, but helps you merge more code and be confident it
works too.
Clark

From Kevin.Fox at pnnl.gov  Thu Oct 11 17:38:10 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 11 Oct 2018 17:38:10 +0000
Subject: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: <56ce550f-f3a7-0d35-0abd-f4d853c48cc0@redhat.com>
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> <6b63c2b5-2881-7a01-f586-7700a62bc16d@gmail.com>, <56ce550f-f3a7-0d35-0abd-f4d853c48cc0@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01C1F7518@EX10MBOX03.pnnl.gov>

My understanding is it is still safeish to use when you deal with it right.
It causes a transaction abort if the race condition ever hits, and you can
keep retrying until your commit makes it.

So, there are two issues here:
1. It's a rarer kind of abort, so unless you are testing and retrying, it
can cause operations to fail in a way the user might notice needlessly.
This is bad. It should be tested for in the gate.
2. in highly contended systems, it can be a performance issue. This is less
bad than #1. For certain codes, it may never be a problem.

Thanks,
Kevin
________________________________________
From: Zane Bitter [zbitter at redhat.com]
Sent: Thursday, October 11, 2018 10:08 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints

On 10/10/18 1:35 PM, Jay Pipes wrote:
> +tc topic
>
> On 10/10/2018 11:49 AM, Fox, Kevin M wrote:
>> Sorry. Couldn't quite think of the name. I was meaning, openstack
>> project tags.
>
> I think having a tag that indicates the project is no longer using
> SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an
> excellent idea, Kevin.

++

I would support such a tag, especially if it came with detailed
instructions on how to audit your code to make sure you are not doing
this with sqlalchemy. (Bonus points for a flake8 plugin that can be
enabled in the gate.)

(One question for clarification: is this actually _required_ to use
multi-writer Galera? My previous recollection was that it was possible,
but inefficient, to use SELECT FOR UPDATE safely as long as you wrote a
lot of boilerplate to restart the transaction if it failed.)

> -jay
>
>> ________________________________________
>> From: Jay Pipes [jaypipes at gmail.com]
>> Sent: Tuesday, October 09, 2018 12:22 PM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql,
>> vault, fabio and FQDN endpoints
>>
>> On 10/09/2018 03:10 PM, Fox, Kevin M wrote:
>>> Oh, this does raise an interesting question... Should such
>>> information be reported by the projects up to users through labels?
>>> Something like, "percona_multimaster=safe" Its really difficult for
>>> folks to know which projects can and can not be used that way currently.
>>
>> Are you referring to k8s labels/selectors? or are you referring to
>> project tags (you know, part of that whole Big Tent thing...)?
>> >> -jay >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Thu Oct 11 19:25:04 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 11 Oct 2018 14:25:04 -0500 Subject: [openstack-dev] [oslo] Config Validator Message-ID: <98a77026-c3fc-65ba-ac65-0dee6d687e23@nemebean.com> Hi, We recently merged a new feature to oslo.config and it was suggested that I publicize it since it addresses a longstanding pain point. It's a validator tool[1] that will warn or error on any entries in a config file that aren't defined in the service or are deprecated. Previously this was difficult to do accurately because config opts are registered at runtime and you don't know for sure when all of the opts are present. This tool makes use of the less recently added machine-readable sample config[2], which should contain all of the available opts for a service. If any are missing, that is a bug and should be addressed in the service anyway. This is the same data used to generate sample config files and those should have all of the possible opts listed. The one limitation I'm aware of at this point is that dynamic groups aren't handled, so options in a dynamic group will be reported as missing even though they are recognized by the service. This should be solvable, but for the moment it is a limitation to keep in mind. So if this is something you were interested in, please try it out and let us know how it works for you. The latest release of oslo.config on pypi should have this tool, and since it doesn't necessarily have to be run on the live system you can install the bleeding edge oslo.config somewhere else and just generate the machine readable sample config from the production system. That functionality has been in oslo.config for a few cycles now so it's more likely to be available. Thanks. 
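For example, the intended workflow looks roughly like this (the namespace and file paths are illustrative):

    oslo-config-generator --namespace nova.conf --format yaml > nova-opts.yaml
    oslo-config-validator --opt-data nova-opts.yaml --input-file /etc/nova/nova.conf

The first command produces the machine-readable option data; the second checks a production config file against it.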
-Ben 1: https://docs.openstack.org/oslo.config/latest/cli/validator.html 2: https://docs.openstack.org/oslo.config/latest/cli/generator.html#machine-readable-configs From sean.mcginnis at gmx.com Thu Oct 11 20:07:04 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 11 Oct 2018 15:07:04 -0500 Subject: [openstack-dev] [ptl][release] Proposed changes for library releases Message-ID: <20181011200704.GA29670@sm-workstation> Libraries should be released early and often so their consumers can pick up merged changes, and issues with those changes can be identified close to when the change is made. To help with this, we are considering forcing at least one library release per milestone (if there are unreleased merged changes). Planned Changes --------------- The proposed change would be that for each cycle-with-intermediary library deliverable, if it was not released during that milestone timeframe, the release team would automatically generate a release request early in the week of the milestone deadline. For example, at Stein milestone 1, if the library was not released at all in the Stein cycle yet, we would trigger a release the week of the milestone. At Stein milestone 2, if the library was not released since milestone 1, we would trigger another release, etc. That autogenerated patch would be used as a base to communicate with the team: if a team knows it is not a good time to do a release for that library, someone from the team can -1 the patch to have it held, or update that patch with a different commit SHA where they think it would be better to release from. If there are no issues, ideally we would want a +1 from the PTL and/or release liaison to indicate approval, but we would also consider no negative feedback as an indicator that the automatically proposed patches without a -1 can all be approved on the Thursday milestone deadline. Frequently Asked Questions (we're guessing) ------------------------------------------- Q: Our team likes to release libraries often. We don't want to wait for each milestone. Why are you ruining our lives? A: Teams are encouraged to request library releases regularly, and at any point in time that makes sense. The automatic release patches only serve as a safeguard to guarantee that library changes are released and consumed early and often, in case no release is actively requested. Q: Our library has no change, that's why we are not requesting changes. Why are you forcing meaningless releases? You need a hobby. A: If the library has not had any change merged since the previous tag, we would not generate a release patch for it. Q: My team is responsible for this library. I don't feel comfortable having an autogenerated patch grab a random commit to release. Can we opt out of this? A: The team can do their own releases when they are ready. If we generate a release patch and you don't think you are ready, just -1 the patch. Then when you are ready, you can update the patch with the new commit to use. Please ask questions or raise concerns here and/or in the #openstack-release channel. Thanks! 
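For reference, such an autogenerated patch would just add one entry to the team's deliverable file in openstack/releases, roughly like this sketch (version, repo, and hash are placeholders):

    releases:
      - version: 1.1.0
        projects:
          - repo: openstack/example-lib
            hash: 0123456789abcdef0123456789abcdef01234567

Updating the patch with a different commit simply means replacing the hash.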
The Release Management Team From doug at doughellmann.com Thu Oct 11 21:50:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 11 Oct 2018 17:50:16 -0400 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: Doug Hellmann writes: > TC members, > > Since we are starting a new term, and have several new members, we need > to decide how we want to rotate the liaisons attached to each our > project teams, SIGs, and working groups [1]. > > Last term we went through a period of volunteer sign-up and then I > randomly assigned folks to slots to fill out the roster evenly. During > the retrospective we talked a bit about how to ensure we had an > objective perspective for each team by not having PTLs sign up for their > own teams, but I don't think we settled on that as a hard rule. > > I think the easiest and fairest (to new members) way to manage the list > will be to wipe it and follow the same process we did last time. If you > agree, I will update the page this week and we can start collecting > volunteers over the next week or so. > > Doug > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I have cleared out the old assignments. Please go over and edit the wiki page to add yourself to the teams you want to volunteer for. Remember that each member needs to sign up for exactly 10 teams. If you don't volunteer for 10, we'll use the script to make random assignments for the remaining slots. Doug From dangtrinhnt at gmail.com Fri Oct 12 02:16:00 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 12 Oct 2018 11:16:00 +0900 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: Thank Doug for coordinating this, Is there any way for us to update the health of the Searchlight project to reflect the current state? Right now the status is not good to attract new contributors. Bests, On Fri, Oct 12, 2018 at 6:50 AM Doug Hellmann wrote: > Doug Hellmann writes: > > > TC members, > > > > Since we are starting a new term, and have several new members, we need > > to decide how we want to rotate the liaisons attached to each our > > project teams, SIGs, and working groups [1]. > > > > Last term we went through a period of volunteer sign-up and then I > > randomly assigned folks to slots to fill out the roster evenly. During > > the retrospective we talked a bit about how to ensure we had an > > objective perspective for each team by not having PTLs sign up for their > > own teams, but I don't think we settled on that as a hard rule. > > > > I think the easiest and fairest (to new members) way to manage the list > > will be to wipe it and follow the same process we did last time. If you > > agree, I will update the page this week and we can start collecting > > volunteers over the next week or so. 
> > > > Doug > > > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > I have cleared out the old assignments. Please go over and edit the wiki > page to add yourself to the teams you want to volunteer for. Remember > that each member needs to sign up for exactly 10 teams. If you don't > volunteer for 10, we'll use the script to make random assignments for > the remaining slots. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Fri Oct 12 02:26:11 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 12 Oct 2018 11:26:11 +0900 Subject: [openstack-dev] [ptl][release] Proposed changes for library releases In-Reply-To: <20181011200704.GA29670@sm-workstation> References: <20181011200704.GA29670@sm-workstation> Message-ID: +1 from Searchlight since we have only a small number of changes. Thanks, On Fri, Oct 12, 2018 at 5:07 AM Sean McGinnis wrote: > Libraries should be released early and often so their consumers can pick up > merged changes, and issues with those changes can be identified close to > when > the change is made. To help with this, we are considering forcing at least > one > library release per milestone (if there are unreleased merged changes). > > Planned Changes > --------------- > > The proposed change would be that for each cycle-with-intermediary library > deliverable, if it was not released during that milestone timeframe, the > release team would automatically generate a release request early in the > week > of the milestone deadline. For example, at Stein milestone 1, if the > library > was not released at all in the Stein cycle yet, we would trigger a release > the > week of the milestone. At Stein milestone 2, if the library was not > released > since milestone 1, we would trigger another release, etc. > > That autogenerated patch would be used as a base to communicate with the > team: > if a team knows it is not a good time to do a release for that library, > someone > from the team can -1 the patch to have it held, or update that patch with a > different commit SHA where they think it would be better to release from. > If > there are no issues, ideally we would want a +1 from the PTL and/or release > liaison to indicate approval, but we would also consider no negative > feedback > as an indicator that the automatically proposed patches without a -1 can > all be > approved on the Thursday milestone deadline. > > Frequently Asked Questions (we're guessing) > ------------------------------------------- > > Q: Our team likes to release libraries often. We don't want to wait for > each > milestone. Why are you ruining our lives? > A: Teams are encouraged to request library releases regularly, and at any > point > in time that makes sense. 
> The automatic release patches only serve as a > safeguard to guarantee that library changes are released and consumed > early > and often, in case no release is actively requested. > > Q: Our library has no change, that's why we are not requesting changes. > Why are > you forcing meaningless releases? You need a hobby. > A: If the library has not had any change merged since the previous tag, we > would not generate a release patch for it. > > Q: My team is responsible for this library. I don't feel comfortable > having an > autogenerated patch grab a random commit to release. Can we opt out of > this? > A: The team can do their own releases when they are ready. If we generate a > release patch and you don't think you are ready, just -1 the patch. Then > when you are ready, you can update the patch with the new commit to use. > > > Please ask questions or raise concerns here and/or in the > #openstack-release > channel. > > Thanks! > > The Release Management Team > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * From yamamoto at midokura.com Fri Oct 12 06:00:55 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 12 Oct 2018 15:00:55 +0900 Subject: [openstack-dev] [taas] rocky In-Reply-To: References: Message-ID: i've just created 4.0.0 and rocky branch. On Wed, Sep 26, 2018 at 6:55 PM Takashi Yamamoto wrote: > hi, > > it seems we forgot to create rocky branch. > i'll make a release and the branch sooner or later, unless someone > beat me to do so. From ndevos at redhat.com Fri Oct 12 09:13:13 2018 From: ndevos at redhat.com (Niels de Vos) Date: Fri, 12 Oct 2018 11:13:13 +0200 Subject: [openstack-dev] [cinder][swift] FOSDEM Call for Participation: Software Defined Storage devroom Message-ID: <20181012091313.GN15986@ndevos-x270.lan.nixpanic.net> CfP for the Software Defined Storage devroom at FOSDEM 2019 (Brussels, Belgium, February 3rd). FOSDEM is a free software event that offers open source communities a place to meet, share ideas and collaborate. It is renowned for being highly developer-oriented and brings together 8000+ participants from all over the world. It is held in the city of Brussels (Belgium). FOSDEM 2019 will take place during the weekend of February 2nd-3rd 2019. More details about the event can be found at http://fosdem.org/ ** Call For Participation The Software Defined Storage devroom will go into its third round for talks around Open Source Software Defined Storage projects, management tools and real world deployments.
Presentation topics could include but are not limited to: - Your work on an SDS project like Ceph, Gluster, OpenEBS or LizardFS - Your work on or with SDS-related projects like SWIFT or Container Storage Interface - Management tools for SDS deployments - Monitoring tools for SDS clusters ** Important dates: - Nov 25th 2018: submission deadline for talk proposals - Dec 17th 2018: announcement of the final schedule - Feb 3rd 2019: Software Defined Storage dev room Talk proposals will be reviewed by a steering committee: - Niels de Vos (Gluster Developer - Red Hat) - Jan Fajerski (Ceph Developer - SUSE) - other volunteers TBA Use the FOSDEM 'pentabarf' tool to submit your proposal: https://penta.fosdem.org/submission/FOSDEM19 - If necessary, create a Pentabarf account and activate it. Please reuse your account from previous years if you have already created it. - In the "Person" section, provide First name, Last name (in the "General" tab), Email (in the "Contact" tab) and Bio ("Abstract" field in the "Description" tab). - Submit a proposal by clicking on "Create event". - Important! Select the "Software Defined Storage devroom" track (on the "General" tab). - Provide the title of your talk ("Event title" in the "General" tab). - Provide a description of the subject of the talk and the intended audience (in the "Abstract" field of the "Description" tab) - Provide a rough outline of the talk or goals of the session (a short list of bullet points covering topics that will be discussed) in the "Full description" field in the "Description" tab - Provide an expected length of your talk in the "Duration" field. Please count at least 10 minutes of discussion into your proposal plus allow 5 minutes for the handover to the next presenter. Suggested talk length would be 20+10 and 45+15 minutes. ** Recording of talks The FOSDEM organizers plan to have live streaming and recording fully working, both for remote/later viewing of talks, and so that people can watch streams in the hallways when rooms are full. This requires speakers to consent to being recorded and streamed. If you plan to be a speaker, please understand that by doing so you implicitly give consent for your talk to be recorded and streamed. The recordings will be published under the same license as all FOSDEM content (CC-BY). Hope to hear from you soon! And please forward this announcement. If you have any further questions, please write to the mailing list at storage-devroom at lists.fosdem.org and we will try to answer as soon as possible. Thanks! From mmagr at redhat.com Fri Oct 12 09:25:20 2018 From: mmagr at redhat.com (Martin Magr) Date: Fri, 12 Oct 2018 11:25:20 +0200 Subject: [openstack-dev] [Openstack-operators] [SIGS] Ops Tools SIG In-Reply-To: References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> Message-ID: Greetings guys, On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < majopela at redhat.com> wrote: > Adding the mailing lists back to your reply, thank you :) > > I guess that +melvin.hillsman at huawei.com can > help us a little bit organizing the SIG, > but I guess the first thing would be collecting a list of tools which > could be published > under the umbrella of the SIG, starting by the ones already in Osops. > > Publishing documentation for those tools, and the catalog under > docs.openstack.org > is possibly the next step (or a parallel step). > > > On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister > wrote: >> Hi Miguel, >> >> I would love to join this. What do I need to do?
>> >> Sent from my iPhone >> >> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo >> wrote: >> >> Hello >> >> Yesterday, during the Oslo meeting we discussed [6] the possibility >> of creating a new Special Interest Group [1][2] to provide home and release >> means for operator related tools [3] [4] [5] >> >> all of those tools have python dependencies related to openstack such as python-openstackclient or python-pbr. Which is exactly the reason why we moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent on openstack SIG. I believe that Cloud SIG is proper home for tools like [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains general tools like fluentd, sensu, collectd. Hope this helps, Martin > >> I continued the discussion with M.Hillsman later, and he made me >> aware of the operator working group and mailing list, which existed even >> before the SIGs. >> >> I believe it could be a very good idea, to give life and more >> visibility to all those very useful tools (for example, I didn't know some >> of them existed ...). >> >> Give this, I have two questions: >> >> 1) Do you know or more tools which could find home under an Ops Tools >> SIG umbrella? >> >> 2) Do you want to join us? >> >> >> Best regards and have a great day. >> >> >> [1] https://governance.openstack.org/sigs/ >> [2] http://git.openstack.org/cgit/openstack/governance- >> sigs/tree/sigs.yaml >> [3] https://wiki.openstack.org/wiki/Osops >> [4] http://git.openstack.org/cgit/openstack/ospurge/tree/ >> [5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/ >> [6] http://eavesdrop.openstack.org/meetings/oslo/ >> 2018/oslo.2018-10-08-15.00.log.html#l-130 >> >> >> >> -- >> Miguel Ángel Ajo >> OSP / Networking DFG, OVN Squad Engineering >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > -- > Miguel Ángel Ajo > OSP / Networking DFG, OVN Squad Engineering > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Martin Mágr Senior Software Engineer Red Hat Czech From florian.engelmann at everyware.ch Fri Oct 12 10:06:41 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Fri, 12 Oct 2018 12:06:41 +0200 Subject: [openstack-dev] [oslo][glance][cinder][nova][keystone] healthcheck In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: Hi, I tried to configure the healthcheck framework (/healthcheck) for nova, cinder, glance and keystone but it looks like paste is not used with keystone anymore? https://github.com/openstack/keystone/commit/8bf335bb015447448097a5c08b870da8e537a858 In our rocky deployment the healthcheck is working for keystone only and I failed to configure it for, e.g., nova-api. Nova seems to use paste? Is there any example nova api-paste.ini with the oslo healthcheck middleware enabled? The documentation is hard to understand - at least for me. Thank you for your help.
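For what it's worth, a minimal sketch of what such an api-paste.ini fragment could look like, based on the oslo.middleware healthcheck documentation (the composite section below assumes nova's stock layout and is an untested example, not a verified config):

    [app:healthcheck]
    paste.app_factory = oslo_middleware:Healthcheck.app_factory
    backends = disable_by_file
    disable_by_file_path = /etc/nova/healthcheck_disable

    [composite:osapi_compute]
    use = call:nova.api.openstack.urlmap:urlmap_factory
    /healthcheck: healthcheck
    /v2.1: openstack_compute_api_v21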
All the best, Florian From mark at stackhpc.com Fri Oct 12 10:49:08 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 12 Oct 2018 11:49:08 +0100 Subject: [openstack-dev] [kayobe][kolla] Announcing the release of Kayobe 4.0.0 Message-ID: Hi, Announcing the release of Kayobe 4.0.0. This release includes support for the Queens release of OpenStack, and is the first release of Kayobe built using the OpenStack infrastructure. Release notes: https://kayobe-release-notes.readthedocs.io/en/latest/queens.html#relnotes-4-0-0-stable-queens Documentation: https://kayobe.readthedocs.io Thanks to everyone who contributed to this release! Looking forward, we intend to catch up with the OpenStack release cycle, by making a smaller release with support for OpenStack Rocky, then moving straight onto Stein. Cheers, Mark From sathlang at redhat.com Fri Oct 12 11:09:57 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Fri, 12 Oct 2018 13:09:57 +0200 Subject: [openstack-dev] [tripleo][ci][upgrade] New jobs for tripleo Upgrade in the CI. Message-ID: <87d0sf2zga.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Hi, Testing and maintaining a green status for upgrade jobs within the 3h time limit has proven to be a very difficult job, to say the least. The net result has been: we don't have anything even touching the upgrade code in the CI. So during the Denver PTG it was decided to give up on running a full upgrade job within the 3h time limit and instead to focus on two complementary approaches that at least touch the upgrade code: 1. run a standalone upgrade: this tests the ansible upgrade playbook; 2. run an N->N upgrade: this tests the upgrade python code; And here they are, still not merged but seen working: - tripleo-ci-centos-7-standalone-upgrade: https://review.openstack.org/#/c/604706/ - tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades: https://review.openstack.org/#/c/607848/9 The first is good to merge (but others could disagree), the second could be as well (but I tend to disagree :)) The first leverages the standalone deployment and executes a standalone upgrade just after it. The limitation is that it only tests non-HA services (sorry pidone, cannot test HA in standalone) and only the upgrade_tasks (i.e. not any workflow related to the upgrade CLI). The main benefits here are: - ~2h to run the upgrade, still a bit long but far away from the 3h time limit; - we trigger a yum upgrade so that we can catch problems there as well; - we test the standalone upgrade which is good in itself; - composable roles are available (as in a standalone/all-in-one deployment) so you can make a specific upgrade test for your project if it fits into the standalone constraint; For this last point, if standalone-specific roles eventually go into project testing (nova, neutron ...), they could have a way to test upgrade tasks as well. This would be a best case scenario. Now, for the second point, the N->N upgrade. Its "limitation" is that ... well, it doesn't run a yum upgrade at all. We start from master and run the upgrade to master. Its main benefits are: - it takes ~2h20 to run, so well under the 3h time; - tripleoclient upgrade code is run, which is one thing that the standalone upgrade cannot do.
- It also tends to exercise idempotency of all the tasks as it runs them on an already "upgraded" node; - As an added bonus, it could gate the tripleo-upgrade role as well, since it definitely loads all of the role's tasks[1] For those that stayed with me to this point, I'm throwing in another CI test that has already proved useful (caught errors): the ansible-lint test. After a standalone deployment we just run ansible-lint on all generated playbooks[2]. It produces standalone_ansible_lint.log[3] in the working directory. It only takes a couple of minutes to install ansible-lint and run it. It definitely gates against typos and the like. It touches hard-to-reach code as well; for instance the fast_forward tasks are linted. Still no pidone tasks in there, but it could easily be added to a job that has HA tasks generated. Note that by default ansible-lint barks, as the generated playbooks hit several lint problems, so only syntax errors and misnamed tasks or parameters are currently activated. But all the lint problems are logged in the above file and can be fixed later on. At which point we could activate full lint gating. Thanks for this long reading; any comments, shouts of victory, cries of despair and reviews are welcomed. [1] but this has still to be investigated. [2] testing review https://review.openstack.org/#/c/604756/ and main code https://review.openstack.org/#/c/604757/ [3] sample output http://paste.openstack.org/show/731960/ -- Sofer Athlan-Guyot chem on #freenode Upgrade DFG. From dtantsur at redhat.com Fri Oct 12 11:55:06 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 12 Oct 2018 13:55:06 +0200 Subject: [openstack-dev] [ironic] Stepping down as core In-Reply-To: References: Message-ID: I'm sad to hear it :( Good luck, do not disappear completely, it was a pleasure to work with you. See you in Berlin! On 10/11/18 1:40 PM, Sam Betts (sambetts) wrote: > As many of you will have seen on IRC, I've mostly been appearing AFK for the > last couple of development cycles. Due to other tasks downstream most of my > attention has been drawn away from upstream Ironic development. Going forward > I'm unlikely to be as heavily involved with the Ironic project as I have been in > the past, so I am stepping down as a core contributor to make way for those more > involved and with more time to contribute. > > That said I do not intend to disappear, Myself and my colleagues plan to > continue to support the Cisco Ironic drivers, we just won't be so heavily > involved in core ironic development and its worth noting that although I might > appear AFK on IRC because my main focus is on other things, I always have an ear > to the ground and direct pings will generally reach me.
> > The Ironic project has been (and I hope continues to be) an awesome place to > contribute too, thank you > > Sam Betts > sambetts > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Fri Oct 12 12:21:00 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 12 Oct 2018 07:21:00 -0500 Subject: [openstack-dev] [Openstack-operators] [SIGS] Ops Tools SIG In-Reply-To: References: <79A31C5A-F4C1-478E-AEE5-B9CB4693543F@gmail.com> Message-ID: <20181012122059.GB3532@sm-xps> On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote: > Greetings guys, > > On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo < > majopela at redhat.com> wrote: > > > Adding the mailing lists back to your reply, thank you :) > > > > I guess that +melvin.hillsman at huawei.com can > > help us a little bit organizing the SIG, > > but I guess the first thing would be collecting a list of tools which > > could be published > > under the umbrella of the SIG, starting by the ones already in Osops. > > > > Publishing documentation for those tools, and the catalog under > > docs.openstack.org > > is possibly the next step (or a parallel step). > > > > > > On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister > > wrote: > > > >> Hi Miguel, > >> > >> I would love to join this. What do I need to do? > >> > >> Sent from my iPhone > >> > >> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo > >> wrote: > >> > >> Hello > >> > >> Yesterday, during the Oslo meeting we discussed [6] the possibility > >> of creating a new Special Interest Group [1][2] to provide home and release > >> means for operator related tools [3] [4] [5] > >> > >> > all of those tools have python dependencies related to openstack such as > python-openstackclient or python-pbr. Which is exactly the reason why we > moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to > Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent > on openstack SIG. I believe that Cloud SIG is proper home for tools like > [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains > general tools like fluentd, sensu, collectd. > > > Hope this helps, > Martin > Hey Martin, I'm not sure I understand the issue with these tools have dependencies on other packages and the relationship to SIG ownership. Is your concern (or the history of a concern you are pointing out) that the tools would have a more difficult time if they required updates to dependencies if they are owned by a different group? Thanks! Sean From doug at doughellmann.com Fri Oct 12 12:24:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Oct 2018 08:24:24 -0400 Subject: [openstack-dev] [tc] assigning new liaisons to projects In-Reply-To: References: Message-ID: Trinh Nguyen writes: > Thank Doug for coordinating this, > > Is there any way for us to update the health of the Searchlight project to > reflect the current state? Right now the status is not good to attract new > contributors. When the new liaisons are assigned, please talk to them about the current status. 
Doug > > Bests, > > On Fri, Oct 12, 2018 at 6:50 AM Doug Hellmann wrote: > >> Doug Hellmann writes: >> >> > TC members, >> > >> > Since we are starting a new term, and have several new members, we need >> > to decide how we want to rotate the liaisons attached to each our >> > project teams, SIGs, and working groups [1]. >> > >> > Last term we went through a period of volunteer sign-up and then I >> > randomly assigned folks to slots to fill out the roster evenly. During >> > the retrospective we talked a bit about how to ensure we had an >> > objective perspective for each team by not having PTLs sign up for their >> > own teams, but I don't think we settled on that as a hard rule. >> > >> > I think the easiest and fairest (to new members) way to manage the list >> > will be to wipe it and follow the same process we did last time. If you >> > agree, I will update the page this week and we can start collecting >> > volunteers over the next week or so. >> > >> > Doug >> > >> > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> I have cleared out the old assignments. Please go over and edit the wiki >> page to add yourself to the teams you want to volunteer for. Remember >> that each member needs to sign up for exactly 10 teams. If you don't >> volunteer for 10, we'll use the script to make random assignments for >> the remaining slots. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > *Trinh Nguyen* > *www.edlab.xyz * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From corey.bryant at canonical.com Fri Oct 12 12:46:33 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 12 Oct 2018 08:46:33 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: On Wed, Oct 10, 2018 at 7:36 PM Goutham Pacha Ravi wrote: > On Wed, Oct 10, 2018 at 2:10 PM Jeremy Stanley wrote: > > > > On 2018-10-10 16:00:40 -0500 (-0500), Sean McGinnis wrote: > > [...] > > > I would rather see us testing 3.5 and 3.7 versus 3.5, 3.6, and > > > 3.7. > > [...] > > > > I might have only pointed this out on IRC so far, but the > > expectation is that testing 3.5 and 3.6 at the same time was merely > > transitional since official OpenStack projects should be moving > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > > cycle and so will drop 3.5 testing on master in the process. > > ++ on switching python3.5 jobs to testing with python3.7 on Bionic. > python3.5 wasn't supported on all distros [1][2][3][4][5]. 
Xenial had it, > so it was nice to test with it when developing Queens and Rocky. > > > Thanks Corey for starting this effort. I proposed changes to > manila repos to use your template [1] [2], but the interpreter's not > being installed, > do you need to make any bindep changes to enable the "universe" ppa and > install > python3.7 and python3.7-dev? > Following up on this for anyone else who's following along. The python3.7 interpreter and development files are now correctly being installed after some additional changes in zuul jobs. And we have our first py37 SUCCESS! https://review.openstack.org/#/c/609557/ Goutham, thanks again for jumping in and adding py37 tests to your projects. Thanks, Corey > > [1] OpenSuse https://software.opensuse.org/package/python3 > [2] Ubuntu https://packages.ubuntu.com/search?keywords=python3 > [3] Fedora https://apps.fedoraproject.org/packages/python3 > [4] Arch https://www.archlinux.org/packages/extra/x86_64/python/ > [5] Gentoo https://wiki.gentoo.org/wiki/Project:Python/Implementations > [6] manila https://review.openstack.org/#/c/609558 > [7] python-manilaclient https://review.openstack.org/609557 > > -- > Goutham > > > -- > > Jeremy Stanley > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Fri Oct 12 12:59:38 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 12 Oct 2018 08:59:38 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger wrote: > On 10/10/2018 23.10, Jeremy Stanley wrote: > > I might have only pointed this out on IRC so far, but the > > expectation is that testing 3.5 and 3.6 at the same time was merely > > transitional since official OpenStack projects should be moving > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > > cycle and so will drop 3.5 testing on master in the process. > > Agreed, this needs some larger communication and explanation on what to do, > > The good news is we now have an initial change underway and successful, dropping py35 and enabling py37: https://review.openstack.org/#/c/609557/ I'm happy to get things moving along and start proposing changes like this to other projects and communicating with PTLs along the way. Do you think we need more discussion/communication on this or should I get started? Thanks, Corey Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Oct 12 14:07:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Oct 2018 10:07:11 -0400 Subject: [openstack-dev] [goals][python3] week 9 update Message-ID: This is week 9 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == What we learned last week == We have claimed a few names on PyPI, and updated a few sdist names where we couldn't do that. The one remaining project with a rename is heat, and I'm working on an upgrade script that will clean up the old metadata to fix the duplicate plugin issue. * https://review.openstack.org/#/c/606160/ == Ongoing and Completed Work == The zuul migration portion of the goal work is completed! Thanks again to everyone who assisted with creating and reviewing those patches. We still have quite a few patches with tox settings and for documentation build updates left open or unreviewed. Those documentation updates should be relatively quick to review because they're very minimal patches. Please take a few minutes to look for them and let's try to get them merged before the first milestone. The tox patches may require a bit more work to update pylint and the goal champions could use your help there (see below). 
+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | +---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ | adjutant | 1/ 1 | - | + | 0 | 1 | 2 | Doug Hellmann | | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | | blazar | + | + | + | 0 | 0 | 9 | Nguyen Hai | | Chef OpenStack | + | - | - | 0 | 0 | 2 | Doug Hellmann | | cinder | + | + | + | 0 | 0 | 11 | Doug Hellmann | | cloudkitty | + | + | + | 0 | 0 | 9 | Doug Hellmann | | congress | 1/ 3 | + | + | 0 | 0 | 9 | Nguyen Hai | | cyborg | + | + | + | 0 | 0 | 7 | Nguyen Hai | | designate | 2/ 4 | + | + | 0 | 1 | 9 | Nguyen Hai | | Documentation | + | + | + | 0 | 0 | 10 | Doug Hellmann | | dragonflow | - | + | + | 0 | 0 | 2 | Nguyen Hai | | ec2-api | 2/ 2 | + | + | 2 | 2 | 7 | | | freezer | 1/ 5 | + | + | 0 | 1 | 11 | | | glance | 1/ 4 | + | + | 0 | 0 | 10 | Nguyen Hai | | heat | 3/ 8 | + | 1/ 7 | 0 | 0 | 21 | Doug Hellmann | | horizon | 1/ 32 | + | + | 0 | 1 | 34 | Nguyen Hai | | I18n | 1/ 1 | - | - | 0 | 0 | 1 | Doug Hellmann | | InteropWG | 3/ 4 | + | 1/ 3 | 1 | 3 | 10 | Doug Hellmann | | ironic | 1/ 10 | + | + | 0 | 1 | 35 | Doug Hellmann | | karbor | + | + | + | 0 | 0 | 7 | Nguyen Hai | | keystone | + | + | + | 0 | 0 | 18 | Doug Hellmann | | kolla | + | + | + | 0 | 0 | 5 | | | kuryr | + | + | + | 0 | 0 | 9 | Doug Hellmann | | magnum | 2/ 5 | + | + | 0 | 1 | 10 | | | manila | 1/ 8 | + | + | 0 | 0 | 13 | Goutham Pacha Ravi | | masakari | 3/ 5 | + | - | 0 | 3 | 6 | Nguyen Hai | | mistral | + | + | + | 0 | 0 | 13 | Nguyen Hai | | monasca | 1/ 17 | + | + | 1 | 1 | 34 | Doug Hellmann | | murano | + | + | + | 0 | 0 | 14 | | | neutron | 8/ 18 | 2/ 14 | 2/ 13 | 5 | 4 | 45 | Doug Hellmann | | nova | + | + | + | 0 | 0 | 14 | | | octavia | + | + | + | 0 | 0 | 12 | Nguyen Hai | | OpenStack Charms | 42/ 73 | - | - | 39 | 30 | 73 | Doug Hellmann | | OpenStack-Helm | + | + | - | 0 | 0 | 4 | | | OpenStackAnsible | + | + | - | 0 | 0 | 154 | | | OpenStackClient | 1/ 4 | + | + | 0 | 1 | 11 | | | OpenStackSDK | + | + | + | 0 | 0 | 10 | | | oslo | + | + | + | 0 | 0 | 63 | Doug Hellmann | | Packaging-rpm | 3/ 3 | + | + | 0 | 1 | 7 | Doug Hellmann | | PowerVMStackers | - | - | + | 0 | 0 | 3 | Doug Hellmann | | Puppet OpenStack | + | + | - | 0 | 0 | 44 | Doug Hellmann | | qinling | + | + | + | 0 | 0 | 6 | | | Quality Assurance | 5/ 11 | + | + | 1 | 4 | 32 | Doug Hellmann | | rally | 2/ 3 | + | - | 2 | 2 | 5 | Nguyen Hai | | Release Management | - | - | + | 0 | 0 | 1 | Doug Hellmann | | requirements | - | + | + | 0 | 0 | 2 | Doug Hellmann | | sahara | 1/ 6 | + | + | 0 | 0 | 13 | Doug Hellmann | | searchlight | + | + | + | 0 | 0 | 9 | Nguyen Hai | | senlin | + | + | + | 0 | 0 | 9 | Nguyen Hai | | SIGs | 2/ 9 | + | + | 0 | 2 | 12 | Doug Hellmann | | solum | + | + | + | 0 | 0 | 7 | Nguyen Hai | | storlets | 1/ 2 | + | + | 1 | 1 | 4 | | | swift | 2/ 3 | + | + | 1 | 1 | 6 | Nguyen Hai | | tacker | 3/ 4 | + | + | 1 | 2 | 9 | Nguyen Hai | | Technical Committee | 1/ 2 | - | + | 0 | 0 | 4 | Doug Hellmann | | Telemetry | 1/ 7 | 1/ 6 | 1/ 6 | 0 | 1 | 19 | Doug Hellmann | | tricircle | + | + | + | 0 | 0 | 5 | Nguyen Hai | | tripleo | 12/ 55 | + | + | 5 | 5 | 93 | Doug Hellmann | | trove | 3/ 5 | + | + | 1 | 1 | 11 | Doug Hellmann | | User Committee | 3/ 3 | 1/ 2 | - | 0 | 2 | 5 | Doug Hellmann | | vitrage | + | + | + | 0 | 0 | 9 | Nguyen Hai | | watcher | + | + | + | 0 | 0 
| 10 | Nguyen Hai | | winstackers | + | + | + | 0 | 0 | 6 | | | zaqar | 1/ 3 | + | + | 1 | 1 | 8 | | | zun | + | + | + | 0 | 0 | 8 | Nguyen Hai | | | 29/ 61 | 54/ 58 | 52/ 56 | 62 | 74 | 1076 | | +---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ == Next Steps == Quite a few of the recent tox updates also exposed issues with using pylint under python 3, mostly due to having an older version of the tool pinned. This is a known issue, which was discussed in an earlier update email. The fixes are usually pretty straightforward, and good opportunities to contribute while you're waiting for tests to run or if you're just starting to get into the community. The series of patches preceding https://review.openstack.org/#/c/606676/ in the openstack/neutron repository are examples of some of the sorts of changes needed. If you're interested in helping to fix these sorts of issues, please leave a comment on the patch that changes the tox configuration so that we don't have multiple folks working on the same failures. We need to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3. Please document your team's status in the wiki as well: https://wiki.openstack.org/wiki/Python3 == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 From corey.bryant at canonical.com Fri Oct 12 15:19:35 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 12 Oct 2018 11:19:35 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: On Fri, Oct 12, 2018 at 8:59 AM Corey Bryant wrote: > > > On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger wrote: > >> On 10/10/2018 23.10, Jeremy Stanley wrote: >> > I might have only pointed this out on IRC so far, but the >> > expectation is that testing 3.5 and 3.6 at the same time was merely >> > transitional since official OpenStack projects should be moving >> > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu >> > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein >> > cycle and so will drop 3.5 testing on master in the process.
>> >> Agreed, this needs some larger communication and explanation on what to >> do, >> >> > The good news is we now have an initial change underway and successful, > dropping py35 and enabling py37: https://review.openstack.org/#/c/609557/ > > I'm happy to get things moving along and start proposing changes like this > to other projects and communicating with PTLs along the way. Do you think > we need more discussion/communication on this or should I get started? > > Thanks, > Corey > We have a story to track this now at: https://storyboard.openstack.org/#!/story/2004073 I think we will just get started on proposing changes. I've had a couple of folks ask if they can help out which is great so we will start to chip away at the story above. We'll also contact PTLs as we start working on projects in case they haven't seen this thread. Of course if anyone objects to us moving forward, please feel free to let us know. Thanks, Corey > > Andreas >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Oct 12 15:44:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Oct 2018 11:44:11 -0400 Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables In-Reply-To: <20180926142229.GA26870@sm-workstation> References: <20180926142229.GA26870@sm-workstation> Message-ID: Sean McGinnis writes: > During the Stein PTG in Denver, the release management team talked about ways > we can make things simpler and reduce the "paper pushing" work that all teams > need to do right now. One topic that came up was the usefulness of pushing tags > around milestones during the cycle. > > There were a couple of needs identified for doing such "milestone releases": > 1) It tests the release automation machinery to identify problems before > the RC and final release crunch time. > 2) It creates a nice cadence throughout the cycle to help teams stay on > track and focus on the right things for each phase of the cycle. > 3) It gives us an indication that teams are healthy, active, and planning > to include their components in the final release. > > One of the big motivators in the past was also to have output that downstream > distros and users could pick up for testing and early packaging. Based on our > admittedly anecdotal small sample, it doesn't appear this is actually a big > need, so we propose to stop tagging milestone releases for the > cycle-with-milestone projects. One of the issues that was raised from downstream consumers [1] is that this complicates upgrade testing using packages, since tools like yum will think that the stable branch (with a final version tag) has a higher version number than master (with a dev version computed off of the first release candidate where the stable branch was created). 
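As an illustration with made-up version numbers:

    stable/rocky:  18.0.0, then 18.0.1   (final and patch release tags)
    master:        18.0.0.0rc2.dev14     (pbr-computed from the last rc tag)

Package tools sort 18.0.1 higher than 18.0.0.0rc2.dev14, so upgrading from stable packages to master packages looks like a downgrade until new tags appear on master.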
We've discussed this problem in the past and not done anything because the downstream folks were always able to live with the gap until the first milestone. Now that we're unlikely to have milestone tags for most projects, the gap will extend to the length of the cycle, blocking upgrade testing until the release candidates are tagged, shortly before we're ready to release. They could guess at the next version numbers, but if they guess wrong they would be left with invalid packages and have to do a bunch of testing work again. It's better for us to provide authoritative information about version changes upstream. We need all projects to increment their version at least by one minor version at the start of each cycle to save space for patch releases on the stable branch, so we looked at a few options for triggering that update automatically. One option is to add a tag, like an alpha. This is somewhat appealing because the release team can just do it without anyone on the project teams having to worry about it. However, I don't particularly like this option for two reasons. First, the release team would have to monitor the work in each project and wait for some patch to land after the fork, so we could tag that (otherwise the branch would get the new version, too). More importantly, the tag would trigger a release, and I don't think we want to publish artifacts just to tweak the version calculation. A similarly low impact solution is to use pbr's Sem-Ver calculation feature and inject patches into master to bump the version being computed by 1 feature level (which should move from x.y.z.0rc1 to something like x.y+1.0.devN). See [2] for details about how this works. This is the approach I prefer, and I have a patch to the branching scripts to add the Sem-Ver instruction to the patches we already generate to update reno [3]. That change should take care of our transition from Stein->T, but we're left with versions in Stein that are lower than Rocky right now. So, as a one-time operation, Sean is going to propose empty patches with the Sem-Ver instruction in the commit message to all of the repositories for Stein deliverables that have stable/rocky branches. Let us know if you have any questions. Doug [1] http://lists.openstack.org/pipermail/openstack-operators/2018-October/015991.html [2] https://docs.openstack.org/pbr/latest/user/features.html#version [3] https://review.openstack.org/#/c/609827/ From lbragstad at gmail.com Fri Oct 12 16:45:17 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 12 Oct 2018 11:45:17 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: Sending a follow up here quick. The reviewers actively participating in [0] are nearing a conclusion. Ultimately, the convention is going to be: <service-type>:[<component>:][<resource>:]<action>[:<sub-action>] Details about what that actually means can be found in the review [0]. Each piece is denoted as being required or optional, along with examples. I think this gives us a pretty good starting place, and the syntax is flexible enough to support almost every policy naming convention we've stumbled across. Now is the time if you have any final input or feedback. Thanks for sticking with the discussion.
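As a rough sketch of how a name following such a convention gets registered with oslo.policy (the rule name and check string below are illustrative, not taken from the review):

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='compute:server:reboot',
            check_str='rule:admin_or_owner',
            description='Reboot a server.',
            operations=[{'path': '/servers/{server_id}/action',
                         'method': 'POST'}],
        ),
    ]

The DocumentedRuleDefault object carries the HTTP method and path, so that detail does not need to be encoded in the name itself.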
Lance [0] https://review.openstack.org/#/c/606214/ On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad wrote: > > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann > wrote: > >> ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad < >> lbragstad at gmail.com> wrote ---- >> > >> > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki >> wrote: >> > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg >> > wrote: >> > > >> > > Ideally I would like to see it in the form of least specific to >> most specific. But more importantly in a way that there is no additional >> delimiters between the service type and the resource. Finally, I do not >> like the change of plurality depending on action type. >> > > >> > > I propose we consider >> > > >> > > ::[:] >> > > >> > > Example for keystone (note, action names below are strictly >> examples I am fine with whatever form those actions take): >> > > identity:projects:create >> > > identity:projects:delete >> > > identity:projects:list >> > > identity:projects:get >> > > >> > > It keeps things simple and consistent when you're looking through >> overrides / defaults. >> > > --Morgan >> > +1 -- I think the ordering if `resource` comes before >> > `action|subaction` will be more clean. >> > >> > ++ >> > These are excellent points. I especially like being able to omit the >> convention about plurality. Furthermore, I'd like to add that I think we >> should make the resource singular (e.g., project instead or projects). For >> example: >> > compute:server:list >> > >> compute:server:updatecompute:server:createcompute:server:deletecompute:server:action:rebootcompute:server:action:confirm_resize >> (or confirm-resize) >> >> Do we need "action" word there? I think action name itself should convey >> the operation. IMO below notation without "äction" word looks clear enough. >> what you say? >> >> compute:server:reboot >> compute:server:confirm_resize >> > > I agree. I simplified this in the current version up for review. > > >> >> -gmann >> >> > >> > Otherwise, someone might mistake compute:servers:get, as "list". This >> is ultra-nick-picky, but something I thought of when seeing the usage of >> "get_all" in policy names in favor of "list." >> > In summary, the new convention based on the most recent feedback >> should be: >> > ::[:] >> > Rules:service-type is always defined in the service types authority >> > resources are always singular >> > Thanks to all for sticking through this tedious discussion. I >> appreciate it. >> > /R >> > >> > Harry >> > > >> > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad >> wrote: >> > >> >> > >> Bumping this thread again and proposing two conventions based on >> the discussion here. I propose we decide on one of the two following >> conventions: >> > >> >> > >> :: >> > >> >> > >> or >> > >> >> > >> :_ >> > >> >> > >> Where is the corresponding service type of the >> project [0], and is either create, get, list, update, or delete. I >> think decoupling the method from the policy name should aid in consistency, >> regardless of the underlying implementation. The HTTP method specifics can >> still be relayed using oslo.policy's DocumentedRuleDefault object [1]. >> > >> >> > >> I think the plurality of the resource should default to what makes >> sense for the operation being carried out (e.g., list:foobars, >> create:foobar). 
>> > >> >> > >> I don't mind the first one because it's clear about what the >> delimiter is and it doesn't look weird when projects have something like: >> > >> >> > >> ::: >> > >> >> > >> If folks are ok with this, I can start working on some >> documentation that explains the motivation for this. Afterward, we can >> figure out how we want to track this work. >> > >> >> > >> What color do you want the shed to be? >> > >> >> > >> [0] https://service-types.openstack.org/service-types.json >> > >> [1] >> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule >> > >> >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad < >> lbragstad at gmail.com> wrote: >> > >>> >> > >>> >> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < >> gmann at ghanshyammann.com> wrote: >> > >>>> >> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >> john at johngarbutt.com> wrote ---- >> > >>>> > tl;dr+1 consistent names >> > >>>> > I would make the names mirror the API... because the Operator >> setting them knows the API, not the codeIgnore the crazy names in Nova, I >> certainly hate them >> > >>>> >> > >>>> Big +1 on consistent naming which will help operator as well as >> developer to maintain those. >> > >>>> >> > >>>> > >> > >>>> > Lance Bragstad wrote: >> > >>>> > > I'm curious if anyone has context on the "os-" part of the >> format? >> > >>>> > >> > >>>> > My memory of the Nova policy mess...* Nova's policy rules >> traditionally followed the patterns of the code >> > >>>> > ** Yes, horrible, but it happened.* The code used to have the >> OpenStack API and the EC2 API, hence the "os"* API used to expand with >> extensions, so the policy name is often based on extensions** note most of >> the extension code has now gone, including lots of related policies* Policy >> in code was focused on getting us to a place where we could rename policy** >> Whoop whoop by the way, it feels like we are really close to something >> sensible now! >> > >>>> > Lance Bragstad wrote: >> > >>>> > Thoughts on using create, list, update, and delete as opposed >> to post, get, put, patch, and delete in the naming convention? >> > >>>> > I could go either way as I think about "list servers" in the >> API.But my preference is for the URL stub and POST, GET, etc. >> > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < >> lbragstad at gmail.com> wrote:If we consider dropping "os", should we >> entertain dropping "api", too? Do we have a good reason to keep "api"?I >> wouldn't be opposed to simple service types (e.g "compute" or >> "loadbalancer"). >> > >>>> > +1The API is known as "compute" in api-ref, so the policy >> should be for "compute", etc. >> > >>>> >> > >>>> Agree on mapping the policy name with api-ref as much as >> possible. Other than policy name having 'os-', we have 'os-' in resource >> name also in nova API url like /os-agents, /os-aggregates etc (almost every >> resource except servers , flavors). As we cannot get rid of those from API >> url, we need to keep the same in policy naming too? or we can have policy >> name like compute:agents:create/post but that mismatch from api-ref where >> agents resource url is os-agents. >> > >>> >> > >>> >> > >>> Good question. I think this depends on how the service does >> policy enforcement. >> > >>> >> > >>> I know we did something like this in keystone, which required >> policy names and method names to be the same: >> > >>> >> > >>> "identity:list_users": "..." 
>> > >>> >> > >>> Because the initial implementation of policy enforcement used a >> decorator like this: >> > >>> >> > >>> from keystone import controller >> > >>> >> > >>> @controller.protected >> > >>> def list_users(self): >> > >>> ... >> > >>> >> > >>> Having the policy name the same as the method name made it easier >> for the decorator implementation to resolve the policy needed to protect >> the API because it just looked at the name of the wrapped method. The >> advantage was that it was easy to implement new APIs because you only >> needed to add a policy, implement the method, and make sure you decorate >> the implementation. >> > >>> >> > >>> While this worked, we are moving away from it entirely. The >> decorator implementation was ridiculously complicated. Only a handful of >> keystone developers understood it. With the addition of system-scope, it >> would have only become more convoluted. It also enables a much more >> copy-paste pattern (e.g., so long as I wrap my method with this decorator >> implementation, things should work right?). Instead, we're calling >> enforcement within the controller implementation to ensure things are >> easier to understand. It requires developers to be cognizant of how >> different token types affect the resources within an API. That said, >> coupling the policy name to the method name is no longer a requirement for >> keystone. >> > >>> >> > >>> Hopefully, that helps explain why we needed them to match. >> > >>> >> > >>>> >> > >>>> >> > >>>> Also we have action API (i know from nova not sure from other >> services) like POST /servers/{server_id}/action {addSecurityGroup} and >> their current policy name is all inconsistent. few have policy name >> including their resource name like >> "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in >> policy name like "os_compute_api:os-admin-actions:reset_state" and few has >> direct action name like "os_compute_api:os-console-output" >> > >>> >> > >>> >> > >>> Since the actions API relies on the request body and uses a >> single HTTP method, does it make sense to have the HTTP method in the >> policy name? It feels redundant, and we might be able to establish a >> convention that's more meaningful for things like action APIs. It looks >> like cinder has a similar pattern [0]. >> > >>> >> > >>> [0] >> https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >> > >>> >> > >>>> >> > >>>> >> > >>>> May be we can make them consistent with >> :: or any better opinion. >> > >>>> >> > >>>> > From: Lance Bragstad > The topic of >> having consistent policy names has popped up a few times this week. >> > >>>> > >> > >>>> > I would love to have this nailed down before we go through >> all the policy rules again. In my head I hope in Nova we can go through >> each policy rule and do the following: >> > >>>> > * move to new consistent policy name, deprecate existing >> name* hardcode scope check to project, system or user** (user, yes... >> keypairs, yuck, but its how they work)** deprecate in rule scope checks, >> which are largely bogus in Nova anyway* make read/write/admin distinction** >> therefore adding the "noop" role, amount other things >> > >>>> >> > >>>> + policy granularity. >> > >>>> >> > >>>> It is good idea to make the policy improvement all together and >> for all rules as you mentioned. But my worries is how much load it will be >> on operator side to migrate all policy rules at same time? 
What will be the >> deprecation period etc which i think we can discuss on proposed spec - >> https://review.openstack.org/#/c/547850 >> > >>> >> > >>> >> > >>> Yeah, that's another valid concern. I know at least one operator >> has weighed in already. I'm curious if operators have specific input here. >> > >>> >> > >>> It ultimately depends on if they override existing policies or >> not. If a deployment doesn't have any overrides, it should be a relatively >> simple change for operators to consume. >> > >>> >> > >>>> >> > >>>> >> > >>>> >> > >>>> -gmann >> > >>>> >> > >>>> > Thanks,John >> __________________________________________________________________________ >> > >>>> > OpenStack Development Mailing List (not for usage questions) >> > >>>> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >>>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >>>> > >> > >>>> >> > >>>> >> > >>>> >> > >>>> >> __________________________________________________________________________ >> > >>>> OpenStack Development Mailing List (not for usage questions) >> > >>>> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> > >> >> __________________________________________________________________________ >> > >> OpenStack Development Mailing List (not for usage questions) >> > >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >> > > >> __________________________________________________________________________ >> > > OpenStack Development Mailing List (not for usage questions) >> > > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Oct 12 17:16:02 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 12 Oct 2018 11:16:02 -0600 Subject: [openstack-dev] [tripleo][ci][upgrade] New jobs for tripleo Upgrade in the CI. 
In-Reply-To: <87d0sf2zga.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87d0sf2zga.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: On Fri, Oct 12, 2018 at 5:10 AM Sofer Athlan-Guyot wrote: > Hi, > > Testing and maintaining a green status for upgrade jobs within the 3h > time limit has proven to be a very difficult job to say the least. > Indeed > > The net result has been: we don't have anything even touching the > upgrade code in the CI. > > So during Denver PTG it has been decided to give up on running a full > upgrade job during the 3h time limit and instead to focus on two > complementary approach to at least touch the upgrade code: > 1. run a standalone upgrade: this test the ansible upgrade playbook; > 2. run a N->N upgrade; this test the upgrade python code; > And here there are, still not merged but seen working: > - tripleo-ci-centos-7-standalone-upgrade: > https://review.openstack.org/#/c/604706/ > - tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades: > https://review.openstack.org/#/c/607848/9 > > The first is good to merge (but other could disagree), the second could > be as well (but I tend to disagree :)) > > The first leverage the standalone deployment and execute an standalone > upgrade just after it. > > The limitation is that it only tests non-HA services (sorry pidone, > cannot test ha in standalone) and only the upgrade_tasks (ie not any > workflow related to the upgrade cli) > This can be augmented with 3rd party. The pidone team and the ci team are putting the final touches on a 3rd party job for HA services. Looking forward, I could see a 3rd party upgrade job that runs the pidone verification tests. > > The main benefits here are: > - ~2h to run the upgrade, still a bit long but far away from the 3h > time limit; > - we trigger a yum upgrade so that we can catch problems there as well; > - we test the standalone upgrade which is good in itself; > - composable role available (as in standalone/all-in-all deployment) so > you can make a specific upgrade test for your project if it fits into > the standalone constraint; > These are all huge benefits over the previous implementation that have been made available to us via the standalone deployment!!!! > > For this last point, if standalone specific role eventually goes into > project testing (nova, neutron ...), they could have as well a way to > test upgrade tasks. This would be a best case scenario. > !!!!!!!!!!!!! woot !!!!!!! This is a huge point that TripleO folks need to absorb!! !!!!!!!!!!!!! woot !!!!!!! In the next several sprints the TripleO CI team will do our best to focus on the standalone deployments to convert TripleO's upstream jobs over and paving the way for other projects to start consuming it. IMHO I would think other projects would be *very* interested in testing an upgrade of their individual component w/o all the noise of unrelated services/components. > > Now, for the second point, the N->N upgrade. Its "limitation" is that > ... well it doesn't run a yum upgrade at all. We start from master and > run the upgrade to master. > > It's main benefit are: > - it takes ~2h20 to run, so well under the 3h time; > - tripleoclient upgrade code is run, which is one thing that the > standalone ugprade cannot do. 
> - It also tend to exercise idempotency of all the tasks as it runs them > on an already "upgraded" node; > - As added bonus, it could gate the tripleo-upgrade role as well as it > definitively loads all of the role's tasks[1] > > For those that stayed with me to this point, I'm throwing another CI > test that already proved useful already (caught errors), it's the > ansible-lint test. After a standalone deployment we just run > ansible-lint on all playbook generated[2]. > This is nice, thanks chem! > > It produces standalone_ansible_lint.log[3] in the working directory. It > only takes a couple of minute to install ansible-lint and run it. It > definitively gate against typos and the like. It touches hard to > reach code as well, for instance the fast_forward tasks are linted. > Still no pidone tasks in there but it could easily be added to a job > that has HA tasks generated. > > Note that by default ansible-lint barks, as the generated playbooks hit > several lintage problems, so only syntax errors and misnamed tasks or > parameters are currently activated. But all the lint problems are > logged in the above file and can be fixed later on. At which point we > could activate full lint gating. > > Thanks for this long reading, any comments, shout of victory, cry of > despair and reviews are welcomed. > > [1] but this has still to be investigated. > [2] testing review https://review.openstack.org/#/c/604756/ and main code > https://review.openstack.org/#/c/604757/ > [3] sample output http://paste.openstack.org/show/731960/ > -- > Sofer Athlan-Guyot > chem on #freenode > Upgrade DFG. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Very well done!! -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Fri Oct 12 17:50:36 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 12 Oct 2018 10:50:36 -0700 Subject: [openstack-dev] [oslo][glance][cinder][nova][keystone] healthcheck In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: Keystone no longer uses paste (since Rocky) as paste is unmaintained. The healthcheck app is permanently enabled for keystone at /healthcheck. We chose to make it a default bit of functionality in how we have Keystone deployed. We also have unit tests in place to ensure we don't regress and healthcheck changes behavior down the line (future releases). You should be able to configure additional bits for healthcheck in keystone.conf (e.g. detailed mode, disable-by-file, etc). Cheers, --Morgan On Fri, Oct 12, 2018 at 3:07 AM Florian Engelmann < florian.engelmann at everyware.ch> wrote: > Hi, > > I tried to configure the healthcheck framework (/healthcheck) for nova, > cinder, glance and keystone but it looks like paste is not used with > keystone anymore? > > > https://github.com/openstack/keystone/commit/8bf335bb015447448097a5c08b870da8e537a858 > > In our rocky deployment the healthcheck is working for keystone only and > I failed to configure it for, eg. nova-api. > > Nova seems to use paste? 
> > Is there any example nova api-paste.ini with the olso healthcheck > middleware enabled? To documentation is hard to understand - at least > for me. > > Thank you for your help. > > All the best, > Florian > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Oct 12 22:05:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 12 Oct 2018 17:05:53 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-26 Update Message-ID: The big update this week is version 0.1.0 of oslo.upgradecheck was released. The documentation along with usage examples can be found here [1]. A big thanks to Ben Nemec for getting that done since a few projects were waiting for it. In other updates, some changes were proposed in other projects [2]. And finally, Lance Bragstad and I had a discussion this week [3] about the validity of upgrade checks looking for deleted configuration options. The main scenario I'm thinking about here is FFU where someone is going from Mitaka to Pike. Let's say a config option was deprecated in Newton and then removed in Ocata. As the operator is rolling through from Mitaka to Pike, they might have missed the deprecation signal in Newton and removal in Ocata. Does that mean we should have upgrade checks that look at the configuration for deleted options, or options where the deprecated alias is removed? My thought is that if things will not work once they get to the target release and restart the service code, which would definitely impact the upgrade, then checking for those scenarios is probably OK. If on the other hand the removed options were just tied to functionality that was removed and are otherwise not causing any harm then I don't think we need a check for that. It was noted that oslo.config has a new validation tool [4] so that would take care of some of this same work if run during upgrades. So I think whether or not an upgrade check should be looking for config option removal ultimately depends on the severity of what happens if the manual intervention to handle that removed option is not performed. That's pretty broad, but these upgrade checks aren't really set in stone for what is applied to them. I'd like to get input from others on this, especially operators and if they would find these types of checks useful. 
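For anyone wiring this up, the basic pattern from the oslo.upgradecheck documentation [1] looks roughly like the sketch below. The project name and the check itself are placeholders; each check returns a Result whose Code drives the success/warning/failure rows of the "$PROJECT-status upgrade check" output:

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck

    class Checks(upgradecheck.UpgradeCommands):

        def _check_placeholder(self):
            # Inspect configuration or database state here and return
            # Code.WARNING or Code.FAILURE with details when manual
            # intervention is needed before or after upgrading.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS,
                                       'No action required.')

        # Tuples of (human-readable check name, check method).
        _upgrade_checks = (('Placeholder check', _check_placeholder),)

    def main():
        return upgradecheck.main(cfg.CONF, project='myproject',
                                 upgrade_command=Checks())

    if __name__ == '__main__':
        sys.exit(main())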
[1] https://docs.openstack.org/oslo.upgradecheck/latest/ [2] https://storyboard.openstack.org/#!/story/2003657 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html -- Thanks, Matt From jaypipes at gmail.com Fri Oct 12 23:20:29 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 12 Oct 2018 19:20:29 -0400 Subject: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <56ce550f-f3a7-0d35-0abd-f4d853c48cc0@redhat.com> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <5cf93772-d8f6-4cbb-3088-eeb670941969@everyware.ch> <95509c0e-5abb-b0f2-eca3-f04e7eb995e3@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1F64ED@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C1F6C0B@EX10MBOX03.pnnl.gov> <6b63c2b5-2881-7a01-f586-7700a62bc16d@gmail.com> <56ce550f-f3a7-0d35-0abd-f4d853c48cc0@redhat.com> Message-ID: <3eb8148c-316b-f327-525e-3e2e9d7fc09f@gmail.com> On 10/11/2018 01:08 PM, Zane Bitter wrote: > On 10/10/18 1:35 PM, Jay Pipes wrote: >> +tc topic >> >> On 10/10/2018 11:49 AM, Fox, Kevin M wrote: >>> Sorry. Couldn't quite think of the name. I was meaning, openstack >>> project tags. >> >> I think having a tag that indicates the project is no longer using >> SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an >> excellent idea, Kevin. ++ > > I would support such a tag, especially if it came with detailed > instructions on how to audit your code to make sure you are not doing > this with sqlalchemy. (Bonus points for a flake8 plugin that can be > enabled in the gate.) I can contribute to such a tag's documentation, but I don't currently have the bandwidth to start and shepherd it. > (One question for clarification: is this actually _required_ to use > multi-writer Galera? My previous recollection was that it was possible, > but inefficient, to use SELECT FOR UPDATE safely as long as you wrote a > lot of boilerplate to restart the transaction if it failed.) Certainly not. There is just a higher occurrence of the deadlock error in question when using SELECT FOR UPDATE versus using a compare-and-swap technique that does things like this: UPDATE tbl SET field = value, generation = generation + 1 WHERE generation = $expected_generation; The vast majority of cases I've seen where the deadlock occurred were during Rally tests, which were just brute-forcing breakage points and not particularly reflecting a real-world usage pattern. So, in short, yes, it's perfectly safe and fine to use Galera in a multi-writer setup from the get-go with most OpenStack projects. It's just that *some* OpenStack projects of later releases have fewer code areas that aggravate the aforementioned deadlock conditions with Galera in multi-writer mode. Best, -jay >> -jay >> >>> ________________________________________ >>> From: Jay Pipes [jaypipes at gmail.com] >>> Sent: Tuesday, October 09, 2018 12:22 PM >>> To: openstack-dev at lists.openstack.org >>> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, >>> vault, fabio and FQDN endpoints >>> >>> On 10/09/2018 03:10 PM, Fox, Kevin M wrote: >>>> Oh, this does raise an interesting question... Should such >>>> information be reported by the projects up to users through labels? >>>> Something like, "percona_multimaster=safe" Its really difficult for >>>> folks to know which projects can and can not be used that way >>>> currently. 
>>> Are you referring to k8s labels/selectors? or are you referring to >>> project tags (you know, part of that whole Big Tent thing...)? >>> >>> -jay
From gmann at ghanshyammann.com Sat Oct 13 11:07:10 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 13 Oct 2018 20:07:10 +0900 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> Message-ID: <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com>
---- On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad wrote ---- > Sending a follow up here quick. > The reviewers actively participating in [0] are nearing a conclusion. Ultimately, the convention is going to be: > <service-type>:[<component>:][<resource>:]<action>[:<subaction>] > Details about what that actually means can be found in the review [0]. Each piece is denoted as being required or optional, along with examples. I think this gives us a pretty good starting place, and the syntax is flexible enough to support almost every policy naming convention we've stumbled across. > Now is the time if you have any final input or feedback. Thanks for sticking with the discussion. Thanks Lance for working on this. The current version lgtm. I would also like to see some operator feedback on whether this standard policy name format is clear and easily understandable. -gmann > Lance > [0] https://review.openstack.org/#/c/606214/ > > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad wrote: > > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann wrote: > ---- On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad wrote ---- > > > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most specific. But more importantly in a way that there is no additional delimiters between the service type and the resource. Finally, I do not like the change of plurality depending on action type.
> > > > > > I propose we consider > > > > > > ::[:] > > > > > > Example for keystone (note, action names below are strictly examples I am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through overrides / defaults. > > > --Morgan > > +1 -- I think the ordering if `resource` comes before > > `action|subaction` will be more clean. > > > > ++ > > These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead or projects). For example: > > compute:server:list > > compute:server:updatecompute:server:createcompute:server:deletecompute:server:action:rebootcompute:server:action:confirm_resize (or confirm-resize) > > Do we need "action" word there? I think action name itself should convey the operation. IMO below notation without "äction" word looks clear enough. what you say? > > compute:server:reboot > compute:server:confirm_resize > > I agree. I simplified this in the current version up for review. > -gmann > > > > > Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nick-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list." > > In summary, the new convention based on the most recent feedback should be: > > ::[:] > > Rules:service-type is always defined in the service types authority > > resources are always singular > > Thanks to all for sticking through this tedious discussion. I appreciate it. > > /R > > > > Harry > > > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad wrote: > > >> > > >> Bumping this thread again and proposing two conventions based on the discussion here. I propose we decide on one of the two following conventions: > > >> > > >> :: > > >> > > >> or > > >> > > >> :_ > > >> > > >> Where is the corresponding service type of the project [0], and is either create, get, list, update, or delete. I think decoupling the method from the policy name should aid in consistency, regardless of the underlying implementation. The HTTP method specifics can still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > >> > > >> I think the plurality of the resource should default to what makes sense for the operation being carried out (e.g., list:foobars, create:foobar). > > >> > > >> I don't mind the first one because it's clear about what the delimiter is and it doesn't look weird when projects have something like: > > >> > > >> ::: > > >> > > >> If folks are ok with this, I can start working on some documentation that explains the motivation for this. Afterward, we can figure out how we want to track this work. > > >> > > >> What color do you want the shed to be? > > >> > > >> [0] https://service-types.openstack.org/service-types.json > > >> [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > >> > > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad wrote: > > >>> > > >>> > > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann wrote: > > >>>> > > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > > >>>> > tl;dr+1 consistent names > > >>>> > I would make the names mirror the API... 
because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them > > >>>> > > >>>> Big +1 on consistent naming which will help operator as well as developer to maintain those. > > >>>> > > >>>> > > > >>>> > Lance Bragstad wrote: > > >>>> > > I'm curious if anyone has context on the "os-" part of the format? > > >>>> > > > >>>> > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code > > >>>> > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! > > >>>> > Lance Bragstad wrote: > > >>>> > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? > > >>>> > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. > > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). > > >>>> > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. > > >>>> > > >>>> Agree on mapping the policy name with api-ref as much as possible. Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. > > >>> > > >>> > > >>> Good question. I think this depends on how the service does policy enforcement. > > >>> > > >>> I know we did something like this in keystone, which required policy names and method names to be the same: > > >>> > > >>> "identity:list_users": "..." > > >>> > > >>> Because the initial implementation of policy enforcement used a decorator like this: > > >>> > > >>> from keystone import controller > > >>> > > >>> @controller.protected > > >>> def list_users(self): > > >>> ... > > >>> > > >>> Having the policy name the same as the method name made it easier for the decorator implementation to resolve the policy needed to protect the API because it just looked at the name of the wrapped method. The advantage was that it was easy to implement new APIs because you only needed to add a policy, implement the method, and make sure you decorate the implementation. > > >>> > > >>> While this worked, we are moving away from it entirely. The decorator implementation was ridiculously complicated. Only a handful of keystone developers understood it. With the addition of system-scope, it would have only become more convoluted. It also enables a much more copy-paste pattern (e.g., so long as I wrap my method with this decorator implementation, things should work right?). Instead, we're calling enforcement within the controller implementation to ensure things are easier to understand. 
It requires developers to be cognizant of how different token types affect the resources within an API. That said, coupling the policy name to the method name is no longer a requirement for keystone. > > >>> > > >>> Hopefully, that helps explain why we needed them to match. > > >>> > > >>>> > > >>>> > > >>>> Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" > > >>> > > >>> > > >>> Since the actions API relies on the request body and uses a single HTTP method, does it make sense to have the HTTP method in the policy name? It feels redundant, and we might be able to establish a convention that's more meaningful for things like action APIs. It looks like cinder has a similar pattern [0]. > > >>> > > >>> [0] https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > >>> > > >>>> > > >>>> > > >>>> May be we can make them consistent with :: or any better opinion. > > >>>> > > >>>> > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. > > >>>> > > > >>>> > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: > > >>>> > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... keypairs, yuck, but its how they work)** deprecate in rule scope checks, which are largely bogus in Nova anyway* make read/write/admin distinction** therefore adding the "noop" role, amount other things > > >>>> > > >>>> + policy granularity. > > >>>> > > >>>> It is good idea to make the policy improvement all together and for all rules as you mentioned. But my worries is how much load it will be on operator side to migrate all policy rules at same time? What will be the deprecation period etc which i think we can discuss on proposed spec - https://review.openstack.org/#/c/547850 > > >>> > > >>> > > >>> Yeah, that's another valid concern. I know at least one operator has weighed in already. I'm curious if operators have specific input here. > > >>> > > >>> It ultimately depends on if they override existing policies or not. If a deployment doesn't have any overrides, it should be a relatively simple change for operators to consume. 
> > >>> > > >>>> > > >>>> > > >>>> > > >>>> -gmann > > >>>> > > >>>> > Thanks,John __________________________________________________________________________ > > >>>> > OpenStack Development Mailing List (not for usage questions) > > >>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >>>> > > > >>>> > > >>>> > > >>>> > > >>>> __________________________________________________________________________ > > >>>> OpenStack Development Mailing List (not for usage questions) > > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > >> __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mnaser at vexxhost.com Sat Oct 13 13:04:16 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 13 Oct 2018 15:04:16 +0200 Subject: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours Message-ID: Hi everyone! It looks like we're not going to be able to have a TC meeting every 2 weeks as I had hoped for, the majority of the TC seems to want to meet once every month. However, I wanted to ask if the community would be interested in taking one of the upcoming office hours to discuss the current community goals, more specifically upgrades. It's been brought to my attention by some community members that they feel like we've been deciding goals too early without having enough maturity in terms of implementation. In addition, it seems like the final implementation way is not fully baked in by the time we create the goal. This was brought up in the WSGI goal last time and it looks like there is some oddities at the moment with "do we implement our own stuff?" "do we use the new oslo library?" "is the library even ready?" 
I wanted to propose one of the upcoming office hours to perhaps invite some of the community members (PTL, developers, anyone!) as well as the TC with goal champions to perhaps discuss some of these goals to help everyone get a clear view on what's going on. Does this seem like it would be of interest to the community? I am currently trying to transform our office hours to be more of a space where we have more of the community and less of discussion between us. Regards, Mohammed From mnaser at vexxhost.com Sat Oct 13 21:29:46 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 13 Oct 2018 23:29:46 +0200 Subject: [openstack-dev] [tc][all] meetings outside IRC Message-ID: Hi everyone: I was going over our governance documents, more specifically this section: "All project meetings are held in public IRC channels and recorded." Does this mean that all official projects are *required* to hold their project meetings over IRC? Is this a hard requirement or is this something that we're a bit more 'lax about? Do members of the community feel like it would be easier to hold their meetings if we allowed other avenues (assuming this isn't allowed?) Looking forward to hearing everyone's comments. Thanks Mohammed From mnaser at vexxhost.com Sat Oct 13 22:24:41 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 14 Oct 2018 00:24:41 +0200 Subject: [openstack-dev] [openstack-ansible] dropping xenial jobs In-Reply-To: References: <29C62E4B-1B5C-4EA9-A750-4E408B9E88EF@vexxhost.com> Message-ID: FYI: Thanks to Jesse, he has picked up this work and it's up here: https://review.openstack.org/#/c/609329/6 On Wed, Oct 10, 2018 at 9:56 AM Jesse Pretorius wrote: > > On 10/10/18, 5:54 AM, "Mohammed Naser" wrote: > > > So I’ve been thinking of dropping the Xenial jobs to reduce our overall impact in terms of gate usage in master because we don’t support it. > > I think we can start dropping it given our intended supported platform for Stein is Bionic, not Xenial. We'll have to carry Xenial & Bionic for Rocky as voting jobs. Anything ported back and found not to work for both can be fixed through either another patch to master which is back ported, or a re-implementation, as necessary. > > > > ________________________________ > Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From emilien at redhat.com Sat Oct 13 22:28:48 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 13 Oct 2018 18:28:48 -0400 Subject: [openstack-dev] [tc][all] meetings outside IRC In-Reply-To: References: Message-ID:
On Sat, Oct 13, 2018 at 5:30 PM Mohammed Naser wrote: > Hi everyone: > > I was going over our governance documents, more specifically this section: > > "All project meetings are held in public IRC channels and recorded." > > Does this mean that all official projects are *required* to hold their > project meetings over IRC? Is this a hard requirement or is this > something that we're a bit more 'lax about? Do members of the > community feel like it would be easier to hold their meetings if we > allowed other avenues (assuming this isn't allowed?) > > Looking forward to hearing everyone's comments. >
In my opinion, IRC is the best place to run meetings in the OpenStack community, however we need to acknowledge that not everyone agrees and some functional teams or sub-teams prefer other tools, including video-conference. What remains critical to me is: - wherever you run the meeting, make it publicly reachable, accessible, visible. - if you run the meeting outside of IRC, take and share notes on public channels. In general: decisions shouldn't be taken during meetings, but rather on Gerrit or mailing lists. Otherwise you're fragmenting the community between those who can attend the meeting and those who can't. My 2 cents, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL:
From fungi at yuggoth.org Sat Oct 13 23:28:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 13 Oct 2018 23:28:28 +0000 Subject: [openstack-dev] [tc][all] meetings outside IRC In-Reply-To: References: Message-ID: <20181013232827.kxauxfnophcr643b@yuggoth.org>
On 2018-10-13 23:29:46 +0200 (+0200), Mohammed Naser wrote: > I was going over our governance documents, more specifically this > section: > > "All project meetings are held in public IRC channels and > recorded." > > Does this mean that all official projects are *required* to hold > their project meetings over IRC? If an official project team holds a regular official team meeting, then it needs to be in a public IRC channel with published logs (either the channel log or a log specific to the meeting). > Is this a hard requirement or is this something that we're a bit > more 'lax about? [...] We've not generally required this of auxiliary meetings for official teams. Sub-team meetings and unofficial/ad-hoc team discussions over conference call or video chat media have been tolerated in the past. But for an official team meeting (if the team regularly holds one) we've stuck to the quoted expectation as a requirement as far as I'm aware. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
From openstack.org at sodarock.com Sun Oct 14 02:20:47 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Sat, 13 Oct 2018 19:20:47 -0700 Subject: [openstack-dev] [ironic] Stepping down as core In-Reply-To: References: Message-ID:
Sorry to see you go Sam. You were a big asset to the community! Good luck for the future. John
On Thu, Oct 11, 2018 at 4:41 AM Sam Betts (sambetts) wrote: > As many of you will have seen on IRC, I've mostly been appearing AFK for > the last couple of development cycles.
Due to other tasks downstream most > of my attention has been drawn away from upstream Ironic development. Going > forward I'm unlikely to be as heavily involved with the Ironic project as I > have been in the past, so I am stepping down as a core contributor to make > way for those more involved and with more time to contribute. > > That said I do not intend to disappear, Myself and my colleagues plan to > continue to support the Cisco Ironic drivers, we just won't be so heavily > involved in core ironic development and its worth noting that although I > might appear AFK on IRC because my main focus is on other things, I always > have an ear to the ground and direct pings will generally reach me. > > I will be in Berlin for the OpenStack summit, so to those that are > attending I hope to see you there. > > The Ironic project has been (and I hope continues to be) an awesome place > to contribute too, thank you > > Sam Betts > sambetts > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Sun Oct 14 11:02:54 2018 From: aj at suse.com (Andreas Jaeger) Date: Sun, 14 Oct 2018 13:02:54 +0200 Subject: [openstack-dev] [astara] Retirement of astara repos? In-Reply-To: <4D511F22-5D6F-43EF-BDCF-A2322103F12D@mcclain.xyz> References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> <4D511F22-5D6F-43EF-BDCF-A2322103F12D@mcclain.xyz> Message-ID: <699b08c4-8b90-780e-b986-f9644a009689@suse.com> On 20/08/2018 15.52, Mark McClain wrote: > Yeah. I’ll post the retirement commits this week. I posted them now - see https://review.openstack.org/#/q/status:open+++topic:retire-astara Please abandon also all open reviews: https://review.openstack.org/#/q/projects:openstack/astara+is:open If you need help, please speak up, Andreas > > mark > >> On Aug 18, 2018, at 13:39, Andreas Jaeger wrote: >> >> Mark, shall I start the retirement of astara now? I would appreciate a "go ahead" - unless you want to do it yourself... >> >> Andreas >> >>> On 2018-02-23 14:34, Andreas Jaeger wrote: >>>> On 2018-01-11 22:55, Mark McClain wrote: >>>> Sean, Andreas- >>>> >>>> Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. >>>> >>>> I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. I’ll contact those I know were interested and try for a resolution to propose before the PTG. >>> Mark, any update here? >> >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> > > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From emilien at redhat.com Sun Oct 14 15:07:14 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 14 Oct 2018 11:07:14 -0400 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade Message-ID: I recently wrote a blog post about how we could upgrade an host from Docker containers to Podman containers. http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/ I managed to get this prototype actually tested in CI: https://review.openstack.org/#/c/608463/ http://logs.openstack.org/63/608463/10/check/tripleo-ci-centos-7-containerized-undercloud-upgrades/c958861/logs/undercloud/var/log/extra/docker/docker_allinfo.log.txt.gz http://logs.openstack.org/63/608463/10/check/tripleo-ci-centos-7-containerized-undercloud-upgrades/c958861/logs/undercloud/var/log/extra/podman/podman_allinfo.log.txt.gz Therefore, I am requesting feedback and reviews on: - openstack/paunch: rm the docker container during upgrade - https://review.openstack.org/#/c/608319/ - openstack/tripleo-upgrade: set container_cli for undercloud - https://review.openstack.org/#/c/608462/ - openstack/tripleo-quickstart: fs050: upgrade the undercloud to Podman containers - https://review.openstack.org/#/c/608463/ Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From abregman at redhat.com Sun Oct 14 19:36:35 2018 From: abregman at redhat.com (Arie Bregman) Date: Sun, 14 Oct 2018 22:36:35 +0300 Subject: [openstack-dev] [tripleo][ci][upgrade] New jobs for tripleo Upgrade in the CI. In-Reply-To: <87d0sf2zga.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87d0sf2zga.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: On Fri, Oct 12, 2018 at 2:10 PM Sofer Athlan-Guyot wrote: > Hi, > > Testing and maintaining a green status for upgrade jobs within the 3h > time limit has proven to be a very difficult job to say the least. > > The net result has been: we don't have anything even touching the > upgrade code in the CI. > > So during Denver PTG it has been decided to give up on running a full > upgrade job during the 3h time limit and instead to focus on two > complementary approach to at least touch the upgrade code: > 1. run a standalone upgrade: this test the ansible upgrade playbook; > 2. run a N->N upgrade; this test the upgrade python code; > > And here there are, still not merged but seen working: > - tripleo-ci-centos-7-standalone-upgrade: > https://review.openstack.org/#/c/604706/ > - tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades: > https://review.openstack.org/#/c/607848/9 > > The first is good to merge (but other could disagree), the second could > be as well (but I tend to disagree :)) > > The first leverage the standalone deployment and execute an standalone > upgrade just after it. 
> > The limitation is that it only tests non-HA services (sorry pidone, > cannot test ha in standalone) and only the upgrade_tasks (ie not any > workflow related to the upgrade cli) > > The main benefits here are: > - ~2h to run the upgrade, still a bit long but far away from the 3h > time limit; > - we trigger a yum upgrade so that we can catch problems there as well; > - we test the standalone upgrade which is good in itself; > - composable role available (as in standalone/all-in-all deployment) so > you can make a specific upgrade test for your project if it fits into > the standalone constraint; > > For this last point, if standalone specific role eventually goes into > project testing (nova, neutron ...), they could have as well a way to > test upgrade tasks. This would be a best case scenario. > > Now, for the second point, the N->N upgrade. Its "limitation" is that > ... well it doesn't run a yum upgrade at all. We start from master and > run the upgrade to master. > > It's main benefit are: > - it takes ~2h20 to run, so well under the 3h time; > - tripleoclient upgrade code is run, which is one thing that the > standalone ugprade cannot do. > - It also tend to exercise idempotency of all the tasks as it runs them > on an already "upgraded" node; > - As added bonus, it could gate the tripleo-upgrade role as well as it > definitively loads all of the role's tasks[1] > > For those that stayed with me to this point, I'm throwing another CI > test that already proved useful already (caught errors), it's the > ansible-lint test. After a standalone deployment we just run > ansible-lint on all playbook generated[2]. > > It produces standalone_ansible_lint.log[3] in the working directory. It > only takes a couple of minute to install ansible-lint and run it. It > definitively gate against typos and the like. It touches hard to > reach code as well, for instance the fast_forward tasks are linted. > Still no pidone tasks in there but it could easily be added to a job > that has HA tasks generated. > > Note that by default ansible-lint barks, as the generated playbooks hit > several lintage problems, so only syntax errors and misnamed tasks or > parameters are currently activated. But all the lint problems are > logged in the above file and can be fixed later on. At which point we > could activate full lint gating. > > Thanks for this long reading, any comments, shout of victory, cry of > despair and reviews are welcomed. > That's awesome. It's perfect for a project we are working on (Tobiko) where we want to run tests before upgrade (setting up resources) and after (verifying those resources are still available). I want to add such job (upgrade standalone) and I need help: https://review.openstack.org/#/c/610397/ How do I set tempest regex for pre-upgrade and another one for post upgrade? > [1] but this has still to be investigated. > [2] testing review https://review.openstack.org/#/c/604756/ and main code > https://review.openstack.org/#/c/604757/ > [3] sample output http://paste.openstack.org/show/731960/ > -- > Sofer Athlan-Guyot > chem on #freenode > Upgrade DFG. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Oct 15 01:47:47 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 15 Oct 2018 10:47:47 +0900 Subject: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours In-Reply-To: References: Message-ID: <166756863bc.129b69486190090.3724182267976004507@ghanshyammann.com>
---- On Sat, 13 Oct 2018 22:04:16 +0900 Mohammed Naser wrote ---- > Hi everyone! > > It looks like we're not going to be able to have a TC meeting every 2 > weeks as I had hoped for, the majority of the TC seems to want to meet > once every month. However, I wanted to ask if the community would be > interested in taking one of the upcoming office hours to discuss the > current community goals, more specifically upgrades. > > It's been brought to my attention by some community members that they > feel like we've been deciding goals too early without having enough > maturity in terms of implementation. In addition, it seems like the > final implementation way is not fully baked in by the time we create > the goal. This was brought up in the WSGI goal last time and it looks > like there is some oddities at the moment with "do we implement our > own stuff?" "do we use the new oslo library?" "is the library even > ready?" > > I wanted to propose one of the upcoming office hours to perhaps invite > some of the community members (PTL, developers, anyone!) as well as > the TC with goal champions to perhaps discuss some of these goals to > help everyone get a clear view on what's going on. > > Does this seem like it would be of interest to the community? I am > currently trying to transform our office hours to be more of a space > where we have more of the community and less of discussion between us. Thanks naser, this is a good idea. Office hours are a perfect time to have more technical and help-needed discussions for the set goals or cross-project work. Which office hour (Tue, Wed, Thu) will we use for this discussion? > > Regards, > Mohammed
From gmann at ghanshyammann.com Mon Oct 15 01:54:37 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 15 Oct 2018 10:54:37 +0900 Subject: [openstack-dev] [tc][all] meetings outside IRC In-Reply-To: References: Message-ID: <166756ea4f5.cf730b28190113.3544991241972915446@ghanshyammann.com>
---- On Sun, 14 Oct 2018 06:29:46 +0900 Mohammed Naser wrote ---- > Hi everyone: > > I was going over our governance documents, more specifically this section: > > "All project meetings are held in public IRC channels and recorded." > > Does this mean that all official projects are *required* to hold their > project meetings over IRC? Is this a hard requirement or is this > something that we're a bit more 'lax about? Do members of the > community feel like it would be easier to hold their meetings if we > allowed other avenues (assuming this isn't allowed?) > > Looking forward to hearing everyone's comments. Personally I feel IRC is a good option which is more comfortable for non-English speakers than video/audio calls.
But if a team or all attendees of a meeting are more comfortable with another communication channel, we should have the flexibility for them. For example, if a PTL discussed it with their team and everyone decided to use Hangouts for the meeting, we should not restrict them. As long as meeting logs (chat/audio/video) are linked on eavesdrop, we should be good. -gmann > > Thanks > Mohammed > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From trungnv at vn.fujitsu.com Mon Oct 15 04:03:52 2018 From: trungnv at vn.fujitsu.com (Nguyen Van, Trung) Date: Mon, 15 Oct 2018 04:03:52 +0000 Subject: [openstack-dev] [watcher] [monasca] Bare metal node N+1 redundancy and Proactive HA Message-ID: Hi Alexander, Regarding the N+1 redundancy feature, our team is testing with Watcher. We decided to follow 4 steps: 1. Test Watcher integrated with Ceilometer for alarm monitoring (done for a basic test). We will do more tests for other alarming triggers. 2. Implement business rules in Watcher for our instance switch-over solution (in progress). Then, we will implement a code-based prototype on Watcher after that. 3. Test BM/VM instance switch-over from Nova [1] (in progress). 4. Integrate testing of 1 & 3 for the full solution we are expecting. It will be performed after 1, 2 & 3 are completed. If we have any issues, we will discuss with you on IRC and/or in the IRC meeting. We really appreciate your help. [1] https://review.openstack.org/#/c/500677/ https://review.openstack.org/#/c/449155/ Best regards, Nguyen Van Trung. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Mon Oct 15 06:51:39 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 15 Oct 2018 15:51:39 +0900 Subject: [openstack-dev] [Searchlight] Week of Stein R-26 report Message-ID: Dear team, Here is the report of last week, Stein R-26 (Oct 8-12): https://www.dangtrinh.com/2018/10/searchlight-weekly-report-stein-r-26.html If you have any questions, please let me know. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Oct 15 07:36:46 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 15 Oct 2018 09:36:46 +0200 Subject: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours In-Reply-To: References: Message-ID: On Sat, 2018-10-13 at 15:04 +0200, Mohammed Naser wrote: > I wanted to propose one of the upcoming office hours to perhaps > invite > some of the community members (PTL, developers, anyone!) as well as > the TC with goal champions to perhaps discuss some of these goals to > help everyone get a clear view on what's going on. > > Does this seem like it would be of interest to the community? I am > currently trying to transform our office hours to be more of a space > where we have more of the community and less of discussion between > us. > Great idea, mnaser. I think it's good if we discuss all together during office hours! I believe the office hours should not only be used for discussing progress by subject-matter experts (SMEs), but should also be a place for the community to discuss cross-team issues and victories. I'd like to have goal champions talk about progress in the next office hours. +1!
JP From jean-philippe at evrard.me Mon Oct 15 08:27:39 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 15 Oct 2018 10:27:39 +0200 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote: > The big update this week is version 0.1.0 of oslo.upgradecheck was > released. The documentation along with usage examples can be found > here > [1]. A big thanks to Ben Nemec for getting that done since a few > projects were waiting for it. > > In other updates, some changes were proposed in other projects [2]. > > And finally, Lance Bragstad and I had a discussion this week [3] > about > the validity of upgrade checks looking for deleted configuration > options. The main scenario I'm thinking about here is FFU where > someone > is going from Mitaka to Pike. Let's say a config option was > deprecated > in Newton and then removed in Ocata. As the operator is rolling > through > from Mitaka to Pike, they might have missed the deprecation signal > in > Newton and removal in Ocata. Does that mean we should have upgrade > checks that look at the configuration for deleted options, or > options > where the deprecated alias is removed? My thought is that if things > will > not work once they get to the target release and restart the service > code, which would definitely impact the upgrade, then checking for > those > scenarios is probably OK. If on the other hand the removed options > were > just tied to functionality that was removed and are otherwise not > causing any harm then I don't think we need a check for that. It was > noted that oslo.config has a new validation tool [4] so that would > take > care of some of this same work if run during upgrades. So I think > whether or not an upgrade check should be looking for config option > removal ultimately depends on the severity of what happens if the > manual > intervention to handle that removed option is not performed. That's > pretty broad, but these upgrade checks aren't really set in stone > for > what is applied to them. I'd like to get input from others on this, > especially operators and if they would find these types of checks > useful. > > [1] https://docs.openstack.org/oslo.upgradecheck/latest/ > [2] https://storyboard.openstack.org/#!/story/2003657 > [3] > http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 > [4] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html > Hey, Nice topic, thanks Matt! TL;DR: I would rather fail explicitly for all removals, warning on all deprecations. My concern is that by being more surgical, we'd have to decide what's "not causing any harm" (and I think deployers/users are best placed to determine what's not causing them any harm). Also, it's probably more work to classify based on "severity". The quick win here (for upgrade-checks) is not about being smart, but about being an exhaustive source of truth for upgrades, standardized across projects and _always used_, complemented by release notes. Long answer: At some point in the past, I was working full time on upgrades using OpenStack-Ansible. Our process was the following: 1) Read all the project's release notes to find upgrade documentation 2) With said release notes, adapt our deploy tools to handle the upgrade, and/or write ourselves extra documentation+release notes for our deployers.
3) Try the upgrade manually, fail because some release note was missing x or y. Find the root cause and retry from step 2 until success. Here is where I see upgrade checkers improving things: 1) No need for deployment projects to parse all the release notes for configuration changes, as the upgrade-check tooling would directly output the things that need to change for scenario x or y covered by the deployment project. No need to iterate either. 2) Test real deployer use cases. The deployers using openstack-ansible have ultimate flexibility without our code changes, which means they may exercise different code paths than our gating. Including these checks in all upgrades, always requiring them to pass, and making them explicit about the changes is tremendously helpful for deployers: - If config deprecations are handled as warnings as part of the same process, we will output said warnings to generate a list of action items for the deployers. We would use only one tool as the source of truth for giving the action items (and still continue the upgrade); - If config removals are handled as errors, the upgrade will fail, which is IMO normal, as the deployer would not have respected their action items. In OSA, we could probably implement a deployer override (variable). It would allow the deployers an explicit bypass of an upgrade failure. "I know I am doing this!". It would be useful for doing multiple serial upgrades. In that case, deployers could then share their "recipes" for handling upgrade failure bypasses for certain multi-upgrade (jump) scenarios. After a while, we could think of feeding those back to the upgrade checkers. 3) I like the approach of having oslo-config-validator. However, I must admit it's not part of our process to always validate a config file before trying to start a service in OSA. I am not sure where other deployment projects are in terms of that usage. I am not familiar with the upgrade checker code, but I would love to see it re-using oslo-config-validator, as it would be the unique source of truth for upgrades before the upgrade happens (vs. having to do multiple steps). If I am completely out of my league here, tell me. Just my 2 cents. Jean-Philippe Evrard (evrardjp) From sfinucan at redhat.com Mon Oct 15 10:49:39 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 15 Oct 2018 11:49:39 +0100 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <20181010185126.agh5d2msk2aut62d@yuggoth.org> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> <20181010185126.agh5d2msk2aut62d@yuggoth.org> Message-ID: <309fd108d93bc66fac5498507224a721d2f07d75.camel@redhat.com> On Wed, 2018-10-10 at 18:51 +0000, Jeremy Stanley wrote: > On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote: > [...] > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs > > pass muster before being merged, so it's not like we're wanting to > > circumvent good contribution practices by committing whatever to HEAD. > > Travis CI has gained the ability to prevent you from merging changes > which fail testing? Or do you mean something else when you refer to > it as a "gatekeeper" here? Yup, but it's a GitHub feature rather than specifically a Travis CI feature. https://help.github.com/articles/about-required-status-checks/ Doesn't help the awful pull request workflow, but that's neither here nor there. Stephen
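Picking up the oslo.upgradecheck part of the thread above: for anyone wondering what wiring a project into the new library involves, here is a minimal sketch based on the library's documented usage. The project name and the check are placeholders for illustration, not taken from any real project:

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):
        """Upgrade checks for a hypothetical 'myservice' project."""

        def _check_placeholder(self):
            # A real check would inspect configuration or database state
            # here and return Code.WARNING or Code.FAILURE together with
            # actionable details for the operator.
            return upgradecheck.Result(
                upgradecheck.Code.SUCCESS, 'No issues found.')

        # (display name, method) pairs run by "myservice-status upgrade check"
        _upgrade_checks = (
            ('Placeholder', _check_placeholder),
        )


    def main():
        return upgradecheck.main(
            cfg.CONF, project='myservice', upgrade_command=Checks())


    if __name__ == '__main__':
        sys.exit(main())

The command's exit status reflects the worst result returned by the checks, which is what would let deployment tooling such as OSA treat a failed check as a hard stop in the workflow Jean-Philippe describes.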
From gergely.csatari at nokia.com Mon Oct 15 10:49:52 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Mon, 15 Oct 2018 10:49:52 +0000 Subject: [openstack-dev] [edge][keystone][ptg]: Keystone edge architectures wiki updated Message-ID: Hi, I've updated the Keystone edge architectures wiki [1] based on the notes [2] we generated in the Denver workshop. Please check and comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Keystone_edge_architectures [2]: https://etherpad.openstack.org/p/keystone-stein-edge-architecture -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Mon Oct 15 10:54:28 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 15 Oct 2018 11:54:28 +0100 Subject: [openstack-dev] [infra] Polygerrit (was: [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo) In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: On Wed, 2018-10-10 at 13:35 -0500, Greg Hill wrote: > > If it's just about preferring the pull request workflow versus the > > Gerrit rebase workflow, just say so. Same for just preferring the Github > > UI versus Gerrit's UI (which I agree is awful). > > I mean, yes, I personally prefer the Github UI and workflow, but that > was not a primary consideration. I got used to using gerrit well > enough. There's also a sense that if a project is > in the Openstack umbrella, it's not useful outside Openstack, and > Taskflow is designed to be a general-purpose library. The hope is > that just making it a regular open source project might attract more > users and contributors. This may or may not bear out, but as it is, > there's no real benefit to staying an openstack project on this front > since nobody is actively working on it within the community. As an aside, are there any plans to enable PolyGerrit [1] in the OpenStack Gerrit instance? The Gerrit documentation lists the feature as a beta [2], but I suspect that might be out-of-date given how long it's been around and folks suggesting the opposite elsewhere [3]. Stephen [1] https://www.youtube.com/watch?v=WsPhoPGUsss [2] https://www.gerritcodereview.com/dev-polygerrit.html# [3] https://gitenterprise.me/2018/04/23/gerrithub-adopts-100-polygerrit/ From cdent+os at anticdent.org Mon Oct 15 11:27:38 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 15 Oct 2018 12:27:38 +0100 (BST) Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: Message-ID: On Wed, 10 Oct 2018, Greg Hill wrote: > I guess I'm putting it forward to the larger community. Does anyone have > any objections to us doing this? Are there any non-obvious technicalities > that might make such a transition difficult? Who would need to be made > aware so they could adjust their own workflows? I've been on both sides of conversations like this a few different times. Generally speaking, people who are not already in the OpenStack environment express an unwillingness to participate because of perceptions of walled-garden and too-many-hoops. Whatever the reality of the situation, those perceptions matter, and for libraries that are already or potentially useful to people who are not in OpenStack, being "outside" is probably beneficial. And for a library that is normally installed (or should optimally be installed because, really, isn't it nice to be decoupled?) via pip, does it matter to OpenStack where it comes from?
> Or would it be preferable to just fork and rename the project so openstack > can continue to use the current taskflow version without worry of us > breaking features? Fork sounds worse. I've had gabbi contributors tell me, explicitly, that they would not bother contributing if they had to go through what they perceive to be the OpenStack hoops. That's anecdata, but for me it is pretty compelling. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From gergely.csatari at nokia.com Mon Oct 15 11:29:05 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Mon, 15 Oct 2018 11:29:05 +0000 Subject: [openstack-dev] [edge][glance][ptg]: Image handling wiki updated Message-ID: Hi, I've updated the Image handling in edge environment wiki [1] based on the notes [2] from the Denver ptg discussions. Please check and comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/glance-stein-edge-architecture -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Oct 15 12:08:06 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Oct 2018 12:08:06 +0000 Subject: [openstack-dev] [infra] Polygerrit (was: [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo) In-Reply-To: References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> Message-ID: <20181015120806.mf2opeanihvcmg4t@yuggoth.org> On 2018-10-15 11:54:28 +0100 (+0100), Stephen Finucane wrote: [...] > As an aside, are there any plans to enable PolyGerrit [1] in the > OpenStack Gerrit instance? [...] I believe so, but first we need to upgrade to a newer Gerrit version which provides it (that in turn requires a newer Java which needs a server built from a newer distro version, which is all we've gotten through on the upgrade plan so far). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Mon Oct 15 12:16:25 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 15 Oct 2018 13:16:25 +0100 (BST) Subject: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours In-Reply-To: References: Message-ID: On Sat, 13 Oct 2018, Mohammed Naser wrote: > Does this seem like it would be of interest to the community? I am > currently trying to transform our office hours to be more of a space > where we have more of the community and less of discussion between us. If we want discussion to actually be with the community at large (rather than giving lip service to the idea), then we need to be more oriented to using email. Each time we have an office hour or a meeting in IRC or elsewhere, or an ad hoc Hangout, unless we are super disciplined about reporting the details to email afterwards, a great deal of information falls on the floor and individuals who are unable to attend because of time, space, language or other constraints are left out. For community-wide issues, synchronous discussion should be the mode of last resort. Anything else creates a priesthood with a disempowered laity wondering how things got away from them. For community goals, in particular, preferring email for discussion and planning seems pretty key. I wonder if instead of specifying topics for TC office hours, we kill them instead? They've turned into gossiping echo chambers. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Mon Oct 15 12:40:37 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 15 Oct 2018 13:40:37 +0100 (BST) Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance Message-ID: Back in August [1] there was an email thread about the Paste package being essentially unmaintained and several OpenStack projects still using it. At that time we reached the conclusion that we should investigate having OpenStack adopt Paste in some form as it would take some time or be not worth it to migrate services away from it. I went about trying to locate the last set of maintainers and get access to picking it up. It took a while, but I've now got owner bits for both bitbucket and PyPI and enthusiastic support from the previous maintainer for OpenStack to be the responsible party. I'd like some input from the community on how we'd like this to go. Some options. * Chris becomes the de-facto maintainer of paste and I do whatever I like to get it healthy and released. * Several volunteers from the community take over the existing bitbucket setup [2] and keep it going there. * Several volunteers from the community import the existing bitbucket setup to OpenStack^wOpenDev infra and manage it. What would people like? Who would like to volunteer? At this stage the main piece of blocking work is a patch [3] (and subsequent release) to get things working happily in Python 3.7. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/132792.html [2] https://bitbucket.org/ianb/paste [3] https://bitbucket.org/ianb/paste/pull-requests/41/python-37-support/diff -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dtantsur at redhat.com Mon Oct 15 13:05:18 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 15 Oct 2018 15:05:18 +0200 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: Message-ID: <7e83990d-7620-f5af-9275-1ece9f84e26b@redhat.com> On 10/10/18 7:41 PM, Greg Hill wrote: > I've been out of the openstack loop for a few years, so I hope this reaches the > right folks. > > Josh Harlow (original author of taskflow and related libraries) and I have been > discussing the option of moving taskflow out of the openstack umbrella recently. > This move would likely also include the futurist and automaton libraries that > are primarily used by taskflow. Just for completeness: futurist and automaton are also heavily relied on by ironic without using taskflow. > The idea would be to just host them on github > and use the regular Github features for Issues, PRs, wiki, etc, in the hopes > that this would spur more development. Taskflow hasn't had any substantial > contributions in several years and it doesn't really seem that the current > openstack devs have a vested interest in moving it forward. I would like to move > it forward, but I don't have an interest in being bound by the openstack > workflow (this is why the project stagnated as core reviewers were pulled on to > other projects and couldn't keep up with the review backlog, so contributions > ground to a halt). > > I guess I'm putting it forward to the larger community. Does anyone have any > objections to us doing this? Are there any non-obvious technicalities that might > make such a transition difficult? Who would need to be made aware so they could > adjust their own workflows? 
> > Or would it be preferable to just fork and rename the project so openstack can > continue to use the current taskflow version without worry of us breaking features? > > Greg > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Mon Oct 15 13:32:15 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 15 Oct 2018 08:32:15 -0500 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: Message-ID: <20181015133214.GA366@sm-workstation> On Mon, Oct 15, 2018 at 01:40:37PM +0100, Chris Dent wrote: > > Back in August [1] there was an email thread about the Paste package > being essentially unmaintained and several OpenStack projects still > using it. At that time we reached the conclusion that we should > investigate having OpenStack adopt Paste in some form as it would > take some time or be not worth it to migrate services away from it. > > [snip] > > I'd like some input from the community on how we'd like this to go. > Some options. > > * Chris becomes the de-facto maintainer of paste and I do whatever I > like to get it healthy and released. > > * Several volunteers from the community take over the existing > bitbucket setup [2] and keep it going there. > > * Several volunteers from the community import the existing > bitbucket setup to OpenStack^wOpenDev infra and manage it. > > What would people like? Who would like to volunteer? > Maybe some combination of bullets one and three? This seems like it would increase visibility in the community if it were moved under the OpenDev umbrella. Then we could also leverage our release automation as/when needed. It might also then attract some of the driveby contributions. From skaplons at redhat.com Mon Oct 15 13:33:35 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 15 Oct 2018 15:33:35 +0200 Subject: [openstack-dev] [neutron] Bug deputy report - week 42 Message-ID: <2C9595FF-A07A-4CD9-850A-1ED72A179650@redhat.com> Hi, I was on bug deputy in week of 8.10.2018 to 15.10.2018. Below is summary of reported bugs for Neutron: Bugs which needs some look to confirm them: * https://bugs.launchpad.net/neutron/+bug/1795432 - neutron does not create the necessary iptables rules for dhcp agents when linuxbridge is used This one should be confirmed on multinode environment with Linuxbridge, I didn’t have such env to work on that. * https://bugs.launchpad.net/neutron/+bug/1797368 - Trunk: different behavior of admin_state_up attribute between trunk and port I have no experience with trunks and I would like someone else to take a look on that. From description it looks for me that it’s valid issue * https://bugs.launchpad.net/neutron/+bug/1795816 - neutron_dynamic_routing Bgp floatingip_update KeyError: 'last_known_router_id Should be good that someone more familiar with neutron-dynamic-routing could take a look on it. I don’t have environment to confirm it but from bug report it looks as valid bug. 
Importance set to medium. Bugs already triaged or with triage in progress: * https://bugs.launchpad.net/neutron/+bug/1796491 - DVR Floating IP setup in the SNAT namespace of the network node and also in the qrouter namespace in the compute node Swami is checking if it wasn’t already fixed… * https://bugs.launchpad.net/neutron/+bug/1796824 - Port in some type of device_owner should not allow update IP address Importance set to Medium * https://bugs.launchpad.net/neutron/+bug/1797037 - Extra routes configured on routers are not set in the router namespace and snat namespace with DVR-HA routers already in progress, importance Medium * https://bugs.launchpad.net/neutron/+bug/1796854 - Neutron doesn't respect advscv role while creating port Importance set to Medium, this is a neutron-lib issue * https://bugs.launchpad.net/neutron/+bug/1796976 - neutron.conf needs lock_path set for router to operate Docs issue - importance Low, * https://bugs.launchpad.net/neutron/+bug/1797084 - Stale namespaces when fallback tunnels are present Importance Low, patch already proposed by dalvarez * https://bugs.launchpad.net/neutron/+bug/1797298 - Router gateway device are repeatedly configured When ha changed Importance Low, patch already proposed * https://bugs.launchpad.net/neutron/+bug/1796703 - HA router interfaces in standby state Needs a look by some L3 experts - I already raised this one at the L3 sub-team meeting and Brian is triaging it now * https://bugs.launchpad.net/neutron/+bug/1796230 - no libnetfilter-log1 package on centos Waiting for a reply to Brian’s question now… * https://bugs.launchpad.net/neutron/+bug/1763608 - Netplan ignores Interfaces without IP Addresses I asked how it’s related to neutron, as I don’t understand that. Waiting for a reply now. * https://bugs.launchpad.net/neutron/+bug/1795222 - [l3] router agent side gateway IP not changed if directly change IP address Importance medium, patch already proposed * https://bugs.launchpad.net/neutron/+bug/1797663 - refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py Importance wishlist, * https://bugs.launchpad.net/neutron/+bug/1796629 - Incorrectly passing ext_ips as gw_ips after creating router gateway Importance set to medium RFEs: * https://bugs.launchpad.net/neutron/+bug/1796925 - [RFE] port forwarding floating IP QoS * https://bugs.launchpad.net/neutron/+bug/1797140 - [RFE] create port by providing parameters subnet_id only Potential RFEs maybe: * https://bugs.launchpad.net/neutron/+bug/1792890 - The user can delete a security group which is used as remote-group-id It looks like maybe a potential RFE, as it may change current API behavior — Slawek Kaplonski Senior software engineer Red Hat From josephine.seifert at secustack.com Mon Oct 15 13:35:38 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Mon, 15 Oct 2018 15:35:38 +0200 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <12290bd5-bce6-1f8f-abbd-29f2263487e8@secustack.com> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> <12290bd5-bce6-1f8f-abbd-29f2263487e8@secustack.com> Message-ID: <7811aa5b-9538-d73c-e1eb-5d46062da32e@secustack.com> Hello OpenStack developers, we have made an etherpad, as there were a few questions concerning the library we want to use for the encryption and decryption method: https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption On 11.10.2018 at 15:10, Josephine Seifert wrote: > On 08.10.2018 at 17:16, Markus Hentsch wrote: >> Dear OpenStack
developers, >> >> as you suggested, we have written individual specs for Nova [1] and >> Cinder [2] so far and will write another spec for Glance soon. We'd >> appreciate any feedback and reviews on the specs :) >> >> Thank you in advance, >> Markus Hentsch >> >> [1] https://review.openstack.org/#/c/608696/ >> [2] https://review.openstack.org/#/c/608663/ >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > The spec for Glance is also on gerrit now: > > https://review.openstack.org/#/c/609667/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Oct 15 13:39:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 15 Oct 2018 08:39:57 -0500 Subject: [openstack-dev] [ptl][release] Proposed changes for cycle-with-milestones deliverables In-Reply-To: References: <20180926142229.GA26870@sm-workstation> Message-ID: <20181015133957.GB366@sm-workstation> On Fri, Oct 12, 2018 at 11:44:11AM -0400, Doug Hellmann wrote: > Sean McGinnis writes: > > > [snip] > > > > One of the big motivators in the past was also to have output that downstream > > distros and users could pick up for testing and early packaging. Based on our > > admittedly anecdotal small sample, it doesn't appear this is actually a big > > need, so we propose to stop tagging milestone releases for the > > cycle-with-milestone projects. > > One of the issues that was raised from downstream consumers [1] is that > this complicates upgrade testing using packages, since tools like yum > will think that the stable branch (with a final version tag) has a > higher version number than master (with a dev version computed off of > the first release candidate where the stable branch was created). > > [snip] > > We need all projects to increment their version at least by one minor > version at the start of each cycle to save space for patch releases on > the stable branch, so we looked at a few options for triggering that > update automatically. > > [snip] > > A similarly low impact solution is to use pbr's Sem-Ver calculation > feature and inject patches into master to bump the version being > computed by 1 feature level (which should move from x.y.z.0rc1 to > somethinglike x.y+1.0.devN). See [2] for details about how this works. > > This is the approach I prefer, and I have a patch to the branching > scripts to add the Sem-Ver instruction to the patches we already > generate to update reno [3]. > > That change should take care of our transition from Stein->T, but we're > left with versions in Stein that are lower than Rocky right now. So, as > a one time operation, Sean is going to propose empty patches with the > Sem-Ver instruction in the commit message to all of the repositories for > Stein deliverables that have stable/rocky branches. > The patch to automatically propose the sem-ver flag on branching stable/* has landed and I have tested it out with our release-test repo. This seems to work well and is a much lower impact than other options we had considered. 
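For reference, the Sem-Ver instruction being discussed is nothing more than a pseudo-header that pbr parses out of commit messages when computing versions; "Sem-Ver: feature" bumps the minor version, leaving room for patch releases on the stable branch. A hypothetical example of what one of these otherwise-empty patches might look like:

    Update master for stable/rocky

    This otherwise-empty commit only instructs pbr to bump the minor
    version being computed on master above the highest version
    reachable from stable/rocky.

    Sem-Ver: feature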
I have a set of patches queued up now to do this one-time manual step for the rocky to stein transition. **Please watch for these "empty" patches from me and get them through quickly if they look OK to you.** I have checked the list of repos to make sure none of them have done any sort of milestone release yet for stein. If you are aware of any that I have missed, please let me know. A strong warning about using this PBR feature: it is not obvious that having this metadata tag in a commit has this effect on a repo, and it is very, very disruptive to try to undo its use. So please do not copy, backport, or do anything else with any of the commits that contain this flag. Any questions at all, please ask here or in the #openstack-release channel. Sean From ed at leafe.com Mon Oct 15 13:51:51 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 15 Oct 2018 08:51:51 -0500 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: Message-ID: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> On Oct 15, 2018, at 7:40 AM, Chris Dent wrote: > > I'd like some input from the community on how we'd like this to go. I would say it depends on the long-term plans for paste. Are we planning on weaning ourselves off of paste, and simply need to maintain it until that can be completed, or are we planning on encouraging its use? -- Ed Leafe From aschadin at sbcloud.ru Mon Oct 15 13:55:36 2018 From: aschadin at sbcloud.ru (Alexander Chadin) Date: Mon, 15 Oct 2018 13:55:36 +0000 Subject: [openstack-dev] [watcher] [monasca] Bare metal node N+1 redundancy and Proactive HA References: Message-ID: <9907ba1ef089436aa3e23919dab16d79@sbcloud.ru> On 10.10.2018 12:43, Vu Cong, Tuan wrote: > Then, I will fill the gap for communication between Monasca and Watcher by uploading new patch to Monasca. Could you please add me as a patch reviewer once the patch is uploaded? Thanks! -- Alexander Chadin From lbragstad at gmail.com Mon Oct 15 14:07:09 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 15 Oct 2018 09:07:09 -0500 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> Message-ID: On Mon, Oct 15, 2018 at 8:52 AM Ed Leafe wrote: > On Oct 15, 2018, at 7:40 AM, Chris Dent wrote: > > > > I'd like some input from the community on how we'd like this to go. > > I would say it depends on the long-term plans for paste. Are we planning > on weaning ourselves off of paste, and simply need to maintain it until > that can be completed, or are we planning on encouraging its use? > > Keystone started doing this last release and we're just finishing it up now. The removal of keystone's v2.0 API and our hand-rolled API dispatching ended up being the perfect storm for us to say "let's just remove paste entirely and migrate to something supported". It helped that we stacked a couple of long-standing work items behind the paste removal, but it was a ton of work [0]. I think Morgan was going to put together a summary of how we approached the removal. If the long-term goal is to help projects move away from Paste, then we can try and share some of the knowledge we have.
[0] https://twitter.com/MdrnStm/status/1050519620724056065 > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Mon Oct 15 14:32:16 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 15 Oct 2018 09:32:16 -0500 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> Message-ID: Hi Gilles, The merge of master into feature/graphql has been approved: https://review.openstack.org/#/c/609455. In the future, you can create your own merge patch following the instructions here: https://docs.openstack.org/infra/manual/drivers.html#merge-master-into-feature-branch. The Neutron team will catch it in Gerrit and review it Regards Miguel On Thu, Oct 4, 2018 at 11:44 PM Gilles Dubreuil wrote: > Hey Neutron folks, > > I'm just reiterating the request. > > Thanks > > > On 20/06/18 11:34, Gilles Dubreuil wrote: > > Could someone from the Neutron release group rebase feature/graphql > > branch against master/HEAD branch please? > > > > Regards, > > Gilles > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edmondsw at us.ibm.com Mon Oct 15 15:06:17 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Mon, 15 Oct 2018 11:06:17 -0400 Subject: [openstack-dev] [zVM] [python3] tox/zuul issues for zVM OpenStack Message-ID: The current tox.ini for ceilometer-zvm includes this line [1] similar to what ceilometer-powervm was doing up until recently: -egit+https://github.com/openstack/ceilometer at master#egg=ceilometer We found that this no longer works since ceilometer was added to upper-constraints [2]. We first got things working again by [3] and are now improving on that by [4]. I expect you will need to make similar changes. I was going to propose this to ceilometer-zvm for you, but then noticed that you don't even have a zuul.yaml file. You're missing changes like [5] adding lower-constraints checking and [6] for the python3-first effort. So that is probably a bigger issue to address first (and for networking-powervm and nova-zvm-virt-driver as well as ceilometer-zvm). When you get to the python3 stuff, I suggest you work with dhellmann on that. He's got scripts to generate at least some of those commits. [1] http://git.openstack.org/cgit/openstack/ceilometer-zvm/tree/tox.ini#n19 [2] https://review.openstack.org/#/c/601498 [3] https://review.openstack.org/#/c/609058/ [4] https://review.openstack.org/#/c/609823/ [5] https://review.openstack.org/#/c/555358/ [6] https://review.openstack.org/#/c/594984/ W. Matthew Edmonds Sr. Software Engineer, IBM Power Systems Email: edmondsw at us.ibm.com Phone: (919) 543-7538 / Tie-Line: 441-7538 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Mon Oct 15 15:49:01 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 15 Oct 2018 11:49:01 -0400 Subject: [openstack-dev] [glance][upgrade-checkers] Question about glance rocky upgrade release note In-Reply-To: <651d4eb2-f838-12aa-867e-29928dc993c1@gmail.com> References: <5fa6be15-bf93-2f74-7799-2d602b96f363@gmail.com> <651d4eb2-f838-12aa-867e-29928dc993c1@gmail.com> Message-ID: <8b59d1de-34e5-1bfd-ddc5-5a0a6eed97a0@gmail.com> On 9/24/18 3:13 PM, Matt Riedemann wrote: > On 9/24/2018 2:06 PM, Matt Riedemann wrote: >> Are there more specific docs about how to configure the 'image import' >> feature so that I can be sure I'm careful? In other words, are there >> specific things a "glance-status upgrade check" check could look at >> and say, "your image import configuration is broken, here are details >> on how you should do this" Apologies for this delayed reply. > I guess this answers the question about docs: > > https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html Yes, you found the correct docs. They could probably use a revision to eliminate some of the references to Pike and Queens, but I think the content is accurate with respect to proper configuration of image import. > Would a basic upgrade check be such that if glance-api.conf contains > enable_image_import=False, you're going to have issues since that option > is removed in Rocky? I completely missed this question when I saw this email a few weeks ago. Yes, if a Queens glance-api.conf has enable_image_import=False, then it was disabled on purpose since the default in Queens was True. Given the Rocky defaults for import-related config (namely, all import_methods are enabled), the operator may need to make some kind of adjustment. As a side point, although the web-download import method is enabled by default in Rocky, it has whitelist/blacklist configurability to restrict what kind of URIs end-users may access. By default, end users are only able to access URIs using the http or https scheme on the standard ports. Thanks for working on the upgrade-checker goal for Glance! From lbragstad at gmail.com Mon Oct 15 16:51:45 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 15 Oct 2018 11:51:45 -0500 Subject: [openstack-dev] [glance][upgrade-checkers] Question about glance rocky upgrade release note In-Reply-To: <8b59d1de-34e5-1bfd-ddc5-5a0a6eed97a0@gmail.com> References: <5fa6be15-bf93-2f74-7799-2d602b96f363@gmail.com> <651d4eb2-f838-12aa-867e-29928dc993c1@gmail.com> <8b59d1de-34e5-1bfd-ddc5-5a0a6eed97a0@gmail.com> Message-ID: I haven't implemented any checks, but I did take a shot at laying down the scaffolding for implementing upgrade checks in glance [0]. Anyone who is more familiar with glance should be able to build off of that commit by implementing specific checks in glance/cmd/status.py [0] https://review.openstack.org/#/c/610661/ On Mon, Oct 15, 2018 at 10:49 AM Brian Rosmaita wrote: > On 9/24/18 3:13 PM, Matt Riedemann wrote: > > On 9/24/2018 2:06 PM, Matt Riedemann wrote: > >> Are there more specific docs about how to configure the 'image import' > >> feature so that I can be sure I'm careful? In other words, are there > >> specific things a "glance-status upgrade check" check could look at > >> and say, "your image import configuration is broken, here are details > >> on how you should do this" > Apologies for this delayed reply. 
> > I guess this answers the question about docs: > > > > > https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html > > Yes, you found the correct docs. They could probably use a revision to > eliminate some of the references to Pike and Queens, but I think the > content is accurate with respect to proper configuration of image import. > > Would a basic upgrade check be such that if glance-api.conf contains > > enable_image_import=False, you're going to have issues since that option > > is removed in Rocky? > > I completely missed this question when I saw this email a few weeks ago. > > Yes, if a Queens glance-api.conf has enable_image_import=False, then it > was disabled on purpose since the default in Queens was True. Given the > Rocky defaults for import-related config (namely, all import_methods are > enabled), the operator may need to make some kind of adjustment. > > As a side point, although the web-download import method is enabled by > default in Rocky, it has whitelist/blacklist configurability to restrict > what kind of URIs end-users may access. By default, end users are only > able to access URIs using the http or https scheme on the standard ports. > > Thanks for working on the upgrade-checker goal for Glance! > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Oct 15 17:07:04 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 15 Oct 2018 10:07:04 -0700 Subject: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23? Message-ID: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> Hey all, Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was thinking it would be a good idea to have a spec review day next week on Tuesday Oct 23 to spend some focus on spec reviews together. Spec freeze is s-2 Jan 10, so the review day isn't related to any deadlines, but would just be a way to organize and make sure we have initial review on the specs that have been proposed so far. How does Tuesday Oct 23 work for everyone? Let me know if another day works better. So far, efried and mriedem are on board when I asked in the #openstack-nova channel. I'm sending this mail to gather more responses asynchronously. 
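Circling back to the glance upgrade-check discussion just above: given Lance's scaffolding, a concrete check for the removed option could be quite small. The following is a sketch only; the config path, method name and message are illustrative, and a removed option has to be read from the raw file since it is no longer registered with oslo.config:

    from oslo_upgradecheck import upgradecheck

    # Hypothetical method for a glance UpgradeCommands subclass in
    # glance/cmd/status.py: flag operators still carrying the Queens-era
    # enable_image_import option, which was removed in Rocky.
    def _check_enable_image_import(self):
        try:
            with open('/etc/glance/glance-api.conf') as conf:
                for line in conf:
                    if line.strip().startswith('enable_image_import'):
                        return upgradecheck.Result(
                            upgradecheck.Code.WARNING,
                            'enable_image_import was removed in Rocky; '
                            'image import is now tuned via '
                            'enabled_import_methods and the web-download '
                            'whitelist/blacklist options.')
        except IOError:
            pass  # No config file found here; nothing to check.
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)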
Cheers, -melanie [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule From zbitter at redhat.com Mon Oct 15 19:00:07 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 15 Oct 2018 15:00:07 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> On 12/10/18 8:59 AM, Corey Bryant wrote: > > > On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger > wrote: > > On 10/10/2018 23.10, Jeremy Stanley wrote: > > I might have only pointed this out on IRC so far, but the > > expectation is that testing 3.5 and 3.6 at the same time was merely > > transitional since official OpenStack projects should be moving > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > > cycle and so will drop 3.5 testing on master in the process. > > Agreed, this needs some larger communication and explanation on what > to do, > > > The good news is we now have an initial change underway and successful, > dropping py35 and enabling py37: https://review.openstack.org/#/c/609557/ Hey Corey, Thanks for getting this underway, it's really important that we keep moving forward (we definitely got behind on the 3.6 transition and are paying for it now). That said, I don't think we should be dropping support/testing for 3.5. According to: https://governance.openstack.org/tc/reference/pti/python.html 3.5 is the only Python3 version that we require all projects to run tests for. Out goal is to get everyone running 3.6 unit tests by the end of Stein: https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs but we explicitly said there that we were not dropping support for 3.5 as part of the goal, and should continue to do so until we can effect an orderly transition later. Personally, I would see that including waiting for all the 3.5-supporting projects to add 3.6 jobs (which has been blocked up until ~this point, as we are only just now close to getting all of the repos using local Zuul config). I do agree that anything that works on 3.5 and 3.7 will almost certainly work on 3.6, so if you wanted to submit a patch to that goal saying that projects could add a unit test job for *either* 3.6 or 3.7 (in addition to 3.5) then I would probably support that. We could then switch all the 3.5 jobs to 3.6 later when we eventually drop 3.5 support. That would mean we'd only ever run 3 unit test jobs (and 2 once 2.7 is eventually dropped) - for the oldest and newest versions of Python 3 that a project supports. cheers, Zane. [This thread was also discussed on IRC starting here: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-15.log.html#t2018-10-15T18:09:05] > I'm happy to get things moving along and start proposing changes like > this to other projects and communicating with PTLs along the way. Do you > think we need more discussion/communication on this or should I get started? > > Thanks, > Corey > > > Andreas > -- >  Andreas Jaeger aj@{suse.com ,opensuse.org > } Twitter: jaegerandi >   SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany >    GF: Felix Imendörffer, Jane Smithard, Graham Norton, >        HRB 21284 (AG Nürnberg) >     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 > A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From corey.bryant at canonical.com Mon Oct 15 19:15:30 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Mon, 15 Oct 2018 15:15:30 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> Message-ID: On Mon, Oct 15, 2018 at 3:01 PM Zane Bitter wrote: > On 12/10/18 8:59 AM, Corey Bryant wrote: > > > > > > On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger > > wrote: > > > > On 10/10/2018 23.10, Jeremy Stanley wrote: > > > I might have only pointed this out on IRC so far, but the > > > expectation is that testing 3.5 and 3.6 at the same time was > merely > > > transitional since official OpenStack projects should be moving > > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu > > > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein > > > cycle and so will drop 3.5 testing on master in the process. > > > > Agreed, this needs some larger communication and explanation on what > > to do, > > > > > > The good news is we now have an initial change underway and successful, > > dropping py35 and enabling py37: > https://review.openstack.org/#/c/609557/ > > Hey Corey, > Thanks for getting this underway, it's really important that we keep > moving forward (we definitely got behind on the 3.6 transition and are > paying for it now). > > That said, I don't think we should be dropping support/testing for 3.5. > According to: > > https://governance.openstack.org/tc/reference/pti/python.html > > 3.5 is the only Python3 version that we require all projects to run > tests for. > > Out goal is to get everyone running 3.6 unit tests by the end of Stein: > > > > https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs > > but we explicitly said there that we were not dropping support for 3.5 > as part of the goal, and should continue to do so until we can effect an > orderly transition later. Personally, I would see that including waiting > for all the 3.5-supporting projects to add 3.6 jobs (which has been > blocked up until ~this point, as we are only just now close to getting > all of the repos using local Zuul config). > > I do agree that anything that works on 3.5 and 3.7 will almost certainly > work on 3.6, so if you wanted to submit a patch to that goal saying that > projects could add a unit test job for *either* 3.6 or 3.7 (in addition > to 3.5) then I would probably support that. We could then switch all the > 3.5 jobs to 3.6 later when we eventually drop 3.5 support. 
That would > mean we'd only ever run 3 unit test jobs (and 2 once 2.7 is eventually > dropped) - for the oldest and newest versions of Python 3 that a project > supports. > This seems like a reasonable approach to me. I'll get a review up and we can see what others think. Thanks, Corey > cheers, > Zane. > > [This thread was also discussed on IRC starting here: > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-15.log.html#t2018-10-15T18:09:05 > ] > > > I'm happy to get things moving along and start proposing changes like > > this to other projects and communicating with PTLs along the way. Do you > > think we need more discussion/communication on this or should I get > started? > > > > Thanks, > > Corey > > > > > > Andreas > > -- > > Andreas Jaeger aj@{suse.com ,opensuse.org > > } Twitter: jaegerandi > > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > > HRB 21284 (AG Nürnberg) > > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 > > A126 > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Mon Oct 15 19:17:55 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Mon, 15 Oct 2018 12:17:55 -0700 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: Message-ID: <13f7b247-e51c-fb00-86c9-e6db70c59fe9@fastmail.com> I'm ok with trying this move out and seeing how it goes (maybe it will be for the better, idk). -Josh On 10/10/2018 10:41 AM, Greg Hill wrote: > I've been out of the openstack loop for a few years, so I hope this > reaches the right folks. > > Josh Harlow (original author of taskflow and related libraries) and I > have been discussing the option of moving taskflow out of the > openstack umbrella recently. This move would likely also include the > futurist and automaton libraries that are primarily used by taskflow. > The idea would be to just host them on github and use the regular > Github features for Issues, PRs, wiki, etc, in the hopes that this > would spur more development. Taskflow hasn't had any substantial > contributions in several years and it doesn't really seem that the > current openstack devs have a vested interest in moving it forward. 
I > would like to move it forward, but I don't have an interest in being > bound by the openstack workflow (this is why the project stagnated as > core reviewers were pulled on to other projects and couldn't keep up > with the review backlog, so contributions ground to a halt). > > I guess I'm putting it forward to the larger community. Does anyone > have any objections to us doing this? Are there any non-obvious > technicalities that might make such a transition difficult? Who would > need to be made aware so they could adjust their own workflows? > > Or would it be preferable to just fork and rename the project so > openstack can continue to use the current taskflow version without > worry of us breaking features? > > Greg > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Mon Oct 15 19:27:19 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Mon, 15 Oct 2018 15:27:19 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> Message-ID: On Mon, Oct 15, 2018 at 3:15 PM Corey Bryant wrote: > > > On Mon, Oct 15, 2018 at 3:01 PM Zane Bitter wrote: > >> On 12/10/18 8:59 AM, Corey Bryant wrote: >> > >> > >> > On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger > > > wrote: >> > >> > On 10/10/2018 23.10, Jeremy Stanley wrote: >> > > I might have only pointed this out on IRC so far, but the >> > > expectation is that testing 3.5 and 3.6 at the same time was >> merely >> > > transitional since official OpenStack projects should be moving >> > > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu >> > > Bionic (which provides 3.6 and, now, 3.7 as well) during the >> Stein >> > > cycle and so will drop 3.5 testing on master in the process. >> > >> > Agreed, this needs some larger communication and explanation on what >> > to do, >> > >> > >> > The good news is we now have an initial change underway and successful, >> > dropping py35 and enabling py37: >> https://review.openstack.org/#/c/609557/ >> >> Hey Corey, >> Thanks for getting this underway, it's really important that we keep >> moving forward (we definitely got behind on the 3.6 transition and are >> paying for it now). >> >> That said, I don't think we should be dropping support/testing for 3.5. >> According to: >> >> https://governance.openstack.org/tc/reference/pti/python.html >> >> 3.5 is the only Python3 version that we require all projects to run >> tests for. >> >> Out goal is to get everyone running 3.6 unit tests by the end of Stein: >> >> >> >> https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs >> >> but we explicitly said there that we were not dropping support for 3.5 >> as part of the goal, and should continue to do so until we can effect an >> orderly transition later. Personally, I would see that including waiting >> for all the 3.5-supporting projects to add 3.6 jobs (which has been >> blocked up until ~this point, as we are only just now close to getting >> all of the repos using local Zuul config). 
>> >> I do agree that anything that works on 3.5 and 3.7 will almost certainly >> work on 3.6, so if you wanted to submit a patch to that goal saying that >> projects could add a unit test job for *either* 3.6 or 3.7 (in addition >> to 3.5) then I would probably support that. We could then switch all the >> 3.5 jobs to 3.6 later when we eventually drop 3.5 support. That would >> mean we'd only ever run 3 unit test jobs (and 2 once 2.7 is eventually >> dropped) - for the oldest and newest versions of Python 3 that a project >> supports. >> > > This seems like a reasonable approach to me. I'll get a review up and we > can see what others think. > > I have the following up for review to modify the python3-first goal to allow for python3.6 or python3.7 unit test enablement: https://review.openstack.org/#/c/610708/ Thanks, Corey >> cheers, >> Zane. >> >> [This thread was also discussed on IRC starting here: >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-15.log.html#t2018-10-15T18:09:05 >> ] >> >> > I'm happy to get things moving along and start proposing changes like >> > this to other projects and communicating with PTLs along the way. Do >> you >> > think we need more discussion/communication on this or should I get >> started? >> > >> > Thanks, >> > Corey >> > >> > >> > Andreas >> > -- >> > Andreas Jaeger aj@{suse.com ,opensuse.org >> > } Twitter: jaegerandi >> > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> > GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> > HRB 21284 (AG Nürnberg) >> > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 >> > A126 >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > < >> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Oct 15 20:01:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 15 Oct 2018 15:01:07 -0500 Subject: [openstack-dev] Forum Schedule - Seeking Community Review Message-ID: <5BC4F203.4000904@openstack.org> Hi - The Forum schedule is now up (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). If you see a glaring content conflict within the Forum itself, please let me know. You can also view the Full Schedule in the attached PDF if that makes life easier... NOTE: BoFs and WGs are still not all up on the schedule. No need to let us know :) Cheers, Jimmy -------------- next part -------------- A non-text attachment was scrubbed... 
Name: full-schedule (2).pdf Type: application/pdf Size: 64066 bytes Desc: not available URL: From fungi at yuggoth.org Mon Oct 15 20:10:48 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Oct 2018 20:10:48 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> Message-ID: <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> On 2018-10-15 15:00:07 -0400 (-0400), Zane Bitter wrote: [...] > That said, I don't think we should be dropping support/testing for 3.5. > According to: > > https://governance.openstack.org/tc/reference/pti/python.html > > 3.5 is the only Python3 version that we require all projects to run tests > for. Until we update it to refer to the version provided by the test platforms we document at: https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > Out goal is to get everyone running 3.6 unit tests by the end of Stein: > > https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs > > but we explicitly said there that we were not dropping support for 3.5 as > part of the goal, and should continue to do so until we can effect an > orderly transition later. [...] We're not dropping support for 3.5 as part of the python3-first goal, but would be dropping it as part of the switch from Ubuntu 16.04 LTS (which provides Python 3.5) to 18.04 LTS (which provides Python 3.6). In the past the OpenStack Infra team has prodded us to follow our documented testing platform policies as new versions become available, but now with a move to providing infrastructure services to other OSF projects as well we're on our own to police this. We _could_ decide that we're going to start running tests on multiple versions of Python 3 indefinitely (rather than as a transitional state during the switch from Ubuntu Xenial to Bionic) but that does necessarily mean running more jobs. We could also decide to start targeting different versions of Python than provided by the distros on which we run our tests (and build it from source ourselves or something) but I think that's only reasonable if we're going to also recommend that users deploy OpenStack on top of custom-compiled Python interpreters rather than the interpreters provided by server distros like RHEL and Ubuntu. So to sum up the above, it's less a question of whether we're dropping Python 3.5 testing in Stein, and more a question of whether we're going to continue requiring OpenStack to also be able to run on Ubuntu 16.04 LTS (which wasn't the latest LTS even at the start of the cycle). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon Oct 15 20:29:58 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 15 Oct 2018 15:29:58 -0500 Subject: [openstack-dev] [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: <1d7bfda7-615e-3e21-9174-631ca8d3910e@nemebean.com> On 10/15/18 3:27 AM, Jean-Philippe Evrard wrote: > On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote: >> The big update this week is version 0.1.0 of oslo.upgradecheck was >> released. 
The documentation along with usage examples can be found >> here >> [1]. A big thanks to Ben Nemec for getting that done since a few >> projects were waiting for it. >> >> In other updates, some changes were proposed in other projects [2]. >> >> And finally, Lance Bragstad and I had a discussion this week [3] >> about >> the validity of upgrade checks looking for deleted configuration >> options. The main scenario I'm thinking about here is FFU where >> someone >> is going from Mitaka to Pike. Let's say a config option was >> deprecated >> in Newton and then removed in Ocata. As the operator is rolling >> through >> from Mitaka to Pike, they might have missed the deprecation signal >> in >> Newton and removal in Ocata. Does that mean we should have upgrade >> checks that look at the configuration for deleted options, or >> options >> where the deprecated alias is removed? My thought is that if things >> will >> not work once they get to the target release and restart the service >> code, which would definitely impact the upgrade, then checking for >> those >> scenarios is probably OK. If on the other hand the removed options >> were >> just tied to functionality that was removed and are otherwise not >> causing any harm then I don't think we need a check for that. It was >> noted that oslo.config has a new validation tool [4] so that would >> take >> care of some of this same work if run during upgrades. So I think >> whether or not an upgrade check should be looking for config option >> removal ultimately depends on the severity of what happens if the >> manual >> intervention to handle that removed option is not performed. That's >> pretty broad, but these upgrade checks aren't really set in stone >> for >> what is applied to them. I'd like to get input from others on this, >> especially operators and if they would find these types of checks >> useful. >> >> [1] https://docs.openstack.org/oslo.upgradecheck/latest/ >> [2] https://storyboard.openstack.org/#!/story/2003657 >> [3] >> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 >> [4] >> http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html >> > > Hey, > > Nice topic, thanks Matt! > > TL;DR: I would rather fail explicitly for all removals, warning on all > deprecations. My concern is, by being more surgical, we'd have to > decide what's "not causing any harm" (and I think deployers/users are > best to determine what's not causing them any harm). > Also, it's probably more work to classify based on "severity". > The quick win here (for upgrade-checks) is not about being smart, but > being an exhaustive, standardized across projects, and _always used_ > source of truth for upgrades, which is complemented by release notes. > > Long answer: > > At some point in the past, I was working full time on upgrades using > OpenStack-Ansible. > > Our process was the following: > 1) Read all the project's release notes to find upgrade documentation > 2) With said release notes, adapt our deploy tools to handle the > upgrade, and/or write ourselves extra documentation+release notes for > our deployers. > 3) Try the upgrade manually, fail because some release note was missing > x or y. Find root cause and retry from step 2 until success.
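(As an aside, for anyone who hasn't looked at the new library yet, the per-project hook is pretty small. A minimal sketch based on the usage examples in [1] - the check and project names here are made up:

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck

    class Checks(upgradecheck.UpgradeCommands):
        def _check_placeholder(self):
            # A real check would inspect config/DB state and return
            # Code.WARNING or Code.FAILURE with details as needed.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        _upgrade_checks = (('Placeholder check', _check_placeholder),)

    def main():
        # Wired up as the "$PROJECT-status upgrade check" command.
        return upgradecheck.main(cfg.CONF, project='myproject',
                                 upgrade_command=Checks())

A check for a removed config option would just be one more _check_* entry in that tuple.)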
> > Here is where I see upgrade checkers improving things: > 1) No need for deployment projects to parse all release notes for > configuration changes, as tooling to upgrade check would be directly > outputting things that need to change for scenario x or y that is > included in the deployment project. No need to iterate either. > > 2) Test real deployer use cases. The deployers using openstack-ansible > have ultimate flexibility without our code changes. Which means they > may have different code paths than our gating. Including these checks > in all upgrades, always requiring them to pass, and making them > explicit about the changes is tremendously helpful for deployers: > - If config deprecations are handled as warnings as part of the same > process, we will output said warnings to generate a list of action > items for the deployers. We would use only one tool as source of truth > for giving the action items (and still continue the upgrade); > - If config removals are handled as errors, the upgrade will fail, > which is IMO normal, as the deployer would not have respected its > action items. Note that deprecated config opts should already be generating warnings in the logs. It is also possible now to use fatal-deprecations with config opts: https://github.com/openstack/oslo.config/commit/5f8b0e0185dafeb68cf04590948b9c9f7d727051 I'm not sure that's exactly what you're talking about, but those might be useful to get us at least part of the way there. > > In OSA, we could probably implement a deployer override (variable). It > would allow the deployers an explicit bypass of an upgrade failure. "I > know I am doing this!". It would be useful for doing multiple serial > upgrades. > > In that case, deployers could then share together their "recipes" for > handling upgrade failure bypasses for certain multi-upgrade (jumps) > scenarios. After a while, we could think of feeding those back to > upgrade checkers. > > 3) I like the approach of having oslo-config-validator. However, I must > admit it's not part of our process to always validate a config file > before trying to start a service in OSA. I am not sure where other > deployment projects are in terms of that usage. I am not familiar with > upgrade checker code, but I would love to see it re-using oslo-config- > validator, as it would be the unique source of truth for upgrades > before the upgrade happens (vs having to do multiple steps). > If I am completely out of my league here, tell me. This is a bit tricky as the validator requires information that is not necessarily available in a production environment. Specifically, it either needs the oslo-config-generator configuration file that lists all of the namespaces a project uses, or it needs a generated machine-readable sample config that contains all of the opt data. The latter is not generally available today, and I'm not sure whether the former is either. A quick pip install of an OpenStack service suggests that it is not. Ideally, the machine-readable sample config would be available from packages anyway as it has other uses too, but it's a pretty big ask to get all of the packagers shipping that this cycle. I'm not sure how it would work with pip installs either, although it seems like we should be able to figure out something there. Anyway, not saying we shouldn't do it, but I want to make it clear that this isn't as simple as just adding one more check to the upgrade checkers. There are some other dependencies to doing this in a non-service-specific way. > > Just my 2 cents. 
> Jean-Philippe Evrard (evrardjp) > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From johnsomor at gmail.com Mon Oct 15 20:44:05 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 15 Oct 2018 13:44:05 -0700 Subject: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours In-Reply-To: References: Message-ID: I am interested in participating in this discussion. I think we have had a few goals that were selected before all of the parts were in place. This leads to re-work and/or pushing goals work into the already busy milestone 3 time frame. Michael On Mon, Oct 15, 2018 at 5:16 AM Chris Dent wrote: > > On Sat, 13 Oct 2018, Mohammed Naser wrote: > > > Does this seem like it would be of interest to the community? I am > > currently trying to transform our office hours to be more of a space > > where we have more of the community and less of discussion between us. > > If we want discussion to actually be with the community at large > (rather than giving lip service to the idea), then we need to be > more oriented to using email. Each time we have an office hour or a > meeting in IRC or elsewhere, or an ad hoc Hangout, unless we are > super disciplined about reporting the details to email afterwards, a > great deal of information falls on the floor and individuals who are > unable to attend because of time, space, language or other > constraints are left out. > > For community-wide issues, synchronous discussion should be the mode > of last resort. Anything else creates a priesthood with a > disempowered laity wondering how things got away from them. > > For community goals, in particular, preferring email for discussion > and planning seems pretty key. > > I wonder if instead of specifying topics for TC office hours, we kill > them instead? They've turned into gossiping echo chambers. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mordred at inaugust.com Mon Oct 15 21:03:14 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 15 Oct 2018 16:03:14 -0500 Subject: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk Message-ID: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> Heya, Tobias and I were chatting at OpenStack Days Nordic about the Public Cloud Working Group potentially taking over as custodians of the vendor profile information [0][1] we keep in openstacksdk (and previously in os-client-config) I think this is a fine idea, but we've got some dancing to do I think. A few years ago Dean and I talked about splitting the vendor data into its own repo. We decided not to at the time because it seemed like extra unnecessary complication. But I think we may have reached that time. We should split out a new repo to hold the vendor data json files. We can manage this repo pretty much how we manage the service-types-authority [2] data now. 
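For anyone who hasn't poked at them, each vendor file is just a small json document - roughly this shape (an illustrative sketch with a made-up cloud name, showing only a few of the supported keys; see [0] for the real files):

    {
      "name": "example-cloud",
      "profile": {
        "auth": {
          "auth_url": "https://identity.example-cloud.com/v3"
        },
        "identity_api_version": "3",
        "regions": ["region-a", "region-b"]
      }
    }

There's no code in them, which is part of what makes splitting them out cheap.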
Also similar to that (and similar to tzdata) these are files that contain information that is true currently and is not release specific - so it should be possible to update to the latest vendor files without updating to the latest openstacksdk. If nobody objects, I'll start working through getting a couple of new repos created. I'm thinking openstack/vendor-profile-data, owned/managed by Public Cloud WG, with the json files, docs, json schema, etc, and a second one, openstack/os-vendor-profiles - owned/managed by the openstacksdk team that's just like os-service-types [3] and is a tiny/thin library that exposes the files to python (so there's something to depend on) and gets proposed patches from zuul when new content is landed in openstack/vendor-profile-data. How's that sound? Thanks! Monty [0] http://git.openstack.org/cgit/openstack/openstacksdk/tree/openstack/config/vendors [1] https://docs.openstack.org/openstacksdk/latest/user/config/vendor-support.html [2] http://git.openstack.org/cgit/openstack/service-types-authority [3] http://git.openstack.org/cgit/openstack/os-service-types From mriedemos at gmail.com Mon Oct 15 21:08:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 15 Oct 2018 16:08:55 -0500 Subject: [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: On 10/15/2018 3:01 PM, Jimmy McArthur wrote: > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. Not a conflict, but it looks like there is a duplicate for Lee talking about encrypted volumes: https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=yarwood Unless he just loves it so much he needs to talk about it twice. -- Thanks, Matt From jimmy at openstack.org Mon Oct 15 21:19:46 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 15 Oct 2018 16:19:46 -0500 Subject: [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: References: <5BC4F203.4000904@openstack.org> Message-ID: <5BC50472.6000805@openstack.org> Ha ha! Good catch :) On it. > Matt Riedemann > October 15, 2018 at 4:08 PM > > > Not a conflict, but it looks like there is a duplicate for Lee talking > about encrypted volumes: > > https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=yarwood > > > Unless he just loves it so much he needs to talk about it twice. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Mon Oct 15 21:21:59 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 15 Oct 2018 16:21:59 -0500 Subject: [openstack-dev] [infra] Polygerrit In-Reply-To: <20181015120806.mf2opeanihvcmg4t@yuggoth.org> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> <20181015120806.mf2opeanihvcmg4t@yuggoth.org> Message-ID: <0f887738-b66c-79ea-4975-38add873e6cd@inaugust.com> On 10/15/2018 07:08 AM, Jeremy Stanley wrote: > On 2018-10-15 11:54:28 +0100 (+0100), Stephen Finucane wrote: > [...] >> As an aside, are there any plans to enable PolyGerrit [1] in the >> OpenStack Gerrit instance? > [...] > > I believe so, but first we need to upgrade to a newer Gerrit version > which provides it (that in turn requires a newer Java which needs a > server built from a newer distro version, which is all we've gotten > through on the upgrade plan so far). 
I'm working on this now, so hopefully we should have an ETA soonish. From mordred at inaugust.com Mon Oct 15 21:27:00 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 15 Oct 2018 16:27:00 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <309fd108d93bc66fac5498507224a721d2f07d75.camel@redhat.com> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> <20181010185126.agh5d2msk2aut62d@yuggoth.org> <309fd108d93bc66fac5498507224a721d2f07d75.camel@redhat.com> Message-ID: <0202621d-30ab-1fcd-24e2-d4e1662fefb3@inaugust.com> On 10/15/2018 05:49 AM, Stephen Finucane wrote: > On Wed, 2018-10-10 at 18:51 +0000, Jeremy Stanley wrote: >> On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote: >> [...] >>> We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs >>> past muster before being merged, so it's not like we're wanting to >>> circumvent good contribution practices by committing whatever to HEAD. >> >> Travis CI has gained the ability to prevent you from merging changes >> which fail testing? Or do you mean something else when you refer to >> it as a "gatekeeper" here? > > Yup but it's GitHub feature rather than specifically a Travis CI > feature. > > https://help.github.com/articles/about-required-status-checks/ > > Doesn't help the awful pull request workflow but that's neither here > nor there. It's also not the same as gating. The github feature is the equivalent of "Make sure the votes in check are green before letting someone click the merge button" The zuul feature is "run the tests between the human decision to merge and actually merging with the code in the state it will actually be in when merged". It sounds nitpicky, but the semantic distinction is important - and it catches things more frequently than you might imagine. That said - Zuul supports github, and there are Zuuls run by not-openstack, so taking a project out of OpenStack's free infrastructure does not mean you have to also abandon Zuul. The OpenStack Infra team isn't going to run a zuul to gate patches on a GitHub project - but other people might be happy to let you use a Zuul so that you don't have to give up the Zuul features in place today. If you go down that road, I'd suggest pinging the softwarefactory-project.io folks or the openlab folks. From corey.bryant at canonical.com Mon Oct 15 21:34:24 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Mon, 15 Oct 2018 17:34:24 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> Message-ID: On Mon, Oct 15, 2018, 4:11 PM Jeremy Stanley wrote: > On 2018-10-15 15:00:07 -0400 (-0400), Zane Bitter wrote: > [...] > > That said, I don't think we should be dropping support/testing for 3.5. > > According to: > > > > https://governance.openstack.org/tc/reference/pti/python.html > > > > 3.5 is the only Python3 version that we require all projects to run tests > > for. 
> > Until we update it to refer to the version provided by the test > platforms we document at: > > > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > > Our goal is to get everyone running 3.6 unit tests by the end of Stein: > > > > > https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs > > > > but we explicitly said there that we were not dropping support for 3.5 as > > part of the goal, and should continue to do so until we can effect an > > orderly transition later. > [...] > > We're not dropping support for 3.5 as part of the python3-first > goal, but would be dropping it as part of the switch from Ubuntu > 16.04 LTS (which provides Python 3.5) to 18.04 LTS (which provides > Python 3.6). In the past the OpenStack Infra team has prodded us to > follow our documented testing platform policies as new versions > become available, but now with a move to providing infrastructure > services to other OSF projects as well we're on our own to police > this. > > We _could_ decide that we're going to start running tests on > multiple versions of Python 3 indefinitely (rather than as a > transitional state during the switch from Ubuntu Xenial to Bionic) > but that does necessarily mean running more jobs. We could also > decide to start targeting different versions of Python than provided > by the distros on which we run our tests (and build it from source > ourselves or something) but I think that's only reasonable if we're > going to also recommend that users deploy OpenStack on top of > custom-compiled Python interpreters rather than the interpreters > provided by server distros like RHEL and Ubuntu. > > So to sum up the above, it's less a question of whether we're > dropping Python 3.5 testing in Stein, and more a question of whether > we're going to continue requiring OpenStack to also be able to run > on Ubuntu 16.04 LTS (which wasn't the latest LTS even at the start > of the cycle). > From an Ubuntu perspective, Ubuntu is going to support Stein on 18.04 LTS (3.6) and 19.04 (3.7) only. With that said, does upstream still want to ensure Stein runs on 16.04 if Ubuntu itself has no plans to? I was assuming the desire to keep 3.5 Stein support was for other distros that plan to support Stein with 3.5. Corey -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at oliver.net.au Mon Oct 15 21:59:46 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Tue, 16 Oct 2018 08:59:46 +1100 Subject: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal) In-Reply-To: <7811aa5b-9538-d73c-e1eb-5d46062da32e@secustack.com> References: <1d7ca398-fb13-c8fc-bf4d-b94a3ae1a079@secustack.com> <12290bd5-bce6-1f8f-abbd-29f2263487e8@secustack.com> <7811aa5b-9538-d73c-e1eb-5d46062da32e@secustack.com> Message-ID: Just an FYI, it doesn't solve cached images, but Swift does support at rest encryption, so if using the Swift store backend you can at least know your image on disk on the storage nodes would be safe. We still need to add more functionality like key rotation, but we do integrate with KMIP services or Barbican.
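For anyone curious what's involved, it's mostly a proxy pipeline change - something like this simplified sketch (from memory, so check the Swift encryption docs for the real option set; the root secret here is a placeholder):

    [pipeline:main]
    pipeline = ... keymaster encryption proxy-logging proxy-server

    [filter:keymaster]
    use = egg:swift#keymaster
    encryption_root_secret = <base64 encoding of a random 32-byte secret>

    [filter:encryption]
    use = egg:swift#encryption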
Still could be a good idea for other projects. I wasn't the one who wrote the Swift at-rest encryption but happy to, probably badly, help answer questions cause we might have some interesting lessons learned. Matt On Tue, Oct 16, 2018 at 12:36 AM Josephine Seifert < josephine.seifert at secustack.com> wrote: > Hello OpenStack developers, > > we have made an etherpad as there were a few questions concerning > the library we want to use for the encryption and decryption method: > > > https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption > > > Am 11.10.2018 um 15:10 schrieb Josephine Seifert: > > Am 08.10.2018 um 17:16 schrieb Markus Hentsch: > >> Dear OpenStack developers, > >> > >> as you suggested, we have written individual specs for Nova [1] and > >> Cinder [2] so far and will write another spec for Glance soon. We'd > >> appreciate any feedback and reviews on the specs :) > >> > >> Thank you in advance, > >> Markus Hentsch > >> > >> [1] https://review.openstack.org/#/c/608696/ > >> [2] https://review.openstack.org/#/c/608663/ > >> > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > The spec for Glance is also on gerrit now: > > > > https://review.openstack.org/#/c/609667/ > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Mon Oct 15 22:10:35 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 15 Oct 2018 17:10:35 -0500 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <2ce69a0a-56b9-e912-2ae2-7389394c0035@nemebean.com> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> <2ce69a0a-56b9-e912-2ae2-7389394c0035@nemebean.com> Message-ID: <64353d0c-f9a2-4913-e2ad-b405172852a7@inaugust.com> On 10/10/2018 03:17 PM, Ben Nemec wrote: > > > On 10/10/18 1:35 PM, Greg Hill wrote: >> >>     I'm not sure how using pull requests instead of Gerrit changesets >> would >>     help "core reviewers being pulled on to other projects"? >> >> >> The 2 +2 requirement works for larger projects with a lot of >> contributors. When you have only 3 regular contributors and 1 of them >> gets pulled on to a project and can no longer actively contribute, you >> have 2 developers who can +2 each other but nothing can get merged >> without that 3rd dev finding time to add another +2. This is what >> happened with Taskflow a few years back. Eventually the other 2 gave >> up and moved on also. > > As the others have mentioned, this doesn't need to continue to be a > blocker. If the alternative is nobody working on the project at all, a > single approver policy is far better. 
In practice it's probably not much > different from having a general oslo core rubber stamp +2 a patch that > was already reviewed by a taskflow expert. Just piling on to this. We do single-core approves in openstacksdk, although for REALLY hairy patches I try to get a few more people to get eyeballs on something. >> >>     Is this just about preferring not having a non-human gatekeeper like >>     Gerrit+Zuul and being able to just have a couple people merge >> whatever >>     they want to the master HEAD without needing to talk about +2/+W >> rights? >> >> >> We plan to still have a CI gatekeeper, probably Travis CI, to make >> sure PRs pass muster before being merged, so it's not like we're >> wanting to circumvent good contribution practices by committing >> whatever to HEAD. But the +2/+W rights thing was a huge PITA to deal >> with, with so few contributors, for sure. > > I guess this would be the one concern I'd have about moving it out. We > still have a fair number of OpenStack projects depending on taskflow[1] > to one degree or another, and having taskflow fully integrated into the > OpenStack CI system is nice for catching problems with proposed changes > early. I second this. Especially for a library like taskflow where the value is in the behavior engine, describing that as an API with an API surface is a bit harder than just testing a published library interface. It's also worth noting that we're working on plans to get the OpenStack Infra systems rebranded so that concerns people might have about brand association can be mitigated. > I think there was some work recently to get OpenStack CI voting > on Github, but it seems inefficient to do work to move it out of > OpenStack and then do more work to partially bring it back. Zuul supports cross-source dependencies and we have select github repos configured in OpenStack's Zuul so that projects can do cross-project verification. > I suppose the other option is to just stop CI'ing on OpenStack and rely > on the upper-constraints gating we do for our other dependencies. That > would be unfortunate, but again if the alternative is no development at > all then it might be a necessary compromise. I agree - if the main roadblock is just the 2x+2 policy, which is solvable without moving anything, then the pain of moving the libraries out to github just to turn around and cobble together a cross-source advisory testing system seems not very worth it and I'd be more inclined to use upper-constraints. By and large moving these is going to be pretty disruptive, so I'd personally prefer that they stayed where they are. There are PLENTY of things hosted in OpenStack's infrastructure that are not OpenStack - or even OpenStack specific. > 1: > http://codesearch.openstack.org/?q=taskflow&i=nope&files=requirements.txt&repos= > > >> >>     If it's just about preferring the pull request workflow versus the >>     Gerrit rebase workflow, just say so. Same for just preferring the >>     Github >>     UI versus Gerrit's UI (which I agree is awful). >> >> >> I mean, yes, I personally prefer the Github UI and workflow, but that >> was not a primary consideration. I got used to using gerrit well >> enough. It was mostly the  There's also a sense that if a project is >> in the Openstack umbrella, it's not useful outside Openstack, and >> Taskflow is designed to be a general purpose library. The hope is that >> just making it a regular open source project might attract more users >> and contributors.
I think we might be intertwining a few things that don't have to be intertwined. The libraries are currently part of the OpenStack umbrella, and as part of that are hosted in OpenStack's developer infrastructure. They can remain "part of OpenStack" and be managed with a relaxed core reviewer policy. This way, should they be desired, things like the release management team can still be used. They can cease being "part of OpenStack" without needing to move away from the OpenStack Developer Infrastructure. As I mentioned earlier we're working on rebranding the Developer Infrastructure, so if there is a concern that a git repo existing within the Developer Infrastructure implies being "part of OpenStack" - that confusion should be improved in the not-too-distant-future. But - over half of the repos contained in the OpenStack Developer Infrastructure are already not "part of OpenStack" - so they would not be alone. Finally, they can stop being "part of OpenStack" AND they can move their development to somewhere else. >> This may or may not bear out, but as it is, there's >> no real benefit to staying an openstack project on this front since >> nobody is actively working on it within the community. At the same time, taskflow is used by a good number of OpenStack services - to the point that taskflow developing an issue would be a *problem* for OpenStack. If something goes wrong, the OpenStack project and the Oslo team currently can fix it. Since, as you mentioned, there isn't a super active dev team currently - we'd be looking at moving from important-library-that-can-be-fixed-by-OpenStack-if-OpenStack-breaks to important-library-with-dev-team-of-unknown-size-or-resources-that-might-stay-broken. The last time we did this it was for an optional extra service with no real internal-OpenStack dependencies. If the end result had been the project completely ceasing to exist, as much as some humans would have been sad, OpenStack wouldn't have become broken. Absent a compelling reason to the contrary, I'd argue that it's in OpenStack's best interests that the libraries remain not only in the OpenStack developer infrastructure but also under the governance of the Oslo team. Relaxing reviewer requirements seems like a fine idea and not at all problematic. Obviously that's just me, but I'd love to see if we can work out making the current home more pleasant before we start ejecting libraries. Monty From pabelanger at redhat.com Mon Oct 15 22:41:46 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 15 Oct 2018 18:41:46 -0400 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <0202621d-30ab-1fcd-24e2-d4e1662fefb3@inaugust.com> References: <8b2f89f9-a8f1-fc82-0d08-e8a93ef1889c@gmail.com> <20181010185126.agh5d2msk2aut62d@yuggoth.org> <309fd108d93bc66fac5498507224a721d2f07d75.camel@redhat.com> <0202621d-30ab-1fcd-24e2-d4e1662fefb3@inaugust.com> Message-ID: <20181015224146.GA28206@localhost.localdomain> On Mon, Oct 15, 2018 at 04:27:00PM -0500, Monty Taylor wrote: > On 10/15/2018 05:49 AM, Stephen Finucane wrote: > > On Wed, 2018-10-10 at 18:51 +0000, Jeremy Stanley wrote: > > > On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote: > > > [...] > > > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure PRs > > > > past muster before being merged, so it's not like we're wanting to > > > > circumvent good contribution practices by committing whatever to HEAD. 
> > > > > > Travis CI has gained the ability to prevent you from merging changes > > > which fail testing? Or do you mean something else when you refer to > > > it as a "gatekeeper" here? > > > > Yup but it's GitHub feature rather than specifically a Travis CI > > feature. > > > > https://help.github.com/articles/about-required-status-checks/ > > > > Doesn't help the awful pull request workflow but that's neither here > > nor there. > > It's also not the same as gating. > > The github feature is the equivalent of "Make sure the votes in check are > green before letting someone click the merge button" > > The zuul feature is "run the tests between the human decision to merge and > actually merging with the code in the state it will actually be in when > merged". > > It sounds nitpicky, but the semantic distinction is important - and it > catches things more frequently than you might imagine. > > That said - Zuul supports github, and there are Zuuls run by not-openstack, > so taking a project out of OpenStack's free infrastructure does not mean you > have to also abandon Zuul. The OpenStack Infra team isn't going to run a > zuul to gate patches on a GitHub project - but other people might be happy > to let you use a Zuul so that you don't have to give up the Zuul features in > place today. If you go down that road, I'd suggest pinging the > softwarefactory-project.io folks or the openlab folks. > As somebody who has recently moved from a gerrit workflow to a github workflow using Zuul, keep in mind this is not a 1:1 feature map. The biggest difference, as people have said, is code review in github.com is terrible. It was something added after the fact, I wish daily to be able to use gerrit again :) Zuul does make things better, and 100% with Monty here. You want Zuul to be the gate, not travis CI. - Paul From zbitter at redhat.com Mon Oct 15 23:39:02 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 15 Oct 2018 19:39:02 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> Message-ID: <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> On 15/10/18 4:10 PM, Jeremy Stanley wrote: > On 2018-10-15 15:00:07 -0400 (-0400), Zane Bitter wrote: > [...] >> That said, I don't think we should be dropping support/testing for 3.5. >> According to: >> >> https://governance.openstack.org/tc/reference/pti/python.html >> >> 3.5 is the only Python3 version that we require all projects to run tests >> for. > > Until we update it to refer to the version provided by the test > platforms we document at: > > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions I'm sure we will update it, but as of now we haven't. People shouldn't have to guess which TC-maintained documentation is serious and which stuff they should just ignore on an ad-hoc basis. If it says 3.5 then the answer is 3.5 until somebody submits a patch and the TC approves it. 
>> Out goal is to get everyone running 3.6 unit tests by the end of Stein: >> >> https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs >> >> but we explicitly said there that we were not dropping support for 3.5 as >> part of the goal, and should continue to do so until we can effect an >> orderly transition later. > [...] > > We're not dropping support for 3.5 as part of the python3-first > goal, but would be dropping it as part of the switch from Ubuntu > 16.04 LTS (which provides Python 3.5) to 18.04 LTS (which provides > Python 3.6). In the past the OpenStack Infra team has prodded us to > follow our documented testing platform policies as new versions > become available, but now with a move to providing infrastructure > services to other OSF projects as well we're on our own to police > this. > > We _could_ decide that we're going to start running tests on > multiple versions of Python 3 indefinitely (rather than as a > transitional state during the switch from Ubuntu Xenial to Bionic) This is inevitable at some point - we say that we'll support both the latest release of Ubuntu LTS *and* CentOS. So far that's been irrelevant for Python3 because CentOS has only Python2, but we know that the next CentOS release will have Python3 and from that point on we will for sure be in a situation where we are supporting multiple Python3 versions, not always contiguous, for the indefinite future (because the release cycles of Ubuntu & CentOS are not aligned in any way). In fact, as far as we know the version we have to support in CentOS may actually be 3.5, which seems like a good reason to keep it working for long enough that we can find out for sure one way or the other. > but that does necessarily mean running more jobs. We could also > decide to start targeting different versions of Python than provided > by the distros on which we run our tests (and build it from source > ourselves or something) but I think that's only reasonable if we're > going to also recommend that users deploy OpenStack on top of > custom-compiled Python interpreters rather than the interpreters > provided by server distros like RHEL and Ubuntu. I am definitely spoiled by Fedora, where I have every version from 3.3 to 3.7 installed from the distro packages. > So to sum up the above, it's less a question of whether we're > dropping Python 3.5 testing in Stein, and more a question of whether > we're going to continue requiring OpenStack to also be able to run > on Ubuntu 16.04 LTS (which wasn't the latest LTS even at the start > of the cycle). There's actually another whole level of discussion we probably need to have. So far we've talked about unit tests, but running functional tests is whole other thing, and one we really do probably want to pick a single version of Ubuntu to run on for the sake of the gate (and I'd suggest that that version should probably by Bionic, if we can get everything working on 3.6 early enough in the cycle). That process would have been a lot easier if we were earlier on 3.6, so I'm grateful to the folks who are already working on 3.7 (which is a much more substantial change) to hopefully make this less painful in the future. cheers, Zane. 
From mordred at inaugust.com Tue Oct 16 00:00:31 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 15 Oct 2018 19:00:31 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> Message-ID: On 10/15/2018 06:39 PM, Zane Bitter wrote: > > In fact, as far as we know the version we have to support in CentOS may > actually be 3.5, which seems like a good reason to keep it working for > long enough that we can find out for sure one way or the other. I certainly hope this is not what ends up happening, but seeing as how I actually do not know - I agree, I cannot discount the possibility that such a thing would happen. That said - until such a time as we get to actually drop python2, I don't see it as an actual issue. The reason being - if we test with 2.7 and 3.7 - the things in 3.6 that would break 3.5 get gated by the existence of 2.7 for our codebase. Case in point- the instant 3.6 is our min, I'm going to start replacing every instance of: "foo {bar}".format(bar=bar) in any code I spend time in with: f"foo {bar}" It TOTALLY won't parse on 3.5 ... but it also won't parse on 2.7. If we decide as a community to shift our testing of python3 to be 3.6 - or even 3.7 - as long as we still are testing 2.7, I'd argue we're adequately covered for 3.5. The day we decide we can drop 2.7 - if we've been testing 3.7 for python3 and it turns out RHEL/CentOS 8 ship with python 3.5, then instead of just deleting all of the openstack-tox-py27 jobs, we'd probably just need to replace them with openstack-tox-py35 jobs, as that would be our new low-water mark. Now, maybe we'll get lucky and RHEL/CentOS 8 will be a future-looking release and will ship with python 3.7 AND so will the corresponding Ubuntu LTS - and we'll get to only care about one release of python for a minute. :) Come on - I can dream, right? Monty From fungi at yuggoth.org Tue Oct 16 00:15:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 16 Oct 2018 00:15:42 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> Message-ID: <20181016001541.mzk46boucrcvwudn@yuggoth.org> On 2018-10-15 17:34:24 -0400 (-0400), Corey Bryant wrote: [...] > I was assuming the desire to keep 3.5 stein support was for other distros > that plan to support stein with 3.5. If there are other distros which are planning to support OpenStack Stein on Python 3.5, then they should work with us to get testing in place on systems running their distro. That said, I'd be surprised if a new LTS server release of a distro targets Python 3.5... it's already over 3 years old, and 3.3 and 3.4 each got roughly 5 years before EOL so I'd expect 3.5 to no longer be supported upstream past September 2020. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Oct 16 00:18:27 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 15 Oct 2018 17:18:27 -0700 Subject: [openstack-dev] [os-upstream-institute] Find a slot for a meeting to discuss - ACTION NEEDED In-Reply-To: <20181011052007.GA18592@thor.bakeyournoodle.com> References: <313CAE1B-CCBB-426F-976B-0320B2273BA1@gmail.com> <948BBE83-6631-4CCC-A558-DEFDA6149C41@gmail.com> <20181011052007.GA18592@thor.bakeyournoodle.com> Message-ID: No we didn't record it, but we also didn't discuss much since only a few people actually showed up. Basically we decided that we were only going to have two meetings between then and the Summit. I don't think there was a whole lot else of particular note. We can probably recap it in our next meeting. -Kendall (diablo_rojo) On Wed, Oct 10, 2018 at 10:20 PM Tony Breeds wrote: > On Sat, Sep 29, 2018 at 02:50:31PM +0200, Ildiko Vancsa wrote: > > Hi Training Team, > > > > Based on the votes on the Doodle poll below we will have our ad-hoc > meeting __next Friday (October 5) 1600 UTC__. > > > > Hangouts link for the call: > https://hangouts.google.com/call/BKnvu7e72uB_Z-QDHDF2AAEI > > I don't suppose it was recorded? > > I was lucky enough to be on vacation for 3 weeks, which means I couldn't > make the call. > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Tue Oct 16 00:53:53 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 15 Oct 2018 17:53:53 -0700 Subject: [openstack-dev] [ironic] Stepping down as core In-Reply-To: References: Message-ID: Sam, Thank you so much for all your contributions to ironic! I'll miss you. I'm glad you aren't disappearing though :) --ruby On Thu, Oct 11, 2018 at 4:41 AM Sam Betts (sambetts) wrote: > As many of you will have seen on IRC, I've mostly been appearing AFK for > the last couple of development cycles. Due to other tasks downstream most > of my attention has been drawn away from upstream Ironic development. Going > forward I'm unlikely to be as heavily involved with the Ironic project as I > have been in the past, so I am stepping down as a core contributor to make > way for those more involved and with more time to contribute. > > That said I do not intend to disappear, Myself and my colleagues plan to > continue to support the Cisco Ironic drivers, we just won't be so heavily > involved in core ironic development and its worth noting that although I > might appear AFK on IRC because my main focus is on other things, I always > have an ear to the ground and direct pings will generally reach me. > > I will be in Berlin for the OpenStack summit, so to those that are > attending I hope to see you there. 
> > The Ironic project has been (and I hope continues to be) an awesome place > to contribute too, thank you > > Sam Betts > sambetts > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuanvc at vn.fujitsu.com Tue Oct 16 01:22:21 2018 From: tuanvc at vn.fujitsu.com (Vu Cong, Tuan) Date: Tue, 16 Oct 2018 01:22:21 +0000 Subject: [openstack-dev] [watcher] [monasca] Bare metal node N+1 redundancy and Proactive HA In-Reply-To: <9907ba1ef089436aa3e23919dab16d79@sbcloud.ru> References: <9907ba1ef089436aa3e23919dab16d79@sbcloud.ru> Message-ID: <32986422a8ad4fb8803ac6488fd6d955@G07SGEXCMSGPS05.g07.fujitsu.local> Hi Alex, > Could you please add me as a patch reviewer once patch is uploaded? Sure, I will add you as a patch reviewer once the patch is uploaded. Thank you in advance and have a great day ahead :) -- Vu Cong Tuan From tim at swiftstack.com Tue Oct 16 01:24:41 2018 From: tim at swiftstack.com (Tim Burke) Date: Mon, 15 Oct 2018 18:24:41 -0700 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> Message-ID: > On Oct 15, 2018, at 5:00 PM, Monty Taylor wrote: > > If we decide as a community to shift our testing of python3 to be 3.6 - or even 3.7 - as long as we still are testing 2.7, I'd argue we're adequately covered for 3.5. That's not enough for me to be willing to declare support. I'll grant that we'd catch the obvious SyntaxErrors, but that could be achieved just as easily (and probably more cheaply, resource-wise) with multiple linter jobs. The reason you want unit tests to actually run is to catch the not-so-obvious bugs. For example: there are a bunch of places in Swift's proxy-server where we get a JSON response from a backend server, loads() it up, and do some work based on it. As I've been trying to get the proxy ported to py3, I keep writing json.loads(rest.body.decode()). I'll sometimes get pushback from reviewers saying this shouldn't be necessary, and then I need to point out that while json.loads() is happy to accept either bytes or unicode on both py27 and py36, bytes will cause a TypeError on py35. And since https://bugs.python.org/issue17909 was termed an enhancement and not a regression (I guess the contract is str-or-unicode, for whatever str is?), I'm not expecting a backport. TLDR; if we want to say that something works, best to actually test that it works. I might be willing to believe that py35 and py37 working implies that py36 will work, but py27 -> py3x tells me little about whether py3w works for any w < x. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Oct 16 02:25:19 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 15 Oct 2018 19:25:19 -0700 Subject: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23? 
In-Reply-To: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> References: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> Message-ID: Works for me too, thanks Melanie. On Mon, Oct 15, 2018, 10:07 AM melanie witt wrote: > Hey all, > > Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was > thinking it would be a good idea to have a spec review day next week on > Tuesday Oct 23 to spend some focus on spec reviews together. > > Spec freeze is s-2 Jan 10, so the review day isn't related to any > deadlines, but would just be a way to organize and make sure we have > initial review on the specs that have been proposed so far. > > How does Tuesday Oct 23 work for everyone? Let me know if another day > works better. > > So far, efried and mriedem are on board when I asked in the > #openstack-nova channel. I'm sending this mail to gather more responses > asynchronously. > > Cheers, > -melanie > > [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.bauza at gmail.com Tue Oct 16 05:16:50 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 16 Oct 2018 07:16:50 +0200 Subject: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23? In-Reply-To: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> References: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> Message-ID: Le lun. 15 oct. 2018 à 19:07, melanie witt a écrit : > Hey all, > > Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was > thinking it would be a good idea to have a spec review day next week on > Tuesday Oct 23 to spend some focus on spec reviews together. > > Spec freeze is s-2 Jan 10, so the review day isn't related to any > deadlines, but would just be a way to organize and make sure we have > initial review on the specs that have been proposed so far. > > How does Tuesday Oct 23 work for everyone? Let me know if another day > works better. > > So far, efried and mriedem are on board when I asked in the > #openstack-nova channel. I'm sending this mail to gather more responses > asynchronously. > I'll only be available on the European morning but I can still surely help around this date. A spec review day is always a good idea :-) > Cheers, > -melanie > > [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Tue Oct 16 05:22:09 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 16 Oct 2018 05:22:09 +0000 Subject: [openstack-dev] [zVM] [python3] tox/zuul issues for zVM OpenStack In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... 
URL: 

From Tim.Bell at cern.ch Tue Oct 16 06:37:50 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 16 Oct 2018 06:37:50 +0000 Subject: [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> Jimmy, While it's not a clash within the forum, there are two sessions for Ironic scheduled at the same time on Tuesday at 14h20, each of which has Julia as a speaker. Tim -----Original Message----- From: Jimmy McArthur Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 15 October 2018 at 22:04 To: "OpenStack Development Mailing List (not for usage questions)" , "OpenStack-operators at lists.openstack.org" , "community at lists.openstack.org" Subject: [openstack-dev] Forum Schedule - Seeking Community Review Hi - The Forum schedule is now up (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). If you see a glaring content conflict within the Forum itself, please let me know. You can also view the Full Schedule in the attached PDF if that makes life easier... NOTE: BoFs and WGs are still not all up on the schedule. No need to let us know :) Cheers, Jimmy 

From balazs.gibizer at ericsson.com Tue Oct 16 08:15:27 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Tue, 16 Oct 2018 08:15:27 +0000 Subject: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23? In-Reply-To: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> References: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> Message-ID: <1539677723.2027.0@smtp.office365.com> On Mon, Oct 15, 2018 at 7:07 PM, melanie witt wrote: > Hey all, > > Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was > thinking it would be a good idea to have a spec review day next week > on Tuesday Oct 23 to spend some focus on spec reviews together. > > Spec freeze is s-2 Jan 10, so the review day isn't related to any > deadlines, but would just be a way to organize and make sure we have > initial review on the specs that have been proposed so far. > > How does Tuesday Oct 23 work for everyone? Let me know if another day > works better. The 22nd and 23rd are public holidays in Hungary, so I will try to do some review on this Friday as a compromise. Cheers, gibi > > So far, efried and mriedem are on board when I asked in the > #openstack-nova channel. I'm sending this mail to gather more > responses asynchronously. > > Cheers, > -melanie > > [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

From sfinucan at redhat.com Tue Oct 16 08:38:48 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 16 Oct 2018 09:38:48 +0100 Subject: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23?
In-Reply-To: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> References: <3ec1bbe1-f53f-4dd7-53ba-da6afcf64e16@gmail.com> Message-ID: <204b3e250f703bb81dcdf33941b7cd91ba1cb0df.camel@redhat.com> On Mon, 2018-10-15 at 10:07 -0700, melanie witt wrote: > Hey all, > > Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was > thinking it would be a good idea to have a spec review day next week on > Tuesday Oct 23 to spend some focus on spec reviews together. > > Spec freeze is s-2 Jan 10, so the review day isn't related to any > deadlines, but would just be a way to organize and make sure we have > initial review on the specs that have been proposed so far. > > How does Tuesday Oct 23 work for everyone? Let me know if another day > works better. > > So far, efried and mriedem are on board when I asked in the > #openstack-nova channel. I'm sending this mail to gather more responses > asynchronously. > > Cheers, > -melanie > > [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule Good by me. Stephen From gmann at ghanshyammann.com Tue Oct 16 09:59:49 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 16 Oct 2018 18:59:49 +0900 Subject: [openstack-dev] [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: References: Message-ID: <1667c51369c.f0e01e0f236210.2319928222358081529@ghanshyammann.com> ---- On Sat, 13 Oct 2018 07:05:53 +0900 Matt Riedemann wrote ---- > The big update this week is version 0.1.0 of oslo.upgradecheck was > released. The documentation along with usage examples can be found here > [1]. A big thanks to Ben Nemec for getting that done since a few > projects were waiting for it. > > In other updates, some changes were proposed in other projects [2]. > > And finally, Lance Bragstad and I had a discussion this week [3] about > the validity of upgrade checks looking for deleted configuration > options. The main scenario I'm thinking about here is FFU where someone > is going from Mitaka to Pike. Let's say a config option was deprecated > in Newton and then removed in Ocata. As the operator is rolling through > from Mitaka to Pike, they might have missed the deprecation signal in > Newton and removal in Ocata. Does that mean we should have upgrade > checks that look at the configuration for deleted options, or options > where the deprecated alias is removed? My thought is that if things will > not work once they get to the target release and restart the service > code, which would definitely impact the upgrade, then checking for those > scenarios is probably OK. If on the other hand the removed options were > just tied to functionality that was removed and are otherwise not > causing any harm then I don't think we need a check for that. It was > noted that oslo.config has a new validation tool [4] so that would take > care of some of this same work if run during upgrades. So I think > whether or not an upgrade check should be looking for config option > removal ultimately depends on the severity of what happens if the manual > intervention to handle that removed option is not performed. That's > pretty broad, but these upgrade checks aren't really set in stone for > what is applied to them. I'd like to get input from others on this, > especially operators and if they would find these types of checks useful. 
> > [1] https://docs.openstack.org/oslo.upgradecheck/latest/ > [2] https://storyboard.openstack.org/#!/story/2003657 > [3] > http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 > [4] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html 

The other point is about policy changes and how we should accommodate those in upgrade-checks. Policy changes fall into the following categories:

1. A policy rule name has been changed.
Upgrade Impact: If that policy rule is overridden in policy.json then yes, we need to report this in the upgrade-check CLI. If it is not overridden, which means operators depend on the policy defaults in code, it would not impact their upgrade.

2. A (deprecated) policy rule has been removed.
Upgrade Impact: YES, as it can impact their API access after upgrade. This needs to be covered in upgrade-checks.

3. The default value (including scope) of a policy rule has been changed.
Upgrade Impact: YES, this can change the access level of their API after upgrade. This needs to be covered in upgrade-checks.

4. A new policy rule has been introduced.
Upgrade Impact: YES, for the same reason.

I think policy changes can be handled in the upgrade checker by checking all of the above categories, because each of them can impact an upgrade. For example, this cinder policy change [1]:

"Add granularity to the volume_extension:volume_type_encryption policy with the addition of distinct actions for create, get, update, and delete: volume_extension:volume_type_encryption:create volume_extension:volume_type_encryption:get volume_extension:volume_type_encryption:update volume_extension:volume_type_encryption:delete To address backwards compatibility, the new rules added to the volume_type.py policy file, default to the existing rule, volume_extension:volume_type_encryption, if it is set to a non-default value."

[1] https://docs.openstack.org/releasenotes/cinder/unreleased.html#upgrade-notes 

-gmann

> > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >
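As an illustration of the per-project "$PROJECT-status upgrade check" command discussed in this thread, a minimal skeleton modeled on the example in the oslo.upgradecheck 0.1.0 documentation [1] might look like the following. This is an untested sketch: the project name "myproject" and the policy check body are placeholders, not code from any real project.

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):
        """Checks run by "myproject-status upgrade check"."""

        def _check_policy_rules(self):
            # Illustrative only: a real check along the lines gmann
            # describes would inspect the overridden policy file for
            # renamed or removed rules and warn the operator before
            # they restart services on the new release.
            renamed_rules_found = False
            if renamed_rules_found:
                return upgradecheck.Result(
                    upgradecheck.Code.WARNING,
                    'policy file overrides a renamed rule')
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        # Tuple of (display name, check method) pairs run in order.
        _upgrade_checks = (
            ('Policy rule names', _check_policy_rules),
        )


    def main():
        return upgradecheck.main(
            cfg.CONF, project='myproject', upgrade_command=Checks())


    if __name__ == '__main__':
        sys.exit(main())

Projects with no upgrade impact yet would ship only a placeholder check like this, so the command exists and the real checks can be added later.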
From cdent+os at anticdent.org Tue Oct 16 10:48:21 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 16 Oct 2018 11:48:21 +0100 (BST) Subject: [openstack-dev] [placement] devstack, grenade, database management Message-ID: 

TL;DR: We need reviews on https://review.openstack.org/#/q/topic:cd/placement-solo+status:open and work on database management command line tools. More detail within.

The stack of code, mostly put together by Matt, to get migrating placement-in-nova to placement-in-placement working is passing its tests. You can see the remaining pieces of not yet merged code at https://review.openstack.org/#/q/topic:cd/placement-solo+status:open Once that is fully merged, the first bullet point on the extraction plan at http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html will be complete and we'll have a model for how the next two bullet points can be done.

At this time, there are two main sticking points to getting things merged:

* The devstack, grenade, and devstack-gate changes need some review to make sure that some of the tricks Matt and I performed are acceptable to everyone. They are at: https://review.openstack.org/600162 https://review.openstack.org/604454 https://review.openstack.org/606853

* We need to address database creation scripts and database migrations.

  There's a general consensus that we should use alembic, and start things from a collapsed state. That is, we don't need to represent already existing migrations in the new repo, just the present-day structure of the tables.

  Right now the devstack code relies on a stubbed out command line tool at https://review.openstack.org/#/c/600161/ to create tables with a metadata.create_all(). This is a useful thing to have but doesn't follow the "db_sync" pattern set elsewhere, so I haven't followed through on making it pretty but can do so if people think it is useful. Whether we do that or not, we'll still need some kind of "db_sync" command. Do people want me to make a cleaned up "create" command?

  Ed has expressed some interest in exploring setting up alembic and the associated tools but that can easily be a more than one person job. Is anyone else interested?

It would be great to get all this stuff working sooner than later. Without it we can't do two important tasks:

* Integration tests with the extracted placement [1].
* Hacking on extracted placement in/with devstack.

Another issue that needs some attention, but is not quite as urgent, is the desire to support other databases during the upgrade, captured in this change https://review.openstack.org/#/c/604028/ 

[1] There's a stack of code for enabling placement integration tests starting at https://review.openstack.org/#/c/601614/ . It depends on the devstack changes. 

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
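A rough sketch of the two pieces Chris mentions -- a create_all()-style table creation step plus an alembic baseline for the collapsed state. This is untested and the names (Base as a stand-in for the real models' declarative base, the alembic.ini path) are placeholders, not actual placement code:

    from alembic import command
    from alembic.config import Config
    from sqlalchemy import create_engine
    from sqlalchemy.ext.declarative import declarative_base

    # Stand-in for the declarative base the real models would share.
    Base = declarative_base()


    def create_tables(db_url, alembic_ini='alembic.ini'):
        """Create the present-day table structure directly.

        No historical migrations are replayed; this is the
        "collapsed state" described above.
        """
        engine = create_engine(db_url)
        Base.metadata.create_all(engine)
        # Record that the fresh schema is already at the newest
        # alembic revision, so future migrations have a baseline.
        command.stamp(Config(alembic_ini), 'head')

A "db sync" command against an already-existing database would then be command.upgrade(Config(alembic_ini), 'head') instead of create_all().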
From coolsvap at gmail.com Tue Oct 16 12:33:54 2018 From: coolsvap at gmail.com (ʂʍɒρƞįł Ҟưȴķɒʁʉɨ) Date: Tue, 16 Oct 2018 18:03:54 +0530 Subject: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer Message-ID: 

Dear OpenStackers, For a few months now, I have not been able to contribute to code or to reviewing Kolla and Requirements actively, given my current responsibilities, so I would like to take a step back and release my core reviewer ability for the Kolla and Requirements repositories. I want to use this moment to thank everyone I have had a chance to work alongside, and anyone I may have troubled. It has been both an honor and a privilege to serve this community and I will continue to do so. In the new cloudy world I am sure the paths will cross again. Till then, Sayonara, Take Care. Best Regards, Swapnil (coolsvap) 

From coolsvap at gmail.com Tue Oct 16 12:38:01 2018 From: coolsvap at gmail.com (ʂʍɒρƞįł Ҟưȴķɒʁʉɨ) Date: Tue, 16 Oct 2018 18:08:01 +0530 Subject: [openstack-dev] [all] Pycharm Message-ID: 

I will continue to be maintaining PyCharm community edition licenses for active OpenStack community contributors. If you have additional queries, please refer to [1] for additional details or shoot me an email; I am happy to assist. Best Regards, Swapnil 

[1] https://wiki.openstack.org/wiki/Pycharm 

From mordred at inaugust.com Tue Oct 16 12:53:34 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 16 Oct 2018 07:53:34 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> Message-ID: <7a8515cb-0e02-d18d-1212-669fef218c07@inaugust.com> On 10/15/2018 08:24 PM, Tim Burke wrote: > >> On Oct 15, 2018, at 5:00 PM, Monty Taylor > > wrote: >> >> If we decide as a community to shift our testing of python3 to be 3.6 >> - or even 3.7 - as long as we still are testing 2.7, I'd argue we're >> adequately covered for 3.5. > > That's not enough for me to be willing to declare support. I'll grant > that we'd catch the obvious SyntaxErrors, but that could be achieved > just as easily (and probably more cheaply, resource-wise) with multiple > linter jobs. The reason you want unit tests to actually run is to catch > the not-so-obvious bugs. > > For example: there are a bunch of places in Swift's proxy-server where > we get a JSON response from a backend server, loads() it up, and do some > work based on it. As I've been trying to get the proxy ported to py3, I > keep writing json.loads(rest.body.decode()). I'll sometimes get pushback > from reviewers saying this shouldn't be necessary, and then I need to > point out that while json.loads() is happy to accept either bytes or > unicode on both py27 and py36, bytes will cause a TypeError on py35. And > since https://bugs.python.org/issue17909 was termed an enhancement and > not a regression (I guess the contract is str-or-unicode, for whatever > str is?), I'm not expecting a backport. > > TLDR; if we want to say that something works, best to actually test that > it works. I might be willing to believe that py35 and py37 working > implies that py36 will work, but py27 -> py3x tells me little about > whether py3w works for any w < x. Fair point - you've convinced me!
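To make the json.loads() behavior Tim describes above concrete, here is a tiny illustration; the payload is made up, the behavior is that of the stdlib json module:

    import json

    body = b'{"servers": []}'

    json.loads(body.decode('utf-8'))  # works on py27, py35, py36, py37
    json.loads(body)                  # works on py27 and py36+, but raises
                                      # TypeError on py35 (bpo-17909)

A py27 + py37 gate never executes the failing combination, which is exactly why it says nothing about py35 support.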
From florian.engelmann at everyware.ch Tue Oct 16 13:48:33 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Tue, 16 Oct 2018 15:48:33 +0200 Subject: [openstack-dev] [oslo][glance][cinder][nova][keystone] healthcheck In-Reply-To: References: <20181009035940.tlg4mx4j2vbahybl@gentoo.org> <20181009152157.x6yllzeaeqwjt6wl@gentoo.org> Message-ID: 

Thank you very much for your detailed answer. The keystone healthcheck is working fine and, as you said, out of the box. I ran into trouble with, e.g., neutron-server and cinder-api.

While nova is happy with:

[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /var/log/kolla/nova/healthcheck_disable

and some changes to the pipeline:

[pipeline:oscomputeversions]
pipeline = healthcheck cors faultwrap request_log http_proxy_to_wsgi oscomputeversionapp

I was not able to get the same thing working with neutron-server, as its paste configuration is very different:

[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions_composite
/v2.0: neutronapi_v2_0

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id catch_errors extensions neutronapiapp_v2_0
keystone = cors http_proxy_to_wsgi request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0

[composite:neutronversions_composite]
use = call:neutron.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi neutronversions
keystone = cors http_proxy_to_wsgi neutronversions

[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory

[filter:catch_errors]
paste.filter_factory = oslo_middleware:CatchErrors.factory

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = neutron

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory

[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronversions]
paste.app_factory = neutron.pecan_wsgi.app:versions_factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory

#[filter:healthcheck]
#paste.filter_factory = oslo_middleware:Healthcheck.factory
#backends = disable_by_file
#disable_by_file_path = /var/log/kolla/neutron/healthcheck_disable

I did read the oslo middleware documentation a couple of times, but I still don't get what to do to enable the healthcheck API with neutron-server. Is there any "tutorial" that could help?
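One plausible answer to Florian's question, offered here as an untested sketch rather than anything from the neutron documentation: since the urlmap composite above routes everything other than /v2.0 through neutronversions_composite, a request to /healthcheck should land in that pipeline, so the commented-out filter would need to be uncommented and then inserted into those pipelines, e.g.:

[composite:neutronversions_composite]
use = call:neutron.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi healthcheck neutronversions
keystone = cors http_proxy_to_wsgi healthcheck neutronversions

[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /var/log/kolla/neutron/healthcheck_disable

The oslo.middleware Healthcheck filter only intercepts its own path (/healthcheck by default) and passes every other request through, so placing it early in the pipeline should be safe.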
Am 10/12/18 um 7:50 PM schrieb Morgan Fainberg: > Keystone no longer uses paste (since Rocky) as paste is unmaintained. > The healthcheck app is permanently enabled for keystone at > /healthcheck. We chose to make it a default bit of > functionality in how we have Keystone deployed. We also have unit tests > in place to ensure we don't regress and healthcheck changes behavior > down the line (future releases). You should be able to configure > additional bits for healthcheck in keystone.conf (e.g. detailed mode, > disable-by-file, etc). > > Cheers, > --Morgan > > On Fri, Oct 12, 2018 at 3:07 AM Florian Engelmann wrote: > > Hi, > > I tried to configure the healthcheck framework (/healthcheck) for nova, > cinder, glance and keystone but it looks like paste is not used with > keystone anymore? > > https://github.com/openstack/keystone/commit/8bf335bb015447448097a5c08b870da8e537a858 > > In our rocky deployment the healthcheck is working for keystone only > and > I failed to configure it for, e.g., nova-api. > > Nova seems to use paste? > > Is there any example nova api-paste.ini with the oslo healthcheck > middleware enabled? The documentation is hard to understand - at least > for me. > > Thank you for your help. > > All the best, > Florian > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: 

From gmann at ghanshyammann.com Tue Oct 16 14:43:58 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 16 Oct 2018 23:43:58 +0900 Subject: [openstack-dev] [goals][upgrade-checkers] Call for Volunteers to work on upgrade-checkers stein goal Message-ID: <1667d555e77.c3b5fe2c2771.365729908512377185@ghanshyammann.com> 

Hi All, I was discussing with mriedem [1] the idea of building a volunteer team which can work with him on the upgrade-checkers goal [2]. There is a lot of work needed for this goal [3]: the projects which do not have any upgrade impact yet need only the CLI framework with a placeholder check, and the projects with upgrade impact need actual upgrade checks implemented. The idea is to build a volunteer team who can work with the goal champion to finish the work early. This will help share some of the work from the goal champion as well as from the project side.

- This email is a call for volunteers (upstream developers from any project) who can work closely with mriedem on the upgrade-checkers goal.
- Currently, two developers have volunteered:
  1. Akhil Jain (IRC: akhil_jain, email: akhil.jain at india.nec.com)
  2. Rajat Dhasmana (IRC: whoami-rajat, email: rajatdhasmana at gmail.com)
- Anyone who would like to help with this work, feel free to reply to this email or ping mriedem on IRC.
- As a next step, mriedem will plan the work distribution among the volunteers.

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-16.log.html#t2018-10-16T13:37:59 [2] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html [3] https://storyboard.openstack.org/#!/story/2003657 

-gmann 

From zbitter at redhat.com Tue Oct 16 14:57:07 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 16 Oct 2018 10:57:07 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> Message-ID: <8ce5a9e9-f5ee-aef0-92fa-ffe1866a23d8@redhat.com> On 15/10/18 8:00 PM, Monty Taylor wrote: > On 10/15/2018 06:39 PM, Zane Bitter wrote: >> >> In fact, as far as we know the version we have to support in CentOS >> may actually be 3.5, which seems like a good reason to keep it working >> for long enough that we can find out for sure one way or the other.
> I certainly hope this is not what ends up happening, but seeing as how I > actually do not know - I agree, I cannot discount the possibility that > such a thing would happen. 

I'm right there with ya. 

> That said - until such a time as we get to actually drop python2, I > don't see it as an actual issue. The reason being - if we test with 2.7 > and 3.7 - the things in 3.6 that would break 3.5 get gated by the > existence of 2.7 for our codebase. 
> 
> Case in point- the instant 3.6 is our min, I'm going to start replacing > every instance of: 
> 
>   "foo {bar}".format(bar=bar) 
> 
> in any code I spend time in with: 
> 
>   f"foo {bar}" 
> 
> It TOTALLY won't parse on 3.5 ... but it also won't parse on 2.7. 
> 
> If we decide as a community to shift our testing of python3 to be 3.6 - > or even 3.7 - as long as we still are testing 2.7, I'd argue we're > adequately covered for 3.5. 

Yeah, that is a good point. There are only a couple of edge-case scenarios where that might not prove to be the case. One is where we install a different (or a different version of a) 3rd-party library on py2 vs. py3. The other would be where you have some code like:

    if six.PY3:
        some_std_lib_function_added_in_3_6()
    else:
        py2_code()

It may well be that we can say this is niche enough that we don't care. In theory the same thing could happen between versions of python3 (e.g. if we only tested on 3.5 & 3.7, and not 3.6). There certainly exist places where we check the minor version.* However, that's so much less likely again that it definitely seems negligible. 

* e.g. https://git.openstack.org/cgit/openstack/oslo.service/tree/oslo_service/service.py#n207 

> The day we decide we can drop 2.7 - if we've been testing 3.7 for > python3 and it turns out RHEL/CentOS 8 ship with python 3.5, then > instead of just deleting all of the openstack-tox-py27 jobs, we'd > probably just need to replace them with openstack-tox-py35 jobs, as that > would be our new low-water mark. 
> 
> Now, maybe we'll get lucky and RHEL/CentOS 8 will be a future-looking > release and will ship with python 3.7 AND so will the corresponding > Ubuntu LTS - and we'll get to only care about one release of python for > a minute. :) 
> 
> Come on - I can dream, right? 

Sure, but let's not get complacent - 3.8 is right around the corner :) 

cheers, Zane.
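A toy sketch of the edge case Zane describes above -- a branch that a py27-plus-py37 gate would never execute. The function and return strings are invented for the example:

    import sys

    def pick_codepath():
        # Only a 3.5 interpreter ever takes the first branch, so
        # testing on 2.7 and 3.7 alone leaves it entirely unexercised.
        if sys.version_info[:2] == (3, 5):
            return 'py35-only workaround'
        elif sys.version_info >= (3, 6):
            return 'py36+ fast path'
        return 'py2 path'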
From tobias.rydberg at citynetwork.eu Tue Oct 16 15:05:10 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 16 Oct 2018 17:05:10 +0200 Subject: [openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG Message-ID: <5c2a917d-2a77-b7da-46d5-9fb02c6018ba@citynetwork.eu> Hi everyone, Time for a new meeting for PCWG - tomorrow Wednesday 0700 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED 

From lbragstad at gmail.com Tue Oct 16 15:11:19 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 16 Oct 2018 10:11:19 -0500 Subject: [openstack-dev] [Openstack-operators] [all] Consistent policy names In-Reply-To: <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com> References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <1662fc326b2.b3cb83bc32239.7575898832806527463@ghanshyammann.com> <1666d1bcecf.e634f9cf181694.2527311199687749309@ghanshyammann.com> Message-ID: 

It happened. Documentation is hot off the press and ready for you to read [0]. As always, feel free to raise concerns, comments, or questions any time. I appreciate everyone's help in nailing this down. 

[0] https://docs.openstack.org/oslo.policy/latest/user/usage.html#naming-policies 

On Sat, Oct 13, 2018 at 6:07 AM Ghanshyam Mann wrote: > ---- On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad < lbragstad at gmail.com> wrote ---- > > Sending a follow up here quick. > > The reviewers actively participating in [0] are nearing a conclusion. > Ultimately, the convention is going to be: > > > :[:][:]:[:] > > Details about what that actually means can be found in the review [0]. Each piece is denoted as being required or optional, along with examples. I think this gives us a pretty good starting place, and the syntax is > flexible enough to support almost every policy naming convention we've > stumbled across. > > Now is the time if you have any final input or feedback. Thanks for > sticking with the discussion. > > Thanks Lance for working on this. Current version lgtm. I would like to > see some operators' feedback also, on whether this standard policy name format is > clear and easily understandable.
> > > In summary, the new convention based on the most recent feedback > should be: > > > ::[:] > > > Rules:service-type is always defined in the service types authority > > > resources are always singular > > > Thanks to all for sticking through this tedious discussion. I > appreciate it. > > > /R > > > > > > Harry > > > > > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad < > lbragstad at gmail.com> wrote: > > > >> > > > >> Bumping this thread again and proposing two conventions based > on the discussion here. I propose we decide on one of the two following > conventions: > > > >> > > > >> :: > > > >> > > > >> or > > > >> > > > >> :_ > > > >> > > > >> Where is the corresponding service type of the > project [0], and is either create, get, list, update, or delete. I > think decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > > >> > > > >> I think the plurality of the resource should default to what > makes sense for the operation being carried out (e.g., list:foobars, > create:foobar). > > > >> > > > >> I don't mind the first one because it's clear about what the > delimiter is and it doesn't look weird when projects have something like: > > > >> > > > >> ::: > > > >> > > > >> If folks are ok with this, I can start working on some > documentation that explains the motivation for this. Afterward, we can > figure out how we want to track this work. > > > >> > > > >> What color do you want the shed to be? > > > >> > > > >> [0] https://service-types.openstack.org/service-types.json > > > >> [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > > >> > > > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad < > lbragstad at gmail.com> wrote: > > > >>> > > > >>> > > > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > > > >>>> > > > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < > john at johngarbutt.com> wrote ---- > > > >>>> > tl;dr+1 consistent names > > > >>>> > I would make the names mirror the API... because the > Operator setting them knows the API, not the codeIgnore the crazy names in > Nova, I certainly hate them > > > >>>> > > > >>>> Big +1 on consistent naming which will help operator as well > as developer to maintain those. > > > >>>> > > > >>>> > > > > >>>> > Lance Bragstad wrote: > > > >>>> > > I'm curious if anyone has context on the "os-" part of > the format? > > > >>>> > > > > >>>> > My memory of the Nova policy mess...* Nova's policy rules > traditionally followed the patterns of the code > > > >>>> > ** Yes, horrible, but it happened.* The code used to have > the OpenStack API and the EC2 API, hence the "os"* API used to expand with > extensions, so the policy name is often based on extensions** note most of > the extension code has now gone, including lots of related policies* Policy > in code was focused on getting us to a place where we could rename policy** > Whoop whoop by the way, it feels like we are really close to something > sensible now! > > > >>>> > Lance Bragstad wrote: > > > >>>> > Thoughts on using create, list, update, and delete as > opposed to post, get, put, patch, and delete in the naming convention? > > > >>>> > I could go either way as I think about "list servers" in > the API.But my preference is for the URL stub and POST, GET, etc. 
> > > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < > lbragstad at gmail.com> wrote:If we consider dropping "os", should we > entertain dropping "api", too? Do we have a good reason to keep "api"?I > wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > > > >>>> > +1The API is known as "compute" in api-ref, so the policy > should be for "compute", etc. > > > >>>> > > > >>>> Agree on mapping the policy name with api-ref as much as > possible. Other than policy name having 'os-', we have 'os-' in resource > name also in nova API url like /os-agents, /os-aggregates etc (almost every > resource except servers , flavors). As we cannot get rid of those from API > url, we need to keep the same in policy naming too? or we can have policy > name like compute:agents:create/post but that mismatch from api-ref where > agents resource url is os-agents. > > > >>> > > > >>> > > > >>> Good question. I think this depends on how the service does > policy enforcement. > > > >>> > > > >>> I know we did something like this in keystone, which required > policy names and method names to be the same: > > > >>> > > > >>> "identity:list_users": "..." > > > >>> > > > >>> Because the initial implementation of policy enforcement used > a decorator like this: > > > >>> > > > >>> from keystone import controller > > > >>> > > > >>> @controller.protected > > > >>> def list_users(self): > > > >>> ... > > > >>> > > > >>> Having the policy name the same as the method name made it > easier for the decorator implementation to resolve the policy needed to > protect the API because it just looked at the name of the wrapped method. > The advantage was that it was easy to implement new APIs because you only > needed to add a policy, implement the method, and make sure you decorate > the implementation. > > > >>> > > > >>> While this worked, we are moving away from it entirely. The > decorator implementation was ridiculously complicated. Only a handful of > keystone developers understood it. With the addition of system-scope, it > would have only become more convoluted. It also enables a much more > copy-paste pattern (e.g., so long as I wrap my method with this decorator > implementation, things should work right?). Instead, we're calling > enforcement within the controller implementation to ensure things are > easier to understand. It requires developers to be cognizant of how > different token types affect the resources within an API. That said, > coupling the policy name to the method name is no longer a requirement for > keystone. > > > >>> > > > >>> Hopefully, that helps explain why we needed them to match. > > > >>> > > > >>>> > > > >>>> > > > >>>> Also we have action API (i know from nova not sure from other > services) like POST /servers/{server_id}/action {addSecurityGroup} and > their current policy name is all inconsistent. few have policy name > including their resource name like > "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in > policy name like "os_compute_api:os-admin-actions:reset_state" and few has > direct action name like "os_compute_api:os-console-output" > > > >>> > > > >>> > > > >>> Since the actions API relies on the request body and uses a > single HTTP method, does it make sense to have the HTTP method in the > policy name? It feels redundant, and we might be able to establish a > convention that's more meaningful for things like action APIs. It looks > like cinder has a similar pattern [0]. 
> > > >>> > > > >>> [0] > https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > > > >>> > > > >>>> > > > >>>> > > > >>>> May be we can make them consistent with > :: or any better opinion. > > > >>>> > > > >>>> > From: Lance Bragstad > The topic of > having consistent policy names has popped up a few times this week. > > > >>>> > > > > >>>> > I would love to have this nailed down before we go through > all the policy rules again. In my head I hope in Nova we can go through > each policy rule and do the following: > > > >>>> > * move to new consistent policy name, deprecate existing > name* hardcode scope check to project, system or user** (user, yes... > keypairs, yuck, but its how they work)** deprecate in rule scope checks, > which are largely bogus in Nova anyway* make read/write/admin distinction** > therefore adding the "noop" role, amount other things > > > >>>> > > > >>>> + policy granularity. > > > >>>> > > > >>>> It is good idea to make the policy improvement all together > and for all rules as you mentioned. But my worries is how much load it will > be on operator side to migrate all policy rules at same time? What will be > the deprecation period etc which i think we can discuss on proposed spec - > https://review.openstack.org/#/c/547850 > > > >>> > > > >>> > > > >>> Yeah, that's another valid concern. I know at least one > operator has weighed in already. I'm curious if operators have specific > input here. > > > >>> > > > >>> It ultimately depends on if they override existing policies or > not. If a deployment doesn't have any overrides, it should be a relatively > simple change for operators to consume. > > > >>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> -gmann > > > >>>> > > > >>>> > Thanks,John > __________________________________________________________________________ > > > >>>> > OpenStack Development Mailing List (not for usage > questions) > > > >>>> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >>>> > > > > >>>> > > > >>>> > > > >>>> > > > >>>> > __________________________________________________________________________ > > > >>>> OpenStack Development Mailing List (not for usage questions) > > > >>>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >> > > > >> > __________________________________________________________________________ > > > >> OpenStack Development Mailing List (not for usage questions) > > > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > > 
OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Oct 16 15:21:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 16 Oct 2018 10:21:11 -0500 Subject: [openstack-dev] [Openstack-operators] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: <20181016152111.GA8297@sm-workstation> On Mon, Oct 15, 2018 at 03:01:07PM -0500, Jimmy McArthur wrote: > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let me > know. > I have updated the Forum wiki page in preparation for the topic etherpads: https://wiki.openstack.org/wiki/Forum/Berlin2018 Please add your working session etherpad links once they are available so everyone has one spot to go to to find all relevant links. Thanks! Sean From corey.bryant at canonical.com Tue Oct 16 15:41:51 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 16 Oct 2018 11:41:51 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <8ce5a9e9-f5ee-aef0-92fa-ffe1866a23d8@redhat.com> References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> <471f89e2-043b-e3bd-2083-27da76c5e5c0@redhat.com> <8ce5a9e9-f5ee-aef0-92fa-ffe1866a23d8@redhat.com> Message-ID: On Tue, Oct 16, 2018 at 10:58 AM Zane Bitter wrote: > On 15/10/18 8:00 PM, Monty Taylor wrote: > > On 10/15/2018 06:39 PM, Zane Bitter wrote: > >> > >> In fact, as far as we know the version we have to support in CentOS > >> may actually be 3.5, which seems like a good reason to keep it working > >> for long enough that we can find out for sure one way or the other. > > > > I certainly hope this is not what ends up happening, but seeing as how I > > actually do not know - I agree, I cannot discount the possibility that > > such a thing would happen. > > I'm right there with ya. > > > That said - until such a time as we get to actually drop python2, I > > don't see it as an actual issue. The reason being - if we test with 2.7 > > and 3.7 - the things in 3.6 that would break 3.5 get gated by the > > existence of 2.7 for our codebase. 
> > > > Case in point- the instant 3.6 is our min, I'm going to start replacing > > every instance of: > > > > "foo {bar}".format(bar=bar) > > > > in any code I spend time in with: > > > > f"foo {bar}" > > > > It TOTALLY won't parse on 3.5 ... but it also won't parse on 2.7. > > > > If we decide as a community to shift our testing of python3 to be 3.6 - > > or even 3.7 - as long as we still are testing 2.7, I'd argue we're > > adequately covered for 3.5. > > Yeah, that is a good point. There are only a couple of edge-case > scenarios where that might not prove to be the case. One is where we > install a different (or a different version of a) 3rd-party library on > py2 vs. py3. The other would be where you have some code like: > > if six.PY3: > some_std_lib_function_added_in_3_6() > else: > py2_code() > > It may well be that we can say this is niche enough that we don't care. > > In theory the same thing could happen between versions of python3 (e.g. > if we only tested on 3.5 & 3.7, and not 3.6). There certainly exist > places where we check the minor version.* However, that's so much less > likely again that it definitely seems negligible. > > * e.g. > > https://git.openstack.org/cgit/openstack/oslo.service/tree/oslo_service/service.py#n207 > > > The day we decide we can drop 2.7 - if we've been testing 3.7 for > > python3 and it turns out RHEL/CentOS 8 ship with python 3.5, then > > instead of just deleting all of the openstack-tox-py27 jobs, we'd > > probably just need to replace them with openstack-tox-py35 jobs, as that > > would be our new low-water mark. > > > > Now, maybe we'll get lucky and RHEL/CentOS 8 will be a future-looking > > release and will ship with python 3.7 AND so will the corresponding > > Ubuntu LTS - and we'll get to only care about one release of python for > > a minute. :) > > > > Come on - I can dream, right? > > Sure, but let's not get complacent - 3.8 is right around the corner :) > > Btw I confirmed this morning that the plan for 20.04 LTS is to have 3.8, so it really is around the corner. Thanks, Corey cheers, > Zane. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Tue Oct 16 16:15:39 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Tue, 16 Oct 2018 16:15:39 +0000 Subject: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer In-Reply-To: References: Message-ID: <12D57E26-A1BB-42D8-8420-0A94F582D71A@cisco.com> Swapnil, Pleasure working with you - hope to see you around the computerverse. Cheers -steve On 10/16/18, 5:34 AM, "ʂʍɒρƞįł Ҟưȴķɒʁʉɨ" wrote: Dear OpenStackers, For a few months now, I am not able to contribute to code or reviewing Kolla and Requirements actively given my current responsibilities, I would like to take a step back and release my core reviewer ability for the Kolla and Requirements repositories. I want to use this moment to thank the everyone I have had a chance to work alongside with and I may have troubled. It has been both an honor and privilege to serve this community and I will continue to do so. In the new cloudy world I am sure the paths will cross again. Till then, Sayo Nara, Take Care. 
Best Regards, Swapnil (coolsvap) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Tue Oct 16 16:23:44 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 16 Oct 2018 11:23:44 -0500 Subject: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer In-Reply-To: References: Message-ID: <20181016162344.r4dcaovybnea3qx4@gentoo.org> On 18-10-16 18:03:54, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote: > Dear OpenStackers, > > For a few months now, I am not able to contribute to code or reviewing > Kolla and Requirements actively given my current responsibilities, I > would like to take a step back and release my core reviewer ability > for the Kolla and Requirements repositories. > > I want to use this moment to thank the everyone I have had a chance to > work alongside with and I may have troubled. It has been both an honor > and privilege to serve this community and I will continue to do so. > > In the new cloudy world I am sure the paths will cross again. Till > then, Sayo Nara, Take Care. > Sad to see you go, hope to see you around though. Good luck on your journey. It was nice working with you. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jimmy at openstack.org Tue Oct 16 17:15:16 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Oct 2018 12:15:16 -0500 Subject: [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> Message-ID: <5BC61CA4.2010002@openstack.org> I think you might have caught me while I was moving sessions around. This shouldn't be an issue now. Thanks for checking!! > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for > Ironic scheduled at the same time on Tuesday at 14h20, each of which > has Julia as a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > , > "community at lists.openstack.org" > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). 
> If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Oct 16 17:20:37 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 16 Oct 2018 13:20:37 -0400 Subject: [openstack-dev] Ops Meetups - Call for Hosts Message-ID: Hello all, The Ops Meetup team has embarked on a mission to revive the traditional Operators Meetup that have historically been held between Summits. With the upcoming merger of the PTG into the Summit week, and the merger of most Ops discussion sessions at Summits into the Forum, we felt that we needed to get back to our original format. With that in mind, we are beginning the process of selecting venues for both 2019 Meetups. Some guidelines for what is needed to host can be found here: https://wiki.openstack.org/wiki/Operations/Meetups#Venue_Selection Each of the etherpads below contains a template to collect information about the potential host and venue. If you are interested in hosting a meetup, simply copy and paste the template into a blank etherpad, fill it out, and place a link above the template on the original etherpad. Ops Meetup 2019 #1 - Late February / Early March - Somewhere in Europe https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2019 Ops Meetup 2019 #2 - Late July / Early August - Somewhere in North America https://etherpad.openstack.org/p/ops-meetup-venue-discuss-2nd-2019 Reply back to this thread with any questions or comments. If you are coming to the Berlin Summit, we will be having an Ops Meetup Team catch-up Forum session. We encourage all of you to join in making these events a success. Cheers, Erik From melwittt at gmail.com Tue Oct 16 17:56:38 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 16 Oct 2018 10:56:38 -0700 Subject: [openstack-dev] [nova] spec review day is ON for next tuesday oct 23 Message-ID: <4a48c674-9982-8cd8-e9e8-59d7deb91cf7@gmail.com> Thanks everyone for your replies on the thread to help organize this. Looks like most of the team is available to participate, so we will have a spec review day next week on Tuesday October 23. See ya at the nova-specs gerrit, -melanie From mriedemos at gmail.com Tue Oct 16 20:38:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 16 Oct 2018 15:38:20 -0500 Subject: [openstack-dev] [placement] devstack, grenade, database management In-Reply-To: References: Message-ID: <98d26a8a-3926-dce0-aa43-a9707a30fa79@gmail.com> On 10/16/2018 5:48 AM, Chris Dent wrote: > * We need to address database creation scripts and database migrations. > >   There's a general consensus that we should use alembic, and start >   things from a collapsed state. That is, we don't need to represent >   already existing migrations in the new repo, just the present-day >   structure of the tables. > >   Right now the devstack code relies on a stubbed out command line >   tool at https://review.openstack.org/#/c/600161/ to create tables >   with a metadata.create_all(). 
This is a useful thing to have but >   doesn't follow the "db_sync" pattern set elsewhere, so I haven't >   followed through on making it pretty but can do so if people think >   it is useful. Whether we do that or not, we'll still need some >   kind of "db_sync" command. Do people want me to make a cleaned up >   "create" command? > >   Ed has expressed some interest in exploring setting up alembic and >   the associated tools but that can easily be a more than one person >   job. Is anyone else interested? > > It would be great to get all this stuff working sooner than later. > Without it we can't do two important tasks: > > * Integration tests with the extracted placement [1]. > * Hacking on extracted placement in/with devstack. Another thing that came up today in IRC [1] which is maybe not as obvious from this email is what happens with the one online data migration we have for placement (create_incomplete_consumers). If we drop that online data migration from the placement repo, then ideally we'd have something to check it's done before people upgrade to stein and the extracted placement repo. There are some options there: placement-manage db sync could fail if there are missing consumers or we could simply have a placement-status upgrade check for it. > > Another issue that needs some attention, but is not quite as urgent > is the desire to support other databases during the upgrade, > captured in this change > > https://review.openstack.org/#/c/604028/ I have a grenade patch to test the postgresql-migrate-db.sh script now. [2] [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-10-16.log.html#t2018-10-16T19:37:25 [2] https://review.openstack.org/#/c/611020/ -- Thanks, Matt From sean.mcginnis at gmx.com Tue Oct 16 20:52:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 16 Oct 2018 15:52:01 -0500 Subject: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer In-Reply-To: References: Message-ID: <20181016205200.GA12605@sm-workstation> On Tue, Oct 16, 2018 at 06:03:54PM +0530, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote: > Dear OpenStackers, > > For a few months now, I am not able to contribute to code or reviewing > Kolla and Requirements actively given my current responsibilities, I > would like to take a step back and release my core reviewer ability > for the Kolla and Requirements repositories. > > I want to use this moment to thank the everyone I have had a chance to > work alongside with and I may have troubled. It has been both an honor > and privilege to serve this community and I will continue to do so. > > In the new cloudy world I am sure the paths will cross again. Till > then, Sayo Nara, Take Care. > > Best Regards, > Swapnil (coolsvap) > Thanks for all you've been able to do Swapnil! Sean From juliaashleykreger at gmail.com Tue Oct 16 21:44:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 16 Oct 2018 15:44:26 -0600 Subject: [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: <5BC61CA4.2010002@openstack.org> References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: Greetings Jimmy, Looks like it is still showing up on the schedule that way. I just reloaded the website page and it still has both sessions scheduled for 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone can help me with that for next year? 
:) -Julia On Tue, Oct 16, 2018 at 11:15 AM Jimmy McArthur wrote: > I think you might have caught me while I was moving sessions around. This > shouldn't be an issue now. > > Thanks for checking!! > > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for Ironic > scheduled at the same time on Tuesday at 14h20, each of which has Julia as > a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > > > , "community at lists.openstack.org" > > > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up ( > https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let > me know. > > You can also view the Full Schedule in the attached PDF if that makes life > easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Oct 16 21:59:56 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Oct 2018 16:59:56 -0500 Subject: [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: <5BC65F5C.4050401@openstack.org> Doh! You seriously need it! Working on a fix :) > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for > 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone > can help me with that for next year? 
:) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. > This shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for > Ironic scheduled at the same time on Tuesday at 14h20, each of which > has Julia as a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > , > "community at lists.openstack.org" > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Oct 16 22:05:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Oct 2018 17:05:07 -0500 Subject: [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> Message-ID: <5BC66093.5070301@openstack.org> OK - I think I got this fixed. I had to move a couple of things around. 
Julia, please let me know if this all works for you: https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=Kreger PS - You're going to have a long week :| > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for > 4:20 PM local. Sadly, I don't have cloning technology. Perhaps someone > can help me with that for next year? :) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. > This shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for > Ironic scheduled at the same time on Tuesday at 14h20, each of which > has Julia as a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > , > "community at lists.openstack.org" > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Tue Oct 16 22:50:06 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 17 Oct 2018 00:50:06 +0200 Subject: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk In-Reply-To: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> References: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> Message-ID: I'm in support, mainly for quite a few reasons: - The vendor data should/might need to be released often. If a provider makes a change, it'd be nice to be able to pick it up without changing everything else that sits in your system (and potentially breaking other things) - We can add some very basic gating that at least makes sure the endpoints are responding - If we want to add a new region, we really shouldn't have to go through many hours of OpenStack SDK jobs to pick it up I'm all for it! On Mon, Oct 15, 2018 at 11:04 PM Monty Taylor wrote: > > Heya, > > Tobias and I were chatting at OpenStack Days Nordic about the Public > Cloud Working Group potentially taking over as custodians of the vendor > profile information [0][1] we keep in openstacksdk (and previously in > os-client-config) > > I think this is a fine idea, but we've got some dancing to do I think. > > A few years ago Dean and I talked about splitting the vendor data into > its own repo. We decided not to at the time because it seemed like extra > unnecessary complication. But I think we may have reached that time. > > We should split out a new repo to hold the vendor data json files. We > can manage this repo pretty much how we manage the > service-types-authority [2] data now. Also similar to that (and similar > to tzdata) these are files that contain information that is true > currently and is not release specific - so it should be possible to > update to the latest vendor files without updating to the latest > openstacksdk. > > If nobody objects, I'll start working through getting a couple of new > repos created. I'm thinking openstack/vendor-profile-data, owned/managed > by Public Cloud WG, with the json files, docs, json schema, etc, and a > second one, openstack/os-vendor-profiles - owned/managed by the > openstacksdk team that's just like os-service-types [3] and is a > tiny/thin library that exposes the files to python (so there's something > to depend on) and gets proposed patches from zuul when new content is > landed in openstack/vendor-profile-data. > > How's that sound? > > Thanks! > Monty > > [0] > http://git.openstack.org/cgit/openstack/openstacksdk/tree/openstack/config/vendors > [1] > https://docs.openstack.org/openstacksdk/latest/user/config/vendor-support.html > [2] http://git.openstack.org/cgit/openstack/service-types-authority > [3] http://git.openstack.org/cgit/openstack/os-service-types > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W.
http://vexxhost.com From juliaashleykreger at gmail.com Wed Oct 17 01:09:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 16 Oct 2018 19:09:26 -0600 Subject: [openstack-dev] [openstack-community] Forum Schedule - Seeking Community Review In-Reply-To: <5BC66093.5070301@openstack.org> References: <5BC4F203.4000904@openstack.org> <971FEFFC-65C5-49B1-9306-A9FA91808BA8@cern.ch> <5BC61CA4.2010002@openstack.org> <5BC66093.5070301@openstack.org> Message-ID: Looks Great, Thanks! -Julia PS - Indeed :( On Tue, Oct 16, 2018 at 4:05 PM Jimmy McArthur wrote: > OK - I think I got this fixed. I had to move a couple of things around. > Julia, please let me know if this all works for you: > > > https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=Kreger > > PS - You're going to have a long week :| > > Julia Kreger > October 16, 2018 at 4:44 PM > Greetings Jimmy, > > Looks like it is still showing up on the schedule that way. I just > reloaded the website page and it still has both sessions scheduled for 4:20 > PM local. Sadly, I don't have cloning technology. Perhaps someone can help > me with that for next year? :) > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 16, 2018 at 12:15 PM > I think you might have caught me while I was moving sessions around. This > shouldn't be an issue now. > > Thanks for checking!! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tim Bell > October 16, 2018 at 1:37 AM > Jimmy, > > While it's not a clash within the forum, there are two sessions for Ironic > scheduled at the same time on Tuesday at 14h20, each of which has Julia as > a speaker. > > Tim > > -----Original Message----- > From: Jimmy McArthur > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Monday, 15 October 2018 at 22:04 > To: "OpenStack Development Mailing List (not for usage questions)" > , > "OpenStack-operators at lists.openstack.org" > > > , "community at lists.openstack.org" > > > Subject: [openstack-dev] Forum Schedule - Seeking Community Review > > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up ( > https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please let > me know. 
> > You can also view the Full Schedule in the attached PDF if that makes life > easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 17 01:24:48 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 17 Oct 2018 10:24:48 +0900 Subject: [openstack-dev] [tc][all] TC Office hour time Message-ID: <1667fa01067.cf17b4f77046.6252620145596090923@ghanshyammann.com> Hi All, The TC office hour has started on the #openstack-tc channel & many of the TC members (maybe not all, due to TZ) gather for the next hour to discuss any topics from the community. Feel free to reach out to us for anything you want to discuss or need input/feedback/help on from the TC. -gmann From colleen at gazlene.net Wed Oct 17 05:55:22 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 17 Oct 2018 07:55:22 +0200 Subject: [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: <5BC4F203.4000904@openstack.org> References: <5BC4F203.4000904@openstack.org> Message-ID: <1539755722.1700444.1544762240.7F81D932@webmail.messagingengine.com> On Mon, Oct 15, 2018, at 10:01 PM, Jimmy McArthur wrote: > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. No need to let > us know :) Couple of things: 1. I noticed Julia's session "Community outreach when culture, time zones, and language differ" and Thierry's session "Getting OpenStack users involved in the project" are scheduled at the same time on Tuesday, but they're quite related topics and I think many people (especially in the TC) would want to attend both sessions. 2. The session "You don't know nothing about Public Cloud SDKs, yet" doesn't seem to have a moderator listed. Colleen From e0ne at e0ne.info Wed Oct 17 13:18:26 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 17 Oct 2018 16:18:26 +0300 Subject: [openstack-dev] [horizon][plugins] Horizon plugins validation on CI Message-ID: Hi all, We discussed this topic at the PTG both with Horizon and other teams. Sounds like everybody is interested in having some cross-project CI jobs to verify that plugins are not broken with the latest Horizon changes. The initial idea was to use tempest plugins for this effort like we do for Horizon [1]. We've got a very simple test to verify that Horizon is up and running and a user is able to log in. It's easy to implement such tests for any existing horizon plugin. I tried it for Heat and Manila dashboards.
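For illustration only, the heart of such a login check is tiny. Here is a rough sketch of mine (plain requests instead of the actual tempest plumbing; the CSRF attribute quoting and the "Sign Out" success marker are assumptions that can vary by Django/Horizon version):

    import re

    import requests


    def dashboard_login_works(base_url, username, password):
        # Fetch the login page first; Django sets a CSRF cookie and embeds
        # a csrfmiddlewaretoken that has to be posted back with the form.
        session = requests.Session()
        login_url = base_url + '/auth/login/'
        page = session.get(login_url)
        token = re.search(r"name='csrfmiddlewaretoken' value='([^']+)'",
                          page.text).group(1)
        resp = session.post(login_url,
                            data={'username': username,
                                  'password': password,
                                  'csrfmiddlewaretoken': token},
                            headers={'Referer': login_url})
        # Assumption: a successful login renders a page with a "Sign Out"
        # link, while a failed login just re-renders the login form.
        return resp.status_code == 200 and 'Sign Out' in resp.text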
If I understand correctly how tempest plugins work, for this case we've got two options: a) create a similar tempest plugin for each plugin - in this case, we need to maintain new repos for the tempest plugins b) add these tests to the Horizon tempest plugin - in that case, it will be harder for plugin maintainers to support these tests. If we don't want to go forward with tempest plugins, we can create similar tests based on Horizon functional tests. I want to get more feedback from both the Horizon and plugin teams on which direction we should go before starting the implementation. [1] https://github.com/openstack/tempest-horizon/blob/master/tempest_horizon/tests/scenario/test_dashboard_basic_ops.py#L138 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Wed Oct 17 13:45:25 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 17 Oct 2018 15:45:25 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <200b392c-1a41-f53a-6822-01c809aa22fd@gmail.com> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> <200b392c-1a41-f53a-6822-01c809aa22fd@gmail.com> Message-ID: > On 10.10.2018 09:06, Florian Engelmann wrote: >> Now I get you. I would say all configuration templates need to be >> changed to allow, eg. >> >> $ grep http /etc/kolla/cinder-volume/cinder.conf >> glance_api_servers = http://10.10.10.5:9292 >> auth_url = http://internal.somedomain.tld:35357 >> www_authenticate_uri = http://internal.somedomain.tld:5000 >> auth_url = http://internal.somedomain.tld:35357 >> auth_endpoint = http://internal.somedomain.tld:5000 >> >> to look like: >> >> glance_api_servers = http://glance.service.somedomain.consul:9292 >> auth_url = http://keystone.service.somedomain.consul:35357 >> www_authenticate_uri = http://keystone.service.somedomain.consul:5000 >> auth_url = http://keystone.service.somedomain.consul:35357 >> auth_endpoint = http://keystone.service.somedomain.consul:5000 >> > > The idea with Consul looks interesting. > > But I don't get your issue with VIP address and spine-leaf network. > > What we have: > - controller1 behind leaf1 A/B pair with MLAG > - controller2 behind leaf2 A/B pair with MLAG > - controller3 behind leaf3 A/B pair with MLAG > > The VIP address is active on one controller server. > When the server fails then the VIP will move to another controller server. > Where do you see a SPOF in this configuration? > So leaf1, 2 and 3 have to share the same L2 domain, right (in an IPv4 network)? But we wanna deploy a layer3 spine-leaf network where every leaf is its own L2 domain and everything above is layer3. eg: leaf1 = 10.1.1.0/24 leaf2 = 10.1.2.0/24 leaf3 = 10.1.3.0/24 So a VIP like, eg. 10.1.1.10 could only exist in leaf1 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From liliueecg at gmail.com Wed Oct 17 14:02:41 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 17 Oct 2018 10:02:41 -0400 Subject: [openstack-dev] [Cyborg] Cyborg will have regular IRC meeting this week Message-ID: Hi Folks, We will be having our regular IRC meeting this week at the usual time. We will mainly discuss the demo plan for the summit.
-- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Wed Oct 17 14:05:24 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Wed, 17 Oct 2018 16:05:24 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> Message-ID: <30bf2e74-3dc9-c55c-3045-1f1f02f57b71@everyware.ch> Currently we are testing what is needed to get consul + registrator and kolla/kolla-ansible to play together nicely. To get the services created in consul by registrator, all kolla containers running relevant services (eg. keystone, nova, cinder, ... but also mariadb, memcached, es, ...) need to "--expose" their ports. Registrator will use those "exposed" ports to add a service to consul. Is there any (existing) option to add those ports to the container bootstrap? What about "docker_common_options"? The command should look like: docker run -d --expose 5000/tcp --expose 35357/tcp --name=keystone ... Am 10/10/18 um 9:18 AM schrieb Florian Engelmann: > by "another storage system" you mean the KV store of consul? That's just > something consul brings with it... > > consul is very strong in doing health checks > > Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M: >> etcd is an already approved openstack dependency. Could that be used >> instead of consul so as to not add yet another storage system? coredns >> with the https://coredns.io/plugins/etcd/ plugin would maybe do what >> you need? >> >> Thanks, >> Kevin >> ________________________________________ >> From: Florian Engelmann [florian.engelmann at everyware.ch] >> Sent: Monday, October 08, 2018 3:14 AM >> To: openstack-dev at lists.openstack.org >> Subject: [openstack-dev] [kolla] add service discovery, proxysql, >> vault, fabio and FQDN endpoints >> >> Hi, >> >> I would like to start a discussion about some changes and additions I >> would like to see in kolla and kolla-ansible. >> >> 1. Keepalived is a problem in layer3 spine leaf networks as any floating >> IP can only exist in one leaf (and VRRP is a problem in layer3). I would >> like to use consul and registrator to get rid of the "internal" floating >> IP and use consul's DNS service discovery to connect all services with >> each other.
>> >> What do you think about it? >> >> All the best, >> Florian >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From e0ne at e0ne.info Wed Oct 17 14:24:04 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 17 Oct 2018 17:24:04 +0300 Subject: [openstack-dev] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps Message-ID: Hi teams, As you may know, unfortunately, Horizon doesn't support all features provided by APIs. That's why we created feature gaps list [1]. I'd got a lot of great conversations with projects teams during the PTG and we tried to figure out what should be done prioritize these tasks. It's really helpful for Horizon to get feedback from other teams to understand what features should be implemented next. While I'm filling launchpad with new bugs and blueprints for [1], it would be good to review this list again and find some volunteers to decrease feature gaps. [1] https://etherpad.openstack.org/p/horizon-feature-gap Thanks everybody for any of your contributions to Horizon. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Oct 17 15:13:48 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 17 Oct 2018 09:13:48 -0600 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: Time to resurrect this thread. On Thu, Jul 5, 2018 at 12:14 PM James Slagle wrote: > > On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince wrote: > > Last week I was tinkering with my docker configuration a bit and was a > > bit surprised that puppet/services/docker.yaml no longer used puppet to > > configure the docker daemon. It now uses Ansible [1] which is very cool > > but brings up the question of how should we clearly indicate to > > developers and users that we are using Ansible vs Puppet for > > configuration? > > > > TripleO has been around for a while now, has supported multiple > > configuration ans service types over the years: os-apply-config, > > puppet, containers, and now Ansible. In the past we've used rigid > > directory structures to identify which "service type" was used. More > > recently we mixed things up a bit more even by extending one service > > type from another ("docker" services all initially extended the > > "puppet" services to generate config files and provide an easy upgrade > > path). 
> > > > Similarly we now use Ansible all over the place for other things in > > many of or docker and puppet services for things like upgrades. That is > > all good too. I guess the thing I'm getting at here is just a way to > > cleanly identify which services are configured via Puppet vs. Ansible. > > And how can we do that in the least destructive way possible so as not > > to confuse ourselves and our users in the process. > > > > Also, I think its work keeping in mind that TripleO was once a multi- > > vendor project with vendors that had different preferences on service > > configuration. Also having the ability to support multiple > > configuration mechanisms in the future could once again present itself > > (thinking of Kubernetes as an example). Keeping in mind there may be a > > conversion period that could well last more than a release or two. > > > > I suggested a 'services/ansible' directory with mixed responces in our > > #tripleo meeting this week. Any other thoughts on the matter? > > I would almost rather see us organize the directories by service > name/project instead of implementation. > > Instead of: > > puppet/services/nova-api.yaml > puppet/services/nova-conductor.yaml > docker/services/nova-api.yaml > docker/services/nova-conductor.yaml > > We'd have: > > services/nova/nova-api-puppet.yaml > services/nova/nova-conductor-puppet.yaml > services/nova/nova-api-docker.yaml > services/nova/nova-conductor-docker.yaml > > (or perhaps even another level of directories to indicate > puppet/docker/ansible?) > > Personally, such an organization is something I'm more used to. It > feels more similar to how most would expect a puppet module or ansible > role to be organized, where you have the abstraction (service > configuration) at a higher directory level than specific > implementations. > > It would also lend itself more easily to adding implementations only > for specific services, and address the question of if a new top level > implementation directory needs to be created. For example, adding a > services/nova/nova-api-chef.yaml seems a lot less contentious than > adding a top level chef/services/nova-api.yaml. > > It'd also be nice if we had a way to mark the default within a given > service's directory. Perhaps services/nova/nova-api-default.yaml, > which would be a new template that just consumes the default? Or > perhaps a symlink, although it was pointed out symlinks don't work in > swift containers. Still, that could possibly be addressed in our plan > upload workflows. Then the resource-registry would point at > nova-api-default.yaml. One could easily tell which is the default > without having to cross reference with the resource-registry. > So since I'm adding a new ansible service, I thought I'd try and take a stab at this naming thing. I've taken James's idea and proposed an implementation here: https://review.openstack.org/#/c/588111/ The idea would be that the THT code for the service deployment would end up in something like: deployment//-.yaml Additionally I took a stab at combining the puppet/docker service definitions for the aodh services in a similar structure to start reducing the overhead we've had from maintaining the docker/puppet implementations seperately. You can see the patch https://review.openstack.org/#/c/611188/ for an additional example of this. Please let me know what you think. 
Thanks, -Alex > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Oct 17 15:41:36 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 17 Oct 2018 10:41:36 -0500 Subject: [openstack-dev] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps In-Reply-To: References: Message-ID: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote: > > As you may know, unfortunately, Horizon doesn't support all features > provided by APIs. That's why we created feature gaps list [1]. > > I'd got a lot of great conversations with projects teams during the PTG > and we tried to figure out what should be done prioritize these tasks. > It's really helpful for Horizon to get feedback from other teams to > understand what features should be implemented next. > > While I'm filling launchpad with new bugs and blueprints for [1], it > would be good to review this list again and find some volunteers to > decrease feature gaps. > > [1] https://etherpad.openstack.org/p/horizon-feature-gap > > Thanks everybody for any of your contributions to Horizon. +openstack-sigs +openstack-operators I've left some notes for nova. This looks very similar to the compute API OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to really work on without some user/operator feedback - maybe we can get the user work group involved in trying to help prioritize what people really want that is missing from horizon, at least for compute? [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc -- Thanks, Matt From harlowja at fastmail.com Wed Oct 17 15:59:08 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Wed, 17 Oct 2018 08:59:08 -0700 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <7e83990d-7620-f5af-9275-1ece9f84e26b@redhat.com> References: <7e83990d-7620-f5af-9275-1ece9f84e26b@redhat.com> Message-ID: <5BC75C4C.1090108@fastmail.com> Dmitry Tantsur wrote: > On 10/10/18 7:41 PM, Greg Hill wrote: >> I've been out of the openstack loop for a few years, so I hope this >> reaches the right folks. >> >> Josh Harlow (original author of taskflow and related libraries) and I >> have been discussing the option of moving taskflow out of the >> openstack umbrella recently. This move would likely also include the >> futurist and automaton libraries that are primarily used by taskflow. > > Just for completeness: futurist and automaton are also heavily relied on > by ironic without using taskflow. When did futurist get used??? nice :) (I knew automaton was, but maybe I knew futurist was to and I forgot, lol). > >> The idea would be to just host them on github and use the regular >> Github features for Issues, PRs, wiki, etc, in the hopes that this >> would spur more development. Taskflow hasn't had any substantial >> contributions in several years and it doesn't really seem that the >> current openstack devs have a vested interest in moving it forward. 
I >> would like to move it forward, but I don't have an interest in being >> bound by the openstack workflow (this is why the project stagnated as >> core reviewers were pulled on to other projects and couldn't keep up >> with the review backlog, so contributions ground to a halt). >> >> I guess I'm putting it forward to the larger community. Does anyone >> have any objections to us doing this? Are there any non-obvious >> technicalities that might make such a transition difficult? Who would >> need to be made aware so they could adjust their own workflows? >> >> Or would it be preferable to just fork and rename the project so >> openstack can continue to use the current taskflow version without >> worry of us breaking features? >> >> Greg >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Oct 17 16:05:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 17 Oct 2018 11:05:52 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Call for Volunteers to work on upgrade-checkers stein goal In-Reply-To: <1667d555e77.c3b5fe2c2771.365729908512377185@ghanshyammann.com> References: <1667d555e77.c3b5fe2c2771.365729908512377185@ghanshyammann.com> Message-ID: <20b8e73f-1bdb-2988-e400-2c6b216db9cb@gmail.com> On 10/16/2018 9:43 AM, Ghanshyam Mann wrote: > I was discussing with mriedem [1] about the idea of building a volunteer team which can work with him on the upgrade-checkers goal [2]. There is a lot of work needed for this goal [3]: a few projects which do not have any upgrade impact yet need the CLI framework with a placeholder only, and other projects with upgrade impact need actual upgrade checks implemented. > > The idea is to build a volunteer team who can work with the goal champion to finish the work early. This will help to share some of the work from the goal champion as well as from the project side. > > - This email is a call for volunteers (upstream developers from any projects) who can work closely with mriedem on the upgrade-checkers goal. > - Currently two developers have volunteered. > 1. Akhil Jain (IRC: akhil_jain, email:akhil.jain at india.nec.com) > 2. Rajat Dhasmana (IRC: whoami-rajat email:rajatdhasmana at gmail.com) > - Anyone who would like to help with this work, feel free to reply to this email or ping mriedem on IRC. > - As a next step, mriedem will plan the work distribution to volunteers. Thanks Ghanshyam. As can be seen from the cyborg [1] and congress [2] changes posted by Rajat and Akhil, the initial framework changes are pretty trivial. The harder part is working with core teams / PTLs to determine which real upgrade checks should be added based on the release notes. But having the framework done as a baseline across all service projects is a great start.
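For anyone curious what "pretty trivial" means in practice, the placeholder framework boils down to roughly this (a simplified sketch of the goal's template with i18n omitted; 'myservice' stands in for the real project name):

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):

        def _check_placeholder(self):
            # Gets replaced later with real checks derived from the
            # project's release notes.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        _upgrade_checks = (
            ('placeholder', _check_placeholder),
        )


    def main():
        return upgradecheck.main(cfg.CONF, project='myservice',
                                 upgrade_command=Checks())


    if __name__ == '__main__':
        sys.exit(main())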
[1] https://review.openstack.org/#/c/611368/ [2] https://review.openstack.org/#/c/611116/ -- Thanks, Matt From Kevin.Fox at pnnl.gov Wed Oct 17 16:07:07 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 17 Oct 2018 16:07:07 +0000 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov>, <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> Message-ID: <1A3C52DFCD06494D8528644858247BF01C20C76C@EX10MBOX03.pnnl.gov> No, I mean, Consul would be an extra dependency in a big list of dependencies OpenStack already has. OpenStack has so many it is causing operators to reconsider adoption. I'm asking if existing dependencies can be made to solve the problem without adding more? Stateful dependencies are much harder to deal with than stateless ones, as they take much more operator care/attention. Consul is stateful as is etcd, and etcd is already a dependency. Can etcd be used instead so as not to put more load on the operators? Thanks, Kevin ________________________________________ From: Florian Engelmann [florian.engelmann at everyware.ch] Sent: Wednesday, October 10, 2018 12:18 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints by "another storage system" you mean the KV store of consul? That's just something consul brings with it... consul is very strong in doing health checks Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M: > etcd is an already approved openstack dependency. Could that be used instead of consul so as to not add yet another storage system? coredns with the https://coredns.io/plugins/etcd/ plugin would maybe do what you need? > > Thanks, > Kevin > ________________________________________ > From: Florian Engelmann [florian.engelmann at everyware.ch] > Sent: Monday, October 08, 2018 3:14 AM > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > > Hi, > > I would like to start a discussion about some changes and additions I > would like to see in kolla and kolla-ansible. > > 1. Keepalived is a problem in layer3 spine leaf networks as any floating > IP can only exist in one leaf (and VRRP is a problem in layer3). I would > like to use consul and registrator to get rid of the "internal" floating > IP and use consul's DNS service discovery to connect all services with > each other. > > 2. Using "ports" for external API (endpoint) access is a major headache > if a firewall is involved. I would like to configure the HAProxy (or > fabio) for the external access to use "Host:" like, eg. "Host: > keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. > Any customer would just need HTTPS access and not have to open all those > ports in his firewall. For some enterprise customers it is not possible > to request FW changes like that. > > 3. HAProxy is not capable of handling "read/write" split with Galera. I > would like to introduce ProxySQL to be able to scale Galera. > > 4. HAProxy is fine but fabio integrates well with consul, statsd and > could be connected to a vault cluster to manage secure certificate > access. > > 5. I would like to add vault as Barbican backend. > > 6.
I would like to add an option to enable tokenless authentication for > all services with each other to get rid of all the openstack service > passwords (security issue). > > What do you think about it? > > All the best, > Florian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch From johnsomor at gmail.com Wed Oct 17 18:56:32 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 17 Oct 2018 11:56:32 -0700 Subject: [openstack-dev] [horizon][plugins] Horizon plugins validation on CI In-Reply-To: References: Message-ID: Hi Ivan, As Octavia PTL I have no issue with adding a tempest-plugin repository for the octavia-dashboard. I think we have had examples with the main tempest tests and plugins where trying to do a suite of tests in one repository becomes messy. We may also want to consider doing a horizon-tempest-lib type of repository that can host common code/tools for the dashboard plugins to leverage. I'm thinking things like login code, etc. Michael On Wed, Oct 17, 2018 at 6:19 AM Ivan Kolodyazhny wrote: > > Hi all, > > We discussed this topic at PTG both with Horizon and other teams. Sounds like everybody is interested to have some cross-project CI jobs to verify that plugins are not broken with the latest Horizon changes. > > The initial idea was to use tempest plugins for this effort like we do for Horizon [1]. We've got a very simple test to verify that Horizon is up and running and a user is able to login. > > It's easy to implement such tests for any existing horizon plugin. I tried it for Heat and Manila dashboards. > > If I understand correctly how tempest plugins work, for such case we've got such options: > > a) to create the same tempest plugins for each plugin - it this case, we need to maintain new repos for tempest plugins > b) add these tests to Horizon tempest plugin - in such case, it will be harder for plugin maintainers to support these tests. > > If we don't want to go forward with tempest plugins, we can create similar tests based on Horizon functional tests. > > I want to get more feedback both from Horizon and plugins teams on which direction we should go and start implementation. 
> > > [1] https://github.com/openstack/tempest-horizon/blob/master/tempest_horizon/tests/scenario/test_dashboard_basic_ops.py#L138 > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From edmondsw at us.ibm.com Wed Oct 17 19:42:23 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Wed, 17 Oct 2018 15:42:23 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> Message-ID: Corey Bryant wrote on 10/15/2018 05:34:24 PM: ... > From an ubuntu perspective, ubuntu is going to support stein on 18. > 04 LTS (3.6) and 19.04 (3.7) only. ... So folks with Ubuntu 16.04 LTS compute nodes will have to upgrade them all to 18.04 before upgrading to Stein? Of course this would be a distro statement, and would not preclude someone from building their own environment from source/pypi on Ubuntu 16.04. And 16.04 is still pretty heavily used, right? Ubuntu 18.04 LTS is not supported on PowerVM compute nodes, so the PowerVM CI will not be able to switch to running under py3 if code that doesn't work in py35 is introduced. At least until RHEL 8 comes out, at which point we could switch to using that in our CI. But please don't allow such changes before the RHEL 8 release. -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Wed Oct 17 22:29:49 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 17 Oct 2018 15:29:49 -0700 Subject: [openstack-dev] [senlin] Senlin Monthly(ish) Newsletter Oct 2018 Message-ID: HTML: https://dkt26111.wordpress.com/2018/10/17/senlin-monthlyish-newsletter-october-2018/ This is the October edition of the Senlin monthly(ish) newsletter. The goal of the newsletter is to highlight happenings in the Senlin project. If you have any feedback or questions regarding the contents, please feel free to reach out to me in the #senlin IRC channel. News ---- * There will be no Senlin meeting this week (Friday October 19, 2018) because I'm unavailable this week. We will resume regular meetings next week. * Autoscaling forum has been scheduled for the Berlin Summit (https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22753/autoscaling-integration-improvement-and-feedback). Add your comments/feedback to this etherpad: https://etherpad.openstack.org/p/autoscaling-integration-and-feedback Blueprint Status ---------------- * Fail fast locked resource - https://blueprints.launchpad.net/senlin/+spec/fail-fast-locked-resource - Implementation is completed and has been merged. - Working on documentation and release notes. * Multiple detection modes - https://blueprints.launchpad.net/senlin/+spec/multiple-detection-modes - Implementation is completed and has been merged. - Working on documentation and release notes. * Fail-fast on cooldown for scaling operations - https://blueprints.launchpad.net/senlin/+spec/scaling-action-acceptance - Implementation is completed. 
- Waiting for more reviews: https://review.openstack.org/#/c/585573/ * OpenStackSDK support senlin function test - Basic test cases have been implemented and merged: https://review.openstack.org/#/c/607061/ * Senlin add support use limit return - Waiting for blueprint submission. * Add zun driver in senlin, use zun manager container - Waiting for blueprint submission. Community Goal Status --------------------- * Python 3 - All patches by Python 3 goal champions for zuul migration, documentation and unit test changes have been merged. * Upgrade Checkers - No work has started on this. If you like to help out with this task, please let me know. Recently Merged Changes ----------------------- * Multiple detection mode support was added as part of the same blueprint: https://review.openstack.org/#/c/589990/. This change introduces version 1.1 of the health policy that includes breaking changes affecting clusters using the health policy version 1.0. We are working to address this issue. From corey.bryant at canonical.com Wed Oct 17 22:50:38 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 17 Oct 2018 18:50:38 -0400 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> <44777656-08d4-899d-f50b-1b517c09c9d5@redhat.com> <20181015201048.zg4nxyyq4b2vez5w@yuggoth.org> Message-ID: On Wed, Oct 17, 2018 at 3:43 PM William M Edmonds wrote: > > Corey Bryant wrote on 10/15/2018 05:34:24 PM: > ... > > From an ubuntu perspective, ubuntu is going to support stein on 18. > > 04 LTS (3.6) and 19.04 (3.7) only. > ... > > So folks with Ubuntu 16.04 LTS compute nodes will have to upgrade them all > to 18.04 before upgrading to Stein? Of course this would be a distro > statement,and would not preclude someone from building their own > environment from source/pypi on Ubuntu 16.04. And 16.04 is still pretty > heavily used, right? > > All true statements, and the answers to your questions is yes. > Ubuntu 18.04 LTS is not supported on PowerVM compute nodes, so the PowerVM > CI will not be able to switch to running under py3 if code that doesn't > work in py35 is introduced. At least until RHEL 8 comes out, at which point > we could switch to using that in our CI. But please don't allow such > changes before the RHEL 8 release. > This sounds like an orthogonal problem but maybe I'm confused. Corey > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrmisiak1984 at gmail.com Wed Oct 17 23:19:29 2018 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Thu, 18 Oct 2018 01:19:29 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> <200b392c-1a41-f53a-6822-01c809aa22fd@gmail.com> Message-ID: On 17.10.2018 15:45, Florian Engelmann wrote: >> On 10.10.2018 09:06, Florian Engelmann wrote: >>> Now I get you. I would say all configuration templates need to be >>> changed to allow, eg. 
>>> >>> $ grep http /etc/kolla/cinder-volume/cinder.conf >>> glance_api_servers = http://10.10.10.5:9292 >>> auth_url = http://internal.somedomain.tld:35357 >>> www_authenticate_uri = http://internal.somedomain.tld:5000 >>> auth_url = http://internal.somedomain.tld:35357 >>> auth_endpoint = http://internal.somedomain.tld:5000 >>> >>> to look like: >>> >>> glance_api_servers = http://glance.service.somedomain.consul:9292 >>> auth_url = http://keystone.service.somedomain.consul:35357 >>> www_authenticate_uri = http://keystone.service.somedomain.consul:5000 >>> auth_url = http://keystone.service.somedomain.consul:35357 >>> auth_endpoint = http://keystone.service.somedomain.consul:5000 >>> >> >> The idea with Consul looks interesting. >> >> But I don't get your issue with VIP address and spine-leaf network. >> >> What we have: >> - controller1 behind leaf1 A/B pair with MLAG >> - controller2 behind leaf2 A/B pair with MLAG >> - controller3 behind leaf3 A/B pair with MLAG >> >> The VIP address is active on one controller server. >> When the server fails then the VIP will move to another controller >> server. >> Where do you see a SPOF in this configuration? >> > Yes, they share the L2 domain but we have ARP and ND suppression enabled. It is an EVPN network where there is an L3 with VxLANs between leafs and spines. So we don't care where a server is connected. It can be connected to any leaf. > But we wanna deploy a layer3 spine-leaf network where every leaf is > its own L2 domain and everything above is layer3. > > eg: > > leaf1 = 10.1.1.0/24 > leaf2 = 10.1.2.0/24 > leaf3 = 10.1.3.0/24 > > So a VIP like, eg. 10.1.1.10 could only exist in leaf1 > In my opinion it's a very constrained environment, I don't like the idea. Regards, Piotr From ken1ohmichi at gmail.com Wed Oct 17 23:22:38 2018 From: ken1ohmichi at gmail.com (Kenichi Omichi) Date: Wed, 17 Oct 2018 16:22:38 -0700 Subject: [openstack-dev] [qa][vmware-nsx-tempest-plugin][networking-vsphere] Dependency of Tempest changes Message-ID: Hi A tempest patch[1] removes the deprecated library method and some projects are still using the method. Tempest provides another method instead and we have patches to switch to it. The following patches are not merged yet at this time: - vmware-nsx-tempest-plugin: https://review.openstack.org/#/c/578166 - networking-vsphere: https://review.openstack.org/#/c/578168 Happy if they all are merged soon. Thanks Kenichi Omichi --- [1]: https://review.openstack.org/#/c/578169 From honza at redhat.com Thu Oct 18 00:17:43 2018 From: honza at redhat.com (Honza Pokorny) Date: Wed, 17 Oct 2018 21:17:43 -0300 Subject: [openstack-dev] [tripleo][ui][tempest][oooq] Refreshing plugins from git Message-ID: <20181018001743.swmj3icwzlezoqdd@localhost.localdomain> Hello folks, I'm working on the automated ui testing blueprint[1], and I think we need to change the way we ship our tempest tests. Here is where things stand at the moment: * We have a kolla image for tempest * This image contains the tempest rpm, and the openstack-tempest-all rpm * The openstack-tempest-all package in turn contains all of the openstack tempest plugins * Each of the plugins is shipped as an rpm So, in order for a new test in tempest-tripleo-ui to appear in CI we have to go through at least the following steps: * New tempest-tripleo-ui rpm * New openstack-tempest-all rpm * New tempest kolla image
What I would like to build is something like the following: * Add an option to the tempest-setup.sh script in tripleo-quickstart to refresh all tempest plugins from git before running any tests * Optionally specify a zuul change for any of the plugins being refreshed * Hook up the test job to patches in tripleo-ui (which tests in tempest-tripleo-ui are testing) so that I can run a fix and its test in a single CI job This would allow the tripleo-ui team to develop code and tests at the same time, and prevent breakage before a patch is even merged. Here are a few questions: * Do you think this is a good idea? * Could we accomplish this by some other, simple mechanism? Any helpful suggestions, corrections, and feedback are much appreciated. Thanks Honza Pokorny [1]: https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing From tony at bakeyournoodle.com Thu Oct 18 01:52:16 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 18 Oct 2018 12:52:16 +1100 Subject: [openstack-dev] [horizon][plugins] Horizon plugins validation on CI In-Reply-To: References: Message-ID: <20181018015215.GA6589@thor.bakeyournoodle.com> On Wed, Oct 17, 2018 at 04:18:26PM +0300, Ivan Kolodyazhny wrote: > Hi all, > > We discussed this topic at PTG both with Horizon and other teams. Sounds > like everybody is interested to have some cross-project CI jobs to verify > that plugins are not broken with the latest Horizon changes. > > The initial idea was to use tempest plugins for this effort like we do for > Horizon [1]. We've got a very simple test to verify that Horizon is up and > running and a user is able to login. > > It's easy to implement such tests for any existing horizon plugin. I tried > it for Heat and Manila dashboards. Given that I know very little about this but isn't it just as simple as running the say the octavia-dashboard[1] npm tests on all horizon changes? This would be similar to the way we run the nova[2] functional tests on all constraints changes in openstack/requirements. Yours Tony. [1] Of course all dashbaords/plugins [2] Not just nova but you get the idea -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Oct 18 02:01:26 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 18 Oct 2018 13:01:26 +1100 Subject: [openstack-dev] [kolla] [requirements] Stepping down as core reviewer In-Reply-To: References: Message-ID: <20181018020126.GB6589@thor.bakeyournoodle.com> On Tue, Oct 16, 2018 at 06:03:54PM +0530, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote: > Dear OpenStackers, > > For a few months now, I am not able to contribute to code or reviewing > Kolla and Requirements actively given my current responsibilities, I > would like to take a step back and release my core reviewer ability > for the Kolla and Requirements repositories. Swapnil, I'm sorry to see you go. It was a blast working with you and your generous nature. Safe travels and great luck with your path takes you. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Oct 18 06:35:39 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 18 Oct 2018 17:35:39 +1100 Subject: [openstack-dev] [all] Naming the T release of OpenStack Message-ID: <20181018063539.GC6589@thor.bakeyournoodle.com> Hello all, As per [1] the nomination period for names for the T release has now closed (actually 3 days ago, sorry). The nominated names and any qualifying remarks can be seen at [2]. Proposed Names * Tarryall * Teakettle * Teller * Telluride * Thomas * Thornton * Tiger * Tincup * Timnath * Timber * Tiny Town * Torreys * Trail * Trinidad * Treasure * Troublesome * Trussville * Turret * Tyrone Proposed Names that do not meet the criteria * Train However I'd like to suggest we skip the CIVS poll and select 'Train' as the release name by TC resolution [3]. My thinking for this is: * It's fun and celebrates a humorous moment in our community * As a developer I've heard the T release called Train for quite some time, and it was used often at the PTG [4]. * As the *next* PTG is also in Colorado we can still choose a geographically based name for U [5] * If Train causes a problem for trademark reasons then we can always run the poll I'll leave [3] marked -W for a week for discussion to happen before the TC can consider / vote on it. Yours Tony. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals [3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 [4] https://twitter.com/vkmc/status/1040321043959754752 [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From renat.akhmerov at gmail.com Thu Oct 18 07:23:17 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 18 Oct 2018 14:23:17 +0700 Subject: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging Message-ID: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> Hi Oslo Team, Can we retain the "blocking" executor for now in Oslo Messaging? Some background... For a while we had to use Oslo Messaging with the "blocking" executor in Mistral because of the incompatibility of the MySQL driver with green threads when choosing the "eventlet" executor. Under certain conditions we would get deadlocks between green threads. Some time ago we switched to using the PyMySQL driver, which is eventlet-friendly, and did a number of tests that showed that we could safely switch to the "eventlet" executor (with that driver), so we introduced a new option in Mistral where we could choose an executor in Oslo Messaging. The corresponding bug is [1]. The issue is that we recently found that not everything actually works as expected when using the combination of PyMySQL + the "eventlet" executor. We also tried the "threading" executor and the system seems to work with it, but surprisingly performance is much worse. Given all of that, we'd like to ask the Oslo Team not to remove the "blocking" executor completely for now, if that's possible. We have a strong motivation to switch to "eventlet" for other reasons (parallelism => better performance etc.) but it seems like we need some time to make the switch smoothly.
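For concreteness, the choice being discussed here is the "executor" argument passed when building an RPC server with Oslo Messaging. A minimal sketch follows; the transport URL, topic and endpoint below are placeholders, not Mistral's real wiring:

    import oslo_messaging as messaging
    from oslo_config import cfg

    class ExampleEndpoint(object):
        # Hypothetical RPC endpoint for illustration only.
        def ping(self, ctxt, arg):
            return arg

    # Placeholder transport URL and topic, not Mistral's actual config.
    transport = messaging.get_rpc_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    target = messaging.Target(topic='example_topic', server='server-1')

    # 'blocking', 'eventlet' or 'threading', for as long as 'blocking'
    # remains available in oslo.messaging.
    server = messaging.get_rpc_server(
        transport, target, [ExampleEndpoint()], executor='eventlet')
    server.start()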
[1] https://bugs.launchpad.net/mistral/+bug/1696469 Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Oct 18 07:31:54 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 18 Oct 2018 09:31:54 +0200 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: <5BC75C4C.1090108@fastmail.com> References: <7e83990d-7620-f5af-9275-1ece9f84e26b@redhat.com> <5BC75C4C.1090108@fastmail.com> Message-ID: On 10/17/18 5:59 PM, Joshua Harlow wrote: > Dmitry Tantsur wrote: >> On 10/10/18 7:41 PM, Greg Hill wrote: >>> I've been out of the openstack loop for a few years, so I hope this >>> reaches the right folks. >>> >>> Josh Harlow (original author of taskflow and related libraries) and I >>> have been discussing the option of moving taskflow out of the >>> openstack umbrella recently. This move would likely also include the >>> futurist and automaton libraries that are primarily used by taskflow. >> >> Just for completeness: futurist and automaton are also heavily relied on >> by ironic without using taskflow. > > When did futurist get used??? nice :) > > (I knew automaton was, but maybe I knew futurist was to and I forgot, lol). I'm pretty sure you did, it happened back in Mitaka :) > >> >>> The idea would be to just host them on github and use the regular >>> Github features for Issues, PRs, wiki, etc, in the hopes that this >>> would spur more development. Taskflow hasn't had any substantial >>> contributions in several years and it doesn't really seem that the >>> current openstack devs have a vested interest in moving it forward. I >>> would like to move it forward, but I don't have an interest in being >>> bound by the openstack workflow (this is why the project stagnated as >>> core reviewers were pulled on to other projects and couldn't keep up >>> with the review backlog, so contributions ground to a halt). >>> >>> I guess I'm putting it forward to the larger community. Does anyone >>> have any objections to us doing this? Are there any non-obvious >>> technicalities that might make such a transition difficult? Who would >>> need to be made aware so they could adjust their own workflows? >>> >>> Or would it be preferable to just fork and rename the project so >>> openstack can continue to use the current taskflow version without >>> worry of us breaking features? 
>>> >>> Greg From gmann at ghanshyammann.com Thu Oct 18 10:33:15 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 18 Oct 2018 19:33:15 +0900 Subject: [openstack-dev] [nova] API updates week 18-42 Message-ID: <16686bc8d18.bc99b62e27513.3498927092792757708@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. Weekly Office Hour: =============== What we discussed this week: - Discussed the API extensions work. - Discussed 2 new bugs which need more logs for further debugging; added bug comments. Planned Features: ============== Below are the API related features for Stein. Ref - https://etherpad.openstack.org/p/stein-nova-subteam-tracking (feel free to add API items there if you are working on or have found any). NOTE: the sequence order is not the priority; items are listed as per their start date. 1. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein+status:open - Weekly Progress: the last patch has +2 and the others have +A and are on the gate. 2. Handling a down cell - https://blueprints.launchpad.net/nova/+spec/handling-down-cell - https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged) - Weekly Progress: tssurya has updated the patches on this. Can we get this into a runway? 3. Servers IPs non-unique network names: - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Merged - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: No progress. 4. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: No progress. 5. Boot instance specific storage backend - https://blueprints.launchpad.net/nova/+spec/boot-instance-specific-storage-backend - https://review.openstack.org/#/q/topic:bp/boot-instance-specific-storage-backend+(status:open+OR+status:merged) - Weekly Progress: COMPLETED 6. Add API ref guideline for body text (takashin) - https://review.openstack.org/#/c/605628/ - Weekly Progress: Reviewed most of the patches. Specs: 7. Detach and attach boot volumes - https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged) - Weekly Progress: under review. Kevin has updated the spec with review comment fixes. 8.
Nova API policy updates https://blueprints.launchpad.net/nova/+spec/granular-api-policy Spec: https://review.openstack.org/#/c/547850/ - Weekly Progress: No progress in this. first concentrating on its dependency on 'consistent policy name' - https://review.openstack.org/#/c/606214/ 9. Nova API cleanup https://blueprints.launchpad.net/nova/+spec/api-consistency-cleanup Spec: https://review.openstack.org/#/c/603969/ - Weekly Progress: No progress on this. I will update this with all cleanup in next week. 10. Support deleting data volume when destroy instance(Brin Zhang) - https://review.openstack.org/#/c/580336/ - Weekly Progress: No Progress. Bugs: ==== This week Bug Progress: https://etherpad.openstack.org/p/nova-api-weekly-bug-report Critical: 0->0 High importance: 2->1 By Status: New: 4->2 Confirmed/Triage: 32-> 32 In-progress: 35->36 Incomplete: 3->5 ===== Total: 74->75 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. -gmann From ifatafekn at gmail.com Thu Oct 18 12:15:13 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 18 Oct 2018 15:15:13 +0300 Subject: [openstack-dev] [requirements][vitrage] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate Message-ID: Hi, In the last three days Vitrage gate is broken due to the new requirement of SQLAlchemy-Utils==0.33.6. We get the following error [1]: 2018-10-18 09:21:13.946 | Collecting SQLAlchemy-Utils===0.33.6 (from -c /opt/stack/new/requirements/upper-constraints.txt (line 31)) 2018-10-18 09:21:14.070 | Downloading http://mirror.mtl01.inap.openstack.org/wheel/ubuntu-16.04-x86_64/sqlalchemy-utils/SQLAlchemy_Utils-0.33.6-py2.py3-none-any.whl (88kB) 2018-10-18 09:21:14.105 | Exception: 2018-10-18 09:21:14.105 | Traceback (most recent call last): 2018-10-18 09:21:14.105 | File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main 2018-10-18 09:21:14.105 | status = self.run(options, args) 2018-10-18 09:21:14.105 | File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 335, in run 2018-10-18 09:21:14.105 | wb.build(autobuilding=True) 2018-10-18 09:21:14.105 | File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build 2018-10-18 09:21:14.105 | self.requirement_set.prepare_files(self.finder) 2018-10-18 09:21:14.105 | File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, in prepare_files 2018-10-18 09:21:14.105 | ignore_dependencies=self.ignore_dependencies)) 2018-10-18 09:21:14.105 | File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 620, in _prepare_file 2018-10-18 09:21:14.105 | session=self.session, hashes=hashes) 2018-10-18 09:21:14.106 | File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 821, in unpack_url 2018-10-18 09:21:14.106 | hashes=hashes 2018-10-18 09:21:14.106 | File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 663, in unpack_http_url 2018-10-18 09:21:14.106 | unpack_file(from_path, location, content_type, link) 2018-10-18 09:21:14.106 | File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 599, in unpack_file 2018-10-18 09:21:14.106 | flatten=not filename.endswith('.whl') 2018-10-18 09:21:14.106 | File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 484, in unzip_file 2018-10-18 09:21:14.106 | zip = zipfile.ZipFile(zipfp, allowZip64=True) 2018-10-18 09:21:14.106 | File "/usr/lib/python2.7/zipfile.py", line 770, in __init__ 2018-10-18 09:21:14.106 | 
self._RealGetContents() 2018-10-18 09:21:14.106 | File "/usr/lib/python2.7/zipfile.py", line 839, in _RealGetContents 2018-10-18 09:21:14.106 | raise BadZipfile("Bad magic number for central directory") 2018-10-18 09:21:14.106 | BadZipfile: Bad magic number for central directory Can we move back to version 0.33.5? or is there another solution? Thanks, Ifat [1] http://logs.openstack.org/39/611539/1/check/vitrage-dsvm-api-py27/c6a16c5/logs/devstacklog.txt.gz -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Oct 18 12:21:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Oct 2018 07:21:57 -0500 Subject: [openstack-dev] [Openstack-sigs] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps In-Reply-To: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> References: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> Message-ID: <20181018122157.GA3125@sm-workstation> On Wed, Oct 17, 2018 at 10:41:36AM -0500, Matt Riedemann wrote: > On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote: > > > > As you may know, unfortunately, Horizon doesn't support all features > > provided by APIs. That's why we created feature gaps list [1]. > > > > I'd got a lot of great conversations with projects teams during the PTG > > and we tried to figure out what should be done prioritize these tasks. > > It's really helpful for Horizon to get feedback from other teams to > > understand what features should be implemented next. > > > > While I'm filling launchpad with new bugs and blueprints for [1], it > > would be good to review this list again and find some volunteers to > > decrease feature gaps. > > > > [1] https://etherpad.openstack.org/p/horizon-feature-gap > > > > Thanks everybody for any of your contributions to Horizon. > > +openstack-sigs > +openstack-operators > > I've left some notes for nova. This looks very similar to the compute API > OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to > really work on without some user/operator feedback - maybe we can get the > user work group involved in trying to help prioritize what people really > want that is missing from horizon, at least for compute? > > [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc > > -- > > Thanks, > > Matt I also have a cinderclient OSC gap analysis I've started working on. It might be useful to add a Horizon column to this list too. https://ethercalc.openstack.org/cinderclient-osc-gap Sean From bdobreli at redhat.com Thu Oct 18 12:45:08 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 18 Oct 2018 14:45:08 +0200 Subject: [openstack-dev] [tripleo][ui][tempest][oooq] Refreshing plugins from git In-Reply-To: <20181018001743.swmj3icwzlezoqdd@localhost.localdomain> References: <20181018001743.swmj3icwzlezoqdd@localhost.localdomain> Message-ID: <4b61614b-fbd3-3da0-3f60-c9a8516c3844@redhat.com> On 10/18/18 2:17 AM, Honza Pokorny wrote: > Hello folks, > > I'm working on the automated ui testing blueprint[1], and I think we > need to change the way we ship our tempest tests. 
> > Here is where things stand at the moment: > > * We have a kolla image for tempest > * This image contains the tempest rpm, and the openstack-tempest-all rpm > * The openstack-tempest-all package in turn contains all of the > openstack tempest plugins > * Each of the plugins is shipped as an rpm > > So, in order for a new test in tempest-tripleo-ui to appear in CI we > have to go through at least the following steps: > > * New tempest-tripleo-ui rpm > * New openstack-tempest-all rpm > * New tempest kolla image > > This could easily take a week, if not more. > > What I would like to build is something like the following: > > * Add an option to the tempest-setup.sh script in tripleo-quickstart to > refresh all tempest plugins from git before running any tests > * Optionally specify a zuul change for any of the plugins being > refreshed > * Hook up the test job to patches in tripleo-ui (which tests in > tempest-tripleo-ui are testing) so that I can run a fix and its test > in a single CI job > > This would allow the tripleo-ui team to develop code and tests at the > same time, and prevent breakage before a patch is even merged. > > Here are a few questions: > > * Do you think this is a good idea? This reminds me of the update_containers case, relaxed to the next level: updating from sources instead of rpm. Given that we already have that update_containers mechanism, the idea seems acceptable for CI use only. Although I'd prefer to see the packages and the tempest container (and all that update_containers affects) rebuilt in the same CI job run instead. Though I'm not sure about having different paths for a "new test in tempest-tripleo-ui" getting into the container: executed in CI vs executed via TripleO UI? I think the path it takes should always be the same. But please excuse me if I got the case wrong. [0] https://goo.gl/5bCWRX > * Could we accomplish this by some other, simple mechanism? > > Any helpful suggestions, corrections, and feedback are much appreciated. > > Thanks > > Honza Pokorny > > [1]: https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From aj at suse.com Thu Oct 18 12:57:23 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 18 Oct 2018 14:57:23 +0200 Subject: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate In-Reply-To: References: Message-ID: <715158a6-25fa-846e-149a-22d6e3d07ef5@suse.com> On 18/10/2018 14.15, Ifat Afek wrote: > Hi, > > In the last three days Vitrage gate is broken due to the new requirement > of SQLAlchemy-Utils==0.33.6. > We get the following error [1]: > > [...] > > Can we move back to version 0.33.5? or is there another solution? We discussed that on #openstack-infra, and fixed it each day - and then it appeared again. https://review.openstack.org/611444 is the proposed fix for that - the issue comes from the fact that we build wheels if there are none available and had a race in it. I hope an admin can delete the broken file again and it works again tomorrow - if not, best to speak up quickly on #openstack-infra, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From fungi at yuggoth.org Thu Oct 18 13:17:13 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 18 Oct 2018 13:17:13 +0000 Subject: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate In-Reply-To: <715158a6-25fa-846e-149a-22d6e3d07ef5@suse.com> References: <715158a6-25fa-846e-149a-22d6e3d07ef5@suse.com> Message-ID: <20181018131713.nrnnlihipuvxaabu@yuggoth.org> On 2018-10-18 14:57:23 +0200 (+0200), Andreas Jaeger wrote: > On 18/10/2018 14.15, Ifat Afek wrote: > > Hi, > > > > In the last three days Vitrage gate is broken due to the new requirement > > of SQLAlchemy-Utils==0.33.6. > > We get the following error [1]: > > > > [...] > > > Can we move back to version 0.33.5? or is there another solution? > > We discussed that on #openstack-infra, and fixed it each day - and then it > appeared again. > > https://review.openstack.org/611444 is the proposed fix for that - the > issue comes from the fact that we build wheels if there are none available > and had a race in it. > > I hope an admin can delete the broken file again and it works again tomorrow > - if not, best to speak up quickly on #openstack-infra, It's been deleted (again) and the suspected fix approved so hopefully it won't recur. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Oct 18 13:18:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Oct 2018 08:18:02 -0500 Subject: [openstack-dev] [release] Release countdown for week R-24 and R-23, October 22 - November 2 Message-ID: <20181018131801.GA91056@smcginnis-mbp.local> Development Focus ----------------- Team focus should be on spec approval and implementation for priority features. General Information ------------------- Projects that have been following the cycle-with-milestone release model will be switched over to cycle-with-rc soon [0]. Just a reminder that although the changes mean milestones are no longer required, projects are free to still request one if they feel there is a need. Projects following cycle-with-intermediary with libraries have hopefully seen the mailing list thread on changes there [1]. Projects with libraries following this release model that have unreleased commits are encouraged to request a release for those. But under the new plan, the release team will propose releases for those projects if there has not been one requested before the milestone. PTLs and/or release liaisons - a reminder that we would love to have you around during our weekly meeting [2]. It would also be very helpful if you would linger in the #openstack-release channel during deadline weeks.
[0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135689.html [2] http://eavesdrop.openstack.org/#Release_Team_Meeting Upcoming Deadlines & Dates -------------------------- Stein-1 milestone: October 25 (R-24 week) Forum at OpenStack Summit in Berlin: November 13-15 -- Sean McGinnis (smcginnis) From jungleboyj at gmail.com Thu Oct 18 13:25:24 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 18 Oct 2018 08:25:24 -0500 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: On 10/18/2018 1:35 AM, Tony Breeds wrote: > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at2]. > > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My think for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > sometime, and was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If train causes a problem for trademark reasons then we can always > run the poll +2 +W  Great names proposed above but based on other discussions with people I think the community would be happy with the Train name.  Plus if people ask about it, it gives us an opportunity to share a story that gives insight into the community we are.  :-) Thanks for proposing that path Tony! Jay > I'll leave[3] for marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Oct 18 13:29:34 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 18 Oct 2018 14:29:34 +0100 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: FWIW, I've have no clue if this is serious or not, or what 'Train' refers to... On Thu, Oct 18, 2018 at 2:25 PM Jay S Bryant wrote: > On 10/18/2018 1:35 AM, Tony Breeds wrote: > > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). 
The nominated names and any > qualifying remarks can be seen at2]. > [...] > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My think for this is > [...] > run the poll +2 +W  Great names proposed above but based on other discussions with people I think the community would be happy with the Train name.  Plus if people ask about it, it gives us an opportunity to share a story that gives insight into the community we are.  :-) Thanks for proposing that path Tony! Jay > I'll leave[3] for marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony. > [...] > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Oct 18 13:29:34 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 18 Oct 2018 14:29:34 +0100 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: FWIW, I have no clue if this is serious or not, or what 'Train' refers to... On Thu, Oct 18, 2018 at 2:25 PM Jay S Bryant wrote: > On 10/18/2018 1:35 AM, Tony Breeds wrote: > > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at2]. > [...] > > +2 +W Great names proposed above but based on other discussions with > people I think the community would be happy with the Train name. Plus if > people ask about it, it gives us an opportunity to share a story that gives > insight into the community we are. :-) Thanks for proposing that path > Tony! > > Jay > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Oct 18 13:29:55 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 18 Oct 2018 08:29:55 -0500 Subject: [openstack-dev] [Openstack-sigs] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps In-Reply-To: <20181018122157.GA3125@sm-workstation> References: <45d48394-4605-5ea4-f14d-48c1422b54cc@gmail.com> <20181018122157.GA3125@sm-workstation> Message-ID: <0836f4de-327a-0609-a2d1-5c4aec785f7d@gmail.com> On 10/18/2018 7:21 AM, Sean McGinnis wrote: > On Wed, Oct 17, 2018 at 10:41:36AM -0500, Matt Riedemann wrote: >> On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote: >>> As you may know, unfortunately, Horizon doesn't support all features >>> provided by APIs. That's why we created feature gaps list [1]. >>> >>> I'd got a lot of great conversations with projects teams during the PTG >>> and we tried to figure out what should be done prioritize these tasks. >>> It's really helpful for Horizon to get feedback from other teams to >>> understand what features should be implemented next.
>>> While I'm filling launchpad with new bugs and blueprints for [1], it >>> would be good to review this list again and find some volunteers to >>> decrease feature gaps. >>> >>> [1] https://etherpad.openstack.org/p/horizon-feature-gap >>> >>> Thanks everybody for any of your contributions to Horizon. >> +openstack-sigs >> +openstack-operators >> >> I've left some notes for nova. This looks very similar to the compute API >> OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to >> really work on without some user/operator feedback - maybe we can get the >> user work group involved in trying to help prioritize what people really >> want that is missing from horizon, at least for compute? >> >> [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc >> >> -- >> >> Thanks, >> >> Matt > I also have a cinderclient OSC gap analysis I've started working on. It might > be useful to add a Horizon column to this list too. > > https://ethercalc.openstack.org/cinderclient-osc-gap I had forgotten that we had this.  I have added it to the persistent links at the top of our meeting agenda page so we have it for future reference.  Did the same for the Horizon Feature Gaps.  Think it would be good to have those somewhere that we see them and are reminded about them. Thanks! Jay > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Thu Oct 18 13:35:21 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 18 Oct 2018 08:35:21 -0500 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: <9278c4c4-32d9-4ead-3948-ce66bce8a056@gmail.com> On 10/18/2018 8:29 AM, Neil Jerram wrote: > FWIW, I have no clue if this is serious or not, or what 'Train' > refers to... > Neil, The two Project Team Gathering events in Denver were held at a hotel next to the train line from downtown to the airport.  The crossing signals there had some sort of malfunction in the past, causing them not to properly stop cars when a train was coming.  As a result the trains were required to blow their horns when passing through that area.  Obviously, staying in a hotel by trains that are blowing their horns 24/7 was less than ideal.  As a result, many jokes popped up about Denver and trains. When OpenStack developers think Denver, we think Trains.  :-) Jay > > On Thu, Oct 18, 2018 at 2:25 PM Jay S Bryant > wrote: > > On 10/18/2018 1:35 AM, Tony Breeds wrote: > >> Hello all, >> As per [1] the nomination period for names for the T release have >> now closed (actually 3 days ago sorry). The nominated names and any >> qualifying remarks can be seen at2]. >> [...] >> However I'd like to suggest we skip the CIVS poll and select 'Train' as >> the release name by TC resolution[3].
My think for this is >> >> * It's fun and celebrates a humorous moment in our community >> * As a developer I've heard the T release called Train for quite >> sometime, and was used often at the PTG[4]. >> * As the *next* PTG is also in Colorado we can still choose a >> geographic based name for U[5] >> * If train causes a problem for trademark reasons then we can always >> run the poll > +2 +W  Great names proposed above but based on other discussions > with people I think the community would be happy with the Train > name.  Plus if people ask about it, it gives us an opportunity to > share a story that gives insight into the community we are.  :-)  > Thanks for proposing that path Tony! > > Jay >> I'll leave[3] for marked -W for a week for discussion to happen before the >> TC can consider / vote on it. >> >> Yours Tony. >> >> [1]http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >> [2]https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >> [3]https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >> [4]https://twitter.com/vkmc/status/1040321043959754752 >> [5]https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Oct 18 13:40:16 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 18 Oct 2018 14:40:16 +0100 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: <9278c4c4-32d9-4ead-3948-ce66bce8a056@gmail.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <9278c4c4-32d9-4ead-3948-ce66bce8a056@gmail.com> Message-ID: On Thu, Oct 18, 2018 at 2:35 PM Jay S Bryant wrote: > > > > On 10/18/2018 8:29 AM, Neil Jerram wrote: > > FWIW, I've have no clue if this is serious or not, or what 'Train' refers to... > > Neil, > > The two Project Team Gathering events in Denver were held at a hotel next to the train line from downtown to the airport. The crossing signals there had some sort of malfunction in the past causing them to not stop the cars when a train was coming properly. As a result the trains were required to blow their horns when passing through that area. Obviously staying in a hotel, by trains that are blowing their horns 24/7 was less than ideal. As a result, many jokes popped up about Denver and trains. > > When OpenStack developers think Denver, we think Trains. :-) > > Jay :-) Thanks! Neil From fungi at yuggoth.org Thu Oct 18 13:59:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 18 Oct 2018 13:59:42 +0000 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: <9278c4c4-32d9-4ead-3948-ce66bce8a056@gmail.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <9278c4c4-32d9-4ead-3948-ce66bce8a056@gmail.com> Message-ID: <20181018135942.gzhgv5dtxjrtasny@yuggoth.org> On 2018-10-18 08:35:21 -0500 (-0500), Jay S Bryant wrote: [...] 
> When OpenStack developers think Denver, we think Trains.  :-) [...] As, presumably, do many OpenStack operators now since the Ops Mid-Cycle event was co-located with the most recent PTG. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Thu Oct 18 14:23:17 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 18 Oct 2018 16:23:17 +0200 Subject: [openstack-dev] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" Message-ID: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Hi all, Sorry for chiming in really late in this topic, but I think $subj is worth discussing until we settle harder on the potentially confusing terminology. I think the difference between "Edge" and "Far Edge" is too vague to use these terms in practice. Think about the "edge" metaphor itself: something rarely has several layers of edges. A knife has an edge, there are no far edges. I imagine zooming in and seeing more edges at the edge, and then it's quite cool indeed, but is it really a useful metaphor for those who never used a strong microscope? :) I think in the trivial sense "Far Edge" is a tautology, and should be avoided. As a weak proof of my words, I already see a lot of smart people confusing these two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we adopt a different terminology, even if it less consistent with typical marketing term around the "Edge" movement. Now, I don't have really great suggestions. Something that came up in TripleO discussions [1] is Core/Hub/Edge, which I think reflects the idea better. I'd be very interested to hear your ideas. Dmitry [1] https://etherpad.openstack.org/p/tripleo-edge-mvp From Arkady.Kanevsky at dell.com Thu Oct 18 14:33:41 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 18 Oct 2018 14:33:41 +0000 Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: <2262e154f7a940368d8aff5c55b579f1@AUSX13MPS308.AMER.DELL.COM> Love the idea to have clearer terminology. Suggest we let telco folks to suggest terminology to use. This is not a 3 level hierarchy but much more. There are several layers of aggregation from local to metro, to regional, to DC. And potential multiple layers in each. -----Original Message----- From: Dmitry Tantsur Sent: Thursday, October 18, 2018 9:23 AM To: OpenStack Development Mailing List (not for usage questions); openstack-sigs at lists.openstack.org Subject: [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive information. Hi all, Sorry for chiming in really late in this topic, but I think $subj is worth discussing until we settle harder on the potentially confusing terminology. I think the difference between "Edge" and "Far Edge" is too vague to use these terms in practice. Think about the "edge" metaphor itself: something rarely has several layers of edges. A knife has an edge, there are no far edges. I imagine zooming in and seeing more edges at the edge, and then it's quite cool indeed, but is it really a useful metaphor for those who never used a strong microscope? 
:) [...] _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From bdobreli at redhat.com Thu Oct 18 14:40:19 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 18 Oct 2018 16:40:19 +0200 Subject: Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: <2262e154f7a940368d8aff5c55b579f1@AUSX13MPS308.AMER.DELL.COM> References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> <2262e154f7a940368d8aff5c55b579f1@AUSX13MPS308.AMER.DELL.COM> Message-ID: <711be4b1-8ace-8e7c-d650-37f30ca03395@redhat.com> On 10/18/18 4:33 PM, Arkady.Kanevsky at dell.com wrote: > Love the idea to have clearer terminology. > Suggest we let telco folks to suggest terminology to use. > This is not a 3 level hierarchy but much more. > There are several layers of aggregation from local to metro, to regional, to DC. And potential multiple layers in each. > > [...] > > I'd be very interested to hear your ideas. Similarly to how NUMA distance is equal to the shortest path between NUMA nodes, we could think of edges as facets, and of Edge distance as the shortest path between edge sites, counting from the central Edge (distance 0), or from central Edges if we have those decentralized and there is no single central Edge?
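To illustrate the distance idea with a toy sketch (made-up site names, not tied to any OpenStack API), each site's Edge distance is just a breadth-first shortest path from the central site:

    from collections import deque

    # Toy topology: a central site, aggregation hubs, and edge sites.
    topology = {
        'central': ['hub-1', 'hub-2'],
        'hub-1': ['central', 'edge-1a', 'edge-1b'],
        'hub-2': ['central', 'edge-2a'],
        'edge-1a': ['hub-1'],
        'edge-1b': ['hub-1'],
        'edge-2a': ['hub-2'],
    }

    def edge_distances(start='central'):
        # BFS from the central site; hop count is the "edge distance".
        distances = {start: 0}
        queue = deque([start])
        while queue:
            site = queue.popleft()
            for neighbor in topology[site]:
                if neighbor not in distances:
                    distances[neighbor] = distances[site] + 1
                    queue.append(neighbor)
        return distances

    # {'central': 0, 'hub-1': 1, 'hub-2': 1, 'edge-1a': 2, ...}
    print(edge_distances())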
> > Dmitry > > [1] https://etherpad.openstack.org/p/tripleo-edge-mvp > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Best regards, Bogdan Dobrelya, Irc #bogdando From kgiusti at gmail.com Thu Oct 18 14:59:41 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Thu, 18 Oct 2018 10:59:41 -0400 Subject: [openstack-dev] =?utf-8?b?W21pc3RyYWxdW29zbG9dW21lc3NhZ2luZ10g?= =?utf-8?q?Removing_=E2=80=9Cblocking=E2=80=9D_executor_from_oslo?= =?utf-8?q?=2Emessaging?= In-Reply-To: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> References: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> Message-ID: Hi Renat, The biggest issue with the blocking executor (IMHO) is that it blocks the protocol I/O while RPC processing is in progress. This increases the likelihood that protocol processing will not get done in a timely manner and things start to fail in weird ways. These failures are timing related and are typically hard to reproduce or root-cause. This isn't something we can fix as blocking is the nature of the executor. If we are to leave it in we'd really want to discourage its use. However I'm ok with leaving it available if the policy for using blocking is 'use at your own risk', meaning that bug reports may have to be marked 'won't fix' if we have reason to believe that blocking is at fault. That implies removing 'blocking' as the default executor value in the API and having applications explicitly choose it. And we keep the deprecation warning. We could perhaps implement time duration checks around the executor callout and log a warning if the executor blocked for an extended amount of time (extended=TBD). Other opinions so we can come to a consensus? On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov wrote: > Hi Oslo Team, > > Can we retain “blocking” executor for now in Oslo Messaging? > > > Some background.. > > For a while we had to use Oslo Messaging with “blocking” executor in > Mistral because of incompatibility of MySQL driver with green threads when > choosing “eventlet” executor. Under certain conditions we would get > deadlocks between green threads. Some time ago we switched to using PyMysql > driver which is eventlet friendly and did a number of tests that showed > that we could safely switch to “eventlet” executor (with that driver) so we > introduced a new option in Mistral where we could choose an executor in > Oslo Messaging. The corresponding bug is [1]. > > The issue is that we recently found that not everything actually works as > expected when using combination PyMysql + “eventlet” executor. We also > tried “threading” executor and the system *seems* to work with it but > surprisingly performance is much worse. > > Given all of that we’d like to ask Oslo Team not to remove “blocking” > executor for now completely, if that’s possible. We have a strong > motivation to switch to “eventlet” for other reasons (parallelism => better > performance etc.) but seems like we need some time to make it smoothly. 
> > > [1] https://bugs.launchpad.net/mistral/+bug/1696469 > > > Thanks > > Renat Akhmerov > @Nokia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Oct 18 15:04:08 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 19 Oct 2018 00:04:08 +0900 Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <16687b48b08.f0e2b6e739058.2040720891818751860@ghanshyammann.com> Hi All, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. -gmann From jaypipes at gmail.com Thu Oct 18 15:08:05 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 18 Oct 2018 11:08:05 -0400 Subject: [openstack-dev] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: <4b385272-8453-bcb7-b809-00d6df14f768@gmail.com> On 10/18/2018 10:23 AM, Dmitry Tantsur wrote: > Hi all, > > Sorry for chiming in really late in this topic, but I think $subj is > worth discussing until we settle harder on the potentially confusing > terminology. > > I think the difference between "Edge" and "Far Edge" is too vague to use > these terms in practice. Think about the "edge" metaphor itself: > something rarely has several layers of edges. A knife has an edge, there > are no far edges. I imagine zooming in and seeing more edges at the > edge, and then it's quite cool indeed, but is it really a useful > metaphor for those who never used a strong microscope? :) > > I think in the trivial sense "Far Edge" is a tautology, and should be > avoided. As a weak proof of my words, I already see a lot of smart > people confusing these two and actually use Central/Edge where they mean > Edge/Far Edge. I suggest we adopt a different terminology, even if it > less consistent with typical marketing term around the "Edge" movement. > > Now, I don't have really great suggestions. Something that came up in > TripleO discussions [1] is Core/Hub/Edge, which I think reflects the > idea better. > > I'd be very interested to hear your ideas. "The Edge" and "Lunatic Fringe". There, problem solved. -jay From openstack at medberry.net Thu Oct 18 15:39:13 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 18 Oct 2018 09:39:13 -0600 Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: I'm fine with Train but I'm also fine with just adding it to the list and voting on it. It will win. Also, for those not familiar with the debian/ubuntu command "sl", now is the time to become so. apt install sl sl -Flea #ftw On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds wrote: > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at2]. 
> > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My think for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > sometime, and was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If train causes a problem for trademark reasons then we can always > run the poll > > I'll leave[3] for marked -W for a week for discussion to happen before the > TC can consider / vote on it. > > Yours Tony. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] > https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Thu Oct 18 15:41:16 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 18 Oct 2018 09:41:16 -0600 Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: and any talks I give in Denver (Forum, Ops, Main) will include "sl". It's handy in a variety of ways. On Thu, Oct 18, 2018 at 9:39 AM David Medberry wrote: > I'm fine with Train but I'm also fine with just adding it to the list and > voting on it. It will win. > > Also, for those not familiar with the debian/ubuntu command "sl", now is > the time to become so. > > apt install sl > sl -Flea #ftw > > On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds > wrote: > >> Hello all, >> As per [1] the nomination period for names for the T release have >> now closed (actually 3 days ago sorry). The nominated names and any >> qualifying remarks can be seen at2]. >> >> Proposed Names >> * Tarryall >> * Teakettle >> * Teller >> * Telluride >> * Thomas >> * Thornton >> * Tiger >> * Tincup >> * Timnath >> * Timber >> * Tiny Town >> * Torreys >> * Trail >> * Trinidad >> * Treasure >> * Troublesome >> * Trussville >> * Turret >> * Tyrone >> >> Proposed Names that do not meet the criteria >> * Train >> >> However I'd like to suggest we skip the CIVS poll and select 'Train' as >> the release name by TC resolution[3]. My think for this is >> >> * It's fun and celebrates a humorous moment in our community >> * As a developer I've heard the T release called Train for quite >> sometime, and was used often at the PTG[4]. >> * As the *next* PTG is also in Colorado we can still choose a >> geographic based name for U[5] >> * If train causes a problem for trademark reasons then we can always >> run the poll >> >> I'll leave[3] for marked -W for a week for discussion to happen before the >> TC can consider / vote on it. >> >> Yours Tony. 
>> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >> [3] >> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >> [4] https://twitter.com/vkmc/status/1040321043959754752 >> [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Thu Oct 18 15:55:02 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 18 Oct 2018 11:55:02 -0400 Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur wrote: > Hi all, > > Sorry for chiming in really late in this topic, but I think $subj is worth > discussing until we settle harder on the potentially confusing terminology. > > I think the difference between "Edge" and "Far Edge" is too vague to use > these > terms in practice. Think about the "edge" metaphor itself: something > rarely has > several layers of edges. A knife has an edge, there are no far edges. I > imagine > zooming in and seeing more edges at the edge, and then it's quite cool > indeed, > but is it really a useful metaphor for those who never used a strong > microscope? :) > > I think in the trivial sense "Far Edge" is a tautology, and should be > avoided. > As a weak proof of my words, I already see a lot of smart people confusing > these > two and actually use Central/Edge where they mean Edge/Far Edge. I suggest > we > adopt a different terminology, even if it less consistent with typical > marketing > term around the "Edge" movement. > FWIW, we created rough definitions of "edge" and "far edge" during the edge WG session in Denver. It's mostly based on latency to the end user, though we also talked about quantities of compute resources, if someone can find the pictures. See the picture and table here: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview > > Now, I don't have really great suggestions. Something that came up in > TripleO > discussions [1] is Core/Hub/Edge, which I think reflects the idea better. > I'm also fine with these names, as they do describe the concepts well. :) // jim > I'd be very interested to hear your ideas. > > Dmitry > > [1] https://etherpad.openstack.org/p/tripleo-edge-mvp > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Thu Oct 18 16:34:43 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 18 Oct 2018 11:34:43 -0500 Subject: [openstack-dev] =?utf-8?b?W21pc3RyYWxdW29zbG9dW21lc3NhZ2luZ10g?= =?utf-8?q?Removing_=E2=80=9Cblocking=E2=80=9D_executor_from_oslo=2Emessag?= =?utf-8?q?ing?= In-Reply-To: References: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> Message-ID: <1c5b7519-154e-7085-42c7-7c402d6a48df@nemebean.com> On 10/18/18 9:59 AM, Ken Giusti wrote: > Hi Renat, > > The biggest issue with the blocking executor (IMHO) is that it blocks > the protocol I/O while  RPC processing is in progress.  This increases > the likelihood that protocol processing will not get done in a timely > manner and things start to fail in weird ways.  These failures are > timing related and are typically hard to reproduce or root-cause.   This > isn't something we can fix as blocking is the nature of the executor. > > If we are to leave it in we'd really want to discourage its use. Since it appears the actual executor code lives in futurist, would it be possible to remove the entrypoint for blocking from oslo.messaging and have mistral just pull it in with their setup.cfg? Seems like they should be able to add something like: oslo.messaging.executors = blocking = futurist:SynchronousExecutor to their setup.cfg to keep it available to them even if we drop it from oslo.messaging itself. That seems like a good way to strongly discourage use of it while still making it available to projects that are really sure they want it. > > However I'm ok with leaving it available if the policy for using > blocking is 'use at your own risk', meaning that bug reports may have to > be marked 'won't fix' if we have reason to believe that blocking is at > fault.  That implies removing 'blocking' as the default executor value > in the API and having applications explicitly choose it.  And we keep > the deprecation warning. > > We could perhaps implement time duration checks around the executor > callout and log a warning if the executor blocked for an extended amount > of time (extended=TBD). > > Other opinions so we can come to a consensus? > > > On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov > wrote: > > Hi Oslo Team, > > Can we retain “blocking” executor for now in Oslo Messaging? > > > Some background.. > > For a while we had to use Oslo Messaging with “blocking” executor in > Mistral because of incompatibility of MySQL driver with green > threads when choosing “eventlet” executor. Under certain conditions > we would get deadlocks between green threads. Some time ago we > switched to using PyMysql driver which is eventlet friendly and did > a number of tests that showed that we could safely switch to > “eventlet” executor (with that driver) so we introduced a new option > in Mistral where we could choose an executor in Oslo Messaging. The > corresponding bug is [1]. > > The issue is that we recently found that not everything actually > works as expected when using combination PyMysql + “eventlet” > executor. We also tried “threading” executor and the system *seems* > to work with it but surprisingly performance is much worse. > > Given all of that we’d like to ask Oslo Team not to remove > “blocking” executor for now completely, if that’s possible. We have > a strong motivation to switch to “eventlet” for other reasons > (parallelism => better performance etc.) but seems like we need some > time to make it smoothly. 
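For illustration, here is roughly how the entry point route sketched above would resolve at runtime. This is a guess at the mechanics, assuming stevedore (which oslo.messaging uses for executor loading) and the namespace string taken straight from the snippet:

    from stevedore import driver

    # If mistral's setup.cfg declared, under [entry_points]:
    #   oslo.messaging.executors =
    #       blocking = futurist:SynchronousExecutor
    # then this lookup would keep finding the 'blocking' executor even
    # after oslo.messaging drops its own entry point for it.
    mgr = driver.DriverManager(
        namespace='oslo.messaging.executors', name='blocking')
    print(mgr.driver)  # e.g. futurist.SynchronousExecutor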
> > > [1] https://bugs.launchpad.net/mistral/+bug/1696469 > > > Thanks > > Renat Akhmerov > @Nokia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Ken Giusti  (kgiusti at gmail.com ) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mdulko at redhat.com Thu Oct 18 16:43:17 2018 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Thu, 18 Oct 2018 18:43:17 +0200 Subject: [openstack-dev] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: On Thu, 2018-10-18 at 17:35 +1100, Tony Breeds wrote: > Hello all, > As per [1] the nomination period for names for the T release have > now closed (actually 3 days ago sorry). The nominated names and any > qualifying remarks can be seen at2]. > > Proposed Names > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria > * Train > > However I'd like to suggest we skip the CIVS poll and select 'Train' as > the release name by TC resolution[3]. My think for this is > > * It's fun and celebrates a humorous moment in our community > * As a developer I've heard the T release called Train for quite > sometime, and was used often at the PTG[4]. > * As the *next* PTG is also in Colorado we can still choose a > geographic based name for U[5] > * If train causes a problem for trademark reasons then we can always > run the poll > > I'll leave[3] for marked -W for a week for discussion to happen before the > TC can consider / vote on it. I'm totally supportive for OpenStack Train, but got to say that OpenStack Troublesome is a wonderful name as well. :) > Yours Tony. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html > [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals > [3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 > [4] https://twitter.com/vkmc/status/1040321043959754752 > [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z From openstack at fried.cc Thu Oct 18 16:43:33 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 18 Oct 2018 11:43:33 -0500 Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <4b02964cfa8a43b29a11d98a6606ccd6@AUSX13MPS308.AMER.DELL.COM> References: <20181018063539.GC6589@thor.bakeyournoodle.com> <4b02964cfa8a43b29a11d98a6606ccd6@AUSX13MPS308.AMER.DELL.COM> Message-ID: <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79@fried.cc> Sorry, I'm opposed to this idea. I admit I don't understand the political framework, nor have I read the governing documents beyond [1], but that document makes it clear that this is supposed to be a community-wide vote. 
Is it really legal for the TC (or whoever has merge rights on [2]) to merge a patch that gives that same body the power to take the decision out of the hands of the community? So it's really an oligarchy that gives its constituency the illusion of democracy until something comes up that it feels like not having a vote on? The fact that it's something relatively "unimportant" (this time) is not a comfort. Not that I think the TC would necessarily move forward with [2] in the face of substantial opposition from non-TC "cores" or whatever. I will vote enthusiastically for "Train". But a vote it should be. -efried [1] https://governance.openstack.org/tc/reference/release-naming.html [2] https://review.openstack.org/#/c/611511/ On 10/18/2018 10:52 AM, Arkady.Kanevsky at dell.com wrote: > +1 for the poll. > > Let’s follow well established process. > > If we want to add Train as one of the options for the name I am OK with it. > >   > > *From:* Jonathan Mills > *Sent:* Thursday, October 18, 2018 10:49 AM > *To:* openstack-sigs at lists.openstack.org > *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack > >   > > [EXTERNAL EMAIL] > Please report any suspicious attachments, links, or requests for > sensitive information. > > +1 for just having a poll > >   > > On Thu, Oct 18, 2018 at 11:39 AM David Medberry > wrote: > > I'm fine with Train but I'm also fine with just adding it to the > list and voting on it. It will win. > >   > > Also, for those not familiar with the debian/ubuntu command "sl", > now is the time to become so. > >   > > apt install sl > > sl -Flea #ftw > >   > > On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds > > wrote: > > Hello all, >     As per [1] the nomination period for names for the T release > have > now closed (actually 3 days ago sorry).  The nominated names and any > qualifying remarks can be seen at2]. > > Proposed Names >  * Tarryall >  * Teakettle >  * Teller >  * Telluride >  * Thomas >  * Thornton >  * Tiger >  * Tincup >  * Timnath >  * Timber >  * Tiny Town >  * Torreys >  * Trail >  * Trinidad >  * Treasure >  * Troublesome >  * Trussville >  * Turret >  * Tyrone > > Proposed Names that do not meet the criteria >  * Train > > However I'd like to suggest we skip the CIVS poll and select > 'Train' as > the release name by TC resolution[3].  My think for this is > >  * It's fun and celebrates a humorous moment in our community >  * As a developer I've heard the T release called Train for quite >    sometime, and was used often at the PTG[4]. >  * As the *next* PTG is also in Colorado we can still choose a >    geographic based name for U[5] >  * If train causes a problem for trademark reasons then we can > always >    run the poll > > I'll leave[3] for marked -W for a week for discussion to happen > before the > TC can consider / vote on it. > > Yours Tony. 
> > [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
> [3]
> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
> [4] https://twitter.com/vkmc/status/1040321043959754752
> [5]
> https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z

From rfolco at redhat.com  Thu Oct 18 17:05:33 2018
From: rfolco at redhat.com (Rafael Folco)
Date: Thu, 18 Oct 2018 14:05:33 -0300
Subject: [openstack-dev] [tripleo] TripleO CI Summary: Sprint 20
Message-ID:

Greetings,

The TripleO CI team has just completed Sprint 20 (Sep-27 thru Oct-17).
The following is a summary of completed work during this sprint cycle:

- Bootstrapped upstream standalone job on Fedora 28.
- Migrated RDO jobs in Software Factory to native Zuul v3.
- Refactored Tempest tooling (python-tempestconf).
- Made possible the use of zuul inventory variables from the job to the
  quickstart workflow.
- Build-test-packages role is now using zuul variables to gather
  information on the changes alongside the old ZUUL_CHANGES method, which
  is now deprecated and will be removed in the future.
- Translated bash variables into ansible as part of the ZuulV3 migration
  work.
- Enabled stackviz on openstack/openstack-ansible-os_tempest role.
- Enabled projects to override tempest tests via zuul variables in the
  job definition.

The sprint task board for the CI team has moved from Trello to Taiga [1].
The Ruck and Rover notes for this sprint have been tracked in the
etherpad [2].

The planned work for the next sprint focuses on running the upstream
standalone job on Fedora 28 and continuing part of the work done in
Sprint 20, including python-tempestconf refactoring and enabling
stackviz.

The Ruck and Rover for this sprint are Sagi Shnaidman (sshnaidm) and
Rafael Folco (rfolco). Please direct questions or queries to them
regarding CI status or issues in #tripleo, ideally to whomever has the
‘|ruck’ suffix on their nick.

Thanks,
Folco

--
Rafael Folco
Senior Software Engineer
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From remo at rm.ht  Thu Oct 18 17:08:12 2018
From: remo at rm.ht (Remo Mattei)
Date: Thu, 18 Oct 2018 10:08:12 -0700
Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack
In-Reply-To: <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79 at fried.cc>
References: <20181018063539.GC6589 at thor.bakeyournoodle.com>
 <4b02964cfa8a43b29a11d98a6606ccd6 at AUSX13MPS308.AMER.DELL.COM>
 <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79 at fried.cc>
Message-ID: <78F870CF-14EA-4BC5-BA97-FF0D871ED141 at rm.ht>

Michal, that will never work, it’s 11 characters long

> On Oct 18, 2018, at 09:43, Eric Fried  wrote:
>
> Sorry, I'm opposed to this idea.
>
> I admit I don't understand the political framework, nor have I read the
> governing documents beyond [1], but that document makes it clear that
> this is supposed to be a community-wide vote.
Is it really legal for > the TC (or whoever has merge rights on [2]) to merge a patch that gives > that same body the power to take the decision out of the hands of the > community? So it's really an oligarchy that gives its constituency the > illusion of democracy until something comes up that it feels like not > having a vote on? The fact that it's something relatively "unimportant" > (this time) is not a comfort. > > Not that I think the TC would necessarily move forward with [2] in the > face of substantial opposition from non-TC "cores" or whatever. > > I will vote enthusiastically for "Train". But a vote it should be. > > -efried > > [1] https://governance.openstack.org/tc/reference/release-naming.html > [2] https://review.openstack.org/#/c/611511/ > > On 10/18/2018 10:52 AM, Arkady.Kanevsky at dell.com wrote: >> +1 for the poll. >> >> Let’s follow well established process. >> >> If we want to add Train as one of the options for the name I am OK with it. >> >> >> >> *From:* Jonathan Mills >> *Sent:* Thursday, October 18, 2018 10:49 AM >> *To:* openstack-sigs at lists.openstack.org >> *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack >> >> >> >> [EXTERNAL EMAIL] >> Please report any suspicious attachments, links, or requests for >> sensitive information. >> >> +1 for just having a poll >> >> >> >> On Thu, Oct 18, 2018 at 11:39 AM David Medberry > >> wrote: >> >> I'm fine with Train but I'm also fine with just adding it to the >> list and voting on it. It will win. >> >> >> >> Also, for those not familiar with the debian/ubuntu command "sl", >> now is the time to become so. >> >> >> >> apt install sl >> >> sl -Flea #ftw >> >> >> >> On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds >> >> wrote: >> >> Hello all, >> As per [1] the nomination period for names for the T release >> have >> now closed (actually 3 days ago sorry). The nominated names and any >> qualifying remarks can be seen at2]. >> >> Proposed Names >> * Tarryall >> * Teakettle >> * Teller >> * Telluride >> * Thomas >> * Thornton >> * Tiger >> * Tincup >> * Timnath >> * Timber >> * Tiny Town >> * Torreys >> * Trail >> * Trinidad >> * Treasure >> * Troublesome >> * Trussville >> * Turret >> * Tyrone >> >> Proposed Names that do not meet the criteria >> * Train >> >> However I'd like to suggest we skip the CIVS poll and select >> 'Train' as >> the release name by TC resolution[3]. My think for this is >> >> * It's fun and celebrates a humorous moment in our community >> * As a developer I've heard the T release called Train for quite >> sometime, and was used often at the PTG[4]. >> * As the *next* PTG is also in Colorado we can still choose a >> geographic based name for U[5] >> * If train causes a problem for trademark reasons then we can >> always >> run the poll >> >> I'll leave[3] for marked -W for a week for discussion to happen >> before the >> TC can consider / vote on it. >> >> Yours Tony. 
>> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >> [3] >> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >> [4] https://twitter.com/vkmc/status/1040321043959754752 >> [5] >> https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >> > >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> >> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Thu Oct 18 17:08:29 2018 From: rfolco at redhat.com (Rafael Folco) Date: Thu, 18 Oct 2018 14:08:29 -0300 Subject: [openstack-dev] [tripleo] TripleO CI Summary: Sprint 20 In-Reply-To: References: Message-ID: Oops. Missing footer info. [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/sprint-20-193 [2] https://review.rdoproject.org/etherpad/p/ruckrover-sprint20.1 On Thu, Oct 18, 2018 at 2:05 PM Rafael Folco wrote: > Greetings, > > The TripleO CI team has just completed Sprint 20 (Sep-27 thru Oct-17). > The following is a summary of completed work during this sprint cycle: > > - > > Bootstrapped upstream standalone job on Fedora 28. > - > > Migrated RDO jobs in Software Factory to native Zuul v3. > - > > Refactored Tempest tooling (python-tempestconf). > - > > Made possible the use of zuul inventory variables from the job to the > quickstart workflow > - > > Build-test-packages role is now using zuul variables to gather > information on the changes alongside the old ZUUL_CHANGES method, which is > now deprecated and will be removed in the future. > - > > Translated bash variables into ansible as part of the ZuulV3 migration > work. > - Enabled stackviz on openstack/openstack-ansible-os_tempest role. > - > > Enabled projects to override tempest tests via zuul variables in the > job definition. > > > The sprint task board for CI team has moved from Trello to Taiga [1]. The > Ruck and Rover notes for this sprint has been tracked in the etherpad [2]. > > The planned work for the next sprint focuses in running the upstream > standalone job in Fedora 28 and continuing part of the work done in Sprint > 20, including python-tempestconf refactoring, and enabling stackviz. > > The Ruck and Rover for this sprint are Sagi Shnaidman (sshnaidm) and > Rafael Folco (rfolco). Please direct questions or queries to them regarding > CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix > on their nick. > > Thanks, > Folco > > -- > Rafael Folco > Senior Software Engineer > -- Rafael Folco Senior Software Engineer -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From duc.openstack at gmail.com  Thu Oct 18 17:08:38 2018
From: duc.openstack at gmail.com (Duc Truong)
Date: Thu, 18 Oct 2018 10:08:38 -0700
Subject: [openstack-dev] [senlin] Meeting cancelled for this week
Message-ID:

Everyone,

I’m canceling the Senlin meeting this week because I’m unavailable. We
will resume at the scheduled time next week.

Duc (dtruong)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ed at leafe.com  Thu Oct 18 18:16:54 2018
From: ed at leafe.com (Ed Leafe)
Date: Thu, 18 Oct 2018 13:16:54 -0500
Subject: [openstack-dev] [all] Naming the T release of OpenStack
In-Reply-To:
References: <20181018063539.GC6589 at thor.bakeyournoodle.com>
Message-ID:

On Oct 18, 2018, at 11:43 AM, Michał Dulko  wrote:
>
> I'm totally supportive for OpenStack Train, but got to say that
> OpenStack Troublesome is a wonderful name as well. :)

Yeah, I’m sure that the marketing people won’t have a problem vetting that name. :)

-- Ed Leafe

From sombrafam at gmail.com  Thu Oct 18 19:35:23 2018
From: sombrafam at gmail.com (Erlon Cruz)
Date: Thu, 18 Oct 2018 16:35:23 -0300
Subject: [openstack-dev] [Openstack-sigs] [horizon][nova][cinder][keystone][glance][neutron][swift] Horizon feature gaps
In-Reply-To: <0836f4de-327a-0609-a2d1-5c4aec785f7d at gmail.com>
References: <45d48394-4605-5ea4-f14d-48c1422b54cc at gmail.com>
 <20181018122157.GA3125 at sm-workstation>
 <0836f4de-327a-0609-a2d1-5c4aec785f7d at gmail.com>
Message-ID:

There's also this document with very detailed information:
https://docs.google.com/spreadsheets/d/18ZtWC75BNCwFqLfFpCGGJ9uPVBvUXX0xuXP1yYG0NDA/edit#gid=0

Erlon

On Thu, 18 Oct 2018 at 10:30, Jay S Bryant  wrote:

>
>
> On 10/18/2018 7:21 AM, Sean McGinnis wrote:
> > On Wed, Oct 17, 2018 at 10:41:36AM -0500, Matt Riedemann wrote:
> >> On 10/17/2018 9:24 AM, Ivan Kolodyazhny wrote:
> >>> As you may know, unfortunately, Horizon doesn't support all features
> >>> provided by APIs. That's why we created feature gaps list [1].
> >>>
> >>> I'd got a lot of great conversations with projects teams during the PTG
> >>> and we tried to figure out what should be done prioritize these tasks.
> >>> It's really helpful for Horizon to get feedback from other teams to
> >>> understand what features should be implemented next.
> >>>
> >>> While I'm filling launchpad with new bugs and blueprints for [1], it
> >>> would be good to review this list again and find some volunteers to
> >>> decrease feature gaps.
> >>>
> >>> [1] https://etherpad.openstack.org/p/horizon-feature-gap
> >>>
> >>> Thanks everybody for any of your contributions to Horizon.
> >> +openstack-sigs
> >> +openstack-operators
> >>
> >> I've left some notes for nova. This looks very similar to the compute API
> >> OSC gap analysis I did [1]. Unfortunately it's hard to prioritize what to
> >> really work on without some user/operator feedback - maybe we can get the
> >> user work group involved in trying to help prioritize what people really
> >> want that is missing from horizon, at least for compute?
> >>
> >> [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt
> > I also have a cinderclient OSC gap analysis I've started working on. It might
> > be useful to add a Horizon column to this list too.
> >
> > https://ethercalc.openstack.org/cinderclient-osc-gap
> I had forgotten that we had this.  I have added it to the persistent
> links at the top of our meeting agenda page so we have it for future
> reference.
Did the same for the Horizon Feature Gaps.  Think it would be good to have
those somewhere that we see them and are reminded about them.
>
> Thanks!
> Jay
> > Sean
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Thu Oct 18 20:10:48 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Thu, 18 Oct 2018 22:10:48 +0200
Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack
In-Reply-To: <78F870CF-14EA-4BC5-BA97-FF0D871ED141 at rm.ht>
References: <20181018063539.GC6589 at thor.bakeyournoodle.com>
 <4b02964cfa8a43b29a11d98a6606ccd6 at AUSX13MPS308.AMER.DELL.COM>
 <6befdfa3-66ef-2e2d-1d9a-b1ba01f19b79 at fried.cc>
 <78F870CF-14EA-4BC5-BA97-FF0D871ED141 at rm.ht>
Message-ID:

> On 18.10.2018, at 19:08, Remo Mattei wrote:
>
> Michal, that will never work, it’s 11 characters long

Shorter could be OpenStack Trouble ;)

>
>
>
>
>> On Oct 18, 2018, at 09:43, Eric Fried  wrote:
>>
>> Sorry, I'm opposed to this idea.
>>
>> I admit I don't understand the political framework, nor have I read the
>> governing documents beyond [1], but that document makes it clear that
>> this is supposed to be a community-wide vote. Is it really legal for
>> the TC (or whoever has merge rights on [2]) to merge a patch that gives
>> that same body the power to take the decision out of the hands of the
>> community? So it's really an oligarchy that gives its constituency the
>> illusion of democracy until something comes up that it feels like not
>> having a vote on? The fact that it's something relatively "unimportant"
>> (this time) is not a comfort.
>>
>> Not that I think the TC would necessarily move forward with [2] in the
>> face of substantial opposition from non-TC "cores" or whatever.
>>
>> I will vote enthusiastically for "Train". But a vote it should be.
>>
>> -efried
>>
>> [1] https://governance.openstack.org/tc/reference/release-naming.html
>> [2] https://review.openstack.org/#/c/611511/
>>
>> On 10/18/2018 10:52 AM, Arkady.Kanevsky at dell.com wrote:
>>> +1 for the poll.
>>>
>>> Let’s follow well established process.
>>>
>>> If we want to add Train as one of the options for the name I am OK with it.
>>>
>>>
>>>
>>> *From:* Jonathan Mills
>>> *Sent:* Thursday, October 18, 2018 10:49 AM
>>> *To:* openstack-sigs at lists.openstack.org
>>> *Subject:* Re: [Openstack-sigs] [all] Naming the T release of OpenStack
>>>
>>>
>>>
>>> [EXTERNAL EMAIL]
>>> Please report any suspicious attachments, links, or requests for
>>> sensitive information.
>>>
>>> +1 for just having a poll
>>>
>>>
>>>
>>> On Thu, Oct 18, 2018 at 11:39 AM David Medberry >> > wrote:
>>>
>>> I'm fine with Train but I'm also fine with just adding it to the
>>> list and voting on it. It will win.
>>>
>>>
>>>
>>> Also, for those not familiar with the debian/ubuntu command "sl",
>>> now is the time to become so.
>>> >>> >>> >>> apt install sl >>> >>> sl -Flea #ftw >>> >>> >>> >>> On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds >>> > wrote: >>> >>> Hello all, >>> As per [1] the nomination period for names for the T release >>> have >>> now closed (actually 3 days ago sorry). The nominated names and any >>> qualifying remarks can be seen at2]. >>> >>> Proposed Names >>> * Tarryall >>> * Teakettle >>> * Teller >>> * Telluride >>> * Thomas >>> * Thornton >>> * Tiger >>> * Tincup >>> * Timnath >>> * Timber >>> * Tiny Town >>> * Torreys >>> * Trail >>> * Trinidad >>> * Treasure >>> * Troublesome >>> * Trussville >>> * Turret >>> * Tyrone >>> >>> Proposed Names that do not meet the criteria >>> * Train >>> >>> However I'd like to suggest we skip the CIVS poll and select >>> 'Train' as >>> the release name by TC resolution[3]. My think for this is >>> >>> * It's fun and celebrates a humorous moment in our community >>> * As a developer I've heard the T release called Train for quite >>> sometime, and was used often at the PTG[4]. >>> * As the *next* PTG is also in Colorado we can still choose a >>> geographic based name for U[5] >>> * If train causes a problem for trademark reasons then we can >>> always >>> run the poll >>> >>> I'll leave[3] for marked -W for a week for discussion to happen >>> before the >>> TC can consider / vote on it. >>> >>> Yours Tony. >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html >>> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals >>> [3] >>> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53 >>> [4] https://twitter.com/vkmc/status/1040321043959754752 >>> [5] >>> https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >>> >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From johfulto at redhat.com Thu Oct 18 20:45:09 2018 From: johfulto at redhat.com (John Fulton) Date: Thu, 18 Oct 2018 16:45:09 -0400 Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen wrote: > > On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur wrote: >> >> Hi all, >> >> Sorry for chiming in really late in this topic, but I think $subj is worth >> discussing until we settle harder on the potentially confusing terminology. 
>> >> I think the difference between "Edge" and "Far Edge" is too vague to use these >> terms in practice. Think about the "edge" metaphor itself: something rarely has >> several layers of edges. A knife has an edge, there are no far edges. I imagine >> zooming in and seeing more edges at the edge, and then it's quite cool indeed, >> but is it really a useful metaphor for those who never used a strong microscope? :) >> >> I think in the trivial sense "Far Edge" is a tautology, and should be avoided. >> As a weak proof of my words, I already see a lot of smart people confusing these >> two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we >> adopt a different terminology, even if it less consistent with typical marketing >> term around the "Edge" movement. > > > FWIW, we created rough definitions of "edge" and "far edge" during the edge WG session in Denver. > It's mostly based on latency to the end user, though we also talked about quantities of compute resources, if someone can find the pictures. Perhaps these are the pictures Jim was referring to? https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0# I'm involved in some TripleO work called the split control plane: https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html After the PTG I saw that the split control plane was compatible with the type of deployment discussed at the edge WG session in Denver and described the compatibility at: https://etherpad.openstack.org/p/tripleo-edge-working-group-split-control-plane > See the picture and table here: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview > >> Now, I don't have really great suggestions. Something that came up in TripleO >> discussions [1] is Core/Hub/Edge, which I think reflects the idea better. > > > I'm also fine with these names, as they do describe the concepts well. :) > > // jim I'm fine with these terms too. In split control plane there's a deployment method for deploying a central site and then deploying remote sites independently. That deployment method could be used to deploy Core/Hub/Edge sites too. E.g. deploy the Core using Heat stack N. Deploy a Hub using stack N+1 and then deploy an Edge using stack N+2 etc. John >> >> I'd be very interested to hear your ideas. 
>> >> Dmitry >> >> [1] https://etherpad.openstack.org/p/tripleo-edge-mvp >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jim at jimrollenhagen.com Thu Oct 18 21:43:29 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 18 Oct 2018 17:43:29 -0400 Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: On Thu, Oct 18, 2018 at 4:45 PM John Fulton wrote: > On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen > wrote: > > > > On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur > wrote: > >> > >> Hi all, > >> > >> Sorry for chiming in really late in this topic, but I think $subj is > worth > >> discussing until we settle harder on the potentially confusing > terminology. > >> > >> I think the difference between "Edge" and "Far Edge" is too vague to > use these > >> terms in practice. Think about the "edge" metaphor itself: something > rarely has > >> several layers of edges. A knife has an edge, there are no far edges. I > imagine > >> zooming in and seeing more edges at the edge, and then it's quite cool > indeed, > >> but is it really a useful metaphor for those who never used a strong > microscope? :) > >> > >> I think in the trivial sense "Far Edge" is a tautology, and should be > avoided. > >> As a weak proof of my words, I already see a lot of smart people > confusing these > >> two and actually use Central/Edge where they mean Edge/Far Edge. I > suggest we > >> adopt a different terminology, even if it less consistent with typical > marketing > >> term around the "Edge" movement. > > > > > > FWIW, we created rough definitions of "edge" and "far edge" during the > edge WG session in Denver. > > It's mostly based on latency to the end user, though we also talked > about quantities of compute resources, if someone can find the pictures. > > Perhaps these are the pictures Jim was referring to? > > https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0# That's it, thank you! // jim > > > I'm involved in some TripleO work called the split control plane: > > https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html > > After the PTG I saw that the split control plane was compatible with > the type of deployment discussed at the edge WG session in Denver and > described the compatibility at: > > https://etherpad.openstack.org/p/tripleo-edge-working-group-split-control-plane > > > See the picture and table here: > https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview > > > >> Now, I don't have really great suggestions. Something that came up in > TripleO > >> discussions [1] is Core/Hub/Edge, which I think reflects the idea > better. > > > > > > I'm also fine with these names, as they do describe the concepts well. :) > > > > // jim > > I'm fine with these terms too. In split control plane there's a > deployment method for deploying a central site and then deploying > remote sites independently. 
That deployment method could be used to > deploy Core/Hub/Edge sites too. E.g. deploy the Core using Heat stack > N. Deploy a Hub using stack N+1 and then deploy an Edge using stack > N+2 etc. > > John > > >> > >> I'd be very interested to hear your ideas. > >> > >> Dmitry > >> > >> [1] https://etherpad.openstack.org/p/tripleo-edge-mvp > >> > >> _______________________________________________ > >> openstack-sigs mailing list > >> openstack-sigs at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Thu Oct 18 22:52:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 18 Oct 2018 17:52:45 -0500 Subject: [openstack-dev] [neutron] [drivers] Cancelling drivers meeting on October 19th Message-ID: Dear Neutron team, We don't have triaged RFEs to be discussed during the drivers meeting on October 19th. So let's skip that meeting and we will resume normally on the 26th. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From liu.xuefeng1 at zte.com.cn Fri Oct 19 00:27:11 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Fri, 19 Oct 2018 08:27:11 +0800 (CST) Subject: [openstack-dev] =?utf-8?b?562U5aSNOiBbc2VubGluXSBNZWV0aW5nIGNh?= =?utf-8?q?ncelled_for_this_week?= In-Reply-To: References: CAN81NT5KMG3uc4Y7jtg=kv_toy==Y4a5yWzHb3CAw8EMfhLLrw@mail.gmail.com Message-ID: <201810190827118670359@zte.com.cn> ok. 原始邮件 发件人:DucTruong; 收件人:openstack-dev at lists.openstack.org; 日期:2018-10-19 01:08:18 主题:[openstack-dev] [senlin] Meeting cancelled for this week __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Everyone, I’m canceling the Senlin meeting this week because I’m unavailable. We will resume at the scheduled time next week. Duc (dtruong) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Fri Oct 19 00:41:54 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 19 Oct 2018 09:41:54 +0900 Subject: [openstack-dev] [Searchlight] Workshop at the Korea User Group monthly meetup tonight Message-ID: Hi all, I will have a small session at the OpenStack Korea User Group monthly meetup (October 2018) tonight. 
Stop by if you're around the Gangnam (Seoul, South Korea) area :)

- *Location:* TOZ Gangnam Tower, near Gangnam station (near exit 2,
  https://goo.gl/maps/CVYDpgYdF5J2 or http://naver.me/5W7E2LQY)
- *Event's detail:* https://festa.io/events/118
- *Time*: 19:00~22:00 (UTC+9)

My session will be from 21:10 ~ 21:40 (UTC+9)

Bests,

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shu.mutow at gmail.com  Fri Oct 19 02:49:39 2018
From: shu.mutow at gmail.com (Shu M.)
Date: Fri, 19 Oct 2018 11:49:39 +0900
Subject: [openstack-dev] [zun][zun-ui] How to get "host" attribute for image
Message-ID:

Hi folks,

I found the following commit to show the "host" attribute for an image.

https://github.com/openstack/zun/commit/72eac7c8f281de64054dfa07e3f31369c5a251f0

But I could not get the "host" for an image with zun-show.

I think image-list and image-show need to show "host" for admin, so I'd
like to add "host" for images into zun-ui.
Please let me know how to show the "host" attribute.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hongbin034 at gmail.com  Fri Oct 19 03:28:26 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Thu, 18 Oct 2018 23:28:26 -0400
Subject: [openstack-dev] [zun][zun-ui] How to get "host" attribute for image
In-Reply-To:
References:
Message-ID:

Shu,

It looks like the 'host' field was added to the DB table but not exposed
via the REST API by mistake. See if this patch fixes the issue:
https://review.openstack.org/#/c/611753/ .

Best regards,
Hongbin

On Thu, Oct 18, 2018 at 10:50 PM Shu M.  wrote:

> Hi folks,
>
> I found the following commit to show "host" attribute for image.
>
> https://github.com/openstack/zun/commit/72eac7c8f281de64054dfa07e3f31369c5a251f0
>
> But I could not get the "host" for image with zun-show.
>
> I think image-list and image-show need to show "host" for admin, so I'd
> like to add "host" for image into zun-ui.
> Please let me know how to show "host" attribute.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bxzhu_5355 at 163.com  Fri Oct 19 03:33:52 2018
From: bxzhu_5355 at 163.com (Boxiang Zhu)
Date: Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00)
Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend
Message-ID: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355 at 163.com>

Hi folks,

When I use the LVM backend to create a volume and then attach it to a VM,
I can migrate the volume (in-use) from one host to another. The nova
libvirt driver will call 'rebase' to finish it. But if using the ceph
backend, it raises the exception 'Swap only supports host devices', so
live migration of an in-use volume is not supported there. Is anyone
working on this now? Or is there any way for me to migrate an in-use
volume with the ceph backend?

Cheers,
Boxiang
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From renat.akhmerov at gmail.com  Fri Oct 19 04:02:06 2018
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Fri, 19 Oct 2018 11:02:06 +0700
Subject: [openstack-dev] [mistral][oslo][messaging] Removing
 =?utf-8?Q?=E2=80=9Cblocking=E2=80=9D_?=executor from oslo.messaging
In-Reply-To: <1c5b7519-154e-7085-42c7-7c402d6a48df at nemebean.com>
References: <682cb1b7-9c29-4242-90bd-7f28022a0bdb at Spark>
 <1c5b7519-154e-7085-42c7-7c402d6a48df at nemebean.com>
Message-ID: <741d60d8-e5ba-4a60-9a9d-792fbb3b0abb at Spark>

Hi,

@Ken, I understand your considerations. I get that. I’m only asking not
to remove it *for now*. And yes, if you think its use should be
discouraged, that’s totally fine. But practically, it’s been the only
reliable option for Mistral so far. That may be our fault, I have to
admit, because we weren’t able to make it work well with other executor
types, but we’ll try to fix that.

By the way, I was playing with different options yesterday and it seems
that setting the executor to “threading” and the
“executor_thread_pool_size” property to 1 behaves the same way as
“blocking”. So maybe that’s an option for us too, even if “blocking” is
completely removed. But I would still be in favour of having some extra
time to prove that with thorough testing.

@Ben, including the executor via setup.cfg also looks OK to me. I see no
issues with this approach.

Thanks

Renat Akhmerov
@Nokia
On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote:
>
>
> On 10/18/18 9:59 AM, Ken Giusti wrote:
> > Hi Renat,
> >
> > The biggest issue with the blocking executor (IMHO) is that it blocks
> > the protocol I/O while RPC processing is in progress. This increases
> > the likelihood that protocol processing will not get done in a timely
> > manner and things start to fail in weird ways. These failures are
> > timing related and are typically hard to reproduce or root-cause. This
> > isn't something we can fix as blocking is the nature of the executor.
> >
> > If we are to leave it in we'd really want to discourage its use.
>
> Since it appears the actual executor code lives in futurist, would it be
> possible to remove the entrypoint for blocking from oslo.messaging and
> have mistral just pull it in with their setup.cfg? Seems like they
> should be able to add something like:
>
> oslo.messaging.executors =
> blocking = futurist:SynchronousExecutor
>
> to their setup.cfg to keep it available to them even if we drop it from
> oslo.messaging itself. That seems like a good way to strongly discourage
> use of it while still making it available to projects that are really
> sure they want it.
>
> >
> > However I'm ok with leaving it available if the policy for using
> > blocking is 'use at your own risk', meaning that bug reports may have to
> > be marked 'won't fix' if we have reason to believe that blocking is at
> > fault. That implies removing 'blocking' as the default executor value
> > in the API and having applications explicitly choose it. And we keep
> > the deprecation warning.
> >
> > We could perhaps implement time duration checks around the executor
> > callout and log a warning if the executor blocked for an extended amount
> > of time (extended=TBD).
> >
> > Other opinions so we can come to a consensus?
> >
> >
> > On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov
> > wrote:
> >
> > Hi Oslo Team,
> >
> > Can we retain “blocking” executor for now in Oslo Messaging?
> >
> >
> > Some background..
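For reference, the “threading” experiment mentioned above boils down to
roughly the following. This is an untested sketch: transport, target and
endpoints are set up elsewhere (as mistral already does), and the exact
option wiring on the mistral side is assumed:

    import oslo_messaging

    # explicit executor choice instead of relying on the deprecated
    # "blocking" default
    server = oslo_messaging.get_rpc_server(
        transport, target, endpoints,
        executor='threading')

    # mistral.conf (or any oslo.config based service)
    [DEFAULT]
    # a single worker serializes dispatch, which is what makes
    # "threading" behave much like "blocking"
    executor_thread_pool_size = 1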
> > For a while we had to use Oslo Messaging with “blocking” executor in
> > Mistral because of incompatibility of MySQL driver with green
> > threads when choosing “eventlet” executor. Under certain conditions
> > we would get deadlocks between green threads. Some time ago we
> > switched to using PyMysql driver which is eventlet friendly and did
> > a number of tests that showed that we could safely switch to
> > “eventlet” executor (with that driver) so we introduced a new option
> > in Mistral where we could choose an executor in Oslo Messaging. The
> > corresponding bug is [1].
> >
> > The issue is that we recently found that not everything actually
> > works as expected when using combination PyMysql + “eventlet”
> > executor. We also tried “threading” executor and the system *seems*
> > to work with it but surprisingly performance is much worse.
> >
> > Given all of that we’d like to ask Oslo Team not to remove
> > “blocking” executor for now completely, if that’s possible. We have
> > a strong motivation to switch to “eventlet” for other reasons
> > (parallelism => better performance etc.) but seems like we need some
> > time to make it smoothly.
> >
> >
> > [1] https://bugs.launchpad.net/mistral/+bug/1696469
> >
> >
> > Thanks
> >
> > Renat Akhmerov
> > @Nokia
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Ken Giusti  (kgiusti at gmail.com )
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wangjun at yovole.com  Fri Oct 19 06:14:54 2018
From: wangjun at yovole.com (=?gb2312?B?zfW/oQ==?=)
Date: Fri, 19 Oct 2018 06:14:54 +0000
Subject: [openstack-dev] [cinder]ceph rbd replication group support
Message-ID: <3de2af708d464e9aadb070f3e6a143c9 at yovole.com>

Hi:

I have a question about the rbd replication group support. I would like
to know the plan or roadmap for it. Is anybody working on it?

Blueprint:
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support

Thanks

Confidentiality: This message is intended solely for the named recipient.
If you are not the intended recipient, please delete it immediately, do
not use or distribute it in any way, and notify the sender of the
misdelivery. Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gergely.csatari at nokia.com  Fri Oct 19 08:05:24 2018
From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest))
Date: Fri, 19 Oct 2018 08:05:24 +0000
Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"
In-Reply-To:
References:
Message-ID:

Hi,

I’m adding the ECG mailing list to the discussion.

I think the root of the problem is that there is no single definition of
„the edge” (except for [1]), but it changes from group to group or use
case to use case.
What I recognise as the commonalities in these edge definitions are:
1) a distributed cloud infrastructure (kind of a cloud of clouds),
2) the need for automation of everything, and 3) resource constraints
for the control plane. The different edge variants put different
emphasis on these common needs based on the use case discussed.

To get a clearer understanding of these definitions we could try the
following:

1. Always add the definition of these to the given context
2. Check what other groups are using and adapt to that
3. Define our own language and expect everyone else to adopt

Br,
Gerg0

[1]: https://en.wikipedia.org/wiki/The_Edge

From: Jim Rollenhagen
Sent: Thursday, October 18, 2018 11:43 PM
To: fulton at redhat.com; OpenStack Development Mailing List (not for usage questions)
Cc: openstack-sigs at lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

On Thu, Oct 18, 2018 at 4:45 PM John Fulton > wrote:
On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen > wrote:
>
> On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur > wrote:
>>
>> Hi all,
>>
>> Sorry for chiming in really late in this topic, but I think $subj is worth
>> discussing until we settle harder on the potentially confusing terminology.
>>
>> I think the difference between "Edge" and "Far Edge" is too vague to use these
>> terms in practice. Think about the "edge" metaphor itself: something rarely has
>> several layers of edges. A knife has an edge, there are no far edges. I imagine
>> zooming in and seeing more edges at the edge, and then it's quite cool indeed,
>> but is it really a useful metaphor for those who never used a strong microscope? :)
>>
>> I think in the trivial sense "Far Edge" is a tautology, and should be avoided.
>> As a weak proof of my words, I already see a lot of smart people confusing these
>> two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we
>> adopt a different terminology, even if it less consistent with typical marketing
>> term around the "Edge" movement.
>
>
> FWIW, we created rough definitions of "edge" and "far edge" during the edge WG session in Denver.
> It's mostly based on latency to the end user, though we also talked about quantities of compute resources, if someone can find the pictures.

Perhaps these are the pictures Jim was referring to?
https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0#

That's it, thank you!

// jim

I'm involved in some TripleO work called the split control plane:
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

After the PTG I saw that the split control plane was compatible with the
type of deployment discussed at the edge WG session in Denver and
described the compatibility at:
https://etherpad.openstack.org/p/tripleo-edge-working-group-split-control-plane

> See the picture and table here: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview

>> Now, I don't have really great suggestions. Something that came up in TripleO
>> discussions [1] is Core/Hub/Edge, which I think reflects the idea better.
>
>
> I'm also fine with these names, as they do describe the concepts well. :)
>
> // jim

I'm fine with these terms too. In split control plane there's a
deployment method for deploying a central site and then deploying remote
sites independently. That deployment method could be used to deploy
Core/Hub/Edge sites too. E.g.
deploy the Core using Heat stack N. Deploy a Hub using stack N+1 and then deploy an Edge using stack N+2 etc. John >> >> I'd be very interested to hear your ideas. >> >> Dmitry >> >> [1] https://etherpad.openstack.org/p/tripleo-edge-mvp >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Fri Oct 19 08:17:31 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Fri, 19 Oct 2018 10:17:31 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C20C76C@EX10MBOX03.pnnl.gov> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C20C76C@EX10MBOX03.pnnl.gov> Message-ID: > No, I mean, Consul would be an extra dependency in a big list of dependencies OpenStack already has. OpenStack has so many it is causing operators to reconsider adoption. I'm asking, if existing dependencies can be made to solve the problem without adding more? > > Stateful dependencies are much harder to deal with then stateless ones, as they take much more operator care/attention. Consul is stateful as is etcd, and etcd is already a dependency. > > Can etcd be used instead so as not to put more load on the operators? While etcd is a strong KV store it lacks many features consul has. Using consul for DNS based service discovery is very easy to implement without making it a dependency. So we will start with a "external" consul and see how to handle the service registration without modifying the kolla containers or any kolla-ansible code. All the best, Flo > > Thanks, > Kevin > ________________________________________ > From: Florian Engelmann [florian.engelmann at everyware.ch] > Sent: Wednesday, October 10, 2018 12:18 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > > by "another storage system" you mean the KV store of consul? That's just > someting consul brings with it... > > consul is very strong in doing health checks > > Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M: >> etcd is an already approved openstack dependency. Could that be used instead of consul so as to not add yet another storage system? coredns with the https://coredns.io/plugins/etcd/ plugin would maybe do what you need? 
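For the archive, a sketch of how that "external" consul registration
could look without touching kolla at all. The service name, port and
health-check URL are only examples, and none of this is tested yet:

    # /etc/consul.d/keystone.json on the node running the keystone container
    {
      "service": {
        "name": "keystone",
        "port": 5000,
        "check": {
          "http": "http://127.0.0.1:5000/v3",
          "interval": "10s"
        }
      }
    }

    # reload the local agent; the service then resolves via consul's DNS
    # interface (default port 8600)
    consul reload
    dig @127.0.0.1 -p 8600 keystone.service.consul SRV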
>> >> Thanks, >> Kevin >> ________________________________________ >> From: Florian Engelmann [florian.engelmann at everyware.ch] >> Sent: Monday, October 08, 2018 3:14 AM >> To: openstack-dev at lists.openstack.org >> Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints >> >> Hi, >> >> I would like to start a discussion about some changes and additions I >> would like to see in in kolla and kolla-ansible. >> >> 1. Keepalived is a problem in layer3 spine leaf networks as any floating >> IP can only exist in one leaf (and VRRP is a problem in layer3). I would >> like to use consul and registrar to get rid of the "internal" floating >> IP and use consuls DNS service discovery to connect all services with >> each other. >> >> 2. Using "ports" for external API (endpoint) access is a major headache >> if a firewall is involved. I would like to configure the HAProxy (or >> fabio) for the external access to use "Host:" like, eg. "Host: >> keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. >> Any customer would just need HTTPS access and not have to open all those >> ports in his firewall. For some enterprise customers it is not possible >> to request FW changes like that. >> >> 3. HAProxy is not capable to handle "read/write" split with Galera. I >> would like to introduce ProxySQL to be able to scale Galera. >> >> 4. HAProxy is fine but fabio integrates well with consul, statsd and >> could be connected to a vault cluster to manage secure certificate access. >> >> 5. I would like to add vault as Barbican backend. >> >> 6. I would like to add an option to enable tokenless authentication for >> all services with each other to get rid of all the openstack service >> passwords (security issue). >> >> What do you think about it? >> >> All the best, >> Florian >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5210 bytes
Desc: not available
URL:

From florian.engelmann at everyware.ch  Fri Oct 19 08:21:38 2018
From: florian.engelmann at everyware.ch (Florian Engelmann)
Date: Fri, 19 Oct 2018 10:21:38 +0200
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints
In-Reply-To: <200b392c-1a41-f53a-6822-01c809aa22fd at gmail.com>
References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b at everyware.ch>
 <8c65a2ba-09e9-d286-9d3c-e577a59185cb at everyware.ch>
 <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66 at everyware.ch>
 <200b392c-1a41-f53a-6822-01c809aa22fd at gmail.com>
Message-ID: <99173bcd-6209-e7f5-6d34-38440ebd66bf at everyware.ch>

>
> On 17.10.2018 15:45, Florian Engelmann wrote:
>>> On 10.10.2018 09:06, Florian Engelmann wrote:
>>>> Now I get you. I would say all configuration templates need to be
>>>> changed to allow, eg.
>>>>
>>>> $ grep http /etc/kolla/cinder-volume/cinder.conf
>>>> glance_api_servers = http://10.10.10.5:9292
>>>> auth_url = http://internal.somedomain.tld:35357
>>>> www_authenticate_uri = http://internal.somedomain.tld:5000
>>>> auth_url = http://internal.somedomain.tld:35357
>>>> auth_endpoint = http://internal.somedomain.tld:5000
>>>>
>>>> to look like:
>>>>
>>>> glance_api_servers = http://glance.service.somedomain.consul:9292
>>>> auth_url = http://keystone.service.somedomain.consul:35357
>>>> www_authenticate_uri = http://keystone.service.somedomain.consul:5000
>>>> auth_url = http://keystone.service.somedomain.consul:35357
>>>> auth_endpoint = http://keystone.service.somedomain.consul:5000
>>>>
>>>
>>> The idea with Consul looks interesting.
>>>
>>> But I don't get your issue with VIP address and spine-leaf network.
>>>
>>> What we have:
>>> - controller1 behind leaf1 A/B pair with MLAG
>>> - controller2 behind leaf2 A/B pair with MLAG
>>> - controller3 behind leaf3 A/B pair with MLAG
>>>
>>> The VIP address is active on one controller server.
>>> When the server fail then the VIP will move to another controller
>>> server.
>>> Where do you see a SPOF in this configuration?
>>>
>>
>> So leaf1 2 and 3 have to share the same L2 domain, right (in IPv4
>> network)?
>>
> Yes, they share L2 domain but we have ARP and ND suppression enabled.
>
> It is an EVPN network where there is a L3 with VxLANs between leafs and
> spines.
>
> So we don't care where a server is connected. It can be connected to any
> leaf.

Ok that sounds very interesting. Is it possible to share some internals?
Which switch vendor/model do you use? What does your IP address schema
look like? If VxLAN is used between spine and leafs, are you using VxLAN
networking for OpenStack as well? Where is your VTEP?

>
>
>> But we wanna deploy a layer3 spine-leaf network where every leaf is
>> its own L2 domain and everything above is layer3.
>>
>> eg:
>>
>> leaf1 = 10.1.1.0/24
>> leaf2 = 10.1.2.0/24
>> leaf3 = 10.1.3.0/24
>>
>> So a VIP like, eg. 10.1.1.10 could only exist in leaf1
>>
> In my opinion it's a very constrained environment, I don't like the idea.
> > > Regards,
> > Piotr
> >
> > __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL:

From gfidente at redhat.com Fri Oct 19 08:24:09 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 19 Oct 2018 10:24:09 +0200 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade In-Reply-To: References: Message-ID: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com>
On 10/14/18 5:07 PM, Emilien Macchi wrote:
> I recently wrote a blog post about how we could upgrade a host from
> Docker containers to Podman containers.
>
> http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/

thanks Emilien, this looks nice and I believe the basic approach consisting of:

1) create the podman systemd unit
2) delete the docker container
3) start the podman container

could be used to upgrade the Ceph containers as well (via ceph-ansible) -- Giulio Fidente GPG KEY: 08D733BA

From florian.engelmann at everyware.ch Fri Oct 19 08:24:12 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Fri, 19 Oct 2018 10:24:12 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <30bf2e74-3dc9-c55c-3045-1f1f02f57b71@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> <30bf2e74-3dc9-c55c-3045-1f1f02f57b71@everyware.ch> Message-ID: <2a18700f-0bc5-0193-b858-f5b51975427e@everyware.ch>
> currently we are testing what is needed to get consul + registrator and
> kolla/kolla-ansible play together nicely.
>
> To get the services created in consul by registrator, all kolla
> containers running relevant services (eg. keystone, nova, cinder, ...
> but also mariadb, memcached, es, ...) need to "--expose" their ports.
> Registrator will use those "exposed" ports to add a service to consul.
>
> Is there any (existing) option to add those ports to the container
> bootstrap?
> What about "docker_common_options"?
>
> command should look like:
>
> docker run -d --expose 5000/tcp --expose 35357/tcp --name=keystone ...
>

After testing registrator I recognized the project seems to be unmaintained. So we won't use registrator. I just need to find another method to register a container (service) in consul after the container has started. I would like to do so without changing any kolla container or kolla-ansible code.

> > Am 10/10/18 um 9:18 AM schrieb Florian Engelmann:
>> by "another storage system" you mean the KV store of consul? That's
>> just something consul brings with it...
>>
>> consul is very strong in doing health checks
>>
>> Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M:
>>> etcd is an already approved openstack dependency. Could that be used
>>> instead of consul so as to not add yet another storage system?
>>> coredns with the https://coredns.io/plugins/etcd/ plugin would maybe
>>> do what you need?
>>>
>>> Thanks,
>>> Kevin

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL:
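One registrator-free option for the thread above is to have a small post-start hook on each node register the service against the local consul agent's HTTP API. A rough sketch follows; the service name, address, port and health check are illustrative only, not actual kolla values:

$ curl -X PUT http://127.0.0.1:8500/v1/agent/service/register -d '{
    "Name": "keystone",
    "Address": "10.1.1.5",
    "Port": 5000,
    "Check": {"TCP": "10.1.1.5:5000", "Interval": "10s"}
  }'
# once registered, the usual consul DNS interface serves it:
$ dig @127.0.0.1 -p 8600 keystone.service.consul +short
10.1.1.5

Such a hook can live entirely outside the kolla images (e.g. in whatever unit or playbook starts the container), which keeps with the goal of not changing any kolla or kolla-ansible code.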
From tony at bakeyournoodle.com Fri Oct 19 09:54:29 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 19 Oct 2018 20:54:29 +1100 Subject: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack In-Reply-To: <20181018063539.GC6589@thor.bakeyournoodle.com> References: <20181018063539.GC6589@thor.bakeyournoodle.com> Message-ID: <20181019095428.GA9399@thor.bakeyournoodle.com>
On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
> As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry). The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria
> * Train

I have re-worked my openstack/governance change [1] to ask the TC to accept adding Train to the poll, as (partially) described in [2]. I present the names above to the community and Foundation marketing team for consideration. The list above does contain Train; clearly, if the TC does not approve [1], Train will not be included in the poll when it is created. I apologise for any offence or slight caused by my previous email in this thread. It was well intentioned albeit, with hindsight, poorly thought through. Yours Tony. [1] https://review.openstack.org/#/c/611511/ [2] https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From sgolovat at redhat.com Fri Oct 19 11:08:48 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Fri, 19 Oct 2018 13:08:48 +0200 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade In-Reply-To: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> Message-ID:
Hi,

On Fri, Oct 19, 2018 at 10:24 AM Giulio Fidente wrote:
> On 10/14/18 5:07 PM, Emilien Macchi wrote:
> > I recently wrote a blog post about how we could upgrade a host from
> > Docker containers to Podman containers.
> >
> > http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/
> thanks Emilien, this looks nice and I believe the basic approach
> consisting of:
>
> 1) create the podman systemd unit
> 2) delete the docker container
> 3) start the podman container
>

What about several chained containers? If you delete one, the next one may fail. It would be better to stop a container and delete it only once all dependent containers have been migrated, started, and validated. What Emilien described works for all cases. It would be nice to have the same procedure for the Ceph cases as well, IMHO.
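For a single service, the resulting stop-migrate-validate-delete sequence could look roughly like the sketch below. The unit, container and image names here are made up for illustration; the real mechanism is the one in the tripleo-heat-templates changes and Emilien's blog post:

$ docker stop keystone                          # stop, but do not delete yet
$ podman create --name keystone <keystone-image>
$ cat > /etc/systemd/system/podman-keystone.service <<'EOF'
[Unit]
Description=keystone container managed by podman
[Service]
Restart=always
ExecStart=/usr/bin/podman start -a keystone
ExecStop=/usr/bin/podman stop -t 10 keystone
[Install]
WantedBy=multi-user.target
EOF
$ systemctl daemon-reload && systemctl enable --now podman-keystone
# validate the service, then clean up the old docker container:
$ docker rm keystone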
> could be used to upgrade the Ceph containers as well (via ceph-ansible) > -- > Giulio Fidente > GPG KEY: 08D733BA > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Oct 19 12:00:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 19 Oct 2018 08:00:33 -0400 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade In-Reply-To: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> Message-ID: On Fri, Oct 19, 2018 at 4:24 AM Giulio Fidente wrote: > 1) create the podman systemd unit > 2) delete the docker container > We finally went with "stop the docker container" 3) start the podman container > and 4) delete the docker container later in THT upgrade_tasks. And yes +1 to do the same in ceph-ansible if possible. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Oct 19 12:10:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 19 Oct 2018 13:10:23 +0100 (BST) Subject: [openstack-dev] [placement] update 18-42 Message-ID: HTML: https://anticdent.org/placement-update-18-42.html After a gap from when I was away last week, here's this week's placement update. The situation this week remains much the same as last week: focus on specs and the bigger issues associated with extraction. # Most Important The major factors that need attention are managing database migrations and associated tooling and getting the ball rolling on properly producing documentation. More on both of these things in the extraction section below. # What's Changed mnaser found an [issue](https://launchpad.net/bugs/1798163) with the migrations associated with consumer ids. A fix was created [in nova](https://review.openstack.org/#/c/611115/) and [ported into placement](https://review.openstack.org/#/c/611165/) but it raised some [questions on what to do with those migrations](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135796.html) in the extracted placement. Some work also needs to be done to check to make sure the solutions will work in postgresql, as it might tickle the way it is more strict about group by clauses. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16. -1. * [In progress placement bugs](https://goo.gl/vzGGDQ) 8. # Specs There's a [spec review sprint](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135748.html) this coming Tuesday. This may be missing some newer specs because I got exhausted keeping tabs on the ones that already exist. 
* Account for host agg allocation ratio in placement (Still in rocky/) * Add subtree filter for GET /resource_providers * Resource provider - request group mapping in allocation candidate * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * List resource providers having inventory * Bi-directional enforcement of traits * Modelling passthrough devices for report to placement * Propose counting quota usage from placement and API database (A bit out of date but may be worth resurrecting) * Spec: allocation candidates in tree * [WIP] generic device discovery policy * Nova Cyborg interaction specification. * supporting virtual NVDIMM devices * Spec: Support filtering by forbidden aggregate * Proposes NUMA topology with RPs * Support initial allocation ratios * Count quota based on resource class * WIP: High Precision Event Timer (HPET) on x86 guests * Add support for emulated virtual TPM * Limit instance create max_count (spec) (has some concurrency issues related placement) * Adds spec for instance live resize # Main Themes ## Making Nested Useful Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. This is creeping ahead, but feels somewhat stalled out, presumably because people are busy with other things. * I feel like I'm missing some things in this area. Please let me know if there are others. This is related: * Pass allocations to virt drivers when resizing ## Extraction There continue to be three main tasks in regard to placement extraction: 1. upgrade and integration testing 2. database schema migration and management 3. documentation publishing The upgrade aspect of (1) is in progress with a [patch to grenade](https://review.openstack.org/#/c/604454/) and a [patch to devstack](https://review.openstack.org/#/c/600162/). This is very close to working. A main blocker is needing a proper tool for managing the creation and migration of database tables (more below). My experiments with using gabbi-tempest are getting [a bit closer](https://review.openstack.org/#/c/611678/). Successful devstack is dependent on us having a reasonable solution to (2). For the moment [a hacked up script](https://review.openstack.org/#/c/600161/) is being used to create tables. Ed has started some work on [moving to alembic](https://review.openstack.org/#/q/topic:1alembic). We have work in progress to tune up the documentation but we are not yet publishing documentation (3). We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out. # Other Various placement changes out in the world. * The fix, in placement, for the consumer id group by problem. * Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.) * Improve handling of default allocation ratios * Neutron minimum bandwidth implementation * TripleO: Use valid_interfaces instead of os_interface for placement * Add OWNERSHIP $SERVICE traits * Puppet: Initial cookiecutter and import from nova::placement * WIP: Add placement to devstack-gate PROJECTS This was done somewhere else wasn't it, so could this be abandoned? * zun: Use placement for unified resource management # End Hi! 
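A footnote on the alembic work mentioned under Extraction: for anyone who has not used alembic before, the day-to-day surface is small. Roughly (revision message illustrative, assuming an alembic.ini/env.py wired to the placement database):

$ alembic revision -m "create initial placement tables"   # generate a new migration
$ alembic upgrade head                                    # apply pending migrations
$ alembic current                                         # show the revision the DB is at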
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From jaosorior at redhat.com Fri Oct 19 12:23:14 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 19 Oct 2018 15:23:14 +0300 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer Message-ID:
Hello! I would like to propose Bob Fournier (bfournie) as a core reviewer in TripleO. His patches and reviews have spanned quite a wide range in our project; his reviews show great insight and quality, and I think he would be a great addition to the core team.

What do you folks think?

Best Regards

From lenikmutungi at gmail.com Fri Oct 19 12:27:41 2018 From: lenikmutungi at gmail.com (Leni Kadali Mutungi) Date: Fri, 19 Oct 2018 15:27:41 +0300 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> Message-ID: <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com>
Hi all.

I've downloaded the manila project from GitHub as a zip file, unpacked it, and have run `git fetch --depth=1` and been progressively running `git fetch --deepen=5` to get the commit history I need. For future reference, would a shallow clone, e.g. `git clone --depth=1`, be enough to start working on the project, or should one have the full commit history of the project?

-- -- Kind regards, Leni Kadali Mutungi

From emilien at redhat.com Fri Oct 19 12:28:24 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 19 Oct 2018 08:28:24 -0400 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID:
On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles < jaosorior at redhat.com> wrote:
> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range in our
> project; his reviews show great insight and quality, and I think he would
> be a great addition to the core team.
>
> What do you folks think?
>

Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been critical in all aspects of Hardware Provisioning integration but also in other TripleO bits. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL:

From neil at tigera.io Fri Oct 19 12:37:41 2018 From: neil at tigera.io (Neil Jerram) Date: Fri, 19 Oct 2018 13:37:41 +0100 Subject: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance Message-ID:
Wracking my brains over this one, would appreciate any pointers...

Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu Bionic. The first compute node is an NFS server for /var/lib/nova/instances, and the other compute nodes mount that as NFS clients.

Problem: Sometimes, when launching an instance which is scheduled to one of the client nodes, nova-compute (in imagebackend.py) gets Permission Denied (errno 13) when calling utime to touch the timestamp on the instance file.
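One way to take nova out of the picture is to repeat the call by hand on an affected client node (the path below is illustrative):

$ sudo python3 -c "import os; os.utime('/var/lib/nova/instances/<uuid>/disk')"
$ sudo touch /var/lib/nova/instances/<uuid>/disk   # same underlying syscall (utimensat)

If that succeeds as root while nova-compute still fails, suspicion shifts towards the context the privsep helper runs in (e.g. AppArmor confinement) rather than the NFS mount itself.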
Through various bits of debugging and hackery, I've established that: - it looks like the problem never occurs when this is the call that bootstraps the privsep setup; but it does occur quite frequently on later calls - when the problem occurs, retrying doesn't help (5 times, with 0.5s in between) - the instance file does exist, and is owned by root with read/write permission for root - the privsep helper is running as root - the privsep helper receives and executes the request - so it's not a problem with communication between nova-compute and the helper - root is uid 0 on both NFS server and client - NFS setup does not have the root_squash option - there is some AppArmor setup, on both client and server, and I haven't yet worked out whether that might be relevant. Any ideas? Many thanks, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Oct 19 13:21:36 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 19 Oct 2018 15:21:36 +0200 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> Message-ID: <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> Ed Leafe wrote: > On Oct 15, 2018, at 7:40 AM, Chris Dent wrote: >> >> I'd like some input from the community on how we'd like this to go. > > I would say it depends on the long-term plans for paste. Are we planning on weaning ourselves off of paste, and simply need to maintain it until that can be completed, or are we planning on encouraging its use? Agree with Ed... is this something we plan to minimally maintain because we depend on it, something that needs feature work and that we want to encourage the adoption of, or something that we want to keep on life-support while we move away from it? My assumption is that it's "something we plan to minimally maintain because we depend on it". in which case all options would work: the exact choice depends on whether there is anybody interested in helping maintaining it, and where those contributors prefer to do the work. -- Thierry Carrez (ttx) From aschultz at redhat.com Fri Oct 19 13:44:38 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 19 Oct 2018 07:44:38 -0600 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: +1 On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi wrote: > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles wrote: >> >> I would like to propose Bob Fournier (bfournie) as a core reviewer in >> TripleO. His patches and reviews have spanned quite a wide range in our >> project, his reviews show great insight and quality and I think he would >> be a addition to the core team. >> >> What do you folks think? > > > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been critical in all aspects of Hardware Provisioning integration but also in other TripleO bits. 
> -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From johfulto at redhat.com Fri Oct 19 13:46:27 2018 From: johfulto at redhat.com (John Fulton) Date: Fri, 19 Oct 2018 09:46:27 -0400 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: +1 On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz wrote: > > +1 > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi wrote: > > > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles wrote: > >> > >> I would like to propose Bob Fournier (bfournie) as a core reviewer in > >> TripleO. His patches and reviews have spanned quite a wide range in our > >> project, his reviews show great insight and quality and I think he would > >> be a addition to the core team. > >> > >> What do you folks think? > > > > > > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been critical in all aspects of Hardware Provisioning integration but also in other TripleO bits. > > -- > > Emilien Macchi > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From abishop at redhat.com Fri Oct 19 13:53:22 2018 From: abishop at redhat.com (Alan Bishop) Date: Fri, 19 Oct 2018 09:53:22 -0400 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: +1 On Fri, Oct 19, 2018 at 9:47 AM John Fulton wrote: > +1 > On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz wrote: > > > > +1 > > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi > wrote: > > > > > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles < > jaosorior at redhat.com> wrote: > > >> > > >> I would like to propose Bob Fournier (bfournie) as a core reviewer in > > >> TripleO. His patches and reviews have spanned quite a wide range in > our > > >> project, his reviews show great insight and quality and I think he > would > > >> be a addition to the core team. > > >> > > >> What do you folks think? > > > > > > > > > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has > been critical in all aspects of Hardware Provisioning integration but also > in other TripleO bits. 
> > > -- > > > Emilien Macchi > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmail.com Fri Oct 19 14:19:07 2018 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Fri, 19 Oct 2018 09:19:07 -0500 Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed In-Reply-To: References: Message-ID: This appears to be another in-transit job conflict with the py3 work. Things should be fine, but we will need to manually propose the constraint update since it was skipped. On Fri, Oct 19, 2018, 09:14 wrote: > Build failed. > > - release-openstack-python3 > http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python3/1cb87ba/ > : SUCCESS in 2m 44s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > - release-openstack-python > http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python/3a9339d/ > : POST_FAILURE in 2m 40s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Fri Oct 19 14:26:51 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 19 Oct 2018 16:26:51 +0200 Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed In-Reply-To: References: Message-ID: <970c7c5c-716e-e67b-a3f2-ded504e0e96d@suse.com> On 19/10/2018 16.19, Sean McGinnis wrote: > This appears to be another in-transit job conflict with the py3 work. > Things should be fine, but we will need to manually propose the > constraint update since it was skipped. > > On Fri, Oct 19, 2018, 09:14 > wrote: > > Build failed. 
> > - release-openstack-python3 > http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python3/1cb87ba/ > : SUCCESS in 2m 44s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > - release-openstack-python > http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python/3a9339d/ > : POST_FAILURE in 2m 40s We're not using tox venv anymore for the release job, so I think we need to remove the "fetch-tox-output" role completely: https://review.openstack.org/611886 Andreas > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From melwittt at gmail.com Fri Oct 19 14:39:30 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 19 Oct 2018 07:39:30 -0700 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> Message-ID: <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote: > When I use the LVM backend to create the volume, then attach it to a vm. > I can migrate the volume(in-use) from one host to another. The nova > libvirt will call the 'rebase' to finish it. But if using ceph backend, > it raises exception 'Swap only supports host devices'. So now it does > not support to migrate volume(in-use). Does anyone do this work now? Or > Is there any way to let me migrate volume(in-use) with ceph backend? What version of cinder and nova are you using? I found this question/answer on ask.openstack.org: https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/ and it looks like there was some work done on the cinder side [1] to enable migration of in-use volumes with ceph semi-recently (Queens). On the nova side, the code looks for the source_path in the volume config, and if there is not one present, it raises NotImplementedError(_("Swap only supports host devices"). So in your environment, the volume configs must be missing a source_path. If you are using at least Queens version, then there must be something additional missing that we would need to do to make the migration work. 
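A quick way to see what nova has to work with here is to check how libvirt models the attached disk on the compute node (instance name illustrative):

$ virsh dumpxml instance-00000001 | grep -A 2 '<disk'
#   <disk type='network' device='disk'>              <- rbd volume: no host path
#     <source protocol='rbd' name='volumes/volume-...'/>
#   <disk type='block' device='disk'>                <- e.g. iSCSI: has a source dev
#     <source dev='/dev/disk/by-path/...'/>

A network-type disk has no <source dev=...>, which is why the generated config carries no source_path for the swap code to use.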
[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate Cheers, -melanie From mriedemos at gmail.com Fri Oct 19 14:39:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 19 Oct 2018 09:39:57 -0500 Subject: [openstack-dev] [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update In-Reply-To: <1d7bfda7-615e-3e21-9174-631ca8d3910e@nemebean.com> References: <1d7bfda7-615e-3e21-9174-631ca8d3910e@nemebean.com> Message-ID: <7cf92d0f-355c-3239-b9e1-dd687647ebe1@gmail.com> Top posting just to try and summarize my thought that for the goal in Stein, I think we should focus on getting the base framework in place for each service project, along with any non-config (including policy) specific upgrade checks that make sense for each project. As Ben mentioned, there are existing tools for validating config (I know BlueBox used to use the fatal_deprecations config in their CI/CD pipeline to know when they needed to change their deploy scripts because deploying new code from pre-prod would fail). Once we get the basics covered we can work, as a community, to figure out how best to integrate config validation into upgrade checks, because I don't really think we want to have upgrade checks that dump warnings for all deprecated options in addition to what is already provided by oslo.config/log. I have a feeling that would get so noisy that no one would ever pay attention to it. I'm mostly interested in the scenario that config is removed from code but still being set in the config file which could fail an upgrade on service restart (if an alias was removed for example), but I also tend to think those types of issues are case-by-case. On 10/15/2018 3:29 PM, Ben Nemec wrote: > > > On 10/15/18 3:27 AM, Jean-Philippe Evrard wrote: >> On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote: >>> The big update this week is version 0.1.0 of oslo.upgradecheck was >>> released. The documentation along with usage examples can be found >>> here >>> [1]. A big thanks to Ben Nemec for getting that done since a few >>> projects were waiting for it. >>> >>> In other updates, some changes were proposed in other projects [2]. >>> >>> And finally, Lance Bragstad and I had a discussion this week [3] >>> about >>> the validity of upgrade checks looking for deleted configuration >>> options. The main scenario I'm thinking about here is FFU where >>> someone >>> is going from Mitaka to Pike. Let's say a config option was >>> deprecated >>> in Newton and then removed in Ocata. As the operator is rolling >>> through >>> from Mitaka to Pike, they might have missed the deprecation signal >>> in >>> Newton and removal in Ocata. Does that mean we should have upgrade >>> checks that look at the configuration for deleted options, or >>> options >>> where the deprecated alias is removed? My thought is that if things >>> will >>> not work once they get to the target release and restart the service >>> code, which would definitely impact the upgrade, then checking for >>> those >>> scenarios is probably OK. If on the other hand the removed options >>> were >>> just tied to functionality that was removed and are otherwise not >>> causing any harm then I don't think we need a check for that. It was >>> noted that oslo.config has a new validation tool [4] so that would >>> take >>> care of some of this same work if run during upgrades. 
So I think >>> whether or not an upgrade check should be looking for config option >>> removal ultimately depends on the severity of what happens if the >>> manual >>> intervention to handle that removed option is not performed. That's >>> pretty broad, but these upgrade checks aren't really set in stone >>> for >>> what is applied to them. I'd like to get input from others on this, >>> especially operators and if they would find these types of checks >>> useful. >>> >>> [1] https://docs.openstack.org/oslo.upgradecheck/latest/ >>> [2] https://storyboard.openstack.org/#!/story/2003657 >>> [3] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 >>> >>> [4] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html >>> >>> >> >> Hey, >> >> Nice topic, thanks Matt! >> >> TL:DR; I would rather fail explicitly for all removals, warning on all >> deprecations. My concern is, by being more surgical, we'd have to >> decide what's "not causing any harm" (and I think deployers/users are >> best to determine what's not causing them any harm). >> Also, it's probably more work to classify based on "severity". >> The quick win here (for upgrade-checks) is not about being smart, but >> being an exhaustive, standardized across projects, and _always used_ >> source of truth for upgrades, which is complemented by release notes. >> >> Long answer: >> >> At some point in the past, I was working full time on upgrades using >> OpenStack-Ansible. >> >> Our process was the following: >> 1) Read all the project's releases notes to find upgrade documentation >> 2) With said release notes, Adapt our deploy tools to handle the >> upgrade, or/and write ourselves extra documentation+release notes for >> our deployers. >> 3) Try the upgrade manually, fail because some release note was missing >> x or y. Find root cause and retry from step 2 until success. >> >> Here is where I see upgrade checkers improving things: >> 1) No need for deployment projects to parse all release notes for >> configuration changes, as tooling to upgrade check would be directly >> outputting things that need to change for scenario x or y that is >> included in the deployment project. No need to iterate either. >> >> 2) Test real deployer use cases. The deployers using openstack-ansible >> have ultimate flexibility without our code changes. Which means they >> may have different code paths than our gating. Including these checks >> in all upgrades, always requiring them to pass, and making them >> explicit about the changes is tremendously helpful for deployers: >> - If config deprecations are handled as warnings as part of the same >> process, we will output said warnings to generate a list of action >> items for the deployers. We would use only one tool as source of truth >> for giving the action items (and still continue the upgrade); >> - If config removals are handled as errors, the upgrade will fail, >> which is IMO normal, as the deployer would not have respected its >> action items. > > Note that deprecated config opts should already be generating warnings > in the logs. It is also possible now to use fatal-deprecations with > config opts: > https://github.com/openstack/oslo.config/commit/5f8b0e0185dafeb68cf04590948b9c9f7d727051 > > > I'm not sure that's exactly what you're talking about, but those might > be useful to get us at least part of the way there. > >> >> In OSA, we could probably implement a deployer override (variable). 
It >> would allow the deployers an explicit bypass of an upgrade failure. "I >> know I am doing this!". It would be useful for doing multiple serial >> upgrades. >> >> In that case, deployers could then share together their "recipes" for >> handling upgrade failure bypasses for certain multi-upgrade (jumps) >> scenarios. After a while, we could think of feeding those back to >> upgrade checkers. >> >> 3) I like the approach of having oslo-config-validator. However, I must >> admit it's not part of our process to always validate a config file >> before trying to start a service in OSA. I am not sure where other >> deployment projects are in terms of that usage. I am not familiar with >> upgrade checker code, but I would love to see it re-using oslo-config- >> validator, as it would be the unique source of truth for upgrades >> before the upgrade happens (vs having to do multiple steps). >> If I am completely out of my league here, tell me. > > This is a bit tricky as the validator requires information that is not > necessarily available in a production environment. Specifically, it > either needs the oslo-config-generator configuration file that lists all > of the namespaces a project uses, or it needs a generated > machine-readable sample config that contains all of the opt data. The > latter is not generally available today, and I'm not sure whether the > former is either. A quick pip install of an OpenStack service suggests > that it is not. > > Ideally, the machine-readable sample config would be available from > packages anyway as it has other uses too, but it's a pretty big ask to > get all of the packagers shipping that this cycle. I'm not sure how it > would work with pip installs either, although it seems like we should be > able to figure out something there. > > Anyway, not saying we shouldn't do it, but I want to make it clear that > this isn't as simple as just adding one more check to the upgrade > checkers. There are some other dependencies to doing this in a > non-service-specific way. > >> >> Just my 2 cents. >> Jean-Philippe Evrard (evrardjp) >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Matt From tpb at dyncloud.net Fri Oct 19 14:55:36 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 19 Oct 2018 10:55:36 -0400 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> Message-ID: <20181019145536.n6ca4dxjwlywb2s3@barron.net> On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: >Hi all. > >I've downloaded the manila project from GitHub as a zip >file, unpacked it and have run `git fetch --depth=1` and >been progressively running `git fetch --deepen=5` to get >the commit history I need. For future reference, would a >shallow clone e.g. `git clone depth=1` be enough to start >working on the project or should one have the full commit >history of the project? 
> >-- >-- Kind regards, >Leni Kadali Mutungi Hi Leni, First I'd like to extend a warm welcome to you as a new manila project contributor! We have some contributor/developer documentation [1] that you may find useful. If you find any gaps or misinformation, we will be happy to work with you to address these. In addition to this email list, the #openstack-manila IRC channel on freenode is a good place to ask questions. Many of us run irc bouncers so we'll see the question even if we're not looking right when it is asked. Finally, we have a meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt -- agendas are posted here [2]. Also, here is our work-plan for the current Stein development cycle [3]. Now for your question about shallow clones. I hope others who know more will chime in but here are my thoughts ... Although having the full commit history for the project is useful, it is certainly possible to get started with a shallow clone of the project. That said, I'm not sure if the space and download-time/bandwidth gains are going to be that significant because once you have the workspace you will want to run unit tests, pep8, etc. using tox as explained in the developer documentation mentioned earlier. That will download virtual environments for manila's dependencies in your workspace (under .tox directory) that dwarf the space used for manila proper. $ git clone --depth=1 git at github.com:openstack/manila.git shallow-manila Cloning into 'shallow-manila'... ... $ git clone git at github.com:openstack/manila.git deep-manila Cloning into 'deep-manila'... ... $ du -sh shallow-manila deep-manila/ 20M shallow-manila 35M deep-manila/ But after we run tox inside shallow-manila and deep-manila we see: $ du -sh shallow-manila deep-manila/ 589M shallow-manila 603M deep-manila/ Similarly, you are likely to want to run devstack locally and that will clone the repositories for the other openstack components you need and the savings from shallow clones won't be that significant relative to the total needed. Happy developing! -- Tom Barron (Manila PTL) irc: tbarron [1] https://docs.openstack.org/manila/rocky/contributor/index.html [2] https://wiki.openstack.org/wiki/Manila/Meetings [3] https://wiki.openstack.org/wiki/Manila/SteinCycle From mriedemos at gmail.com Fri Oct 19 15:09:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 19 Oct 2018 10:09:07 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update Message-ID: The big news this week is we have a couple of volunteer developers from NEC (Akhil Jain and Rajat Dhasmana) who are pushing the base framework changes across a lot of the projects [1]. I'm trying to review as many of these as I can. The request now is for the core teams on these projects to review them as well so we can keep moving, and then start thinking about non-placeholder specific checks for each project. The one other open question I have is about the Adjutant change [2]. I know Adjutant is very new and I'm not sure what upgrades look like for that project, so I don't really know how valuable adding the upgrade check framework is to that project. Is it like Horizon where it's mostly stateless and fed off plugins? Because we don't have an upgrade check CLI for Horizon for that reason. 
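For anyone picking up these reviews, the end-user surface each patch adds is the standard "$SERVICE-status upgrade check" command; the output below is illustrative of the shape, not verbatim:

$ nova-status upgrade check
+-------------------------------+
| Upgrade Check Results         |
+-------------------------------+
| Check: Placeholder            |
| Result: Success               |
| Details: None                 |
+-------------------------------+
$ echo $?   # 0 = success, 1 = warning(s), 2 = failure(s)
0

Deployment tooling is expected to gate on that exit code after installing new code and before restarting services.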
[1] https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) [2] https://review.openstack.org/#/c/611812/ -- Thanks, Matt

From zbitter at redhat.com Fri Oct 19 15:17:09 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 19 Oct 2018 11:17:09 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases Message-ID:
There hasn't been a Python 2 release in 8 years, and during that time we've gotten used to the idea that that's the way things go. However, with the switch to Python 3 looming (we will drop support for Python 2 in the U release[1]), history is no longer a good guide: Python 3 releases drop as often as every year. We are already feeling the pain from this, as Linux distros have largely already completed the shift to Python 3, and those that have are on versions newer than the py35 we currently have in gate jobs.

We have traditionally held to the principle that we want each release to support the latest release of CentOS and the latest LTS release of Ubuntu, as they existed at the beginning of the release cycle.[2] Currently this means in practice one version of py2 and one of py3, but in the future it will mean two, usually different, versions of py3.

There are two separate issues that we need to address: unit tests (we'll define this as code tested in isolation, within or spawned from within the testing process), and integration tests (we'll define this as code running in its own process, tested from the outside). I have two separate but related proposals for how to handle those.

I'd like to avoid discussing which versions of things we think should be supported in Stein in this thread. Let's come up with a process that we think is a good one to take into T and beyond, and then retroactively apply it to Stein. Competing proposals are of course welcome, in addition to feedback on this one.

Unit Tests
----------

For unit tests, the most important thing is to test on the versions of Python we target. It's less important to be using the exact distro that we want to target, because unit tests generally won't interact with stuff outside of Python.

I'd like to propose that we handle this by setting up a unit test template in openstack-zuul-jobs for each release. So for Stein we'd have openstack-python3-stein-jobs. This template would contain:

* A voting gate job for the highest minor version of py3 we want to support in that release.
* A voting gate job for the lowest minor version of py3 we want to support in that release.
* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest minor version of py3 we want to support in the *next* release (if different), on the master branch only.

So, for example, (and this is still under active debate) for Stein we might have gating jobs for py35 and py37, with a periodic job for py36. The T jobs might only have voting py36 and py37 jobs, but late in the T cycle we might add a non-voting py38 job on master so that people who haven't switched to the U template yet can see what, if anything, they'll need to fix.

We'll run the unit tests on any distro we can find that supports the version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a particular Python version before trying to test it.
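To make that concrete, such a template might look something like the following in openstack-zuul-jobs; this is a sketch only, and the exact job names and pipelines would be settled in review:

$ cat zuul.d/python3-templates.yaml
- project-template:
    name: openstack-python3-stein-jobs
    check:
      jobs:
        - openstack-tox-py35
        - openstack-tox-py37
    gate:
      jobs:
        - openstack-tox-py35
        - openstack-tox-py37
    periodic:
      jobs:
        - openstack-tox-py36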
Before the start of each cycle, the TC would determine which range of versions we want to support, on the basis of the latest one we can find in any distro and the earliest one we're likely to need in one of the supported Linux distros. There will be a project-wide goal to switch the testing template from e.g. openstack-python3-stein-jobs to openstack-python3-treasure-jobs for every repo before the end of the cycle. We'll have goal champions as usual following up and helping teams with the process. We'll know where the problem areas are because we'll have added non-voting jobs for any new Python versions to the previous release's template. Integration Tests ----------------- Integration tests do test, amongst other things, integration with non-openstack-supplied things in the distro, so it's important that we test on the actual distros we have identified as popular.[2] It's also important that every project be testing on the same distro at the end of a release, so we can be sure they all work together for users. When a new release of CentOS or a new LTS release of Ubuntu comes out, the TC will create a project-wide goal for the *next* release cycle to switch all integration tests over to that distro. It's up to individual projects to make the switch for the tests that they own (e.g. it'd be the QA team for Tempest, but other individual projects for their own jobs). Again, there'll be a goal champion to monitor and follow up. [1] https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html [2] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions From jimmy at openstack.org Fri Oct 19 15:19:31 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 19 Oct 2018 10:19:31 -0500 Subject: [openstack-dev] Forum Schedule - Seeking Community Review In-Reply-To: <1539755722.1700444.1544762240.7F81D932@webmail.messagingengine.com> References: <5BC4F203.4000904@openstack.org> <1539755722.1700444.1544762240.7F81D932@webmail.messagingengine.com> Message-ID: <5BC9F603.5030705@openstack.org> > Colleen Murphy > October 17, 2018 at 12:55 AM > > Couple of things: > > 1. I noticed Julia's session "Community outreach when culture, time > zones, and language differ" and Thierry's session "Getting OpenStack > users involved in the project" are scheduled at the same time on > Tuesday, but they're quite related topics and I think many people > (especially in the TC) would want to attend both sessions. Thanks! Just fixed this, per Thierry's suggestion. "Community outreach..." is now 3:20-4pm. > > 2. The session "You don't know nothing about Public Cloud SDKs, yet" > doesn't seem to have a moderator listed. Good catch! Thank you. That's now corrected as well. > > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > October 15, 2018 at 3:01 PM > Hi - > > The Forum schedule is now up > (https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262). > If you see a glaring content conflict within the Forum itself, please > let me know. > > You can also view the Full Schedule in the attached PDF if that makes > life easier... > > NOTE: BoFs and WGs are still not all up on the schedule. 
No need to > let us know :) > > Cheers, > Jimmy > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tpb at dyncloud.net Fri Oct 19 15:21:10 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 19 Oct 2018 11:21:10 -0400 Subject: [openstack-dev] Next Manila meeting cancelled Message-ID: <20181019152110.wo6km63a3c6skwep@barron.net>
We have a number of manila cores and regular participants who cannot attend the regular Thursday Manila meeting this coming week, so it is cancelled. We will meet as normal the following Thursday, 1 November, at 1500 UTC on #openstack-meeting-alt [1].

Cheers,

-- Tom Barron (tbarron)

[1] https://wiki.openstack.org/wiki/Manila/Meetings

From bxzhu_5355 at 163.com Fri Oct 19 15:21:01 2018 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00) Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> Message-ID: <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com>
Hi melanie, thanks for your reply.

My cinder and nova are both at Rocky. The scope of the cinder spec[1] covers only migration of available volumes between two pools in the same ceph cluster. If the volume is in 'in-use' status[2], cinder falls back to the generic migration function, and then, as you describe, the nova side raises NotImplementedError(_("Swap only supports host devices")). The get_config of the net volume driver[3] has no source_path.
[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate Cheers, -melanie __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Oct 19 15:47:46 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 19 Oct 2018 08:47:46 -0700 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> Message-ID: <57a0f1a3-a014-5e71-d685-9b953cf0eef1@gmail.com> On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote: > > The version of my cinder and nova is Rocky. The scope of the cinder spec[1] > is only for available volume migration between two pools from the same > ceph cluster. > If the volume is in-use status[2], it will call the generic migration > function. So that as you > describe it, on the nova side, it raises NotImplementedError(_("Swap > only supports host devices"). > The get_config of net volume[3] has not source_path. Ah, OK, so you're trying to migrate a volume across two separate ceph clusters, and that is not supported. > So does anyone try to succeed to migrate volume(in-use) with ceph > backend or is anyone doing something of it? Hopefully someone can share their experience with trying to migrate volumes across separate ceph clusters. I unfortunately don't know anything about it. Best, -melanie > [1] https://review.openstack.org/#/c/296150 > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py > [3] > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101 From cboylan at sapwetik.org Fri Oct 19 16:30:11 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 19 Oct 2018 09:30:11 -0700 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: Message-ID: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote: > There hasn't been a Python 2 release in 8 years, and during that time > we've gotten used to the idea that that's the way things go. However, > with the switch to Python 3 looming (we will drop support for Python 2 > in the U release[1]), history is no longer a good guide: Python 3 > releases drop as often as every year. We are already feeling the pain > from this, as Linux distros have largely already completed the shift to > Python 3, and those that have are on versions newer than the py35 we > currently have in gate jobs. > > We have traditionally held to the principle that we want each release to > support the latest release of CentOS and the latest LTS release of > Ubuntu, as they existed at the beginning of the release cycle.[2] > Currently this means in practice one version of py2 and one of py3, but > in the future it will mean two, usually different, versions of py3. 
> > There are two separate issues that we need to address: unit tests (we'll > define this as code tested in isolation, within or spawned from within > the testing process), and integration tests (we'll define this as code > running in its own process, tested from the outside). I have two > separate but related proposal for how to handle those. > > I'd like to avoid discussion which versions of things we think should be > supported in Stein in this thread. Let's come up with a process that we > think is a good one to take into T and beyond, and then retroactively > apply it to Stein. Competing proposals are of course welcome, in > addition to feedback on this one. > > Unit Tests > ---------- > > For unit tests, the most important thing is to test on the versions of > Python we target. It's less important to be using the exact distro that > we want to target, because unit tests generally won't interact with > stuff outside of Python. > > I'd like to propose that we handle this by setting up a unit test > template in openstack-zuul-jobs for each release. So for Stein we'd have > openstack-python3-stein-jobs. This template would contain: Because zuul config is branch specific we could set up every project to use a `openstack-python3-jobs` template then define that template differently on each branch. This would mean you only have to update the location where the template is defined and not need to update every other project after cutting a stable branch. I would suggest we take advantage of that to reduce churn. > > * A voting gate job for the highest minor version of py3 we want to > support in that release. > * A voting gate job for the lowest minor version of py3 we want to > support in that release. > * A periodic job for any interim minor releases. > * (Starting late in the cycle) a non-voting check job for the highest > minor version of py3 we want to support in the *next* release (if > different), on the master branch only. > > So, for example, (and this is still under active debate) for Stein we > might have gating jobs for py35 and py37, with a periodic job for py36. > The T jobs might only have voting py36 and py37 jobs, but late in the T > cycle we might add a non-voting py38 job on master so that people who > haven't switched to the U template yet can see what, if anything, > they'll need to fix. > > We'll run the unit tests on any distro we can find that supports the > version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian > unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a > particular Python version before trying to test it. > > Before the start of each cycle, the TC would determine which range of > versions we want to support, on the basis of the latest one we can find > in any distro and the earliest one we're likely to need in one of the > supported Linux distros. There will be a project-wide goal to switch the > testing template from e.g. openstack-python3-stein-jobs to > openstack-python3-treasure-jobs for every repo before the end of the > cycle. We'll have goal champions as usual following up and helping teams > with the process. We'll know where the problem areas are because we'll > have added non-voting jobs for any new Python versions to the previous > release's template. I don't know that this needs to be a project wide goal if you can just update the template on the master branch where the template is defined. Do that then every project is now running with the up to date version of the template. 
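(For concreteness, such a template definition is only a few lines of Zuul configuration, so keeping a copy of it per branch stays cheap -- a rough sketch, with illustrative job names:

    - project-template:
        name: openstack-python3-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py36
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py36

The copy of this definition on a stable branch would simply freeze whatever job list that branch is meant to support.)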
We should probably advertise when this is happening with some links to python version x.y breakages/features, but the process itself should be quick. As for python version range selection, I worry that the criteria above rely on too much guesswork. I do think we should do our best to test future incoming versions of python even while not officially supporting them. We will have to support them at some point, either directly or via some later version that includes the changes from that intermediate version. Could the criteria be: support the lowest version of python on a supported distro release and the highest version of python on a supported distro, and test (but not support, so we can drop testing for this python version on stable branches) the current latest release of python? This is objective and doesn't require anyone to guess at what versions need to be supported. > > Integration Tests > ----------------- > > Integration tests do test, amongst other things, integration with > non-openstack-supplied things in the distro, so it's important that we > test on the actual distros we have identified as popular.[2] It's also > important that every project be testing on the same distro at the end of > a release, so we can be sure they all work together for users. > > When a new release of CentOS or a new LTS release of Ubuntu comes out, > the TC will create a project-wide goal for the *next* release cycle to > switch all integration tests over to that distro. It's up to individual > projects to make the switch for the tests that they own (e.g. it'd be > the QA team for Tempest, but other individual projects for their own > jobs). Again, there'll be a goal champion to monitor and follow up. > > > [1] > https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html > [2] > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions Overall this approach seems fine. It is basically the approach we've used in the past with the addition of explicit testing of future python (which we've never really done before). I do think we need to avoid letting every project move at its own pace. This is what we did with the last python and distro switchover (trusty to xenial), and a number of projects failed to get it done within the release cycle. Instead I think we should rely on shared branch-specific zuul templates that can be updated centrally when the next release is cut. Clark From james.slagle at gmail.com Fri Oct 19 16:52:25 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 19 Oct 2018 12:52:25 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: > Additionally I took a stab at combining the puppet/docker service > definitions for the aodh services in a similar structure to start > reducing the overhead we've had from maintaining the docker/puppet > implementations separately. You can see the patch > https://review.openstack.org/#/c/611188/ for an additional example of > this. That patch takes the approach of removing baremetal support. Is that what we agreed to do? I'm not specifically opposed, as I'm pretty sure the baremetal implementations are no longer tested anywhere, but I know that Dan had some concerns about that last time around.
The alternative we discussed was using jinja2 to include common data/tasks in both the puppet/docker/ansible implementations. That would also result in reducing the number of Heat resources in these stacks and hopefully reduce the amount of time it takes to create/update the ServiceChain stacks. -- -- James Slagle -- From aschultz at redhat.com Fri Oct 19 17:04:10 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 19 Oct 2018 11:04:10 -0600 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: On Fri, Oct 19, 2018 at 10:53 AM James Slagle wrote: > > On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: > > Additionally I took a stab at combining the puppet/docker service > > definitions for the aodh services in a similar structure to start > > reducing the overhead we've had from maintaining the docker/puppet > > implementations separately. You can see the patch > > https://review.openstack.org/#/c/611188/ for an additional example of > > this. > > That patch takes the approach of removing baremetal support. Is that > what we agreed to do? > Since it's deprecated since Queens[0], yes? I think it is time to stop continuing this method of installation. Given that I'm not even sure the upgrade process even works anymore with baremetal, I don't think there's a reason to keep it, as it directly impacts the time it takes to perform deployments and also contributes to increased complexity all around. [0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html > I'm not specifically opposed, as I'm pretty sure the baremetal > implementations are no longer tested anywhere, but I know that Dan had > some concerns about that last time around. > > The alternative we discussed was using jinja2 to include common > data/tasks in both the puppet/docker/ansible implementations. That > would also result in reducing the number of Heat resources in these > stacks and hopefully reduce the amount of time it takes to > create/update the ServiceChain stacks. > I'd rather we officially get rid of one of the two methods and converge on a single method without increasing the complexity via jinja to continue to support both. If there's an improvement to be had after we've converged on a single structure for including the base bits, maybe we could do that then? Thanks, -Alex > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kgiusti at gmail.com Fri Oct 19 18:19:44 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Fri, 19 Oct 2018 14:19:44 -0400 Subject: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging In-Reply-To: <741d60d8-e5ba-4a60-9a9d-792fbb3b0abb@Spark> References: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> <1c5b7519-154e-7085-42c7-7c402d6a48df@nemebean.com> <741d60d8-e5ba-4a60-9a9d-792fbb3b0abb@Spark> Message-ID: Hi Renat, After discussing this a bit with Ben on IRC we're going to push the removal off to T milestone 1. I really like Ben's idea re: adding a blocking entry to your project's setup.cfg file.
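Concretely, that's the entry point Ben sketches further down the thread, i.e. something like this in Mistral's setup.cfg (the [entry_points] section name being the usual pbr convention):

    [entry_points]
    oslo.messaging.executors =
        blocking = futurist:SynchronousExecutor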
We can remove the explicit check for blocking in oslo.messaging so you won't get an annoying warning if you want to load blocking on your own. Let me know what you think, thanks. On Fri, Oct 19, 2018 at 12:02 AM Renat Akhmerov wrote: > Hi, > > > @Ken, I understand your considerations. I get that. I’m only asking not to > remove it *for now*. And yes, if you think it should be discouraged from > using it’s totally fine. But practically, it’s been the only reliable > option for Mistral so far that may be our fault, I have to admit, because > we weren’t able to make it work well with other executor types but we’ll > try to fix that. > > By the way, I was playing with different options yesterday and it seems > like that setting the executor to “threading” and the > “executor_thread_pool_size” property to 1 behaves the same way as > “blocking”. So may be that’s an option for us too, even if “blocking” is > completely removed. But I would still be in favour of having some extra > time to prove that with thorough testing. > > @Ben, including the executor via setup.cfg also looks OK to me. I see no > issues with this approach. > > > Thanks > > Renat Akhmerov > @Nokia > On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote: > > > > On 10/18/18 9:59 AM, Ken Giusti wrote: > > Hi Renat, > > The biggest issue with the blocking executor (IMHO) is that it blocks > the protocol I/O while RPC processing is in progress. This increases > the likelihood that protocol processing will not get done in a timely > manner and things start to fail in weird ways. These failures are > timing related and are typically hard to reproduce or root-cause. This > isn't something we can fix as blocking is the nature of the executor. > > If we are to leave it in we'd really want to discourage its use. > > > Since it appears the actual executor code lives in futurist, would it be > possible to remove the entrypoint for blocking from oslo.messaging and > have mistral just pull it in with their setup.cfg? Seems like they > should be able to add something like: > > oslo.messaging.executors = > blocking = futurist:SynchronousExecutor > > to their setup.cfg to keep it available to them even if we drop it from > oslo.messaging itself. That seems like a good way to strongly discourage > use of it while still making it available to projects that are really > sure they want it. > > > However I'm ok with leaving it available if the policy for using > blocking is 'use at your own risk', meaning that bug reports may have to > be marked 'won't fix' if we have reason to believe that blocking is at > fault. That implies removing 'blocking' as the default executor value > in the API and having applications explicitly choose it. And we keep > the deprecation warning. > > We could perhaps implement time duration checks around the executor > callout and log a warning if the executor blocked for an extended amount > of time (extended=TBD). > > Other opinions so we can come to a consensus? > > > On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov > wrote: > > Hi Oslo Team, > > Can we retain “blocking” executor for now in Oslo Messaging? > > > Some background.. > > For a while we had to use Oslo Messaging with “blocking” executor in > Mistral because of incompatibility of MySQL driver with green > threads when choosing “eventlet” executor. Under certain conditions > we would get deadlocks between green threads. 
Some time ago we > switched to using PyMysql driver which is eventlet friendly and did > a number of tests that showed that we could safely switch to > “eventlet” executor (with that driver) so we introduced a new option > in Mistral where we could choose an executor in Oslo Messaging. The > corresponding bug is [1]. > > The issue is that we recently found that not everything actually > works as expected when using combination PyMysql + “eventlet” > executor. We also tried “threading” executor and the system *seems* > to work with it but surprisingly performance is much worse. > > Given all of that we’d like to ask Oslo Team not to remove > “blocking” executor for now completely, if that’s possible. We have > a strong motivation to switch to “eventlet” for other reasons > (parallelism => better performance etc.) but seems like we need some > time to make it smoothly. > > > [1] https://bugs.launchpad.net/mistral/+bug/1696469 > > > Thanks > > Renat Akhmerov > @Nokia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Ken Giusti (kgiusti at gmail.com ) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Fri Oct 19 18:24:12 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Fri, 19 Oct 2018 18:24:12 +0000 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C1F62DF@EX10MBOX03.pnnl.gov> <83f981fc-c49f-8207-215c-4cb2adbd9108@everyware.ch> <1A3C52DFCD06494D8528644858247BF01C20C76C@EX10MBOX03.pnnl.gov>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C20D8E7@EX10MBOX03.pnnl.gov> Adding a stateless service provider on the existing etcd key value store would be pretty easy with something like coredns I think without adding another stateful storage dependency. I don't really have a horse in the game other then I'm an operator and we're feeling overwhelmed by all the state stuff to maintain. If consul is entirely optional, its probably fine to add the feature. But I worry operators may avoid it. Thanks, Kevin ________________________________________ From: Florian Engelmann [florian.engelmann at everyware.ch] Sent: Friday, October 19, 2018 1:17 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > No, I mean, Consul would be an extra dependency in a big list of dependencies OpenStack already has. OpenStack has so many it is causing operators to reconsider adoption. 
I'm asking if existing dependencies can be made to solve the problem without adding more? > > Stateful dependencies are much harder to deal with than stateless ones, as they take much more operator care/attention. Consul is stateful as is etcd, and etcd is already a dependency. > > Can etcd be used instead so as not to put more load on the operators? While etcd is a strong KV store it lacks many features consul has. Using consul for DNS based service discovery is very easy to implement without making it a dependency. So we will start with an "external" consul and see how to handle the service registration without modifying the kolla containers or any kolla-ansible code. All the best, Flo > > Thanks, > Kevin > ________________________________________ > From: Florian Engelmann [florian.engelmann at everyware.ch] > Sent: Wednesday, October 10, 2018 12:18 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints > > by "another storage system" you mean the KV store of consul? That's just > something consul brings with it... > > consul is very strong in doing health checks > > Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M: >> etcd is an already approved openstack dependency. Could that be used instead of consul so as to not add yet another storage system? coredns with the https://coredns.io/plugins/etcd/ plugin would maybe do what you need? >> >> Thanks, >> Kevin >> ________________________________________ >> From: Florian Engelmann [florian.engelmann at everyware.ch] >> Sent: Monday, October 08, 2018 3:14 AM >> To: openstack-dev at lists.openstack.org >> Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints >> >> Hi, >> >> I would like to start a discussion about some changes and additions I >> would like to see in kolla and kolla-ansible. >> >> 1. Keepalived is a problem in layer3 spine leaf networks as any floating >> IP can only exist in one leaf (and VRRP is a problem in layer3). I would >> like to use consul and registrar to get rid of the "internal" floating >> IP and use consul's DNS service discovery to connect all services with >> each other. >> >> 2. Using "ports" for external API (endpoint) access is a major headache >> if a firewall is involved. I would like to configure the HAProxy (or >> fabio) for the external access to use "Host:" like, eg. "Host: >> keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS. >> Any customer would just need HTTPS access and not have to open all those >> ports in his firewall. For some enterprise customers it is not possible >> to request FW changes like that. >> >> 3. HAProxy is not capable to handle "read/write" split with Galera. I >> would like to introduce ProxySQL to be able to scale Galera. >> >> 4. HAProxy is fine but fabio integrates well with consul, statsd and >> could be connected to a vault cluster to manage secure certificate access. >> >> 5. I would like to add vault as Barbican backend. >> >> 6. I would like to add an option to enable tokenless authentication for >> all services with each other to get rid of all the openstack service >> passwords (security issue). >> >> What do you think about it?
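(As an illustration of item 2, the HAProxy side of that Host-header routing would be roughly along these lines -- a sketch only, with made-up backend names and certificate path:

    frontend external_api
        bind *:443 ssl crt /etc/haproxy/external.pem
        acl is_keystone hdr(host) -i keystone.somedomain.tld
        acl is_nova hdr(host) -i nova.somedomain.tld
        use_backend keystone_api if is_keystone
        use_backend nova_api if is_nova

so a customer's firewall only ever needs port 443 open.)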
>> >> All the best, >> Florian >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > > EveryWare AG > Florian Engelmann > Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- EveryWare AG Florian Engelmann Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch From rosmaita.fossdev at gmail.com Fri Oct 19 18:27:58 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 19 Oct 2018 14:27:58 -0400 Subject: [openstack-dev] [glance] next python-glanceclient release Message-ID: <5cb2972a-913a-a41d-48df-82dcf03b16ce@gmail.com> Hello Glancers, I was looking at a Cinder patch [0] and it made me realize that we should do a glanceclient release that includes the multihash-download-verification [1] before the next scheduled Stein release (which was to be 3.0.0, basically Rocky with v1 support removed; see [2]). I think it would be good to have the new verification change released so other projects can consume the code and we can find out sooner if it breaks anyone. (I'm worried about allow_md5_fallback=False [6], which I think is definitely the right thing for the CLI client, but the discussion about allowing users to pick an os_hash_algo on Iain's spec-lite [4] is making me worry about the effect that default value could have on other services.) Here are the options: (1) backport [1] to stable/rocky and cut 2.12.1 (2) cut 2.13.0 from master and make it the first Stein glanceclient, leaving legacy md5 checksum verification the only validation option in Rocky (3) wait for 3.0.0 to include [1] (4) change the default for allow_md5_fallback to True for the data() function [6] (the CLI code already explicitly sets it and won't need to be adjusted [5]) and then do (1) or (2) or (3) Obviously, I don't like (3). Not sure I like (4) either, but figured we should at least think about it. If we pick (1), we should merge the periodic tips job change [3] to master and immediately backport it to stable/rocky before cutting the release. That way we won't have any unreleased patches sitting in stable/rocky. Let me know what you think. 
cheers, brian [0] https://review.openstack.org/#/c/611081/ [1] http://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f [2] https://launchpad.net/python-glanceclient/+series [3] https://review.openstack.org/#/c/599844/ [4] https://review.openstack.org/#/c/597648/ [5] http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/v2/shell.py?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f#n521 [6] http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/v2/images.py?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f#n201 From aj at suse.com Fri Oct 19 18:58:50 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 19 Oct 2018 20:58:50 +0200 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> References: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> Message-ID: <5bc36890-e63e-9402-2f03-7ae81c1c9b84@suse.com> On 19/10/2018 18.30, Clark Boylan wrote: > [...] > Because zuul config is branch specific we could set up every project to use a `openstack-python3-jobs` template then define that template differently on each branch. This would mean you only have to update the location where the template is defined and not need to update every other project after cutting a stable branch. I would suggest we take advantage of that to reduce churn. Alternative we have a single "openstack-python3-jobs" template in an unbranched repo like openstack-zuul-jobs and define different jobs per branch. The end result would be the same, each repo uses the same template and no changes are needed for the repo when branching... Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mordred at inaugust.com Fri Oct 19 19:08:23 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 19 Oct 2018 14:08:23 -0500 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <5bc36890-e63e-9402-2f03-7ae81c1c9b84@suse.com> References: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> <5bc36890-e63e-9402-2f03-7ae81c1c9b84@suse.com> Message-ID: <571f068b-8016-8d9e-7647-efaac5a64091@inaugust.com> On 10/19/2018 01:58 PM, Andreas Jaeger wrote: > On 19/10/2018 18.30, Clark Boylan wrote: > > [...] >> Because zuul config is branch specific we could set up every project >> to use a `openstack-python3-jobs` template then define that template >> differently on each branch. This would mean you only have to update >> the location where the template is defined and not need to update >> every other project after cutting a stable branch. I would suggest we >> take advantage of that to reduce churn. > > Alternative we have a single "openstack-python3-jobs" template in an > unbranched repo like openstack-zuul-jobs and define different jobs per > branch. > > The end result would be the same, each repo uses the same template and > no changes are needed for the repo when branching... Yes - I agree that we should take advantage of zuul's branching support. And I agree with Andreas that we should just use branch matchers in openstack-zuul-jobs to do it. 
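To make that concrete, the idea is roughly the following in the unbranched openstack-zuul-jobs repo -- a sketch only, with illustrative job names and branch regexes:

    - project-template:
        name: openstack-python3-jobs
        check:
          jobs:
            - openstack-tox-py35:
                branches: ^stable/.*$
            - openstack-tox-py36:
                branches: master
        gate:
          jobs:
            - openstack-tox-py35:
                branches: ^stable/.*$
            - openstack-tox-py36:
                branches: master

Because that repo is unbranched, a single change there takes effect for every branch of every consuming project at once.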
From openstack at nemebean.com Fri Oct 19 19:19:32 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 19 Oct 2018 14:19:32 -0500 Subject: [openstack-dev] [oslo] Next meeting cancelled Message-ID: <89893c26-4e85-258d-bc50-b60d216bc87d@nemebean.com> Hi, Sorry for the late notice, but I just found out I'm going to be travelling next week, which will mean I can't run the meeting. Since Doug is also out and it's really late to find someone else to run it, we're just going to skip it. We'll resume as normal the following week. Also note that this means I may have somewhat limited availability on IRC next week. I'll try to keep up with emails, but I can't make any guarantees. If you need immediate assistance with Oslo, try ChangBo (gcb), Ken (kgiusti), or Stephen (stephenfin). Thanks. -Ben From zbitter at redhat.com Fri Oct 19 19:45:16 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 19 Oct 2018 15:45:16 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> References: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> Message-ID: <68457976-faab-26e3-74da-ee7ebbd20bdd@redhat.com> On 19/10/18 12:30 PM, Clark Boylan wrote: > On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote: >> Unit Tests >> ---------- >> >> For unit tests, the most important thing is to test on the versions of >> Python we target. It's less important to be using the exact distro that >> we want to target, because unit tests generally won't interact with >> stuff outside of Python. >> >> I'd like to propose that we handle this by setting up a unit test >> template in openstack-zuul-jobs for each release. So for Stein we'd have >> openstack-python3-stein-jobs. This template would contain: > > Because zuul config is branch specific we could set up every project to use a `openstack-python3-jobs` template then define that template differently on each branch. This would mean you only have to update the location where the template is defined and not need to update every other project after cutting a stable branch. I would suggest we take advantage of that to reduce churn. There was a reason I didn't propose that approach: in practice you can't add a new gating test to a centralised zuul template definition. If you do, many projects will break because the change is not self-testing. At best you'll be pitchforked by an angry mob of people who can't get anything but py37 fixes through the gate, and at worst they'll all stop using the template to get unblocked and then never go back to it. We don't need everyone to cut over at the same time. We just need them to do it in the space of one release cycle. One patch every 6 months is not an excessive amount of churn. >> * A voting gate job for the highest minor version of py3 we want to >> support in that release. >> * A voting gate job for the lowest minor version of py3 we want to >> support in that release. >> * A periodic job for any interim minor releases. >> * (Starting late in the cycle) a non-voting check job for the highest >> minor version of py3 we want to support in the *next* release (if >> different), on the master branch only. >> >> So, for example, (and this is still under active debate) for Stein we >> might have gating jobs for py35 and py37, with a periodic job for py36. 
>> The T jobs might only have voting py36 and py37 jobs, but late in the T >> cycle we might add a non-voting py38 job on master so that people who >> haven't switched to the U template yet can see what, if anything, >> they'll need to fix. >> >> We'll run the unit tests on any distro we can find that supports the >> version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian >> unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a >> particular Python version before trying to test it. >> >> Before the start of each cycle, the TC would determine which range of >> versions we want to support, on the basis of the latest one we can find >> in any distro and the earliest one we're likely to need in one of the >> supported Linux distros. There will be a project-wide goal to switch the >> testing template from e.g. openstack-python3-stein-jobs to >> openstack-python3-treasure-jobs for every repo before the end of the >> cycle. We'll have goal champions as usual following up and helping teams >> with the process. We'll know where the problem areas are because we'll >> have added non-voting jobs for any new Python versions to the previous >> release's template. > > I don't know that this needs to be a project wide goal if you can just update the template on the master branch where the template is defined. Do that then every project is now running with the up to date version of the template. We should probably advertise when this is happening with some links to python version x.y breakages/features, but the process itself should be quick. Either way, it'll be project teams themselves fixing any broken tests due to a new version being added. So we can either have a formal project-wide goal where we project-manage that process across the space of a release, or a de-facto project-wide goal where we break everybody and then nothing gets merged until they fix it. > As for python version range selection I worry that that the criteria about relies on too much guesswork. Some guesswork is going to be inevitable, unfortunately, (we have no way of knowing what will be in CentOS 8, for example) but I agree that we should try to tighten up the criteria as much as possible. > I do think we should do our best to test future incoming versions of python even while not officially supporting them. We will have to support them at some point, either directly or via some later version that includes the changes from that intermediate version. +1, I think we should try to add support for higher versions as soon as possible. It may take a long time to get into an LTS release, but there's bound to be _some_ distro out there where people want to use it. (Case in point: Debian really wanted py37 support in Rocky, at which point a working 3.7 wasn't even available in _any_ Ubuntu release, let alone an LTS). That's why I said "the latest one we can find in any distro" - if we have any way to test it at all then we should. > Could the criteria be: > Support the lowest version of python on a supported distro release and the highest version of python on a supported distro As of now we can't objectively determine the minimum version because it isn't the future yet. That may change once every distro is on Python 3 though. > and test (but not support so we can drop testing for this python version on stable branches) the current latest release of python? That's certainly worth considering; we could tighten up the range on stable branches. 
I think we'd need to hear from Ubuntu and Debian folks what they think about that. My guess is that they'd prefer even stable branches to continue testing recent versions, so that they could be used on Debian unstable and on Ubuntu between LTS releases. > This is objective and doesn't require anyone to guess at what versions need to be supported. > >> >> Integration Tests >> ----------------- >> >> Integration tests do test, amongst other things, integration with >> non-openstack-supplied things in the distro, so it's important that we >> test on the actual distros we have identified as popular.[2] It's also >> important that every project be testing on the same distro at the end of >> a release, so we can be sure they all work together for users. >> >> When a new release of CentOS or a new LTS release of Ubuntu comes out, >> the TC will create a project-wide goal for the *next* release cycle to >> switch all integration tests over to that distro. It's up to individual >> projects to make the switch for the tests that they own (e.g. it'd be >> the QA team for Tempest, but other individual projects for their own >> jobs). Again, there'll be a goal champion to monitor and follow up. >> >> >> [1] >> https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html >> [2] >> https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > Overall this approach seems fine. It is basically the approach we've used in the past with the addition of explicit testing of future python (which we've never really done before). I do think we need to avoid letting every project move at their own pace. This is what we did with the last python and distro switchover (trusty to xenial) and a number of projects failed to get it done within the release cycle. Instead I think we should rely on shared branch specific zuul templates that can be updated centrally when the next release is cut. For those just joining, we discussed this on IRC yesterday.[1] fungi mentioned that we tried two different approaches for precise->trusty and trusty->xenial, and they both failed in different ways. The first time infra gave teams time to prepare and then eventually cut the build over for everyone, with the result that lots of things broke. The second time infra allowed teams to switch over in their own time, with the result that a lot of things released with tests running on an outdated LTS release. We do have one new tool in our toolbox: project-wide goals. They're not magically going to solve the problem, but at least they give us some visibility into what has and has not happened. Perhaps we could provide periodic, rather than experimental, jobs to test with so that the goal champions can track where the likely problems are in the lead-up to a switchover? As far as the LTS distro goes, I don't have a strong opinion on which approach is the least worst. I think if we want to have a switchover (just after milestone-2 maybe?) then we should just write that into the project-wide goal. cheers, Zane. 
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-18.log.html#t2018-10-18T15:16:13 From zbitter at redhat.com Fri Oct 19 20:13:59 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 19 Oct 2018 16:13:59 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: Message-ID: <4678c020-ba2d-4086-b1b5-f57286ae5a6d@redhat.com> On 19/10/18 11:17 AM, Zane Bitter wrote: > I'd like to propose that we handle this by setting up a unit test > template in openstack-zuul-jobs for each release. So for Stein we'd have > openstack-python3-stein-jobs. This template would contain: > > * A voting gate job for the highest minor version of py3 we want to > support in that release. > * A voting gate job for the lowest minor version of py3 we want to > support in that release. > * A periodic job for any interim minor releases. > * (Starting late in the cycle) a non-voting check job for the highest > minor version of py3 we want to support in the *next* release (if > different), on the master branch only. > > So, for example, (and this is still under active debate) for Stein we > might have gating jobs for py35 and py37, with a periodic job for py36. > The T jobs might only have voting py36 and py37 jobs, but late in the T > cycle we might add a non-voting py38 job on master so that people who > haven't switched to the U template yet can see what, if anything, > they'll need to fix. Just to make it easier to visualise, here is an example for how the Zuul config _might_ look now if we had adopted this proposal during Rocky: https://review.openstack.org/611947 And instead of having a project-wide goal in Stein to add `openstack-python36-jobs` to the list that currently includes `openstack-python35-jobs` in each project's Zuul config[1], we'd have had a goal to change `openstack-python3-rocky-jobs` to `openstack-python3-stein-jobs` in each project's Zuul config. - ZB [1] https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs From piotrmisiak1984 at gmail.com Fri Oct 19 20:15:45 2018 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Fri, 19 Oct 2018 22:15:45 +0200 Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints In-Reply-To: <99173bcd-6209-e7f5-6d34-38440ebd66bf@everyware.ch> References: <224b4a9a-893b-fb2c-766b-6fa97503fa5b@everyware.ch> <8c65a2ba-09e9-d286-9d3c-e577a59185cb@everyware.ch> <84d669d1-d3c5-cd15-84a0-bfc8e4e5eb66@everyware.ch> <200b392c-1a41-f53a-6822-01c809aa22fd@gmail.com> <99173bcd-6209-e7f5-6d34-38440ebd66bf@everyware.ch> Message-ID: On 19.10.2018 10:21, Florian Engelmann wrote: >> >> On 17.10.2018 15:45, Florian Engelmann wrote: >>>> On 10.10.2018 09:06, Florian Engelmann wrote: >>>>> Now I get you. I would say all configuration templates need to be >>>>> changed to allow, eg. 
>>>>> >>>>> $ grep http /etc/kolla/cinder-volume/cinder.conf >>>>> glance_api_servers = http://10.10.10.5:9292 >>>>> auth_url = http://internal.somedomain.tld:35357 >>>>> www_authenticate_uri = http://internal.somedomain.tld:5000 >>>>> auth_url = http://internal.somedomain.tld:35357 >>>>> auth_endpoint = http://internal.somedomain.tld:5000 >>>>> >>>>> to look like: >>>>> >>>>> glance_api_servers = http://glance.service.somedomain.consul:9292 >>>>> auth_url = http://keystone.service.somedomain.consul:35357 >>>>> www_authenticate_uri = http://keystone.service.somedomain.consul:5000 >>>>> auth_url = http://keystone.service.somedomain.consul:35357 >>>>> auth_endpoint = http://keystone.service.somedomain.consul:5000 >>>>> >>>> >>>> The idea with Consul looks interesting. >>>> >>>> But I don't get your issue with VIP address and spine-leaf network. >>>> >>>> What we have: >>>> - controller1 behind leaf1 A/B pair with MLAG >>>> - controller2 behind leaf2 A/B pair with MLAG >>>> - controller3 behind leaf3 A/B pair with MLAG >>>> >>>> The VIP address is active on one controller server. >>>> When the server fail then the VIP will move to another controller >>>> server. >>>> Where do you see a SPOF in this configuration? >>>> >>> >>> So leaf1 2 and 3 have to share the same L2 domain, right (in IPv4 >>> network)? >>> >> Yes, they share L2 domain but we have ARP and ND suppression enabled. >> >> It is an EVPN network where there is a L3 with VxLANs between leafs >> and spines. >> >> So we don't care where a server is connected. It can be connected to >> any leaf. > > Ok that sounds very interesting. Is it possible to share some > internals? Which switch vendor/model do you use? How does you IP > address schema look like? > If VxLAN is used between spine and leafs are you using VxLAN > networking for Openstack as well? Where is your VTEP? > We have Mellanox switches with Cumulus Linux installed. Here you have a documentation: https://docs.cumulusnetworks.com/display/DOCS/Ethernet+Virtual+Private+Network+-+EVPN EVPN is a well known standard and it is also supported by Juniper, Cisco etc. We have standard VLANs between servers and leaf switches, they are mapped to VxLANs between leafs and spines. In our env every leaf switch is a VTEP. Servers have MLAG/CLAG connections to two leaf switches. We also have anycast gateways on leaf switches. From servers point of view our network is like a very big switch with hundreds of ports and standard VLANs. We are using VxLAN networking for OpenStack, but it is configured on top of network VxLANs, we dont mix them. From emccormick at cirrusseven.com Fri Oct 19 21:30:55 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 19 Oct 2018 17:30:55 -0400 Subject: [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: Message-ID: Apologies for cross-posting, but in the event that these might be worth filing as bugs, I wanted the Octavia devs to see it as well... I've been wrestling with getting Octavia up and running and have become stuck on two issues. I'm hoping someone has run into these before. My google foo has come up empty. Issue 1: When the Octavia controller tries to poll the amphora instance, it tries repeatedly and eventually fails. The error on the controller side is: 2018-10-19 14:17:39.181 26 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 300) exhausted. The amphora is unavailable. 
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) On the amphora side I see: [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1754) I've generated certificates both with the script in the Octavia git repo, and with the Openstack Ansible playbook. I can see that they are present in /etc/octavia/certs. I'm using the Kolla (Queens) containers for the control plane so I'm sure I've satisfied all the python library constraints. Issue 2: I"m not sure how it gets configured, but the tenant network interface (ens6) never comes up. I can spawn other instances on that network with no issue, and I can see that Neutron has the port attached to the instance. However, in the instance this is all I get: ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 9000 qdisc pfifo_fast state UP group default qlen 1000 link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fe30:c460/64 scope link valid_lft forever preferred_lft forever 3: ens6: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff There's no evidence of the interface anywhere else including udev rules. Any help with either or both issues would be greatly appreciated. Cheers, Erik From johnsomor at gmail.com Fri Oct 19 23:49:34 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 19 Oct 2018 16:49:34 -0700 Subject: [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: Message-ID: Hi Erik, Sorry to hear you are still having certificate issues. Issue #2 is probably caused by issue #1. Since we hot-plug the tenant network for the VIP, one of the first steps after the worker connects to the amphora agent is finishing the required configuration of the VIP interface inside the network namespace on the amphroa. If I remember correctly, you are attempting to configure Octavia with the dual CA option (which is good for non-development use). 
This is what I have for notes: [certificates] gets the following: cert_generator = local_cert_generator ca_certificate = server CA's "server.pem" file ca_private_key = server CA's "server.key" file ca_private_key_passphrase = pass phrase for ca_private_key [controller_worker] client_ca = Client CA's ca_cert file [haproxy_amphora] client_cert = Client CA's client.pem file (I think with it's key concatenated is what rm_work said the other day) server_ca = Server CA's ca_cert file That said, I can probably run through this and write something up next week that is more step-by-step/detailed. Michael On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick wrote: > > Apologies for cross-posting, but in the event that these might be > worth filing as bugs, I wanted the Octavia devs to see it as well... > > I've been wrestling with getting Octavia up and running and have > become stuck on two issues. I'm hoping someone has run into these > before. My google foo has come up empty. > > Issue 1: > When the Octavia controller tries to poll the amphora instance, it > tries repeatedly and eventually fails. The error on the controller > side is: > > 2018-10-19 14:17:39.181 26 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > retries (currently set to 300) exhausted. The amphora is unavailable. > Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > 'tls_process_server_certificate', 'certificate verify > failed')],)",),)) > > On the amphora side I see: > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > failure (_ssl.c:1754) > > I've generated certificates both with the script in the Octavia git > repo, and with the Openstack Ansible playbook. I can see that they are > present in /etc/octavia/certs. > > I'm using the Kolla (Queens) containers for the control plane so I'm > sure I've satisfied all the python library constraints. > > Issue 2: > I"m not sure how it gets configured, but the tenant network interface > (ens6) never comes up. I can spawn other instances on that network > with no issue, and I can see that Neutron has the port attached to the > instance. 
However, in the instance this is all I get: > > ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: ens3: mtu 9000 qdisc pfifo_fast > state UP group default qlen 1000 > link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe30:c460/64 scope link > valid_lft forever preferred_lft forever > 3: ens6: mtu 1500 qdisc noop state DOWN group > default qlen 1000 > link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > There's no evidence of the interface anywhere else including udev rules. > > Any help with either or both issues would be greatly appreciated. > > Cheers, > Erik > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ifatafekn at gmail.com Sun Oct 21 07:07:16 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Sun, 21 Oct 2018 10:07:16 +0300 Subject: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate In-Reply-To: <20181018131713.nrnnlihipuvxaabu@yuggoth.org> References: <715158a6-25fa-846e-149a-22d6e3d07ef5@suse.com> <20181018131713.nrnnlihipuvxaabu@yuggoth.org> Message-ID: Thanks for your help, Ifat On Thu, Oct 18, 2018 at 4:17 PM Jeremy Stanley wrote: > On 2018-10-18 14:57:23 +0200 (+0200), Andreas Jaeger wrote: > > On 18/10/2018 14.15, Ifat Afek wrote: > > > Hi, > > > > > > In the last three days Vitrage gate is broken due to the new > requirement > > > of SQLAlchemy-Utils==0.33.6. > > > We get the following error [1]: > > > > > > [...] > > > > Can we move back to version 0.33.5? or is there another solution? > > > > We discussed that on #openstack-infra, and fixed it each day - and then > it > > appeared again. > > > > https://review.openstack.org/611444 is the proposed fix for that - the > > issues comes from the fact that we build wheels if there are none > available > > and had a race in it. > > > > I hope an admin can delete the broken file again and it works again > tomorrow > > - if not, best to speak up quickly on #openstack-infra, > > It's been deleted (again) and the suspected fix approved so > hopefully it won't recur. > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Sun Oct 21 15:19:24 2018 From: jungleboyj at gmail.com (Jay S. 
Bryant) Date: Sun, 21 Oct 2018 10:19:24 -0500 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> Message-ID: <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> Boxiang, I have not heard any discussion of extending this functionality for Ceph to work between different Ceph clusters.  I wasn't aware, however, that the existing spec was limited to one Ceph cluster. So, that is good to know. I would recommend reaching out to Jon Bernard or Eric Harney for guidance on how to proceed.  They work closely with the Ceph driver and could provide insight. Jay On 10/19/2018 10:21 AM, Boxiang Zhu wrote: > > Hi melanie, thanks for your reply. > > The version of my cinder and nova is Rocky. The scope of the cinder > spec[1] > is only for available volume migration between two pools from the same > ceph cluster. > If the volume is in-use status[2], it will call the generic migration > function. So that as you > describe it, on the nova side, it raises NotImplementedError(_("Swap > only supports host devices"). > The get_config of net volume[3] has not source_path. > > So does anyone try to succeed to migrate volume(in-use) with ceph > backend or is anyone doing something of it? > > [1] https://review.openstack.org/#/c/296150 > [2] > https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py > [3] > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101 > > > Cheers, > Boxiang > On 10/19/2018 22:39,melanie witt > wrote: > > On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote: > > When I use the LVM backend to create the volume, then attach > it to a vm. > I can migrate the volume(in-use) from one host to another. The > nova > libvirt will call the 'rebase' to finish it. But if using ceph > backend, > it raises exception 'Swap only supports host devices'. So now > it does > not support to migrate volume(in-use). Does anyone do this > work now? Or > Is there any way to let me migrate volume(in-use) with ceph > backend? > > > What version of cinder and nova are you using? > > I found this question/answer on ask.openstack.org: > > https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/ > > and it looks like there was some work done on the cinder side [1] to > enable migration of in-use volumes with ceph semi-recently (Queens). > > On the nova side, the code looks for the source_path in the volume > config, and if there is not one present, it raises > NotImplementedError(_("Swap only supports host devices"). So in your > environment, the volume configs must be missing a source_path. > > If you are using at least Queens version, then there must be > something > additional missing that we would need to do to make the migration > work. > > [1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate > > Cheers, > -melanie > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Sun Oct 21 15:32:00 2018 From: jungleboyj at gmail.com (Jay S. Bryant) Date: Sun, 21 Oct 2018 10:32:00 -0500 Subject: [openstack-dev] [cinder] ceph rbd replication group support In-Reply-To: <3de2af708d464e9aadb070f3e6a143c9@yovole.com> References: <3de2af708d464e9aadb070f3e6a143c9@yovole.com> Message-ID: <8d21fb5a-4686-4dd7-4067-4ce18fab6a78@gmail.com> I would reach out to Lisa Li (lixiaoy1) on Cinder to see if this is something they may pick back up.  She has been more active in the community lately and may be able to look at this again or at least have good guidance for you. Thanks! Jay On 10/19/2018 1:14 AM, 王俊 wrote: > > Hi: > > I have a question about the rbd replication group: I would like to know the plan > or roadmap for it. Is anybody working on it? > > Blueprint: > https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support > > Thanks > > > > Confidentiality note: this message is intended solely for the named recipient. If you are not the addressee indicated above, please delete it immediately, do not use or disseminate it in any way, and kindly notify the sender of the misdelivery. Thank you! > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Sat Oct 20 09:37:21 2018 From: hjensas at redhat.com (Harald Jensås) Date: Sat, 20 Oct 2018 11:37:21 +0200 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: <74f8ade0b080b594791d91f136dbbe4d6a85a3d0.camel@redhat.com> +1 Bob's reviews have been, and continue to be, insightful and on point. He is very thorough and notices the details. On Fri, 2018-10-19 at 09:53 -0400, Alan Bishop wrote: > +1 > > On Fri, Oct 19, 2018 at 9:47 AM John Fulton > wrote: > > +1 > > On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz > > wrote: > > > > > > +1 > > > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi < > > emilien at redhat.com> wrote: > > > > > > > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles < > > jaosorior at redhat.com> wrote: > > > >> > > > >> I would like to propose Bob Fournier (bfournie) as a core > > reviewer in > > > >> TripleO. His patches and reviews have spanned quite a wide > > range in our > > > >> project, his reviews show great insight and quality and I > > think he would > > > >> be an addition to the core team. > > > >> > > > >> What do you folks think? > > > > > > > > > > > > Big +1, Bob is a solid contributor/reviewer. His area of > > knowledge has been critical in all aspects of Hardware Provisioning > > integration but also in other TripleO bits.
> > > > -- > > > > Emilien Macchi > > > > > > ___________________________________________________________________ > > _______ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > > unsubscribe > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > ___________________________________________________________________ > > _______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > subscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From flux.adam at gmail.com Mon Oct 22 00:38:40 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Mon, 22 Oct 2018 09:38:40 +0900 Subject: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo In-Reply-To: References: <7e83990d-7620-f5af-9275-1ece9f84e26b@redhat.com> <5BC75C4C.1090108@fastmail.com> Message-ID: Octavia relies heavily on Taskflow and Futurist as well. Personally I agree with basically everything Monty said earlier. The problem here really isn't anything besides relaxing the social review policy, which is as simple as just deciding it as a team and saying "well, ok then". :) I also use a number of openstack libs outside of openstack to great effect and have had no problems to speak of, so I don't really think this should be a concern. I know it can be daunting to first enter the dev/review process because it is so different from the workflow most people are used to, but this is a problem that can be solved by having good docs (I think the existing developer quickstart docs are very effective) and maintaining an open and welcoming community. --Adam On Thu, Oct 18, 2018, 16:32 Dmitry Tantsur wrote: > On 10/17/18 5:59 PM, Joshua Harlow wrote: > > Dmitry Tantsur wrote: > >> On 10/10/18 7:41 PM, Greg Hill wrote: > >>> I've been out of the openstack loop for a few years, so I hope this > >>> reaches the right folks. > >>> > >>> Josh Harlow (original author of taskflow and related libraries) and I > >>> have been discussing the option of moving taskflow out of the > >>> openstack umbrella recently. This move would likely also include the > >>> futurist and automaton libraries that are primarily used by taskflow. > >> > >> Just for completeness: futurist and automaton are also heavily relied on > >> by ironic without using taskflow. > > > > When did futurist get used??? nice :) > > > > (I knew automaton was, but maybe I knew futurist was to and I forgot, > lol). > > I'm pretty sure you did, it happened back in Mitaka :) > > > > >> > >>> The idea would be to just host them on github and use the regular > >>> Github features for Issues, PRs, wiki, etc, in the hopes that this > >>> would spur more development. 
Taskflow hasn't had any substantial > >>> contributions in several years and it doesn't really seem that the > >>> current openstack devs have a vested interest in moving it forward. I > >>> would like to move it forward, but I don't have an interest in being > >>> bound by the openstack workflow (this is why the project stagnated as > >>> core reviewers were pulled on to other projects and couldn't keep up > >>> with the review backlog, so contributions ground to a halt). > >>> > >>> I guess I'm putting it forward to the larger community. Does anyone > >>> have any objections to us doing this? Are there any non-obvious > >>> technicalities that might make such a transition difficult? Who would > >>> need to be made aware so they could adjust their own workflows? > >>> > >>> Or would it be preferable to just fork and rename the project so > >>> openstack can continue to use the current taskflow version without > >>> worry of us breaking features? > >>> > >>> Greg > >>> > >>> > >>> > __________________________________________________________________________ > >>> > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Oct 22 02:00:05 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 22 Oct 2018 09:00:05 +0700 Subject: [openstack-dev] [mistral][oslo][messaging] Removing =?utf-8?Q?=E2=80=9Cblocking=E2=80=9D_?=executor from oslo.messaging In-Reply-To: References: <682cb1b7-9c29-4242-90bd-7f28022a0bdb@Spark> <1c5b7519-154e-7085-42c7-7c402d6a48df@nemebean.com> <741d60d8-e5ba-4a60-9a9d-792fbb3b0abb@Spark> Message-ID: <63c69bfd-9ef5-46a7-9a56-a704dbf68aee@Spark> Hi Ken, Awesome! IMO it works for us. Thanks Renat Akhmerov @Nokia On 20 Oct 2018, 01:19 +0700, Ken Giusti , wrote: > Hi Renat, > After discussing this a bit with Ben on IRC we're going to push the removal off to T milestone 1. > > I really like Ben's idea re: adding a blocking entry to your project's setup.cfg file.  We can remove the explicit check for blocking in oslo.messaging so you won't get an annoying warning if you want to load blocking on your own. > > Let me know what you think, thanks. > > > On Fri, Oct 19, 2018 at 12:02 AM Renat Akhmerov wrote: > > > Hi, > > > > > > > > > @Ken, I understand your considerations. I get that. I’m only asking not to remove it *for now*. 
And yes, if you think it should be discouraged from using it’s totally fine. But practically, it’s been the only reliable option for Mistral so far that may be our fault, I have to admit, because we weren’t able to make it work well with other executor types but we’ll try to fix that. > > > > > > By the way, I was playing with different options yesterday and it seems like that setting the executor to “threading” and the “executor_thread_pool_size” property to 1 behaves the same way as “blocking”. So may be that’s an option for us too, even if “blocking” is completely removed. But I would still be in favour of having some extra time to prove that with thorough testing. > > > > > > @Ben, including the executor via setup.cfg also looks OK to me. I see no issues with this approach. > > > > > > > > > Thanks > > > > > > Renat Akhmerov > > > @Nokia > > > On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote: > > > > > > > > > > > > On 10/18/18 9:59 AM, Ken Giusti wrote: > > > > > Hi Renat, > > > > > > > > > > The biggest issue with the blocking executor (IMHO) is that it blocks > > > > > the protocol I/O while  RPC processing is in progress.  This increases > > > > > the likelihood that protocol processing will not get done in a timely > > > > > manner and things start to fail in weird ways.  These failures are > > > > > timing related and are typically hard to reproduce or root-cause.   This > > > > > isn't something we can fix as blocking is the nature of the executor. > > > > > > > > > > If we are to leave it in we'd really want to discourage its use. > > > > > > > > Since it appears the actual executor code lives in futurist, would it be > > > > possible to remove the entrypoint for blocking from oslo.messaging and > > > > have mistral just pull it in with their setup.cfg? Seems like they > > > > should be able to add something like: > > > > > > > > oslo.messaging.executors = > > > > blocking = futurist:SynchronousExecutor > > > > > > > > to their setup.cfg to keep it available to them even if we drop it from > > > > oslo.messaging itself. That seems like a good way to strongly discourage > > > > use of it while still making it available to projects that are really > > > > sure they want it. > > > > > > > > > > > > > > However I'm ok with leaving it available if the policy for using > > > > > blocking is 'use at your own risk', meaning that bug reports may have to > > > > > be marked 'won't fix' if we have reason to believe that blocking is at > > > > > fault.  That implies removing 'blocking' as the default executor value > > > > > in the API and having applications explicitly choose it.  And we keep > > > > > the deprecation warning. > > > > > > > > > > We could perhaps implement time duration checks around the executor > > > > > callout and log a warning if the executor blocked for an extended amount > > > > > of time (extended=TBD). > > > > > > > > > > Other opinions so we can come to a consensus? > > > > > > > > > > > > > > > On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov > > > > > wrote: > > > > > > > > > > Hi Oslo Team, > > > > > > > > > > Can we retain “blocking” executor for now in Oslo Messaging? > > > > > > > > > > > > > > > Some background.. > > > > > > > > > > For a while we had to use Oslo Messaging with “blocking” executor in > > > > > Mistral because of incompatibility of MySQL driver with green > > > > > threads when choosing “eventlet” executor. Under certain conditions > > > > > we would get deadlocks between green threads. 
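(For concreteness, selecting an executor on the application side of oslo.messaging looks roughly like the sketch below. This is illustrative only, not Mistral's actual wiring, and EngineEndpoint is a hypothetical endpoint class.)

    # Minimal sketch: picking an oslo.messaging executor explicitly.
    # Illustrative only; not Mistral's actual code.
    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='mistral_engine', server='engine-1')
    endpoints = [EngineEndpoint()]  # hypothetical endpoint class

    # executor may be 'blocking' (deprecated), 'eventlet' or 'threading';
    # for 'threading', [DEFAULT]/executor_thread_pool_size controls the
    # pool size (a size of 1 approximates the old blocking behaviour).
    server = messaging.get_rpc_server(transport, target, endpoints,
                                      executor='threading')
    server.start()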
Some time ago we > > > > > switched to using PyMysql driver which is eventlet friendly and did > > > > > a number of tests that showed that we could safely switch to > > > > > “eventlet” executor (with that driver) so we introduced a new option > > > > > in Mistral where we could choose an executor in Oslo Messaging. The > > > > > corresponding bug is [1]. > > > > > > > > > > The issue is that we recently found that not everything actually > > > > > works as expected when using combination PyMysql + “eventlet” > > > > > executor. We also tried “threading” executor and the system *seems* > > > > > to work with it but surprisingly performance is much worse. > > > > > > > > > > Given all of that we’d like to ask Oslo Team not to remove > > > > > “blocking” executor for now completely, if that’s possible. We have > > > > > a strong motivation to switch to “eventlet” for other reasons > > > > > (parallelism => better performance etc.) but seems like we need some > > > > > time to make it smoothly. > > > > > > > > > > > > > > > [1] https://bugs.launchpad.net/mistral/+bug/1696469 > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Renat Akhmerov > > > > > @Nokia > > > > > __________________________________________________________________________ > > > > > OpenStack Development Mailing List (not for usage questions) > > > > > Unsubscribe: > > > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > > > -- > > > > > Ken Giusti  (kgiusti at gmail.com ) > > > > > > > > > > __________________________________________________________________________ > > > > > OpenStack Development Mailing List (not for usage questions) > > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Ken Giusti  (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From shu.mutow at gmail.com Mon Oct 22 02:17:34 2018 From: shu.mutow at gmail.com (Shu M.) Date: Mon, 22 Oct 2018 11:17:34 +0900 Subject: [openstack-dev] [zun][zun-ui] How to get "host" attribute for image In-Reply-To: References: Message-ID: Hongbin, Thank you for proposing the patch! I'll check it and proceed with the implementation for zun-ui now. Best regards, Shu On Fri, Oct 19, 2018 at 12:28, Hongbin Lu wrote: > Shu, > > It looks the 'host' field was added to the DB table but not exposed via > REST API by mistake. See if this patch fixes the issue: > https://review.openstack.org/#/c/611753/ . > > Best regards, > Hongbin > > On Thu, Oct 18, 2018 at 10:50 PM Shu M. wrote: > >> Hi folks, >> >> I found the following commit to show "host" attribute for image. >> >> >> https://github.com/openstack/zun/commit/72eac7c8f281de64054dfa07e3f31369c5a251f0 >> >> But I could not get the "host" for image with zun-show. >> >> I think image-list and image-show need to show "host" for admin, so I'd >> like to add "host" for image into zun-ui. >> Please let me know how to show "host" attribute.
>> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Mon Oct 22 03:45:55 2018 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00) Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> Message-ID: <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com> Jay and Melanie, It's my fault for the misunderstanding; I should have described my problem more clearly. My problem is not about migrating volumes between two ceph clusters. I have two clusters: one is an openstack cluster (all-in-one env, hostname dev) and the other is a ceph cluster. I'll omit the integration configuration for openstack and ceph.[1] The relevant part of cinder.conf is as follows:

[DEFAULT]
enabled_backends = rbd-1,rbd-2
......

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81

[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81

There will be two hosts named dev at rbd-1#ceph and dev at rbd-2#ceph. Then I create a volume type named 'ceph' with the command 'cinder type-create ceph' and add the extra spec 'volume_backend_name=ceph' to it with the command 'cinder type-key ceph set volume_backend_name=ceph'. I created a new vm and a new volume with type 'ceph' [so the volume is created on one of the two hosts; assume it was created on host dev at rbd-1#ceph this time]. The next step is to attach the volume to the vm. Finally I want to migrate the volume from host dev at rbd-1#ceph to host dev at rbd-2#ceph, but it failed with the exception 'NotImplementedError(_("Swap only supports host devices")'. So my real question is: is there any work to migrate an in-use volume (ceph rbd) from one host (pool) to another host (pool) in the same ceph cluster? The only difference between the spec[2] and my case is that the spec covers available volumes while mine is in-use. [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/ [2] https://review.openstack.org/#/c/296150 Cheers, Boxiang On 10/21/2018 23:19, Jay S.
Bryant wrote: Boxiang, I have not herd any discussion of extending this functionality for Ceph to work between different Ceph Clusters. I wasn't aware, however, that the existing spec was limited to one Ceph cluster. So, that is good to know. I would recommend reaching out to Jon Bernard or Eric Harney for guidance on how to proceed. They work closely with the Ceph driver and could provide insight. Jay On 10/19/2018 10:21 AM, Boxiang Zhu wrote: Hi melanie, thanks for your reply. The version of my cinder and nova is Rocky. The scope of the cinder spec[1] is only for available volume migration between two pools from the same ceph cluster. If the volume is in-use status[2], it will call the generic migration function. So that as you describe it, on the nova side, it raises NotImplementedError(_("Swap only supports host devices"). The get_config of net volume[3] has not source_path. So does anyone try to succeed to migrate volume(in-use) with ceph backend or is anyone doing something of it? [1] https://review.openstack.org/#/c/296150 [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py [3] https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101 Cheers, Boxiang On 10/19/2018 22:39,melanie witt wrote: On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote: When I use the LVM backend to create the volume, then attach it to a vm. I can migrate the volume(in-use) from one host to another. The nova libvirt will call the 'rebase' to finish it. But if using ceph backend, it raises exception 'Swap only supports host devices'. So now it does not support to migrate volume(in-use). Does anyone do this work now? Or Is there any way to let me migrate volume(in-use) with ceph backend? What version of cinder and nova are you using? I found this question/answer on ask.openstack.org: https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/ and it looks like there was some work done on the cinder side [1] to enable migration of in-use volumes with ceph semi-recently (Queens). On the nova side, the code looks for the source_path in the volume config, and if there is not one present, it raises NotImplementedError(_("Swap only supports host devices"). So in your environment, the volume configs must be missing a source_path. If you are using at least Queens version, then there must be something additional missing that we would need to do to make the migration work. [1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate Cheers, -melanie __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Mon Oct 22 08:21:10 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 22 Oct 2018 10:21:10 +0200 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: <45f7d6f7-b5ed-98e4-c005-07d4916f88ae@redhat.com> +1 On 10/19/18 3:44 PM, Alex Schultz wrote: > +1 > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi wrote: >> >> On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles wrote: >>> >>> I would like to propose Bob Fournier (bfournie) as a core reviewer in >>> TripleO. His patches and reviews have spanned quite a wide range in our >>> project, his reviews show great insight and quality and I think he would >>> be a addition to the core team. >>> >>> What do you folks think? >> >> >> Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been critical in all aspects of Hardware Provisioning integration but also in other TripleO bits. >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tobias.urdin at binero.se Mon Oct 22 08:22:15 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 22 Oct 2018 10:22:15 +0200 Subject: [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: Message-ID: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Hello, I've been having a lot of issues with SSL certificates myself, on my second trip now trying to get it working. Previously I spent a lot of time walking through every line in the DevStack plugin and fixing my config options, used the generate script [1], and still it didn't work. When I got the "invalid padding" issue it was because of the DN I used for the CA and the certificate IIRC. > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > 19:47 < tobias-urdin> after a quick google "The problem was that my CA DN was the same as the certificate DN." IIRC I think that solved it, but then again I wouldn't remember fully since I've been at so many different angles by now. Here is my IRC logs history from the #openstack-lbaas channel, perhaps it can help you out http://paste.openstack.org/show/732575/ ----- Sorry for hijacking the thread but I'm stuck as well. I've in the past tried to generate the certificates with [1] but now moved on to using the openstack-ansible way of generating them [2] with some modifications. Right now I'm just getting: Could not connect to instance. Retrying.: SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) from the amphoras, haven't got any further but I've eliminated a lot of stuff in the middle.
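A quick way to check for the "CA DN was the same as the certificate DN" collision mentioned above is to compare the subject and issuer fields with plain openssl (the file names here are illustrative):

    openssl x509 -noout -subject -in ca_01.pem
    openssl x509 -noout -subject -issuer -in client.pem
    # If the client certificate's subject DN is identical to the CA's
    # subject DN, you are hitting the DN collision described above.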
Tried deploying Octavia on Ubuntu with python3 to just make sure there wasn't an issue with CentOS and OpenSSL versions since it tends to lag behind. Checking the amphora with openssl s_client [3] gives the same error, but the verification is successful; I just don't understand what the bad signature part is about. From browsing some OpenSSL code it seems to be related to RSA signatures somehow. 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad signature:s3_clnt.c:2032: So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm back to something related to the certificates or the communication between the endpoints, or what actually responds inside the amphora (gunicorn IIUC?). Based on the "verify" functions actually causing that bad signature error I would assume it's the generated certificate that the amphora presents that is causing it. I'll have to continue the troubleshooting inside the amphora. I've used the test-only amphora image before but have now built my own one that is using the amphora-agent from the actual stable branch, but same issue (bad signature). For completeness, these are the config options set for the certificates in octavia.conf and which file each was copied from [4]; same here, a replication of what openstack-ansible does. Appreciate any feedback or help :) Best regards Tobias [1] https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh [2] http://paste.openstack.org/show/732483/ [3] http://paste.openstack.org/show/732486/ [4] http://paste.openstack.org/show/732487/ On 10/20/2018 01:53 AM, Michael Johnson wrote: > Hi Erik, > > Sorry to hear you are still having certificate issues. > > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > network for the VIP, one of the first steps after the worker connects > to the amphora agent is finishing the required configuration of the > VIP interface inside the network namespace on the amphroa. > > If I remember correctly, you are attempting to configure Octavia with > the dual CA option (which is good for non-development use). > > This is what I have for notes: > > [certificates] gets the following: > cert_generator = local_cert_generator > ca_certificate = server CA's "server.pem" file > ca_private_key = server CA's "server.key" file > ca_private_key_passphrase = pass phrase for ca_private_key > [controller_worker] > client_ca = Client CA's ca_cert file > [haproxy_amphora] > client_cert = Client CA's client.pem file (I think with it's key > concatenated is what rm_work said the other day) > server_ca = Server CA's ca_cert file > > That said, I can probably run through this and write something up next > week that is more step-by-step/detailed. > > Michael > > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > wrote: >> Apologies for cross-posting, but in the event that these might be >> worth filing as bugs, I wanted the Octavia devs to see it as well... >> >> I've been wrestling with getting Octavia up and running and have >> become stuck on two issues. I'm hoping someone has run into these >> before. My google foo has come up empty. >> >> Issue 1: >> When the Octavia controller tries to poll the amphora instance, it >> tries repeatedly and eventually fails. The error on the controller >> side is: >> >> 2018-10-19 14:17:39.181 26 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >> retries (currently set to 300) exhausted. The amphora is unavailable.
>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >> SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >> 'tls_process_server_certificate', 'certificate verify >> failed')],)",),)) >> >> On the amphora side I see: >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >> failure (_ssl.c:1754) >> >> I've generated certificates both with the script in the Octavia git >> repo, and with the Openstack Ansible playbook. I can see that they are >> present in /etc/octavia/certs. >> >> I'm using the Kolla (Queens) containers for the control plane so I'm >> sure I've satisfied all the python library constraints. >> >> Issue 2: >> I"m not sure how it gets configured, but the tenant network interface >> (ens6) never comes up. I can spawn other instances on that network >> with no issue, and I can see that Neutron has the port attached to the >> instance. However, in the instance this is all I get: >> >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> group default qlen 1 >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> inet 127.0.0.1/8 scope host lo >> valid_lft forever preferred_lft forever >> inet6 ::1/128 scope host >> valid_lft forever preferred_lft forever >> 2: ens3: mtu 9000 qdisc pfifo_fast >> state UP group default qlen 1000 >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >> valid_lft forever preferred_lft forever >> inet6 fe80::f816:3eff:fe30:c460/64 scope link >> valid_lft forever preferred_lft forever >> 3: ens6: mtu 1500 qdisc noop state DOWN group >> default qlen 1000 >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >> >> There's no evidence of the interface anywhere else including udev rules. >> >> Any help with either or both issues would be greatly appreciated. 
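Pulling Michael's notes from earlier in this thread together, the dual-CA pieces of octavia.conf come out roughly like this (a sketch only; the file paths are illustrative):

    [certificates]
    cert_generator = local_cert_generator
    ca_certificate = /etc/octavia/certs/server_ca.cert.pem
    ca_private_key = /etc/octavia/certs/server_ca.key.pem
    ca_private_key_passphrase = <pass phrase for ca_private_key>

    [controller_worker]
    client_ca = /etc/octavia/certs/client_ca.cert.pem

    [haproxy_amphora]
    # client cert with its private key concatenated into the same file
    client_cert = /etc/octavia/certs/client.cert-and-key.pem
    server_ca = /etc/octavia/certs/server_ca.cert.pem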
>> >> Cheers, >> Erik >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tobias.urdin at binero.se Mon Oct 22 08:25:36 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 22 Oct 2018 10:25:36 +0200 Subject: [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: +operators My bad. On 10/22/2018 10:22 AM, Tobias Urdin wrote: > Hello, > > I've been having a lot of issues with SSL certificates myself, on my > second trip now trying to get it working. > > Before I spent a lot of time walking through every line in the DevStack > plugin and fixing my config options, used the generate > script [1] and still it didn't work. > > When I got the "invalid padding" issue it was because of the DN I used > for the CA and the certificate IIRC. > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > 19:47 < tobias-urdin> after a quick google "The problem was that my > CA DN was the same as the certificate DN." > > IIRC I think that solved it, but then again I wouldn't remember fully > since I've been at so many different angles by now. > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > it can help you out > http://paste.openstack.org/show/732575/ > > ----- > > Sorry for hijacking the thread but I'm stuck as well. > > I've in the past tried to generate the certificates with [1] but now > moved on to using the openstack-ansible way of generating them [2] > with some modifications. > > Right now I'm just getting: Could not connect to instance. Retrying.: > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > from the amphoras, haven't got any further but I've eliminated a lot of > stuck in the middle. > > Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > behind. > Checking the amphora with openssl s_client [3] it gives the same one, > but the verification is successful just that I don't understand what the > bad signature > part is about, from browsing some OpenSSL code it seems to be related to > RSA signatures somehow. > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > signature:s3_clnt.c:2032: > > So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > back to something related > to the certificates or the communication between the endpoints, or what > actually responds inside the amphora (gunicorn IIUC?). 
Based on the > "verify" functions actually causing that bad signature error I would > assume it's the generated certificate that the amphora presents that is > causing it. > > I'll have to continue the troubleshooting to the inside of the amphora, > I've used the test-only amphora image before but have now built my own > one that is > using the amphora-agent from the actual stable branch, but same issue > (bad signature). > > For verbosity this is the config options set for the certificates in > octavia.conf and which file it was copied from [4], same here, a > replication of what openstack-ansible does. > > Appreciate any feedback or help :) > > Best regards > Tobias > > [1] > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > [2] http://paste.openstack.org/show/732483/ > [3] http://paste.openstack.org/show/732486/ > [4] http://paste.openstack.org/show/732487/ > > On 10/20/2018 01:53 AM, Michael Johnson wrote: >> Hi Erik, >> >> Sorry to hear you are still having certificate issues. >> >> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant >> network for the VIP, one of the first steps after the worker connects >> to the amphora agent is finishing the required configuration of the >> VIP interface inside the network namespace on the amphroa. >> >> If I remember correctly, you are attempting to configure Octavia with >> the dual CA option (which is good for non-development use). >> >> This is what I have for notes: >> >> [certificates] gets the following: >> cert_generator = local_cert_generator >> ca_certificate = server CA's "server.pem" file >> ca_private_key = server CA's "server.key" file >> ca_private_key_passphrase = pass phrase for ca_private_key >> [controller_worker] >> client_ca = Client CA's ca_cert file >> [haproxy_amphora] >> client_cert = Client CA's client.pem file (I think with it's key >> concatenated is what rm_work said the other day) >> server_ca = Server CA's ca_cert file >> >> That said, I can probably run through this and write something up next >> week that is more step-by-step/detailed. >> >> Michael >> >> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick >> wrote: >>> Apologies for cross-posting, but in the event that these might be >>> worth filing as bugs, I wanted the Octavia devs to see it as well... >>> >>> I've been wrestling with getting Octavia up and running and have >>> become stuck on two issues. I'm hoping someone has run into these >>> before. My google foo has come up empty. >>> >>> Issue 1: >>> When the Octavia controller tries to poll the amphora instance, it >>> tries repeatedly and eventually fails. The error on the controller >>> side is: >>> >>> 2018-10-19 14:17:39.181 26 ERROR >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >>> retries (currently set to 300) exhausted. The amphora is unavailable. 
>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >>> SSLError(SSLError("bad handshake: Error([('rsa routines', >>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>> 'tls_process_server_certificate', 'certificate verify >>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>> 'tls_process_server_certificate', 'certificate verify >>> failed')],)",),)) >>> >>> On the amphora side I see: >>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >>> failure (_ssl.c:1754) >>> >>> I've generated certificates both with the script in the Octavia git >>> repo, and with the Openstack Ansible playbook. I can see that they are >>> present in /etc/octavia/certs. >>> >>> I'm using the Kolla (Queens) containers for the control plane so I'm >>> sure I've satisfied all the python library constraints. >>> >>> Issue 2: >>> I"m not sure how it gets configured, but the tenant network interface >>> (ens6) never comes up. I can spawn other instances on that network >>> with no issue, and I can see that Neutron has the port attached to the >>> instance. However, in the instance this is all I get: >>> >>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >>> group default qlen 1 >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >>> inet 127.0.0.1/8 scope host lo >>> valid_lft forever preferred_lft forever >>> inet6 ::1/128 scope host >>> valid_lft forever preferred_lft forever >>> 2: ens3: mtu 9000 qdisc pfifo_fast >>> state UP group default qlen 1000 >>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >>> valid_lft forever preferred_lft forever >>> inet6 fe80::f816:3eff:fe30:c460/64 scope link >>> valid_lft forever preferred_lft forever >>> 3: ens6: mtu 1500 qdisc noop state DOWN group >>> default qlen 1000 >>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >>> >>> There's no evidence of the interface anywhere else including udev rules. >>> >>> Any help with either or both issues would be greatly appreciated. 
>>> >>> Cheers, >>> Erik >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > From shardy at redhat.com Mon Oct 22 08:27:46 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 22 Oct 2018 09:27:46 +0100 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: On Fri, Oct 19, 2018 at 1:24 PM Juan Antonio Osorio Robles wrote: > > Hello! > > > I would like to propose Bob Fournier (bfournie) as a core reviewer in > TripleO. His patches and reviews have spanned quite a wide range in our > project, his reviews show great insight and quality and I think he would > be a addition to the core team. > > What do you folks think? +1! From skramaja at redhat.com Mon Oct 22 09:34:54 2018 From: skramaja at redhat.com (Saravanan KR) Date: Mon, 22 Oct 2018 15:04:54 +0530 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: +1 Regards, Saravanan KR On Fri, Oct 19, 2018 at 5:53 PM Juan Antonio Osorio Robles wrote: > > Hello! > > > I would like to propose Bob Fournier (bfournie) as a core reviewer in > TripleO. His patches and reviews have spanned quite a wide range in our > project, his reviews show great insight and quality and I think he would > be a addition to the core team. > > What do you folks think? > > > Best Regards > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Mon Oct 22 10:55:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 22 Oct 2018 11:55:19 +0100 (BST) Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> Message-ID: On Fri, 19 Oct 2018, Thierry Carrez wrote: > Ed Leafe wrote: >> On Oct 15, 2018, at 7:40 AM, Chris Dent wrote: >>> >>> I'd like some input from the community on how we'd like this to go. >> >> I would say it depends on the long-term plans for paste. Are we planning on >> weaning ourselves off of paste, and simply need to maintain it until that >> can be completed, or are we planning on encouraging its use? > > Agree with Ed... is this something we plan to minimally maintain because we > depend on it, something that needs feature work and that we want to encourage > the adoption of, or something that we want to keep on life-support while we > move away from it? That is indeed the question. I was rather hoping that some people who are using paste (besides Keystone) would chime in here with what they would like to do. 
My preference would be that we immediately start moving away from it and keep paste barely on life-support (a bit like WSME which I also somehow managed to get involved with despite thinking it is horrible). However, that's not easy to do because the paste.ini files have to be considered config because of the way some projects and deployments use them to drive custom middleware and the ordering of middleware. So we're in for at least a year or so. > My assumption is that it's "something we plan to minimally maintain because > we depend on it". in which case all options would work: the exact choice > depends on whether there is anybody interested in helping maintaining it, and > where those contributors prefer to do the work. Thus far I'm not hearing any volunteers. If that continues to be the case, I'll just keep it on bitbucket as that's the minimal change. My concern with that is my aforementioned feelings of "it is horrible". It might be better if someone who actually appreciates Paste was involved as well. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Mon Oct 22 11:06:25 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 22 Oct 2018 12:06:25 +0100 (BST) Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <4678c020-ba2d-4086-b1b5-f57286ae5a6d@redhat.com> References: <4678c020-ba2d-4086-b1b5-f57286ae5a6d@redhat.com> Message-ID: On Fri, 19 Oct 2018, Zane Bitter wrote: > Just to make it easier to visualise, here is an example for how the Zuul > config _might_ look now if we had adopted this proposal during Rocky: > > https://review.openstack.org/611947 > > And instead of having a project-wide goal in Stein to add > `openstack-python36-jobs` to the list that currently includes > `openstack-python35-jobs` in each project's Zuul config[1], we'd have had a > goal to change `openstack-python3-rocky-jobs` to > `openstack-python3-stein-jobs` in each project's Zuul config. I like this, because it involves conscious actions, awareness and self-testing by each project to move forward to a thing with a reasonable name (the cycle name). I don't think we should call that "churn". "Intention" might be a better word. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From beagles at redhat.com Mon Oct 22 11:46:56 2018 From: beagles at redhat.com (Brent Eagles) Date: Mon, 22 Oct 2018 09:16:56 -0230 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: +1 Cheers, Brent On Mon, Oct 22, 2018 at 7:06 AM Saravanan KR wrote: > +1 > > Regards, > Saravanan KR > On Fri, Oct 19, 2018 at 5:53 PM Juan Antonio Osorio Robles > wrote: > > > > Hello! > > > > > > I would like to propose Bob Fournier (bfournie) as a core reviewer in > > TripleO. His patches and reviews have spanned quite a wide range in our > > project, his reviews show great insight and quality and I think he would > > be a addition to the core team. > > > > What do you folks think? 
> > > > > > Best Regards > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Mon Oct 22 11:50:56 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Mon, 22 Oct 2018 14:50:56 +0300 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: <5bc322a2-da57-7dbb-e2fb-2d10043bf23d@redhat.com> On 10/19/18 8:04 PM, Alex Schultz wrote: > On Fri, Oct 19, 2018 at 10:53 AM James Slagle wrote: >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: >>> Additionally I took a stab at combining the puppet/docker service >>> definitions for the aodh services in a similar structure to start >>> reducing the overhead we've had from maintaining the docker/puppet >>> implementations seperately. You can see the patch >>> https://review.openstack.org/#/c/611188/ for an additional example of >>> this. >> That patch takes the approach of removing baremetal support. Is that >> what we agreed to do? >> > Since it's deprecated since Queens[0], yes? I think it is time to stop > continuing this method of installation. Given that I'm not even sure > the upgrade process even works anymore with baremetal, I don't think > there's a reason to keep it as it directly impacts the time it takes > to perform deployments and also contributes to increased complexity > all around. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html As an advantage to removing baremetal support, our nested stack usage would be a little lighter and this might actually help out deployment times and resource usage. I like the idea of going ahead and starting to flatten the stacks for our services. > >> I'm not specifically opposed, as I'm pretty sure the baremetal >> implementations are no longer tested anywhere, but I know that Dan had >> some concerns about that last time around. >> >> The alternative we discussed was using jinja2 to include common >> data/tasks in both the puppet/docker/ansible implementations. That >> would also result in reducing the number of Heat resources in these >> stacks and hopefully reduce the amount of time it takes to >> create/update the ServiceChain stacks. >> > I'd rather we officially get rid of the one of the two methods and > converge on a single method without increasing the complexity via > jinja to continue to support both. If there's an improvement to be had > after we've converged on a single structure for including the base > bits, maybe we could do that then? 
> > Thanks, > -Alex > >> -- >> -- James Slagle >> -- >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chkumar246 at gmail.com Mon Oct 22 12:55:00 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 22 Oct 2018 18:25:00 +0530 Subject: [openstack-dev] [tripleo][ui][tempest][oooq] Refreshing plugins from git In-Reply-To: <4b61614b-fbd3-3da0-3f60-c9a8516c3844@redhat.com> References: <20181018001743.swmj3icwzlezoqdd@localhost.localdomain> <4b61614b-fbd3-3da0-3f60-c9a8516c3844@redhat.com> Message-ID: Hello Honza, On Thu, Oct 18, 2018 at 6:15 PM Bogdan Dobrelya wrote: > > On 10/18/18 2:17 AM, Honza Pokorny wrote: > > Hello folks, > > > > I'm working on the automated ui testing blueprint[1], and I think we > > need to change the way we ship our tempest tests. > > > > Here is where things stand at the moment: > > > > * We have a kolla image for tempest > > * This image contains the tempest rpm, and the openstack-tempest-all rpm > > * The openstack-tempest-all package in turn contains all of the > > openstack tempest plugins > > * Each of the plugins is shipped as an rpm > > > > So, in order for a new test in tempest-tripleo-ui to appear in CI we > > have to go through at least the following tests: > > > > * New tempest-tripleo-ui rpm > > * New openstack-tempest-all rpm > > * New tempest kolla image > > > > This could easily take a week, if not more. > > > > What I would like to build is something like the following: > > > > * Add an option to the tempest-setup.sh script in tripleo-quickstart to > > refresh all tempest plugins from git before running any tests > > * Optionally specify a zuul change for any of the plugins being > > refreshed > > * Hook up the test job to patches in tripleo-ui (which tests in > > tempest-tripleo-ui are testing) so that I can run a fix and its test > > in a single CI job I have added a patch in TripleO Quickstart extras Validate-tempest role: https://review.openstack.org/#/c/612377/ to install any tempest plugin from git and zuul will pick the specific change in the gates. Here is the patch on how to test it with FS: https://review.openstack.org/612386 Basically in any FS, we can add following lines tempest_format: venv tempest_plugins_git: - 'https://git.openstack.org/openstack/tempest-tripleo-ui.git' the respective FS related job will install the tempest plugin and we can also use test_white_regex: to trigger the tempest tests. I think it will solve the problem. 
Thanks Chandan Kumar From corey.bryant at canonical.com Mon Oct 22 14:00:44 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Mon, 22 Oct 2018 10:00:44 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <68457976-faab-26e3-74da-ee7ebbd20bdd@redhat.com> References: <1539966611.2016714.1547916472.26717683@webmail.messagingengine.com> <68457976-faab-26e3-74da-ee7ebbd20bdd@redhat.com> Message-ID: On Fri, Oct 19, 2018 at 3:46 PM Zane Bitter wrote: > On 19/10/18 12:30 PM, Clark Boylan wrote: > > On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote: > >> Unit Tests > >> ---------- > >> > >> For unit tests, the most important thing is to test on the versions of > >> Python we target. It's less important to be using the exact distro that > >> we want to target, because unit tests generally won't interact with > >> stuff outside of Python. > >> > >> I'd like to propose that we handle this by setting up a unit test > >> template in openstack-zuul-jobs for each release. So for Stein we'd have > >> openstack-python3-stein-jobs. This template would contain: > > > > Because zuul config is branch specific we could set up every project to > use a `openstack-python3-jobs` template then define that template > differently on each branch. This would mean you only have to update the > location where the template is defined and not need to update every other > project after cutting a stable branch. I would suggest we take advantage of > that to reduce churn. > > There was a reason I didn't propose that approach: in practice you can't > add a new gating test to a centralised zuul template definition. If you > do, many projects will break because the change is not self-testing. At > best you'll be pitchforked by an angry mob of people who can't get > anything but py37 fixes through the gate, and at worst they'll all stop > using the template to get unblocked and then never go back to it. > > We don't need everyone to cut over at the same time. We just need them > to do it in the space of one release cycle. One patch every 6 months is > not an excessive amount of churn. > > >> * A voting gate job for the highest minor version of py3 we want to > >> support in that release. > >> * A voting gate job for the lowest minor version of py3 we want to > >> support in that release. > >> * A periodic job for any interim minor releases. > >> * (Starting late in the cycle) a non-voting check job for the highest > >> minor version of py3 we want to support in the *next* release (if > >> different), on the master branch only. > >> > >> So, for example, (and this is still under active debate) for Stein we > >> might have gating jobs for py35 and py37, with a periodic job for py36. > >> The T jobs might only have voting py36 and py37 jobs, but late in the T > >> cycle we might add a non-voting py38 job on master so that people who > >> haven't switched to the U template yet can see what, if anything, > >> they'll need to fix. > >> > >> We'll run the unit tests on any distro we can find that supports the > >> version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian > >> unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a > >> particular Python version before trying to test it. > >> > >> Before the start of each cycle, the TC would determine which range of > >> versions we want to support, on the basis of the latest one we can find > >> in any distro and the earliest one we're likely to need in one of the > >> supported Linux distros. 
There will be a project-wide goal to switch the > >> testing template from e.g. openstack-python3-stein-jobs to > >> openstack-python3-treasure-jobs for every repo before the end of the > >> cycle. We'll have goal champions as usual following up and helping teams > >> with the process. We'll know where the problem areas are because we'll > >> have added non-voting jobs for any new Python versions to the previous > >> release's template. > > > > I don't know that this needs to be a project wide goal if you can just > update the template on the master branch where the template is defined. Do > that then every project is now running with the up to date version of the > template. We should probably advertise when this is happening with some > links to python version x.y breakages/features, but the process itself > should be quick. > > Either way, it'll be project teams themselves fixing any broken tests > due to a new version being added. So we can either have a formal > project-wide goal where we project-manage that process across the space > of a release, or a de-facto project-wide goal where we break everybody > and then nothing gets merged until they fix it. > > > As for python version range selection I worry that that the criteria > about relies on too much guesswork. > > Some guesswork is going to be inevitable, unfortunately, (we have no way > of knowing what will be in CentOS 8, for example) but I agree that we > should try to tighten up the criteria as much as possible. > > > I do think we should do our best to test future incoming versions of > python even while not officially supporting them. We will have to support > them at some point, either directly or via some later version that includes > the changes from that intermediate version. > > +1, I think we should try to add support for higher versions as soon as > possible. It may take a long time to get into an LTS release, but > there's bound to be _some_ distro out there where people want to use it. > (Case in point: Debian really wanted py37 support in Rocky, at which > point a working 3.7 wasn't even available in _any_ Ubuntu release, let > alone an LTS). That's why I said "the latest one we can find in any > distro" - if we have any way to test it at all then we should. > > > Could the criteria be: > > Support the lowest version of python on a supported distro release and > the highest version of python on a supported distro > > As of now we can't objectively determine the minimum version because it > isn't the future yet. That may change once every distro is on Python 3 > though. > > > and test (but not support so we can drop testing for this python version > on stable branches) the current latest release of python? > > That's certainly worth considering; we could tighten up the range on > stable branches. I think we'd need to hear from Ubuntu and Debian folks > what they think about that. My guess is that they'd prefer even stable > branches to continue testing recent versions, so that they could be used > on Debian unstable and on Ubuntu between LTS releases. > > Zane and all, thanks very much for pushing this initiative forward. The general approach makes sense to me. Proactively testing with the latest python versions would be awesome and would close a much needed gap. As for testing stable branches with the latest Python versions. I believe this refers to the latest Python version available anywhere, ie. perhaps an unstable release from sid. 
In general it would make sense to continue testing the most recent stable release the same way it was tested when it was master. There's a good chance that by the time stable/xyz branches are cut from master, the unstable Python version that was being tested in master is now in a stable distro release. For example, for the OpenStack U release this will most likely be the case for Ubuntu with Python 3.8. Here are the versions we support in Ubuntu, and (most likely) will support in the future, in case it's helpful: Queens: py3.5 (16.04); py3.6 (18.04) Rocky: py3.6 (18.04); py3.6 (18.10) Stein: py3.6 (18.04); py3.7 (19.04) T: py3.6 (18.04); py3.7 (19.10) U: py3.6 (18.04); py3.8 (20.04) V: py3.8 (20.04); py3.? (20.10) Thanks, Corey > > This is objective and doesn't require anyone to guess at what versions > need to be supported. > > > >> > >> Integration Tests > >> ----------------- > >> > >> Integration tests do test, amongst other things, integration with > >> non-openstack-supplied things in the distro, so it's important that we > >> test on the actual distros we have identified as popular.[2] It's also > >> important that every project be testing on the same distro at the end of > >> a release, so we can be sure they all work together for users. > >> > >> When a new release of CentOS or a new LTS release of Ubuntu comes out, > >> the TC will create a project-wide goal for the *next* release cycle to > >> switch all integration tests over to that distro. It's up to individual > >> projects to make the switch for the tests that they own (e.g. it'd be > >> the QA team for Tempest, but other individual projects for their own > >> jobs). Again, there'll be a goal champion to monitor and follow up. > >> > >> > >> [1] > >> > https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html > >> [2] > >> > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > > > Overall this approach seems fine. It is basically the approach we've > used in the past with the addition of explicit testing of future python > (which we've never really done before). I do think we need to avoid letting > every project move at their own pace. This is what we did with the last > python and distro switchover (trusty to xenial) and a number of projects > failed to get it done within the release cycle. Instead I think we should > rely on shared branch-specific zuul templates that can be updated centrally > when the next release is cut. > > For those just joining, we discussed this on IRC yesterday.[1] fungi > mentioned that we tried two different approaches for precise->trusty and > trusty->xenial, and they both failed in different ways. The first time > infra gave teams time to prepare and then eventually cut the build over > for everyone, with the result that lots of things broke. The second time > infra allowed teams to switch over in their own time, with the result > that a lot of things released with tests running on an outdated LTS > release. > > We do have one new tool in our toolbox: project-wide goals. They're not > magically going to solve the problem, but at least they give us some > visibility into what has and has not happened. Perhaps we could provide > periodic, rather than experimental, jobs to test with so that the goal > champions can track where the likely problems are in the lead-up to a > switchover? As far as the LTS distro goes, I don't have a strong opinion > on which approach is the least worst.
I think if we want to have a > switchover (just after milestone-2 maybe?) then we should just write > that into the project-wide goal. > > cheers, > Zane. > > [1] > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-18.log.html#t2018-10-18T15:16:13 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Oct 22 14:33:49 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 22 Oct 2018 16:33:49 +0200 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: Message-ID: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> On 10/19/18 5:17 PM, Zane Bitter wrote: > We have traditionally held to the principle that we want each release to > support the latest release of CentOS and the latest LTS release of > Ubuntu, as they existed at the beginning of the release cycle.[2] > Currently this means in practice one version of py2 and one of py3, but > in the future it will mean two, usually different, versions of py3. It's not very nice to forget about the Debian case, which usually closely precedes Ubuntu. If you want to support Ubuntu better, then supporting Debian better helps. I usually get the issue before everyone, as Sid is the distro which is updated the most often. Therefore, please make sure to include Debian in your proposal. > For unit tests, the most important thing is to test on the versions of > Python we target. It's less important to be using the exact distro that > we want to target, because unit tests generally won't interact with > stuff outside of Python. One of the recurring problems I'm facing in Debian is that not only is the Python 3 version lagging behind, but OpenStack dependencies are also lagging behind the distro. Often, the answer is "we don't support this or that version of X", which of course is very frustrating. One thing which would be super nice would be a non-voting gate job that tests with the latest version of every Python dependency as well, so we get to see breakage early. We've stopped seeing them since we decided it broke too often and we would hide problems behind the global-requirements thing. And sometimes, we have weird interactions. For example, taskflow was broken in Python 3.7 before this patch: https://salsa.debian.org/openstack-team/libs/python-taskflow/commit/6a10261a8a147d901c07a6e7272dc75b9f4d0988 which broke multiple packages using it. Funny thing: it looks like it wouldn't have happened if we didn't have a pre-version of Python 3.7.1 in Sid, apparently. Anyway, this can happen again. > So, for example, (and this is still under active debate) for Stein we > might have gating jobs for py35 and py37, with a periodic job for py36. > The T jobs might only have voting py36 and py37 jobs, but late in the T > cycle we might add a non-voting py38 job on master so that people who > haven't switched to the U template yet can see what, if anything, > they'll need to fix. This can only happen if we have supporting distribution packages for it. IMO, this is a call for using Debian Testing or even Sid in the gate. > We'll run the unit tests on any distro we can find that supports the > version of Python we want.
It could be a non-LTS Ubuntu, Fedora, Debian > unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a > particular Python version before trying to test it. I very much agree with that. > Before the start of each cycle, the TC would determine which range of > versions we want to support, on the basis of the latest one we can find > in any distro and the earliest one we're likely to need in one of the > supported Linux distros. Releases of Python aren't aligned with OpenStack cycles. Python 3.7 appeared late in the Rocky cycle. Therefore, unfortunately, doing what you propose above doesn't address the issue. > Integration Tests > ----------------- > > Integration tests do test, amongst other things, integration with > non-openstack-supplied things in the distro, so it's important that we > test on the actual distros we have identified as popular.[2] It's also > important that every project be testing on the same distro at the end of > a release, so we can be sure they all work together for users. I find it very disturbing to see the project leaning toward only these 2 distributions. Why not SuSE & Debian? Cheers, Thomas Goirand (zigo) From zigo at debian.org Mon Oct 22 14:38:20 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 22 Oct 2018 16:38:20 +0200 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> Message-ID: <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> On 10/22/18 12:55 PM, Chris Dent wrote: >> My assumption is that it's "something we plan to minimally maintain >> because we depend on it". in which case all options would work: the >> exact choice depends on whether there is anybody interested in helping >> maintaining it, and where those contributors prefer to do the work. > > Thus far I'm not hearing any volunteers. If that continues to be the > case, I'll just keep it on bitbucket as that's the minimal change. Could you please move it to GitHub, so that, at least, it's easier to check out? Mercurial is always a pain... Cheers, Thomas Goirand (zigo) From emccormick at cirrusseven.com Mon Oct 22 14:44:16 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 22 Oct 2018 10:44:16 -0400 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: > > Hello, > > I've been having a lot of issues with SSL certificates myself, on my > second trip now trying to get it working. > > Before this I spent a lot of time walking through every line in the DevStack > plugin and fixing my config options, used the generate > script [1], and still it didn't work. > > When I got the "invalid padding" issue it was because of the DN I used > for the CA and the certificate IIRC. > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > 19:47 < tobias-urdin> after a quick google "The problem was that my > CA DN was the same as the certificate DN."
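A quick way to sanity-check a certificate bundle for that DN pitfall -- a minimal sketch, assuming the Python `cryptography` library is installed, with placeholder file names rather than any project's actual paths:

    # Warn when a CA and an issued certificate share the same subject DN,
    # the condition behind the padding/bad signature handshake errors above.
    # 'ca.pem' and 'server.pem' are placeholders for your own files.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    def load_subject(path):
        # Return the subject DN of a PEM-encoded certificate.
        with open(path, 'rb') as f:
            cert = x509.load_pem_x509_certificate(f.read(), default_backend())
        return cert.subject

    ca_subject = load_subject('ca.pem')
    server_subject = load_subject('server.pem')

    if ca_subject == server_subject:
        print('WARNING: CA and issued certificate share the same subject DN; '
              'OpenSSL can reject the chain with padding/bad signature errors.')

If the two subjects compare equal, regenerating the certificate (or the CA) with a distinct DN is the likely fix.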
> > IIRC I think that solved it, but then again I wouldn't remember fully > since I've been at so many different angles by now. > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > it can help you out > http://paste.openstack.org/show/732575/ > Tobias, I owe you a beer. This was precisely the issue. I'm deploying Octavia with kolla-ansible. It only deploys a single CA. After hacking the templates and playbook to incorporate a separate server CA, the amphorae now load and provision the required namespace. I'm adding a kolla tag to the subject of this in hopes that someone might want to take on changing this behavior in the project. Hopefully after I get through Upstream Institute in Berlin I'll be able to do it myself if nobody else wants to do it. For certificate generation, I extracted the contents of octavia_certs_install.yml (which sets up the directory structure, openssl.cnf, and the client CA), and octavia_certs.yml (which creates the server CA and the client certificate) and mashed them into a separate playbook just for this purpose. At the end I get: ca_01.pem - Client CA Certificate ca_01.key - Client CA Key ca_server_01.pem - Server CA Certificate cakey.pem - Server CA Key client.pem - Concatenated Client Key and Certificate If it would help to have the playbook, I can stick it up on github with a huge "This is a hack" disclaimer on it. > ----- > > Sorry for hijacking the thread but I'm stuck as well. > > I've in the past tried to generate the certificates with [1] but now > moved on to using the openstack-ansible way of generating them [2] > with some modifications. > > Right now I'm just getting: Could not connect to instance. Retrying.: > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > from the amphoras; haven't got any further but I've eliminated a lot of > stuff in the middle. > > Tried deploying Octavia on Ubuntu with python3 to just make sure there > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > behind. > Checking the amphora with openssl s_client [3] it gives the same error, > but the verification is successful; I just don't understand what the > bad signature > part is about. From browsing some OpenSSL code it seems to be related to > RSA signatures somehow. > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > signature:s3_clnt.c:2032: > > So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > back to something related > to the certificates or the communication between the endpoints, or what > actually responds inside the amphora (gunicorn IIUC?). Based on the > "verify" functions actually causing that bad signature error I would > assume it's the generated certificate that the amphora presents that is > causing it. > > I'll have to continue the troubleshooting to the inside of the amphora, > I've used the test-only amphora image before but have now built my own > one that is > using the amphora-agent from the actual stable branch, but same issue > (bad signature). > > For verbosity, these are the config options set for the certificates in > octavia.conf and which file each was copied from [4]; same here, a > replication of what openstack-ansible does.
> > Appreciate any feedback or help :) > > Best regards > Tobias > > [1] > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > [2] http://paste.openstack.org/show/732483/ > [3] http://paste.openstack.org/show/732486/ > [4] http://paste.openstack.org/show/732487/ > > On 10/20/2018 01:53 AM, Michael Johnson wrote: > > Hi Erik, > > > > Sorry to hear you are still having certificate issues. > > > > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > > network for the VIP, one of the first steps after the worker connects > > to the amphora agent is finishing the required configuration of the > > VIP interface inside the network namespace on the amphora. > > Thanks for the hint on the workflow of this. I hadn't gotten deep enough into the code to find that yet, but I suspected it was blocking since the namespace never got created either. Thanks > > If I remember correctly, you are attempting to configure Octavia with > > the dual CA option (which is good for non-development use). > > > > This is what I have for notes: > > > > [certificates] gets the following: > > cert_generator = local_cert_generator > > ca_certificate = server CA's "server.pem" file > > ca_private_key = server CA's "server.key" file > > ca_private_key_passphrase = pass phrase for ca_private_key > > [controller_worker] > > client_ca = Client CA's ca_cert file > > [haproxy_amphora] > > client_cert = Client CA's client.pem file (I think with its key > > concatenated is what rm_work said the other day) > > server_ca = Server CA's ca_cert file > > This is all very helpful. It's a bit difficult to know what goes where the way the documentation is written presently. For something that's going to be the de facto standard for load balancing, we as a community need to do a better job of documenting how to set up, configure, and manage this in production. I'm trying to capture my lessons learned and processes as I go to help with that if I can. -Erik > > That said, I can probably run through this and write something up next > > week that is more step-by-step/detailed. > > > > Michael > > > > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > > wrote: > >> Apologies for cross-posting, but in the event that these might be > >> worth filing as bugs, I wanted the Octavia devs to see it as well... > >> > >> I've been wrestling with getting Octavia up and running and have > >> become stuck on two issues. I'm hoping someone has run into these > >> before. My google foo has come up empty. > >> > >> Issue 1: > >> When the Octavia controller tries to poll the amphora instance, it > >> tries repeatedly and eventually fails. The error on the controller > >> side is: > >> > >> 2018-10-19 14:17:39.181 26 ERROR > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > >> retries (currently set to 300) exhausted. The amphora is unavailable.
> >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > >> SSLError(SSLError("bad handshake: Error([('rsa routines', > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >> 'tls_process_server_certificate', 'certificate verify > >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >> 'tls_process_server_certificate', 'certificate verify > >> failed')],)",),)) > >> > >> On the amphora side I see: > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > >> failure (_ssl.c:1754) > >> > >> I've generated certificates both with the script in the Octavia git > >> repo, and with the OpenStack Ansible playbook. I can see that they are > >> present in /etc/octavia/certs. > >> > >> I'm using the Kolla (Queens) containers for the control plane so I'm > >> sure I've satisfied all the python library constraints. > >> > >> Issue 2: > >> I'm not sure how it gets configured, but the tenant network interface > >> (ens6) never comes up. I can spawn other instances on that network > >> with no issue, and I can see that Neutron has the port attached to the > >> instance. However, in the instance this is all I get: > >> > >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > >> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN > >> group default qlen 1 > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >> inet 127.0.0.1/8 scope host lo > >> valid_lft forever preferred_lft forever > >> inet6 ::1/128 scope host > >> valid_lft forever preferred_lft forever > >> 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast > >> state UP group default qlen 1000 > >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > >> valid_lft forever preferred_lft forever > >> inet6 fe80::f816:3eff:fe30:c460/64 scope link > >> valid_lft forever preferred_lft forever > >> 3: ens6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group > >> default qlen 1000 > >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > >> > >> There's no evidence of the interface anywhere else including udev rules. > >> > >> Any help with either or both issues would be greatly appreciated.
> >> > >> Cheers, > >> Erik > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From morgan.fainberg at gmail.com Mon Oct 22 14:49:35 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 22 Oct 2018 07:49:35 -0700 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> Message-ID: I should be able to do a write up for Keystone's removal of paste *and* move to flask soon. I can easily extract the bit of code I wrote to load our external middleware (and add an external loader) for the transition away from paste. I also think paste is terrible, and would be willing to help folks move off of it rather than maintain it. --Morgan On Mon, Oct 22, 2018, 07:38 Thomas Goirand wrote: > On 10/22/18 12:55 PM, Chris Dent wrote: > >> My assumption is that it's "something we plan to minimally maintain > >> because we depend on it". in which case all options would work: the > >> exact choice depends on whether there is anybody interested in helping > >> maintaining it, and where those contributors prefer to do the work. > > > > Thus far I'm not hearing any volunteers. If that continues to be the > > case, I'll just keep it on bitbucket as that's the minimal change. > > Could you please move it to Github, so that at least, it's easier to > check out? Mercurial is always a pain... > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Mon Oct 22 14:50:05 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 22 Oct 2018 07:50:05 -0700 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> Message-ID: Also, doesn't bitbucket have a git interface now too (optionally)? On Mon, Oct 22, 2018, 07:49 Morgan Fainberg wrote: > I should be able to do a write up for Keystone's removal of paste *and* > move to flask soon. > > I can easily extract the bit of code I wrote to load our external > middleware (and add an external loader) for the transition away from paste. 
> > I also think paste is terrible, and would be willing to help folks move > off of it rather than maintain it. > > --Morgan > > On Mon, Oct 22, 2018, 07:38 Thomas Goirand wrote: > >> On 10/22/18 12:55 PM, Chris Dent wrote: >> >> My assumption is that it's "something we plan to minimally maintain >> >> because we depend on it". in which case all options would work: the >> >> exact choice depends on whether there is anybody interested in helping >> >> maintaining it, and where those contributors prefer to do the work. >> > >> > Thus far I'm not hearing any volunteers. If that continues to be the >> > case, I'll just keep it on bitbucket as that's the minimal change. >> >> Could you please move it to Github, so that at least, it's easier to >> check out? Mercurial is always a pain... >> >> Cheers, >> >> Thomas Goirand (zigo) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Oct 22 14:50:06 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 22 Oct 2018 10:50:06 -0400 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: Oops, dropped Operators. Can't wait until it's all one list... On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick wrote: > > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: > > > > Hello, > > > > I've been having a lot of issues with SSL certificates myself, on my > > second trip now trying to get it working. > > > > Before I spent a lot of time walking through every line in the DevStack > > plugin and fixing my config options, used the generate > > script [1] and still it didn't work. > > > > When I got the "invalid padding" issue it was because of the DN I used > > for the CA and the certificate IIRC. > > > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > > 19:47 < tobias-urdin> after a quick google "The problem was that my > > CA DN was the same as the certificate DN." > > > > IIRC I think that solved it, but then again I wouldn't remember fully > > since I've been at so many different angles by now. > > > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > > it can help you out > > http://paste.openstack.org/show/732575/ > > > > Tobias, I owe you a beer. This was precisely the issue. I'm deploying > Octavia with kolla-ansible. It only deploys a single CA. After hacking > the templates and playbook to incorporate a separate server CA, the > amphorae now load and provision the required namespace. I'm adding a > kolla tag to the subject of this in hopes that someone might want to > take on changing this behavior in the project. Hopefully after I get > through Upstream Institute in Berlin I'll be able to do it myself if > nobody else wants to do it. 
> > For certificate generation, I extracted the contents of > octavia_certs_install.yml (which sets up the directory structure, > openssl.cnf, and the client CA), and octavia_certs.yml (which creates > the server CA and the client certificate) and mashed them into a > separate playbook just for this purpose. At the end I get: > > ca_01.pem - Client CA Certificate > ca_01.key - Client CA Key > ca_server_01.pem - Server CA Certificate > cakey.pem - Server CA Key > client.pem - Concatenated Client Key and Certificate > > If it would help to have the playbook, I can stick it up on github > with a huge "This is a hack" disclaimer on it. > > > ----- > > > > Sorry for hijacking the thread but I'm stuck as well. > > > > I've in the past tried to generate the certificates with [1] but now > > moved on to using the openstack-ansible way of generating them [2] > > with some modifications. > > > > Right now I'm just getting: Could not connect to instance. Retrying.: > > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > > from the amphoras, haven't got any further but I've eliminated a lot of > > stuck in the middle. > > > > Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > > behind. > > Checking the amphora with openssl s_client [3] it gives the same one, > > but the verification is successful just that I don't understand what the > > bad signature > > part is about, from browsing some OpenSSL code it seems to be related to > > RSA signatures somehow. > > > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > > signature:s3_clnt.c:2032: > > > > So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > > back to something related > > to the certificates or the communication between the endpoints, or what > > actually responds inside the amphora (gunicorn IIUC?). Based on the > > "verify" functions actually causing that bad signature error I would > > assume it's the generated certificate that the amphora presents that is > > causing it. > > > > I'll have to continue the troubleshooting to the inside of the amphora, > > I've used the test-only amphora image before but have now built my own > > one that is > > using the amphora-agent from the actual stable branch, but same issue > > (bad signature). > > > > For verbosity this is the config options set for the certificates in > > octavia.conf and which file it was copied from [4], same here, a > > replication of what openstack-ansible does. > > > > Appreciate any feedback or help :) > > > > Best regards > > Tobias > > > > [1] > > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > > [2] http://paste.openstack.org/show/732483/ > > [3] http://paste.openstack.org/show/732486/ > > [4] http://paste.openstack.org/show/732487/ > > > > On 10/20/2018 01:53 AM, Michael Johnson wrote: > > > Hi Erik, > > > > > > Sorry to hear you are still having certificate issues. > > > > > > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > > > network for the VIP, one of the first steps after the worker connects > > > to the amphora agent is finishing the required configuration of the > > > VIP interface inside the network namespace on the amphroa. > > > > Thanks for the hint on the workflow of this. 
I hadn't gotten deep > enough into the code to find that yet, but I suspected it was blocking > since the namespace never got created either. Thanks > > > > If I remember correctly, you are attempting to configure Octavia with > > > the dual CA option (which is good for non-development use). > > > > > > This is what I have for notes: > > > > > > [certificates] gets the following: > > > cert_generator = local_cert_generator > > > ca_certificate = server CA's "server.pem" file > > > ca_private_key = server CA's "server.key" file > > > ca_private_key_passphrase = pass phrase for ca_private_key > > > [controller_worker] > > > client_ca = Client CA's ca_cert file > > > [haproxy_amphora] > > > client_cert = Client CA's client.pem file (I think with it's key > > > concatenated is what rm_work said the other day) > > > server_ca = Server CA's ca_cert file > > > > > This is all very helpful. It's a bit difficult to know what goes where > the way the documentation is written presently. For something that's > going to be the defacto standard for loadbalancing, we as a community > need to do a better job of documenting how to set up, configure, and > manage this in production. I'm trying to capture my lessons learned > and processes as I go to help with that if I can. > > -Erik > > > > That said, I can probably run through this and write something up next > > > week that is more step-by-step/detailed. > > > > > > Michael > > > > > > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > > > wrote: > > >> Apologies for cross-posting, but in the event that these might be > > >> worth filing as bugs, I wanted the Octavia devs to see it as well... > > >> > > >> I've been wrestling with getting Octavia up and running and have > > >> become stuck on two issues. I'm hoping someone has run into these > > >> before. My google foo has come up empty. > > >> > > >> Issue 1: > > >> When the Octavia controller tries to poll the amphora instance, it > > >> tries repeatedly and eventually fails. The error on the controller > > >> side is: > > >> > > >> 2018-10-19 14:17:39.181 26 ERROR > > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > > >> retries (currently set to 300) exhausted. The amphora is unavailable. > > >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > > >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > > >> SSLError(SSLError("bad handshake: Error([('rsa routines', > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > >> 'tls_process_server_certificate', 'certificate verify > > >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > > >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > > >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > >> 'tls_process_server_certificate', 'certificate verify > > >> failed')],)",),)) > > >> > > >> On the amphora side I see: > > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. 
> > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > > >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > > >> failure (_ssl.c:1754) > > >> > > >> I've generated certificates both with the script in the Octavia git > > >> repo, and with the Openstack Ansible playbook. I can see that they are > > >> present in /etc/octavia/certs. > > >> > > >> I'm using the Kolla (Queens) containers for the control plane so I'm > > >> sure I've satisfied all the python library constraints. > > >> > > >> Issue 2: > > >> I"m not sure how it gets configured, but the tenant network interface > > >> (ens6) never comes up. I can spawn other instances on that network > > >> with no issue, and I can see that Neutron has the port attached to the > > >> instance. However, in the instance this is all I get: > > >> > > >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > > >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > >> group default qlen 1 > > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > >> inet 127.0.0.1/8 scope host lo > > >> valid_lft forever preferred_lft forever > > >> inet6 ::1/128 scope host > > >> valid_lft forever preferred_lft forever > > >> 2: ens3: mtu 9000 qdisc pfifo_fast > > >> state UP group default qlen 1000 > > >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > > >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > > >> valid_lft forever preferred_lft forever > > >> inet6 fe80::f816:3eff:fe30:c460/64 scope link > > >> valid_lft forever preferred_lft forever > > >> 3: ens6: mtu 1500 qdisc noop state DOWN group > > >> default qlen 1000 > > >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > >> > > >> There's no evidence of the interface anywhere else including udev rules. > > >> > > >> Any help with either or both issues would be greatly appreciated. > > >> > > >> Cheers, > > >> Erik > > >> > > >> __________________________________________________________________________ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Oct 22 15:12:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 22 Oct 2018 10:12:18 -0500 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> Message-ID: <20181022151217.GA6133@sm-workstation> On Mon, Oct 22, 2018 at 07:49:35AM -0700, Morgan Fainberg wrote: > I should be able to do a write up for Keystone's removal of paste *and* > move to flask soon. 
> > I can easily extract the bit of code I wrote to load our external > middleware (and add an external loader) for the transition away from paste. > > I also think paste is terrible, and would be willing to help folks move off > of it rather than maintain it. > > --Morgan > Do I detect a volunteer to champion a cycle goal? :) From morgan.fainberg at gmail.com Mon Oct 22 15:32:37 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 22 Oct 2018 08:32:37 -0700 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <20181022151217.GA6133@sm-workstation> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> <20181022151217.GA6133@sm-workstation> Message-ID: I can spin up the code and make it available. There is an example (highly flask-specific right now, but it would be easy to make generic) that has been implemented in keystone [0]. Where should this code live? A new library? oslo.? The aforementioned example would need "external middleware via config" loading capabilities, but that isn't hard to do, just adding an oslo.cfg opt and referencing it (a minimal sketch of what such a loader could look like follows below). [0] https://github.com/openstack/keystone/blob/master/keystone/server/flask/core.py#L93 Cheers, --Morgan On Mon, Oct 22, 2018 at 8:12 AM Sean McGinnis wrote: > On Mon, Oct 22, 2018 at 07:49:35AM -0700, Morgan Fainberg wrote: > > I should be able to do a write up for Keystone's removal of paste *and* > > move to flask soon.
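For the general shape of such a config-driven loader -- not the keystone code itself, just a minimal stdlib-only sketch with hypothetical names; a real version would read the list from an oslo.config option:

    # Resolve 'package.module:factory' strings to middleware factories
    # and wrap a WSGI app with them, with no paste.ini involved.
    import importlib

    def resolve(path):
        # Turn a 'package.module:factory' string into the factory callable.
        module_name, _, attr = path.partition(':')
        return getattr(importlib.import_module(module_name), attr)

    def wrap_app(app, middleware_paths):
        # Wrap a WSGI app in the configured middleware; the first entry
        # in the list ends up outermost, paste-pipeline style.
        for path in reversed(middleware_paths):
            app = resolve(path)(app)
        return app

    # e.g. the equivalent of a paste pipeline "request_id app":
    # app = wrap_app(app, ['oslo_middleware.request_id:RequestId'])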
Dmitry From gael.therond at gmail.com Mon Oct 22 16:13:06 2018 From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 22 Oct 2018 18:13:06 +0200 Subject: [openstack-dev] [Openstack-operators] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: Doing the same documentation process here as well (except that I’m using kolla). The only annoying thing is the doc submission process :-/. Le lun. 22 oct. 2018 à 16:50, Erik McCormick a écrit : > Oops, dropped Operators. Can't wait until it's all one list... > On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick > wrote: > > > > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin > wrote: > > > > > > Hello, > > > > > > I've been having a lot of issues with SSL certificates myself, on my > > > second trip now trying to get it working. > > > > > > Before I spent a lot of time walking through every line in the DevStack > > > plugin and fixing my config options, used the generate > > > script [1] and still it didn't work. > > > > > > When I got the "invalid padding" issue it was because of the DN I used > > > for the CA and the certificate IIRC. > > > > > > > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > > > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > > > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > > > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > > > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > > > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > > > > 19:47 < tobias-urdin> after a quick google "The problem was that my > > > CA DN was the same as the certificate DN." > > > > > > IIRC I think that solved it, but then again I wouldn't remember fully > > > since I've been at so many different angles by now. > > > > > > Here is my IRC logs history from the #openstack-lbaas channel, perhaps > > > it can help you out > > > http://paste.openstack.org/show/732575/ > > > > > > > Tobias, I owe you a beer. This was precisely the issue. I'm deploying > > Octavia with kolla-ansible. It only deploys a single CA. After hacking > > the templates and playbook to incorporate a separate server CA, the > > amphorae now load and provision the required namespace. I'm adding a > > kolla tag to the subject of this in hopes that someone might want to > > take on changing this behavior in the project. Hopefully after I get > > through Upstream Institute in Berlin I'll be able to do it myself if > > nobody else wants to do it. > > > > For certificate generation, I extracted the contents of > > octavia_certs_install.yml (which sets up the directory structure, > > openssl.cnf, and the client CA), and octavia_certs.yml (which creates > > the server CA and the client certificate) and mashed them into a > > separate playbook just for this purpose. At the end I get: > > > > ca_01.pem - Client CA Certificate > > ca_01.key - Client CA Key > > ca_server_01.pem - Server CA Certificate > > cakey.pem - Server CA Key > > client.pem - Concatenated Client Key and Certificate > > > > If it would help to have the playbook, I can stick it up on github > > with a huge "This is a hack" disclaimer on it. > > > > > ----- > > > > > > Sorry for hijacking the thread but I'm stuck as well. 
> > > > > > I've in the past tried to generate the certificates with [1] but now > > > moved on to using the openstack-ansible way of generating them [2] > > > with some modifications. > > > > > > Right now I'm just getting: Could not connect to instance. Retrying.: > > > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > > > from the amphoras, haven't got any further but I've eliminated a lot of > > > stuck in the middle. > > > > > > Tried deploying Ocatavia on Ubuntu with python3 to just make sure there > > > wasn't an issue with CentOS and OpenSSL versions since it tends to lag > > > behind. > > > Checking the amphora with openssl s_client [3] it gives the same one, > > > but the verification is successful just that I don't understand what > the > > > bad signature > > > part is about, from browsing some OpenSSL code it seems to be related > to > > > RSA signatures somehow. > > > > > > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > > > signature:s3_clnt.c:2032: > > > > > > So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS > > > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > > > back to something related > > > to the certificates or the communication between the endpoints, or what > > > actually responds inside the amphora (gunicorn IIUC?). Based on the > > > "verify" functions actually causing that bad signature error I would > > > assume it's the generated certificate that the amphora presents that is > > > causing it. > > > > > > I'll have to continue the troubleshooting to the inside of the amphora, > > > I've used the test-only amphora image before but have now built my own > > > one that is > > > using the amphora-agent from the actual stable branch, but same issue > > > (bad signature). > > > > > > For verbosity this is the config options set for the certificates in > > > octavia.conf and which file it was copied from [4], same here, a > > > replication of what openstack-ansible does. > > > > > > Appreciate any feedback or help :) > > > > > > Best regards > > > Tobias > > > > > > [1] > > > > https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > > > [2] http://paste.openstack.org/show/732483/ > > > [3] http://paste.openstack.org/show/732486/ > > > [4] http://paste.openstack.org/show/732487/ > > > > > > On 10/20/2018 01:53 AM, Michael Johnson wrote: > > > > Hi Erik, > > > > > > > > Sorry to hear you are still having certificate issues. > > > > > > > > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > > > > network for the VIP, one of the first steps after the worker connects > > > > to the amphora agent is finishing the required configuration of the > > > > VIP interface inside the network namespace on the amphroa. > > > > > > Thanks for the hint on the workflow of this. I hadn't gotten deep > > enough into the code to find that yet, but I suspected it was blocking > > since the namespace never got created either. Thanks > > > > > > If I remember correctly, you are attempting to configure Octavia with > > > > the dual CA option (which is good for non-development use). 
> > > > > > > > This is what I have for notes: > > > > > > > > [certificates] gets the following: > > > > cert_generator = local_cert_generator > > > > ca_certificate = server CA's "server.pem" file > > > > ca_private_key = server CA's "server.key" file > > > > ca_private_key_passphrase = pass phrase for ca_private_key > > > > [controller_worker] > > > > client_ca = Client CA's ca_cert file > > > > [haproxy_amphora] > > > > client_cert = Client CA's client.pem file (I think with it's key > > > > concatenated is what rm_work said the other day) > > > > server_ca = Server CA's ca_cert file > > > > > > > > This is all very helpful. It's a bit difficult to know what goes where > > the way the documentation is written presently. For something that's > > going to be the defacto standard for loadbalancing, we as a community > > need to do a better job of documenting how to set up, configure, and > > manage this in production. I'm trying to capture my lessons learned > > and processes as I go to help with that if I can. > > > > -Erik > > > > > > That said, I can probably run through this and write something up > next > > > > week that is more step-by-step/detailed. > > > > > > > > Michael > > > > > > > > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > > > > wrote: > > > >> Apologies for cross-posting, but in the event that these might be > > > >> worth filing as bugs, I wanted the Octavia devs to see it as well... > > > >> > > > >> I've been wrestling with getting Octavia up and running and have > > > >> become stuck on two issues. I'm hoping someone has run into these > > > >> before. My google foo has come up empty. > > > >> > > > >> Issue 1: > > > >> When the Octavia controller tries to poll the amphora instance, it > > > >> tries repeatedly and eventually fails. The error on the controller > > > >> side is: > > > >> > > > >> 2018-10-19 14:17:39.181 26 ERROR > > > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > > > >> retries (currently set to 300) exhausted. The amphora is > unavailable. > > > >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max > retries > > > >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > > > >> SSLError(SSLError("bad handshake: Error([('rsa routines', > > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa > routines', > > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > > >> 'tls_process_server_certificate', 'certificate verify > > > >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > > > >> port=9443): Max retries exceeded with url: /0.5/plug/vip/ > 10.250.20.15 > > > >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > > > >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa > routines', > > > >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > > > >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > > > >> 'tls_process_server_certificate', 'certificate verify > > > >> failed')],)",),)) > > > >> > > > >> On the amphora side I see: > > > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL > request. > > > >> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > > > >> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > > > >> failure (_ssl.c:1754) > > > >> > > > >> I've generated certificates both with the script in the Octavia git > > > >> repo, and with the Openstack Ansible playbook. 
I can see that they > are > > > >> present in /etc/octavia/certs. > > > >> > > > >> I'm using the Kolla (Queens) containers for the control plane so I'm > > > >> sure I've satisfied all the python library constraints. > > > >> > > > >> Issue 2: > > > >> I"m not sure how it gets configured, but the tenant network > interface > > > >> (ens6) never comes up. I can spawn other instances on that network > > > >> with no issue, and I can see that Neutron has the port attached to > the > > > >> instance. However, in the instance this is all I get: > > > >> > > > >> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > > > >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > > > >> group default qlen 1 > > > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > > >> inet 127.0.0.1/8 scope host lo > > > >> valid_lft forever preferred_lft forever > > > >> inet6 ::1/128 scope host > > > >> valid_lft forever preferred_lft forever > > > >> 2: ens3: mtu 9000 qdisc pfifo_fast > > > >> state UP group default qlen 1000 > > > >> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > > > >> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > > > >> valid_lft forever preferred_lft forever > > > >> inet6 fe80::f816:3eff:fe30:c460/64 scope link > > > >> valid_lft forever preferred_lft forever > > > >> 3: ens6: mtu 1500 qdisc noop state DOWN group > > > >> default qlen 1000 > > > >> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > > > >> > > > >> There's no evidence of the interface anywhere else including udev > rules. > > > >> > > > >> Any help with either or both issues would be greatly appreciated. > > > >> > > > >> Cheers, > > > >> Erik > > > >> > > > >> > __________________________________________________________________________ > > > >> OpenStack Development Mailing List (not for usage questions) > > > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scarvalhojr at gmail.com Mon Oct 22 16:25:03 2018 From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.) Date: Mon, 22 Oct 2018 17:25:03 +0100 Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata" Message-ID: Hi, While troubleshooting a production issue we identified that the Nova metadata API is fetching a lot more raw data from the database than seems necessary. 
The problem appears to be caused by the SQL query used to fetch instance data that joins the "instance" table with, among others, two metadata tables: "instance_metadata" and "instance_system_metadata". Below is a simplified version of this query (I've added the full query at the end of this message for reference): SELECT ... FROM (SELECT ... FROM `instances` WHERE `instances` . `deleted` = ? AND `instances` . `uuid` = ? LIMIT ?) AS `anon_1` LEFT OUTER JOIN `instance_system_metadata` AS `instance_system_metadata_1` ON `anon_1` . `instances_uuid` = `instance_system_metadata_1` . `instance_uuid` LEFT OUTER JOIN (`security_group_instance_association` AS `security_group_instance_association_1` INNER JOIN `security_groups` AS `security_groups_1` ON `security_groups_1` . `id` = `security_group_instance_association_1` . `security_group_id` AND `security_group_instance_association_1` . `deleted` = ? AND `security_groups_1` . `deleted` = ? ) ON `security_group_instance_association_1` . `instance_uuid` = `anon_1` . `instances_uuid` AND `anon_1` . `instances_deleted` = ? LEFT OUTER JOIN `security_group_rules` AS `security_group_rules_1` ON `security_group_rules_1` . `parent_group_id` = `security_groups_1` . `id` AND `security_group_rules_1` . `deleted` = ? LEFT OUTER JOIN `instance_info_caches` AS `instance_info_caches_1` ON `instance_info_caches_1` . `instance_uuid` = `anon_1` . `instances_uuid` LEFT OUTER JOIN `instance_extra` AS `instance_extra_1` ON `instance_extra_1` . `instance_uuid` = `anon_1` . `instances_uuid` LEFT OUTER JOIN `instance_metadata` AS `instance_metadata_1` ON `instance_metadata_1` . `instance_uuid` = `anon_1` . `instances_uuid` AND `instance_metadata_1` . `deleted` = ? The instance table has a 1-to-many relationship to both "instance_metadata" and "instance_system_metadata" tables, so the query is effectively producing a cross join of both metadata tables. 
To illustrate the impact of this query, I have an instance that has 2 records in "instance_metadata" and 5 records in "instance_system_metadata":

> select instance_uuid,`key`,value from instance_metadata where instance_uuid = 'a6cf4a6a-effe-4438-9b7f-d61b23117b9b';
+--------------------------------------+-----------+--------+
| instance_uuid                        | key       | value  |
+--------------------------------------+-----------+--------+
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1 | value1 |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2 | value  |
+--------------------------------------+-----------+--------+
2 rows in set (0.61 sec)

> select `key`,value from instance_system_metadata where instance_uuid = 'a6cf4a6a-effe-4438-9b7f-d61b23117b9b';
+------------------------+--------------------------------------+
| key                    | value                                |
+------------------------+--------------------------------------+
| image_disk_format      | qcow2                                |
| image_min_ram          | 0                                    |
| image_min_disk         | 20                                   |
| image_base_image_ref   | 39cd564f-6a29-43e2-815b-62097968486a |
| image_container_format | bare                                 |
+------------------------+--------------------------------------+
5 rows in set (0.00 sec)

For this particular instance, the query used by the metadata API will fetch 10 records from the database:

+--------------------------------------+-------------------------+---------------------------+--------------------------------+--------------------------------------+
| anon_1_instances_uuid                | instance_metadata_1_key | instance_metadata_1_value | instance_system_metadata_1_key | instance_system_metadata_1_value     |
+--------------------------------------+-------------------------+---------------------------+--------------------------------+--------------------------------------+
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1               | value1                    | image_disk_format              | qcow2                                |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2               | value                     | image_disk_format              | qcow2                                |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1               | value1                    | image_min_ram                  | 0                                    |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2               | value                     | image_min_ram                  | 0                                    |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1               | value1                    | image_min_disk                 | 20                                   |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2               | value                     | image_min_disk                 | 20                                   |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1               | value1                    | image_base_image_ref           | 39cd564f-6a29-43e2-815b-62097968486a |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2               | value                     | image_base_image_ref           | 39cd564f-6a29-43e2-815b-62097968486a |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property1               | value1                    | image_container_format         | bare                                 |
| a6cf4a6a-effe-4438-9b7f-d61b23117b9b | property2               | value                     | image_container_format         | bare                                 |
+--------------------------------------+-------------------------+---------------------------+--------------------------------+--------------------------------------+
10 rows in set (0.00 sec)

Of course this is only a problem when instances have a lot of metadata records. An instance with 50 records in "instance_metadata" and 50 records in "instance_system_metadata" will fetch 50 x 50 = 2,500 rows from the database. It's not difficult to see how this can escalate quickly. This can be a particularly significant problem in an HA scenario with multiple API nodes pulling data from multiple database nodes. This issue is affecting our clusters running OpenStack Mitaka.
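The multiplication above is a general property of eagerly joining two independent one-to-many relationships. Below is a minimal, self-contained SQLAlchemy sketch of the shape of the problem and one ORM-level mitigation -- loading each relationship with its own query via subqueryload, which fetches M + N rows instead of M x N. The model names are illustrative only, not Nova's actual models or a confirmed Nova fix:

    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, sessionmaker, subqueryload

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36))
        metadata_items = relationship('InstanceMetadata')
        system_metadata_items = relationship('InstanceSystemMetadata')

    class InstanceMetadata(Base):
        __tablename__ = 'instance_metadata'
        id = Column(Integer, primary_key=True)
        instance_id = Column(Integer, ForeignKey('instances.id'))
        key = Column(String(255))
        value = Column(String(255))

    class InstanceSystemMetadata(Base):
        __tablename__ = 'instance_system_metadata'
        id = Column(Integer, primary_key=True)
        instance_id = Column(Integer, ForeignKey('instances.id'))
        key = Column(String(255))
        value = Column(String(255))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # joinedload() on both relationships would produce a cross join
    # (M x N rows per instance, as in the tables above); subqueryload()
    # issues one extra SELECT per relationship instead.
    instances = (session.query(Instance)
                 .options(subqueryload(Instance.metadata_items),
                          subqueryload(Instance.system_metadata_items))
                 .all())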

I verified that this is not an issue on clusters running OpenStack
Icehouse because, in Icehouse, instance data is pulled as needed,
executing separate queries for each table. But, as far as I could see,
this issue could be affecting every release since Mitaka.

Since both metadata tables are structurally identical, a UNION of both
metadata tables could be performed before joining, but of course the
Instance class would need to be updated to reflect this change.
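
Something along these lines is what I have in mind (an untested sketch
only, ignoring the extra columns and soft-delete handling for brevity):

    -- Illustrative sketch: a UNION ALL of the two structurally identical
    -- tables, tagged with the origin of each row so the result could
    -- still be split back into the two collections, joined once instead
    -- of twice.
    SELECT instance_uuid, `key`, value, 'user' AS source
      FROM instance_metadata
     WHERE deleted = 0
    UNION ALL
    SELECT instance_uuid, `key`, value, 'system' AS source
      FROM instance_system_metadata
     WHERE deleted = 0

Joining the example instance above against that derived table would
return 2 + 5 = 7 rows instead of 2 x 5 = 10.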

I did not find any bug report related to this problem and I'm wondering
if this is a known issue or if anybody has any ideas of how this can be
mitigated.

Any help here would be hugely appreciated.

Regards,

Sergio

---

Full SQL query:

SELECT `anon_1` . `instances_created_at` AS `anon_1_instances_created_at`,
       `anon_1` . `instances_updated_at` AS `anon_1_instances_updated_at`,
       `anon_1` . `instances_deleted_at` AS `anon_1_instances_deleted_at`,
       `anon_1` . `instances_deleted` AS `anon_1_instances_deleted`,
       `anon_1` . `instances_id` AS `anon_1_instances_id`,
       `anon_1` . `instances_user_id` AS `anon_1_instances_user_id`,
       `anon_1` . `instances_project_id` AS `anon_1_instances_project_id`,
       `anon_1` . `instances_image_ref` AS `anon_1_instances_image_ref`,
       `anon_1` . `instances_kernel_id` AS `anon_1_instances_kernel_id`,
       `anon_1` . `instances_ramdisk_id` AS `anon_1_instances_ramdisk_id`,
       `anon_1` . `instances_hostname` AS `anon_1_instances_hostname`,
       `anon_1` . `instances_launch_index` AS `anon_1_instances_launch_index`,
       `anon_1` . `instances_key_name` AS `anon_1_instances_key_name`,
       `anon_1` . `instances_key_data` AS `anon_1_instances_key_data`,
       `anon_1` . `instances_power_state` AS `anon_1_instances_power_state`,
       `anon_1` . `instances_vm_state` AS `anon_1_instances_vm_state`,
       `anon_1` . `instances_task_state` AS `anon_1_instances_task_state`,
       `anon_1` . `instances_memory_mb` AS `anon_1_instances_memory_mb`,
       `anon_1` . `instances_vcpus` AS `anon_1_instances_vcpus`,
       `anon_1` . `instances_root_gb` AS `anon_1_instances_root_gb`,
       `anon_1` . `instances_ephemeral_gb` AS `anon_1_instances_ephemeral_gb`,
       `anon_1` . `instances_ephemeral_key_uuid` AS `anon_1_instances_ephemeral_key_uuid`,
       `anon_1` . `instances_host` AS `anon_1_instances_host`,
       `anon_1` . `instances_node` AS `anon_1_instances_node`,
       `anon_1` . `instances_instance_type_id` AS `anon_1_instances_instance_type_id`,
       `anon_1` . `instances_user_data` AS `anon_1_instances_user_data`,
       `anon_1` . `instances_reservation_id` AS `anon_1_instances_reservation_id`,
       `anon_1` . `instances_launched_at` AS `anon_1_instances_launched_at`,
       `anon_1` . `instances_terminated_at` AS `anon_1_instances_terminated_at`,
       `anon_1` . `instances_availability_zone` AS `anon_1_instances_availability_zone`,
       `anon_1` . `instances_display_name` AS `anon_1_instances_display_name`,
       `anon_1` . `instances_display_description` AS `anon_1_instances_display_description`,
       `anon_1` . `instances_launched_on` AS `anon_1_instances_launched_on`,
       `anon_1` . `instances_locked` AS `anon_1_instances_locked`,
       `anon_1` . `instances_locked_by` AS `anon_1_instances_locked_by`,
       `anon_1` . `instances_os_type` AS `anon_1_instances_os_type`,
       `anon_1` . `instances_architecture` AS `anon_1_instances_architecture`,
       `anon_1` . `instances_vm_mode` AS `anon_1_instances_vm_mode`,
       `anon_1` . `instances_uuid` AS `anon_1_instances_uuid`,
       `anon_1` . `instances_root_device_name` AS `anon_1_instances_root_device_name`,
       `anon_1` . `instances_default_ephemeral_device` AS `anon_1_instances_default_ephemeral_device`,
       `anon_1` . `instances_default_swap_device` AS `anon_1_instances_default_swap_device`,
       `anon_1` . `instances_config_drive` AS `anon_1_instances_config_drive`,
       `anon_1` . `instances_access_ip_v4` AS `anon_1_instances_access_ip_v4`,
       `anon_1` . `instances_access_ip_v6` AS `anon_1_instances_access_ip_v6`,
       `anon_1` . `instances_auto_disk_config` AS `anon_1_instances_auto_disk_config`,
       `anon_1` . `instances_progress` AS `anon_1_instances_progress`,
       `anon_1` . `instances_shutdown_terminate` AS `anon_1_instances_shutdown_terminate`,
       `anon_1` . `instances_disable_terminate` AS `anon_1_instances_disable_terminate`,
       `anon_1` . `instances_cell_name` AS `anon_1_instances_cell_name`,
       `anon_1` . `instances_internal_id` AS `anon_1_instances_internal_id`,
       `anon_1` . `instances_cleaned` AS `anon_1_instances_cleaned`,
       `instance_system_metadata_1` . `created_at` AS `instance_system_metadata_1_created_at`,
       `instance_system_metadata_1` . `updated_at` AS `instance_system_metadata_1_updated_at`,
       `instance_system_metadata_1` . `deleted_at` AS `instance_system_metadata_1_deleted_at`,
       `instance_system_metadata_1` . `deleted` AS `instance_system_metadata_1_deleted`,
       `instance_system_metadata_1` . `id` AS `instance_system_metadata_1_id`,
       `instance_system_metadata_1` . `key` AS `instance_system_metadata_1_key`,
       `instance_system_metadata_1` . `value` AS `instance_system_metadata_1_value`,
       `instance_system_metadata_1` . `instance_uuid` AS `instance_system_metadata_1_instance_uuid`,
       `security_groups_1` . `created_at` AS `security_groups_1_created_at`,
       `security_groups_1` . `updated_at` AS `security_groups_1_updated_at`,
       `security_groups_1` . `deleted_at` AS `security_groups_1_deleted_at`,
       `security_groups_1` . `deleted` AS `security_groups_1_deleted`,
       `security_groups_1` . `id` AS `security_groups_1_id`,
       `security_groups_1` . `name` AS `security_groups_1_name`,
       `security_groups_1` . `description` AS `security_groups_1_description`,
       `security_groups_1` . `user_id` AS `security_groups_1_user_id`,
       `security_groups_1` . `project_id` AS `security_groups_1_project_id`,
       `security_group_rules_1` . `created_at` AS `security_group_rules_1_created_at`,
       `security_group_rules_1` . `updated_at` AS `security_group_rules_1_updated_at`,
       `security_group_rules_1` . `deleted_at` AS `security_group_rules_1_deleted_at`,
       `security_group_rules_1` . `deleted` AS `security_group_rules_1_deleted`,
       `security_group_rules_1` . `id` AS `security_group_rules_1_id`,
       `security_group_rules_1` . `parent_group_id` AS `security_group_rules_1_parent_group_id`,
       `security_group_rules_1` . `protocol` AS `security_group_rules_1_protocol`,
       `security_group_rules_1` . `from_port` AS `security_group_rules_1_from_port`,
       `security_group_rules_1` . `to_port` AS `security_group_rules_1_to_port`,
       `security_group_rules_1` . `cidr` AS `security_group_rules_1_cidr`,
       `security_group_rules_1` . `group_id` AS `security_group_rules_1_group_id`,
       `instance_info_caches_1` . `created_at` AS `instance_info_caches_1_created_at`,
       `instance_info_caches_1` . `updated_at` AS `instance_info_caches_1_updated_at`,
       `instance_info_caches_1` . `deleted_at` AS `instance_info_caches_1_deleted_at`,
       `instance_info_caches_1` . `deleted` AS `instance_info_caches_1_deleted`,
       `instance_info_caches_1` . `id` AS `instance_info_caches_1_id`,
       `instance_info_caches_1` . `network_info` AS `instance_info_caches_1_network_info`,
       `instance_info_caches_1` . `instance_uuid` AS `instance_info_caches_1_instance_uuid`,
       `instance_extra_1` . `flavor` AS `instance_extra_1_flavor`,
       `instance_extra_1` . `created_at` AS `instance_extra_1_created_at`,
       `instance_extra_1` . `updated_at` AS `instance_extra_1_updated_at`,
       `instance_extra_1` . `deleted_at` AS `instance_extra_1_deleted_at`,
       `instance_extra_1` . `deleted` AS `instance_extra_1_deleted`,
       `instance_extra_1` . `id` AS `instance_extra_1_id`,
       `instance_extra_1` . `instance_uuid` AS `instance_extra_1_instance_uuid`,
       `instance_metadata_1` . `created_at` AS `instance_metadata_1_created_at`,
       `instance_metadata_1` . `updated_at` AS `instance_metadata_1_updated_at`,
       `instance_metadata_1` . `deleted_at` AS `instance_metadata_1_deleted_at`,
       `instance_metadata_1` . `deleted` AS `instance_metadata_1_deleted`,
       `instance_metadata_1` . `id` AS `instance_metadata_1_id`,
       `instance_metadata_1` . `key` AS `instance_metadata_1_key`,
       `instance_metadata_1` . `value` AS `instance_metadata_1_value`,
       `instance_metadata_1` . `instance_uuid` AS `instance_metadata_1_instance_uuid`
  FROM (SELECT `instances` . `created_at` AS `instances_created_at`,
               `instances` . `updated_at` AS `instances_updated_at`,
               `instances` . `deleted_at` AS `instances_deleted_at`,
               `instances` . `deleted` AS `instances_deleted`,
               `instances` . `id` AS `instances_id`,
               `instances` . `user_id` AS `instances_user_id`,
               `instances` . `project_id` AS `instances_project_id`,
               `instances` . `image_ref` AS `instances_image_ref`,
               `instances` . `kernel_id` AS `instances_kernel_id`,
               `instances` . `ramdisk_id` AS `instances_ramdisk_id`,
               `instances` . `hostname` AS `instances_hostname`,
               `instances` . `launch_index` AS `instances_launch_index`,
               `instances` . `key_name` AS `instances_key_name`,
               `instances` . `key_data` AS `instances_key_data`,
               `instances` . `power_state` AS `instances_power_state`,
               `instances` . `vm_state` AS `instances_vm_state`,
               `instances` . `task_state` AS `instances_task_state`,
               `instances` . `memory_mb` AS `instances_memory_mb`,
               `instances` . `vcpus` AS `instances_vcpus`,
               `instances` . `root_gb` AS `instances_root_gb`,
               `instances` . `ephemeral_gb` AS `instances_ephemeral_gb`,
               `instances` . `ephemeral_key_uuid` AS `instances_ephemeral_key_uuid`,
               `instances` . `host` AS `instances_host`,
               `instances` . `node` AS `instances_node`,
               `instances` . `instance_type_id` AS `instances_instance_type_id`,
               `instances` . `user_data` AS `instances_user_data`,
               `instances` . `reservation_id` AS `instances_reservation_id`,
               `instances` . `launched_at` AS `instances_launched_at`,
               `instances` . `terminated_at` AS `instances_terminated_at`,
               `instances` . `availability_zone` AS `instances_availability_zone`,
               `instances` . `display_name` AS `instances_display_name`,
               `instances` . `display_description` AS `instances_display_description`,
               `instances` . `launched_on` AS `instances_launched_on`,
               `instances` . `locked` AS `instances_locked`,
               `instances` . `locked_by` AS `instances_locked_by`,
               `instances` . `os_type` AS `instances_os_type`,
               `instances` . `architecture` AS `instances_architecture`,
               `instances` . `vm_mode` AS `instances_vm_mode`,
               `instances` . `uuid` AS `instances_uuid`,
               `instances` . `root_device_name` AS `instances_root_device_name`,
               `instances` . `default_ephemeral_device` AS `instances_default_ephemeral_device`,
               `instances` . `default_swap_device` AS `instances_default_swap_device`,
               `instances` . `config_drive` AS `instances_config_drive`,
               `instances` . `access_ip_v4` AS `instances_access_ip_v4`,
               `instances` . `access_ip_v6` AS `instances_access_ip_v6`,
               `instances` . `auto_disk_config` AS `instances_auto_disk_config`,
               `instances` . `progress` AS `instances_progress`,
               `instances` . `shutdown_terminate` AS `instances_shutdown_terminate`,
               `instances` . `disable_terminate` AS `instances_disable_terminate`,
               `instances` . `cell_name` AS `instances_cell_name`,
               `instances` . `internal_id` AS `instances_internal_id`,
               `instances` . `cleaned` AS `instances_cleaned`
          FROM `instances`
         WHERE `instances` . `deleted` = ?
           AND `instances` . `uuid` = ?
         LIMIT ?) AS `anon_1`
  LEFT OUTER JOIN `instance_system_metadata` AS `instance_system_metadata_1`
    ON `anon_1` . `instances_uuid` = `instance_system_metadata_1` . `instance_uuid`
  LEFT OUTER JOIN (`security_group_instance_association` AS `security_group_instance_association_1`
                   INNER JOIN `security_groups` AS `security_groups_1`
                   ON `security_groups_1` . `id` = `security_group_instance_association_1` . `security_group_id`
                   AND `security_group_instance_association_1` . `deleted` = ?
                   AND `security_groups_1` . `deleted` = ? )
    ON `security_group_instance_association_1` . `instance_uuid` = `anon_1` . `instances_uuid`
   AND `anon_1` . `instances_deleted` = ?
  LEFT OUTER JOIN `security_group_rules` AS `security_group_rules_1`
    ON `security_group_rules_1` . `parent_group_id` = `security_groups_1` . `id`
   AND `security_group_rules_1` . `deleted` = ?
  LEFT OUTER JOIN `instance_info_caches` AS `instance_info_caches_1`
    ON `instance_info_caches_1` . `instance_uuid` = `anon_1` . `instances_uuid`
  LEFT OUTER JOIN `instance_extra` AS `instance_extra_1`
    ON `instance_extra_1` . `instance_uuid` = `anon_1` . `instances_uuid`
  LEFT OUTER JOIN `instance_metadata` AS `instance_metadata_1`
    ON `instance_metadata_1` . `instance_uuid` = `anon_1` . `instances_uuid`
   AND `instance_metadata_1` . `deleted` = ?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Mon Oct 22 16:59:48 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 22 Oct 2018 11:59:48 -0500
Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"
In-Reply-To:
References:
Message-ID:

On 10/22/2018 11:25 AM, Sergio A. de Carvalho Jr. wrote:
> Hi,
>
> While troubleshooting a production issue we identified that the Nova
> metadata API is fetching a lot more raw data from the database than
> seems necessary. The problem appears to be caused by the SQL query used
> to fetch instance data that joins the "instances" table with, among
> others, two metadata tables: "instance_metadata" and
> "instance_system_metadata".
>
> [snip]

Thanks for this. Have you debugged to the point of knowing where the
initial DB query is starting from?

Looking at history, my guess is this is the change which introduced it
for all requests:

https://review.openstack.org/#/c/276861/

-- 

Thanks,

Matt

From mriedemos at gmail.com Mon Oct 22 17:07:20 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 22 Oct 2018 12:07:20 -0500
Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"
In-Reply-To:
References:
Message-ID:

On 10/22/2018 11:59 AM, Matt Riedemann wrote:
> Thanks for this. Have you debugged to the point of knowing where the
> initial DB query is starting from?
>
> Looking at history, my guess is this is the change which introduced it
> for all requests:
>
> https://review.openstack.org/#/c/276861/

From that change, as far as I can tell, we only needed to pre-join on
metadata because of setting the "launch_metadata" variable:

https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py at 145

I don't see anything directly using system_metadata, although that one is
sometimes tricky and could be lazy-loaded elsewhere. I do know that
starting in ocata we use system_metadata for dynamic vendor metadata:

https://github.com/openstack/nova/blob/stable/ocata/nova/api/metadata/vendordata_dynamic.py#L85

Added in change:

https://review.openstack.org/#/c/417780/

But if you don't provide vendor data then that should not be a problem.

-- 

Thanks,

Matt

From melwittt at gmail.com Mon Oct 22 17:55:48 2018
From: melwittt at gmail.com (melanie witt)
Date: Mon, 22 Oct 2018 10:55:48 -0700
Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend
In-Reply-To: <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com>
References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com>
 <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com>
 <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com>
 <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com>
 <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com>
Message-ID: <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com>

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> I created a new vm and a new volume with type 'ceph'[So that the volume
> will be created on one of two hosts. I assume that the volume created on
> host dev at rbd-1#ceph this time]. Next step is to attach the volume to the
> vm. At last I want to migrate the volume from host dev at rbd-1#ceph to
> host dev at rbd-2#ceph, but it failed with the exception
> 'NotImplementedError(_("Swap only supports host devices")'.
>
> So that, my real problem is that is there any work to migrate
> volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
> in the same ceph cluster?
> The difference between the spec[2] with my scope is only one is
> *available*(the spec) and another is *in-use*(my scope).
>
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> [2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those
details. And I think you explained it in your first email, that cinder
supports migration of ceph volumes if they are 'available' but not if
they are 'in-use'. Apologies that I didn't get your meaning the first
time.

I see now the code you were referring to is this [3]:

    if volume.status not in ('available', 'retyping', 'maintenance'):
        LOG.debug('Only available volumes can be migrated using backend '
                  'assisted migration. Falling back to generic migration.')
        return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error
in nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is
there a reason not to allow migration of 'in-use' volumes?

[3] https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621

Cheers,
-melanie

From dms at danplanet.com Mon Oct 22 18:01:37 2018
From: dms at danplanet.com (Dan Smith)
Date: Mon, 22 Oct 2018 11:01:37 -0700
Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"
In-Reply-To: (Sergio A. de Carvalho, Jr.'s message of "Mon, 22 Oct 2018 17:25:03 +0100")
References:
Message-ID:

> Of course this is only a problem when instances have a lot of metadata
> records. An instance with 50 records in "instance_metadata" and 50
> records in "instance_system_metadata" will fetch 50 x 50 = 2,500 rows
> from the database. It's not difficult to see how this can escalate
> quickly. This can be a particularly significant problem in a HA
> scenario with multiple API nodes pulling data from multiple database
> nodes.

We haven't been doing this (intentionally) for quite some time, as we
query and fill metadata linearly:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L2244

and have since 2013 (Havana):

https://review.openstack.org/#/c/26136/
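
In other words, roughly this shape (an illustrative sketch with made-up
model names, not the actual code):

    # Illustrative sketch only: one extra query per metadata table, so
    # the result grows as N + M rows per instance instead of N x M.
    def fill_metadata(session, instances):
        by_uuid = {inst['uuid']: inst for inst in instances}
        for inst in instances:
            inst['metadata'] = []
            inst['system_metadata'] = []
        for model, field in ((InstanceMetadata, 'metadata'),
                             (InstanceSystemMetadata, 'system_metadata')):
            rows = session.query(model).filter(
                model.instance_uuid.in_(list(by_uuid))).all()
            for row in rows:
                by_uuid[row.instance_uuid][field].append(row)
        return instances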

So unless there has been a regression that is leaking those columns back
into the join list, I'm not sure why the query you show would be
generated.

Just to be clear, you don't have any modifications to the code anywhere,
do you?

--Dan

From zbitter at redhat.com Mon Oct 22 18:11:25 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Mon, 22 Oct 2018 14:11:25 -0400
Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance
In-Reply-To: <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org>
References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com>
 <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org>
 <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org>
Message-ID: <076f2165-205f-7bf0-8bf9-25da10cb6426@redhat.com>

On 22/10/18 10:38 AM, Thomas Goirand wrote:
> On 10/22/18 12:55 PM, Chris Dent wrote:
>>> My assumption is that it's "something we plan to minimally maintain
>>> because we depend on it". in which case all options would work: the
>>> exact choice depends on whether there is anybody interested in helping
>>> maintaining it, and where those contributors prefer to do the work.
>>
>> Thus far I'm not hearing any volunteers. If that continues to be the
>> case, I'll just keep it on bitbucket as that's the minimal change.
>
> Could you please move it to Github, so that at least, it's easier to
> check out? Mercurial is always a pain...

FWIW as one data point I probably would have fixed the py37 pull request
myself instead of just commenting, had it not involved doing:

* A pull request
* on bitbucket
* with Mercurial

(I used to like the Mercurial UI, but it turns out that after _really_
learning Git... my brain is full and I can't remember anything else.)

- ZB

From openstack at nemebean.com Mon Oct 22 18:14:38 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Mon, 22 Oct 2018 14:14:38 -0400
Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance
In-Reply-To:
References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com>
 <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org>
 <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org>
 <20181022151217.GA6133@sm-workstation>
Message-ID: <8a35a650-1264-9109-2e44-f9181a7dfbce@nemebean.com>

On 10/22/18 10:32 AM, Morgan Fainberg wrote:
> I can spin up the code and make it available. There is an example
> (highly-flask specific right now, but would be easy to make it generic)
> from keystone [0] has been implemented. Where should this code live? A
> new library? oslo.? The aforementioned example would need
> "external middleware via config" loading capabilities, but that isn't
> hard to do, just adding an oslo.cfg opt and referencing it.

This seems pretty closely related to oslo.middleware, so maybe we could
put it there? Everything in there now appears to be actual middleware,
but a module to allow the use of that middleware isn't too much of a
stretch.
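
Just to sketch the sort of thing I mean (purely hypothetical -- neither
the module nor the option below exists in oslo.middleware today):

    # Hypothetical sketch: load WSGI middleware named in config and wrap
    # an application with it.
    from oslo_config import cfg
    import importlib

    opts = [cfg.ListOpt('external_middleware', default=[],
                        help='Dotted paths of middleware factories.')]
    CONF = cfg.CONF
    CONF.register_opts(opts, group='wsgi')

    def apply_middleware(app):
        for path in reversed(CONF.wsgi.external_middleware):
            module, _, name = path.rpartition('.')
            factory = getattr(importlib.import_module(module), name)
            app = factory(app)
        return app

Applying in reverse makes the first entry in the list the outermost
middleware, which mirrors how a paste pipeline reads left to right.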
>
> [0]
> https://github.com/openstack/keystone/blob/master/keystone/server/flask/core.py#L93
> Cheers,
> --Morgan
>
> On Mon, Oct 22, 2018 at 8:12 AM Sean McGinnis
> wrote:
>
> > On Mon, Oct 22, 2018 at 07:49:35AM -0700, Morgan Fainberg wrote:
> > I should be able to do a write up for Keystone's removal of paste *and*
> > move to flask soon.
> >
> > I can easily extract the bit of code I wrote to load our external
> > middleware (and add an external loader) for the transition away from paste.
> >
> > I also think paste is terrible, and would be willing to help folks move off
> > of it rather than maintain it.
> >
> > --Morgan
> >
> > Do I detect a volunteer to champion a cycle goal? :)
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From dms at danplanet.com Mon Oct 22 18:17:43 2018
From: dms at danplanet.com (Dan Smith)
Date: Mon, 22 Oct 2018 11:17:43 -0700
Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"
In-Reply-To: (Dan Smith's message of "Mon, 22 Oct 2018 11:01:37 -0700")
References:
Message-ID:

> We haven't been doing this (intentionally) for quite some time, as we
> query and fill metadata linearly:
>
> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L2244
>
> and have since 2013 (Havana):
>
> https://review.openstack.org/#/c/26136/
>
> So unless there has been a regression that is leaking those columns back
> into the join list, I'm not sure why the query you show would be
> generated.

Ah, Matt Riedemann just pointed out on IRC that we're not doing it on
single-instance fetch, which is what you'd be hitting in this path. We
use that approach in a lot of places where the rows would also be
multiplied by the number of instances, but not in the single case. So,
that makes sense now.

--Dan

From fm577c at att.com Mon Oct 22 18:34:51 2018
From: fm577c at att.com (MONTEIRO, FELIPE C)
Date: Mon, 22 Oct 2018 18:34:51 +0000
Subject: [openstack-dev] [qa] [patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core
Message-ID: <7D5E803080EF7047850D309B333CB94E22EBA021@GAALPA1MSGUSRBI.ITServices.sbc.com>

Hi,

I would like to nominate Sergey Vilgelm and Mykola Yakovliev for Patrole
core as they have both done excellent work the past cycle in improving
the Patrole framework as well as increasing Neutron Patrole test
coverage, which includes various Neutron plugins/extensions as well,
like fwaas.

I believe they will both make an excellent addition to the Patrole core
team. Please vote with a +1/-1 for the nomination, which will stay open
for one week.

Felipe

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sundar.nadathur at intel.com Mon Oct 22 18:37:14 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Mon, 22 Oct 2018 11:37:14 -0700
Subject: [openstack-dev] [cyborg] [nova] Poll: Name for VARs
Message-ID:

Hi,

The name VAR (Virtual Accelerator Request) is introduced in
https://review.openstack.org/#/c/603955/. It came up during the Stein PTG
and is being used by default, but some folks have said they find the name
VAR to be confusing. I would like to resolve this to completion, so that
whatever name we choose is not subject to recurrent debates in the future.

Here is a poll for Cyborg and Nova developers to indicate their
preferences for existing or proposed options:

https://docs.google.com/spreadsheets/d/179Q8J9qIJNOiVm86K7bWPxo7otTsU18XVCI32V77JaU/edit?usp=sharing

1. Please add your name, if not already listed, and please feel free to
   propose additional options as you see fit.
2. The voting is by rank -- 1 indicates most preferred.
3. If you strongly oppose a term, you may say 'No' and justify with a
   comment. (Comments are added by pressing Ctrl-Alt-M on a cell.)

I'll keep this open for a minimum of two days and possibly for a week
depending on feedback.

Regards,
Sundar

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From james.slagle at gmail.com Mon Oct 22 18:42:27 2018
From: james.slagle at gmail.com (James Slagle)
Date: Mon, 22 Oct 2018 14:42:27 -0400
Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer
In-Reply-To:
References:
Message-ID:

On Fri, Oct 19, 2018 at 8:23 AM Juan Antonio Osorio Robles
wrote:
>
> Hello!
>
> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range in our
> project, his reviews show great insight and quality and I think he would
> be an addition to the core team.
>
> What do you folks think?

+1

-- 
-- James Slagle
--
Zane's proposal can accommodate change in the distro support assertion,
but is focused on figuring out which python versions to test, with that
as one of the inputs.

Clark

From zbitter at redhat.com Mon Oct 22 19:12:46 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Mon, 22 Oct 2018 15:12:46 -0400
Subject: [openstack-dev] Proposal for a process to keep up with Python releases
In-Reply-To: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org>
References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org>
Message-ID: <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com>

On 22/10/18 10:33 AM, Thomas Goirand wrote:
> On 10/19/18 5:17 PM, Zane Bitter wrote:
>> We have traditionally held to the principle that we want each release to
>> support the latest release of CentOS and the latest LTS release of
>> Ubuntu, as they existed at the beginning of the release cycle.[2]
>> Currently this means in practice one version of py2 and one of py3, but
>> in the future it will mean two, usually different, versions of py3.
>
> That's not very nice to forget about the Debian case, which usually
> closely precedes Ubuntu. If you want to support Ubuntu better, then
> supporting Debian better helps. I usually get the issue before everyone,
> as Sid is the distro which is updated the most often. Therefore, please
> make sure to include Debian in your proposal.

This is something that needs to be addressed separately I think. It has
been our long-standing, documented testing policy. If you want to change
it, make a proposal.

For the purposes of this discussion though, the main point to take away
from the paragraph you quoted is that once Python2 is EOL there will
rarely be a _single_ version of Python3 that is sufficient to support
even 2 distros, let alone more.

I haven't forgotten about you, and in fact one of the goals of this
process is to ensure that we stay up-to-date and not get into situations
like you had in Rocky where we were two releases behind. Debian will
definitely benefit from that.

>> For unit tests, the most important thing is to test on the versions of
>> Python we target. It's less important to be using the exact distro that
>> we want to target, because unit tests generally won't interact with
>> stuff outside of Python.
>
> One of the reoccurring problem that I'm facing in Debian is that not
> only Python 3 version is lagging behind, but OpenStack dependencies are
> also lagging behind the distro. Often, the answer is "we don't support
> this or that version of X", which of course is very frustrating. One
> thing which would be super nice, would be a non-voting gate job that
> test with the latest version of every Python dependencies as well, so we
> get to see breakage early. We've stopped seeing them since we decided it
> breaks too often and we would hide problems behind the
> global-requirement thing.

I'll leave this to the requirements team, who are more qualified to
comment.

> And sometimes, we have weird interactions. For example, taskflow was
> broken in Python 3.7 before this patch:
> https://salsa.debian.org/openstack-team/libs/python-taskflow/commit/6a10261a8a147d901c07a6e7272dc75b9f4d0988
>
> which broke multiple packages using it. Funny thing, it looks like it
> wouldn't have happen if we didn't have a pre-version of Python 3.7.1 in
> Sid, apparently. Anyway, this can happen again.

>> So, for example, (and this is still under active debate) for Stein we
>> might have gating jobs for py35 and py37, with a periodic job for py36.
>> The T jobs might only have voting py36 and py37 jobs, but late in the T
>> cycle we might add a non-voting py38 job on master so that people who
>> haven't switched to the U template yet can see what, if anything,
>> they'll need to fix.
>
> This can only happen if we have supporting distribution packages for it.
> IMO, this is a call for using Debian Testing or even Sid in the gate.

It depends on which versions we choose to support, but if necessary yes.

>> We'll run the unit tests on any distro we can find that supports the
>> version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian
>> unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a
>> particular Python version before trying to test it.
>
> I very much agree with that.
>
>> Before the start of each cycle, the TC would determine which range of
>> versions we want to support, on the basis of the latest one we can find
>> in any distro and the earliest one we're likely to need in one of the
>> supported Linux distros.
>
> Release of Python aren't aligned with OpenStack cycles. Python 3.7
> appeared late in the Rocky cycle. Therefore, unfortunately, doing what
> you propose above doesn't address the issue.

This is valuable feedback; it's important to know where there are
real-world cases that we're not addressing. Python 3.7 was released 3
weeks after rocky-2 and only 4 weeks before rocky-3. TBH I find it hard
to imagine any process that would have led us to attempt to get every
OpenStack project supporting 3.7 in Rocky without a radical change in
our conception of how OpenStack is distributed.

On the bright side, under this process we would have had 3.6 support in
Ocata and we could have automatically added a non-voting (or periodic)
3.7 job during Rocky development as soon as a distro was available for
testing, which would at least have made it easier to locate problems
earlier even if we didn't get full 3.7 support until the Stein release.

>> Integration Tests
>> -----------------
>>
>> Integration tests do test, amongst other things, integration with
>> non-openstack-supplied things in the distro, so it's important that we
>> test on the actual distros we have identified as popular.[2] It's also
>> important that every project be testing on the same distro at the end of
>> a release, so we can be sure they all work together for users.
>
> I find very disturbing to see the project only leaning toward these only
> 2 distributions. Why not SuSE & Debian?

The bottom line is it's because targeting those two catches 88% of our
users. (For once I did not make this statistic up.)

Also note that in practice I believe almost everything is actually
tested on Ubuntu LTS, and only TripleO is testing on CentOS. It's
difficult to imagine how to slot another distro into the mix without
doubling up on jobs.

cheers,
Zane.

From gfidente at redhat.com Mon Oct 22 19:28:57 2018
From: gfidente at redhat.com (Giulio Fidente)
Date: Mon, 22 Oct 2018 21:28:57 +0200
Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer
In-Reply-To:
References:
Message-ID:

On 10/19/18 2:23 PM, Juan Antonio Osorio Robles wrote:
> Hello!
>
> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range in our
> project, his reviews show great insight and quality and I think he would
> be an addition to the core team.
>
> What do you folks think?

I think thank you Bob!
-- 
Giulio Fidente
GPG KEY: 08D733BA

From scarvalhojr at gmail.com Mon Oct 22 19:58:09 2018
From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.)
Date: Mon, 22 Oct 2018 20:58:09 +0100
Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"
In-Reply-To:
References:
Message-ID:

Thanks so much for the replies, guys.

> Have you debugged to the point of knowing where the initial DB query is
> starting from?
>
> Looking at history, my guess is this is the change which introduced it
> for all requests:
>
> https://review.openstack.org/#/c/276861/

That is my understanding too. Before, metadata was fetched separately,
but this change meant that both tables are always joined with other
instance data.

I've debugged it to the point where the query gets built, in

https://github.com/openstack/nova/blob/mitaka-eol/nova/db/sqlalchemy/api.py#L2005

which results in a number of left outer joins by the "joinedload" calls,
including to "instance_metadata" and "instance_system_metadata". Most
other tables have a single row for each instance (on our clusters,
anyway), so the impact for us is simply down to metadata.
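
Paraphrasing what that code does (illustrative only, not an exact copy
of the Nova source):

    # Each entry in columns_to_join becomes a joinedload() option on the
    # instance query, i.e. one more LEFT OUTER JOIN in the final SQL.
    query = session.query(models.Instance)
    for column in columns_to_join:  # e.g. ['metadata', 'system_metadata']
        query = query.options(joinedload(column))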
--Dan From scarvalhojr at gmail.com Mon Oct 22 20:15:38 2018 From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.) Date: Mon, 22 Oct 2018 21:15:38 +0100 Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata" In-Reply-To: References: Message-ID: Cool, I'll open a bug then. I was wondering if, before joining the metadata tables with the rest of instance data, we could do a UNION, since both tables are structurally identical. On Mon, Oct 22, 2018 at 9:04 PM Dan Smith wrote: > > Do you guys see an easy fix here? > > > > Should I open a bug report? > > Definitely open a bug. IMHO, we should just make the single-instance > load work like the multi ones, where we load the metadata separately if > requested. We might be able to get away without sysmeta these days, but > we needed it for the flavor details back when the join was added. But, > user metadata is controllable by the user and definitely of interest in > that code, so just dropping sysmeta from the explicit required_attrs > isn't enough, IMHO. > > --Dan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrist at redhat.com Mon Oct 22 20:55:00 2018 From: jrist at redhat.com (Jason E. Rist) Date: Mon, 22 Oct 2018 14:55:00 -0600 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: Message-ID: <836ebd56-2150-5fda-2f18-ca79038ba44f@redhat.com> On 10/19/2018 06:23 AM, Juan Antonio Osorio Robles wrote: > Hello! > > > I would like to propose Bob Fournier (bfournie) as a core reviewer in > TripleO. His patches and reviews have spanned quite a wide range in our > project, his reviews show great insight and quality and I think he would > be a addition to the core team. > > What do you folks think? > > > Best Regards > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > yup. -- Jason E. Rist Senior Software Engineer OpenStack User Interfaces Red Hat, Inc. Freenode: jrist github/twitter: knowncitizen From scarvalhojr at gmail.com Mon Oct 22 21:05:24 2018 From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.) Date: Mon, 22 Oct 2018 22:05:24 +0100 Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata" In-Reply-To: References: Message-ID: https://bugs.launchpad.net/nova/+bug/1799298 On Mon, Oct 22, 2018 at 9:15 PM Sergio A. de Carvalho Jr. < scarvalhojr at gmail.com> wrote: > Cool, I'll open a bug then. > > I was wondering if, before joining the metadata tables with the rest of > instance data, we could do a UNION, since both tables are structurally > identical. > > On Mon, Oct 22, 2018 at 9:04 PM Dan Smith wrote: > >> > Do you guys see an easy fix here? >> > >> > Should I open a bug report? >> >> Definitely open a bug. IMHO, we should just make the single-instance >> load work like the multi ones, where we load the metadata separately if >> requested. We might be able to get away without sysmeta these days, but >> we needed it for the flavor details back when the join was added. But, >> user metadata is controllable by the user and definitely of interest in >> that code, so just dropping sysmeta from the explicit required_attrs >> isn't enough, IMHO. 
>> >> --Dan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Mon Oct 22 21:35:10 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Tue, 23 Oct 2018 10:35:10 +1300 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: References: Message-ID: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> On 20/10/18 4:09 AM, Matt Riedemann wrote: > The big news this week is we have a couple of volunteer developers > from NEC (Akhil Jain and Rajat Dhasmana) who are pushing the base > framework changes across a lot of the projects [1]. I'm trying to > review as many of these as I can. The request now is for the core > teams on these projects to review them as well so we can keep moving, > and then start thinking about non-placeholder specific checks for each > project. > > The one other open question I have is about the Adjutant change [2]. I > know Adjutant is very new and I'm not sure what upgrades look like for > that project, so I don't really know how valuable adding the upgrade > check framework is to that project. Is it like Horizon where it's > mostly stateless and fed off plugins? Because we don't have an upgrade > check CLI for Horizon for that reason. > > [1] > https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) > [2] https://review.openstack.org/#/c/611812/ > Adjutant's codebase is also going to be a bit unstable for the next few cycles while we refactor some internals (we're not marking it 1.0 yet). Once the current set of ugly refactors planned for late Stein are done I may look at building some upgrade checking, once we also work out what out upgrade checking should look like. Probably mostly checking config changes, database migration states, and plugin compatibility. Adjutant already has a concept of startup checks at least, which while not anywhere near as extensive as they should be, mostly amount to making sure your config file looks 'mostly' sane regarding plugins before starting up the service, and we do intend to expand on that, plus we can reuse a large chunk of that for upgrade checking. From mriedemos at gmail.com Mon Oct 22 21:40:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 22 Oct 2018 16:40:50 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> Message-ID: On 10/22/2018 4:35 PM, Adrian Turjak wrote: >> The one other open question I have is about the Adjutant change [2]. I >> know Adjutant is very new and I'm not sure what upgrades look like for >> that project, so I don't really know how valuable adding the upgrade >> check framework is to that project. Is it like Horizon where it's >> mostly stateless and fed off plugins? Because we don't have an upgrade >> check CLI for Horizon for that reason. >> >> [1] >> https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) >> [2]https://review.openstack.org/#/c/611812/ >> > Adjutant's codebase is also going to be a bit unstable for the next few > cycles while we refactor some internals (we're not marking it 1.0 yet). > Once the current set of ugly refactors planned for late Stein are done I > may look at building some upgrade checking, once we also work out what > out upgrade checking should look like. 
Probably mostly checking config > changes, database migration states, and plugin compatibility. > > Adjutant already has a concept of startup checks at least, which while > not anywhere near as extensive as they should be, mostly amount to > making sure your config file looks 'mostly' sane regarding plugins > before starting up the service, and we do intend to expand on that, plus > we can reuse a large chunk of that for upgrade checking. OK it seems there is not really any point in trying to satisfy the upgrade checkers goal for Adjutant in Stein then. Should we just abandon the change? -- Thanks, Matt From gmann at ghanshyammann.com Mon Oct 22 23:09:13 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 23 Oct 2018 08:09:13 +0900 Subject: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core In-Reply-To: <7D5E803080EF7047850D309B333CB94E22EBA021@GAALPA1MSGUSRBI.ITServices.sbc.com> References: <7D5E803080EF7047850D309B333CB94E22EBA021@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: <1669e0a173d.c3a7d12643083.6400498436645570362@ghanshyammann.com> +1 for both of them. They have been doing great work in Patrole and will be a good addition to the team. -gmann ---- On Tue, 23 Oct 2018 03:34:51 +0900 MONTEIRO, FELIPE C wrote ---- > > Hi, > > I would like to nominate Sergey Vilgelm and Mykola Yakovliev for Patrole core as they have both done excellent work over the past cycle in improving the Patrole framework as well as increasing Neutron Patrole test coverage, which includes various Neutron plugins/extensions like fwaas as well. I believe they will both make an excellent addition to the Patrole core team. > > Please vote with a +1/-1 for the nomination, which will stay open for one week. > > Felipe > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Tue Oct 23 01:52:36 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 23 Oct 2018 12:52:36 +1100 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> Message-ID: <20181023015236.GB18528@thor.bakeyournoodle.com> On Mon, Oct 22, 2018 at 04:33:49PM +0200, Thomas Goirand wrote: > One of the recurring problems that I'm facing in Debian is that not > only the Python 3 version is lagging behind, but OpenStack dependencies are > also lagging behind the distro. Often, the answer is "we don't support > this or that version of X", which of course is very frustrating. Can you provide a recent instance of this? I feel like we have a misunderstanding here. > One > thing which would be super nice, would be a non-voting gate job that > tests with the latest version of every Python dependency as well, so we > get to see breakage early. We've stopped seeing them since we decided it > breaks too often and we would hide problems behind the > global-requirement thing. We watch for this in requirements where every day we update to the latest co-installable dependencies[1] and gate on them. If that passes it gets merged into the repo and used by all projects. Where we could do better is making the failures visible, and we're open to suggestions there.
We have the following caps:

cmd2!=0.8.3,<0.9.0;python_version<'3.0' # MIT
construct<2.9 # MIT
Django<2;python_version<'3.0' # BSD
Django<2.1;python_version>='3.0' # BSD
django-floppyforms<2 # BSD
elasticsearch<3.0.0 # Apache-2.0
jsonpath-rw<2.0 # Apache-2.0
jsonschema<3.0.0 # MIT
PrettyTable<0.8 # BSD
python-congressclient<2000 # Apache-2.0
warlock<2 # Apache-2.0
XStatic-jQuery<2 # MIT License

These of course do impact the dependencies that are considered co-installable and we're working towards minimising this list. You can see from [1] that the latest urllib is incompatible with botocore. So we'll exclude that from the update and try again. Meanwhile we'll file a bug (possibly a patch) in botocore to get the cap removed or bumped. Yours Tony. [1] https://review.openstack.org/#/c/612252/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gdubreui at redhat.com Tue Oct 23 08:30:18 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 23 Oct 2018 19:30:18 +1100 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> Message-ID: <55ba80c3-bbfc-76dd-affb-f0c4023af33e@redhat.com> Hi Miguel, Thank you for your help. I'll use those precious instructions next time. Cheers, Gilles On 16/10/18 1:32 am, Miguel Lavalle wrote: > Hi Gilles, > > The merge of master into feature/graphql has been approved: > https://review.openstack.org/#/c/609455. In the future, you can create > your own merge patch following the instructions here: > https://docs.openstack.org/infra/manual/drivers.html#merge-master-into-feature-branch. > The Neutron team will catch it in Gerrit and review it > > Regards > > Miguel > > On Thu, Oct 4, 2018 at 11:44 PM Gilles Dubreuil > wrote: > > Hey Neutron folks, > > I'm just reiterating the request. > > Thanks > > > On 20/06/18 11:34, Gilles Dubreuil wrote: > > Could someone from the Neutron release group rebase feature/graphql > > branch against master/HEAD branch please? > > > > Regards, > > Gilles > > > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Oct 23 08:37:35 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 23 Oct 2018 10:37:35 +0200 Subject: [openstack-dev] [horizon] xstatic-bootstrap-datepicker and twitter-bootstrap dependency Message-ID: Hi, The python3-xstatic-bootstrap-datepicker Debian package runtime depends on libjs-twitter-bootstrap-datepicker which itself depends on libjs-twitter-bootstrap, which is produced by the twitter-bootstrap source package. The twitter-bootstrap package will go away from Debian Buster, as per https://bugs.debian.org/907724 So a few questions here:

- Do I really need to have libjs-twitter-bootstrap-datepicker depend on libjs-twitter-bootstrap (which is version 2 of bootstrap)?
- Is Horizon using bootstrap 3?
- What action does the Horizon team suggest to keep Horizon working in Debian?
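Circling back to Tony's caps list above, here is a small sketch of what a cap such as Django<2.1 does mechanically, using the packaging library purely as an editorial illustration (the real requirements tooling is more involved, and the bounds below are illustrative):

    # Hedged illustration: how a cap constrains the candidate versions a
    # requirements update can pick.
    from packaging.specifiers import SpecifierSet

    cap = SpecifierSet('>=1.11,<2.1')
    candidates = ['1.11.15', '2.0.9', '2.1.2']

    print(list(cap.filter(candidates)))  # ['1.11.15', '2.0.9']
    print('2.1.2' in cap)                # False: blocked by the cap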
Cheers, Thomas Goirand (zigo) From gmann at ghanshyammann.com Tue Oct 23 09:01:27 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 23 Oct 2018 18:01:27 +0900 Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <166a0284b8a.110679edd51607.6362498248541323677@ghanshyammann.com> Hi All, The TC office hour has started on the #openstack-tc channel. Feel free to reach out to us with anything you want to discuss or want input, feedback, or help on from the TC. -gmann From gergely.csatari at nokia.com Tue Oct 23 09:26:12 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Tue, 23 Oct 2018 09:26:12 +0000 Subject: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" In-Reply-To: References: <713a4c9a-d628-ee6d-49e8-c8e9763cdbfe@redhat.com> Message-ID: Hi, Yes, https://github.com/State-of-the-Edge/glossary is a good initiative. Maybe we should all just start using the terms defined there and contribute if we have problems with the definitions. Br, Gerg0 From: Teresa Peluso Sent: Friday, October 19, 2018 4:39 PM To: Csatari, Gergely (Nokia - HU/Budapest) ; OpenStack Development Mailing List (not for usage questions) ; fulton at redhat.com; edge-computing at lists.openstack.org Cc: openstack-sigs at lists.openstack.org Subject: RE: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" Fyi – could this help? https://www.linuxfoundation.org/blog/2018/06/edge-computing-just-got-its-rosetta-stone/ https://imasons.org/ starting to host workshops about this as well https://imasons.org/events/2018-im-edge-congress/ From: Csatari, Gergely (Nokia - HU/Budapest) > Sent: Friday, October 19, 2018 1:05 AM To: OpenStack Development Mailing List (not for usage questions) >; fulton at redhat.com; edge-computing at lists.openstack.org Cc: openstack-sigs at lists.openstack.org Subject: [EXTERNAL] Re: [Edge-computing] [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" Hi, I'm adding the ECG mailing list to the discussion. I think the root of the problem is that there is no single definition of „the edge" (except for [1]), but it changes from group to group or use case to use case. What I recognise as the commonalities in these edge definitions are: 1) a distributed cloud infrastructure (kind of a cloud of clouds) 2) need for automation of everything 3) resource constraints for the control plane. The different edge variants put different emphasis on these common needs based on the use case discussed. To get a clearer understanding of these definitions we could try the following: 1. Always add the definition of these to the given context 2. Check what other groups are using and adopt that 3.
Define our own language and expect everyone else to adopt Br, Gerg0 [1]: https://en.wikipedia.org/wiki/The_Edge From: Jim Rollenhagen > Sent: Thursday, October 18, 2018 11:43 PM To: fulton at redhat.com; OpenStack Development Mailing List (not for usage questions) > Cc: openstack-sigs at lists.openstack.org Subject: Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge" On Thu, Oct 18, 2018 at 4:45 PM John Fulton > wrote: On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen > wrote: > > On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur > wrote: >> >> Hi all, >> >> Sorry for chiming in really late in this topic, but I think $subj is worth >> discussing until we settle harder on the potentially confusing terminology. >> >> I think the difference between "Edge" and "Far Edge" is too vague to use these >> terms in practice. Think about the "edge" metaphor itself: something rarely has >> several layers of edges. A knife has an edge, there are no far edges. I imagine >> zooming in and seeing more edges at the edge, and then it's quite cool indeed, >> but is it really a useful metaphor for those who never used a strong microscope? :) >> >> I think in the trivial sense "Far Edge" is a tautology, and should be avoided. >> As a weak proof of my words, I already see a lot of smart people confusing these >> two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we >> adopt a different terminology, even if it less consistent with typical marketing >> term around the "Edge" movement. > > > FWIW, we created rough definitions of "edge" and "far edge" during the edge WG session in Denver. > It's mostly based on latency to the end user, though we also talked about quantities of compute resources, if someone can find the pictures. Perhaps these are the pictures Jim was referring to? https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0# That's it, thank you! // jim I'm involved in some TripleO work called the split control plane: https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html After the PTG I saw that the split control plane was compatible with the type of deployment discussed at the edge WG session in Denver and described the compatibility at: https://etherpad.openstack.org/p/tripleo-edge-working-group-split-control-plane > See the picture and table here: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview > >> Now, I don't have really great suggestions. Something that came up in TripleO >> discussions [1] is Core/Hub/Edge, which I think reflects the idea better. > > > I'm also fine with these names, as they do describe the concepts well. :) > > // jim I'm fine with these terms too. In split control plane there's a deployment method for deploying a central site and then deploying remote sites independently. That deployment method could be used to deploy Core/Hub/Edge sites too. E.g. deploy the Core using Heat stack N. Deploy a Hub using stack N+1 and then deploy an Edge using stack N+2 etc. John >> >> I'd be very interested to hear your ideas. 
>> >> Dmitry >> >> [1] https://etherpad.openstack.org/p/tripleo-edge-mvp >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Oct 23 09:28:19 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 23 Oct 2018 20:28:19 +1100 Subject: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate In-Reply-To: <20181018131713.nrnnlihipuvxaabu@yuggoth.org> References: <715158a6-25fa-846e-149a-22d6e3d07ef5@suse.com> <20181018131713.nrnnlihipuvxaabu@yuggoth.org> Message-ID: <20181023092819.GA29294@fedora19.localdomain> On Thu, Oct 18, 2018 at 01:17:13PM +0000, Jeremy Stanley wrote: > It's been deleted (again) and the suspected fix approved so > hopefully it won't recur. Unfortunately the underlying issue is still a mystery. It recurred once after the suspected fix was merged [1], and despite trying to replicate it mostly in-situ we could not duplicate the issue. Another change [2] has made our builds use a modified pip [3] which logs the sha256 hash of the .whl outputs. If this reappears, we can look at the logs and the final (corrupt) wheel and see if the problem is coming from pip, or something after that as we copy the files. If looking at hexdumps of zip files is your idea of a good time, there are some details on the corruption in the comments of [2]. Any suggestions welcome :) Also any corruption reports welcome too, and we can continue investigation. Thanks, -i [1] https://review.openstack.org/611444 [2] https://review.openstack.org/612234 [3] https://github.com/pypa/pip/pull/5908 From haleyb.dev at gmail.com Tue Oct 23 11:52:23 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 23 Oct 2018 07:52:23 -0400 Subject: [openstack-dev] [neutron] Bug deputy report week of October 15th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. Note: I will not be at the team meeting this morning, sorry for the late notice. 
-Brian

Critical bugs
-------------
None

High bugs
---------
* https://bugs.launchpad.net/neutron/+bug/1798472 - Fullstack tests fail because a process is not killed properly - gate failure
* https://bugs.launchpad.net/neutron/+bug/1798475 - Fullstack test test_ha_router_restart_agents_no_packet_lost failing - gate failure
* https://bugs.launchpad.net/neutron/+bug/1799124 - Path MTU discovery fails for VMs with Floating IP behind DVR routers - Needs confirmation, I took ownership

Medium bugs
-----------
* https://bugs.launchpad.net/neutron/+bug/1798577 - Fix proposed, https://review.openstack.org/#/c/606007/
* A number of port-forwarding bugs were filed by Liu Yulong
  - https://bugs.launchpad.net/neutron/+bug/1799135
  - https://bugs.launchpad.net/neutron/+bug/1799137
  - https://bugs.launchpad.net/neutron/+bug/1799138
  - https://bugs.launchpad.net/neutron/+bug/1799140
  - https://bugs.launchpad.net/neutron/+bug/1799150
  - https://bugs.launchpad.net/neutron/+bug/1799155
  - Will discuss with Liu if he is working on them

Wishlist bugs
-------------
* https://bugs.launchpad.net/neutron/+bug/1777746 - When using ‘neutron net-update’, we cannot change 'vlan-transparent' dynamically - not a bug as per the API definition, asked if proposing an extension - perhaps possible to implement in a backward-compatible way
* https://bugs.launchpad.net/neutron/+bug/1799178 - l2 pop doesn't always provide the whole list of fdb entries on agent restart - Need a smarter way to detect agent restarts

Invalid bugs
------------
* https://bugs.launchpad.net/neutron/+bug/1798536 - OpenVswitch: qg-XXX goes to br-int instead of br-ext
* https://bugs.launchpad.net/neutron/+bug/1798689 - Fullstack test test_create_one_default_qos_policy_per_project failed - Fixed by https://review.openstack.org/#/c/610280/

Further triage required
-----------------------
* https://bugs.launchpad.net/neutron/+bug/1798588 - neutron-openvswitch-agent breaks the network connection on second reboot - Asked for more information from the submitter
* https://bugs.launchpad.net/neutron/+bug/1798688 - iptables_hybrid tests tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server failed - tempest.lib.exceptions.NotFound: Object not found Details: {u'message': u'Instance None could not be found.', u'code': 404} Not sure if this is an issue with shelve/unshelve since the instance is gone
* https://bugs.launchpad.net/bugs/1798713 - [fwaas] wrong judgment in _is_supported_by_fw_l2_driver method - Fix proposed, https://review.openstack.org/#/c/605988 Need someone from the FWaaS team to confirm and set priority

From e0ne at e0ne.info Tue Oct 23 12:11:13 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 23 Oct 2018 15:11:13 +0300 Subject: [openstack-dev] [horizon][plugins] Horizon plugins validation on CI In-Reply-To: <20181018015215.GA6589@thor.bakeyournoodle.com> References: <20181018015215.GA6589@thor.bakeyournoodle.com> Message-ID: Hi Tony, I like the idea of running functional tests instead of tempest. We can extend our functional tests to cover plugins. Personally, I don't have a strong opinion on which way we should go forward. I'll support any community decision which helps us to get cross-project CI up and running. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, Oct 18, 2018 at 4:55 AM Tony Breeds wrote: > On Wed, Oct 17, 2018 at 04:18:26PM +0300, Ivan Kolodyazhny wrote: > > Hi all, > > > > We discussed this topic at PTG both with Horizon and other teams.
Sounds > > like everybody is interested to have some cross-project CI jobs to verify > > that plugins are not broken with the latest Horizon changes. > > > > The initial idea was to use tempest plugins for this effort like we do > for > > Horizon [1]. We've got a very simple test to verify that Horizon is up > and > > running and a user is able to login. > > > > It's easy to implement such tests for any existing horizon plugin. I > tried > > it for Heat and Manila dashboards. > > Given that I know very little about this but isn't it just as simple as > running the say the octavia-dashboard[1] npm tests on all horizon changes? > This would be similar to the way we run the nova[2] functional tests on all > constraints changes in openstack/requirements. > > Yours Tony. > > [1] Of course all dashbaords/plugins > [2] Not just nova but you get the idea > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Oct 23 13:09:56 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 23 Oct 2018 09:09:56 -0400 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> Message-ID: <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> On 10/22/18 5:40 PM, Matt Riedemann wrote: > On 10/22/2018 4:35 PM, Adrian Turjak wrote: >>> The one other open question I have is about the Adjutant change [2]. I >>> know Adjutant is very new and I'm not sure what upgrades look like for >>> that project, so I don't really know how valuable adding the upgrade >>> check framework is to that project. Is it like Horizon where it's >>> mostly stateless and fed off plugins? Because we don't have an upgrade >>> check CLI for Horizon for that reason. >>> >>> [1] >>> https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) >>> >>> [2]https://review.openstack.org/#/c/611812/ >>> >> Adjutant's codebase is also going to be a bit unstable for the next few >> cycles while we refactor some internals (we're not marking it 1.0 yet). >> Once the current set of ugly refactors planned for late Stein are done I >> may look at building some upgrade checking, once we also work out what >> out upgrade checking should look like. Probably mostly checking config >> changes, database migration states, and plugin compatibility. >> >> Adjutant already has a concept of startup checks at least, which while >> not anywhere near as extensive as they should be, mostly amount to >> making sure your config file looks 'mostly' sane regarding plugins >> before starting up the service, and we do intend to expand on that, plus >> we can reuse a large chunk of that for upgrade checking. > > OK it seems there is not really any point in trying to satisfy the > upgrade checkers goal for Adjutant in Stein then. Should we just abandon > the change? > Can't we just add a noop command like we are for the services that don't currently need upgrade checks? 
From mriedemos at gmail.com Tue Oct 23 13:58:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 23 Oct 2018 08:58:40 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> Message-ID: On 10/23/2018 8:09 AM, Ben Nemec wrote: > Can't we just add a noop command like we are for the services that don't > currently need upgrade checks? We could, but I was also hoping that for most projects we will actually be able to replace the noop / placeholder check with *something* useful in Stein. -- Thanks, Matt From jobernar at redhat.com Tue Oct 23 14:01:42 2018 From: jobernar at redhat.com (Jon Bernard) Date: Tue, 23 Oct 2018 10:01:42 -0400 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com> <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com> Message-ID: <20181023140140.rs3sf6huy4czgtak@exaesuubou5k> * melanie witt wrote: > On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote: > > I created a new vm and a new volume with type 'ceph'[So that the volume > > will be created on one of two hosts. I assume that the volume created on > > host dev at rbd-1#ceph this time]. Next step is to attach the volume to the > > vm. At last I want to migrate the volume from host dev at rbd-1#ceph to > > host dev at rbd-2#ceph, but it failed with the exception > > 'NotImplementedError(_("Swap only supports host devices")'. > > > > So that, my real problem is that is there any work to migrate > > volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool) > > in the same ceph cluster? > > The difference between the spec[2] with my scope is only one is > > *available*(the spec) and another is *in-use*(my scope). > > > > > > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/ > > [2] https://review.openstack.org/#/c/296150 > > Ah, I think I understand now, thank you for providing all of those details. > And I think you explained it in your first email, that cinder supports > migration of ceph volumes if they are 'available' but not if they are > 'in-use'. Apologies that I didn't get your meaning the first time. > > I see now the code you were referring to is this [3]: > > if volume.status not in ('available', 'retyping', 'maintenance'): > LOG.debug('Only available volumes can be migrated using backend ' > 'assisted migration. Falling back to generic migration.') > return refuse_to_migrate > > So because your volume is not 'available', 'retyping', or 'maintenance', > it's falling back to generic migration, which will end up with an error in > nova because the source_path is not set in the volume config. > > Can anyone from the cinder team chime in about whether the ceph volume > migration could be expanded to allow migration of 'in-use' volumes? Is there > a reason not to allow migration of 'in-use' volumes? Generally speaking, Nova must facilitate the migration of a live (or in-use) volume. 
A volume attached to a running instance requires code in the I/O path to correctly route traffic to the correct location - so Cinder must refuse (or defer) a migrate operation if the volume is attached. Until somewhat recently Qemu and Libvirt did not support the migration to non-block (RBD) targets which is the reason for lack of support. I believe we now have all of the pieces to perform this operation successfully, but I suspect it will require a setup with correct versions of all the related software. I will try to verify this during the current release cycle and report back. -- Jon > > [3] https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621 > > Cheers, > -melanie > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jobernar at redhat.com Tue Oct 23 14:08:06 2018 From: jobernar at redhat.com (Jon Bernard) Date: Tue, 23 Oct 2018 10:08:06 -0400 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <57a0f1a3-a014-5e71-d685-9b953cf0eef1@gmail.com> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <57a0f1a3-a014-5e71-d685-9b953cf0eef1@gmail.com> Message-ID: <20181023140805.qrbv7k2g3kmxlh77@exaesuubou5k> * melanie witt wrote: > On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote: > > > > The version of my cinder and nova is Rocky. The scope of the cinder spec[1] > > is only for available volume migration between two pools from the same > > ceph cluster. > > If the volume is in-use status[2], it will call the generic migration > > function. So that as you > > describe it, on the nova side, it raises NotImplementedError(_("Swap > > only supports host devices"). > > The get_config of net volume[3] has not source_path. > > Ah, OK, so you're trying to migrate a volume across two separate ceph > clusters, and that is not supported. > > > So does anyone try to succeed to migrate volume(in-use) with ceph > > backend or is anyone doing something of it? > > Hopefully someone can share their experience with trying to migrate volumes > across separate ceph clusters. I unfortunately don't know anything about it. If this is the case, then Cinder cannot request a storage-specific migration which is typically more efficient. The migration will require a complete copy of each allocated block. Whether the volume is attached or not will determine who (cinder or nova) will perform the operation. 
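To make that dispatch concrete, here is a hedged paraphrase (simplified, not cinder's exact code) of the status gate quoted earlier in the thread from the RBD driver:

    # Hedged paraphrase of the rbd.py status gate quoted above (see [3]
    # in melanie's mail); for illustration only.
    def migrate_volume(context, volume, host):
        refuse_to_migrate = (False, None)  # (migrated, model_update)

        if volume.status not in ('available', 'retyping', 'maintenance'):
            # An 'in-use' volume lands here: cinder falls back to generic
            # migration, and nova's swap then fails for a net (RBD)
            # volume because its libvirt config has no source_path.
            return refuse_to_migrate

        # ...backend-assisted path: check that source and destination are
        # pools in the same cluster, then do the efficient server-side
        # move...
        return (True, None)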
-- Jon > > Best, > -melanie > > > [1] https://review.openstack.org/#/c/296150 > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py > [3] https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tobias.urdin at binero.se Tue Oct 23 14:19:58 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Tue, 23 Oct 2018 16:19:58 +0200 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> Message-ID: <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> Hello Erik, Could you specify the DNs you used for all certificates just so that I can rule it out on my side. You can redact anything sensitive with placeholders, just to get a feel for how it's configured. Best regards Tobias On 10/22/2018 04:47 PM, Erik McCormick wrote: > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: >> Hello, >> >> I've been having a lot of issues with SSL certificates myself, on my >> second trip now trying to get it working. >> >> Before, I spent a lot of time walking through every line in the DevStack >> plugin and fixing my config options, used the generate >> script [1], and still it didn't work. >> >> When I got the "invalid padding" issue it was because of the DN I used >> for the CA and the certificate IIRC. >> >> > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa >> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), >> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), >> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) >> > 19:47 < tobias-urdin> after a quick google "The problem was that my >> CA DN was the same as the certificate DN." >> >> IIRC I think that solved it, but then again I wouldn't remember fully >> since I've been at so many different angles by now. >> >> Here is my IRC logs history from the #openstack-lbaas channel, perhaps >> it can help you out >> http://paste.openstack.org/show/732575/ >> > Tobias, I owe you a beer. This was precisely the issue. I'm deploying > Octavia with kolla-ansible. It only deploys a single CA. After hacking > the templates and playbook to incorporate a separate server CA, the > amphorae now load and provision the required namespace. I'm adding a > kolla tag to the subject of this in hopes that someone might want to > take on changing this behavior in the project. Hopefully after I get > through Upstream Institute in Berlin I'll be able to do it myself if > nobody else wants to do it. > > For certificate generation, I extracted the contents of > octavia_certs_install.yml (which sets up the directory structure, > openssl.cnf, and the client CA), and octavia_certs.yml (which creates > the server CA and the client certificate) and mashed them into a > separate playbook just for this purpose.
At the end I get: > > ca_01.pem - Client CA Certificate > ca_01.key - Client CA Key > ca_server_01.pem - Server CA Certificate > cakey.pem - Server CA Key > client.pem - Concatenated Client Key and Certificate > > If it would help to have the playbook, I can stick it up on github > with a huge "This is a hack" disclaimer on it. > >> ----- >> >> Sorry for hijacking the thread but I'm stuck as well. >> >> I've in the past tried to generate the certificates with [1] but now >> moved on to using the openstack-ansible way of generating them [2] >> with some modifications. >> >> Right now I'm just getting: Could not connect to instance. Retrying.: >> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) >> from the amphoras, haven't got any further but I've eliminated a lot of >> stuck in the middle. >> >> Tried deploying Ocatavia on Ubuntu with python3 to just make sure there >> wasn't an issue with CentOS and OpenSSL versions since it tends to lag >> behind. >> Checking the amphora with openssl s_client [3] it gives the same one, >> but the verification is successful just that I don't understand what the >> bad signature >> part is about, from browsing some OpenSSL code it seems to be related to >> RSA signatures somehow. >> >> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad >> signature:s3_clnt.c:2032: >> >> So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS >> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm >> back to something related >> to the certificates or the communication between the endpoints, or what >> actually responds inside the amphora (gunicorn IIUC?). Based on the >> "verify" functions actually causing that bad signature error I would >> assume it's the generated certificate that the amphora presents that is >> causing it. >> >> I'll have to continue the troubleshooting to the inside of the amphora, >> I've used the test-only amphora image before but have now built my own >> one that is >> using the amphora-agent from the actual stable branch, but same issue >> (bad signature). >> >> For verbosity this is the config options set for the certificates in >> octavia.conf and which file it was copied from [4], same here, a >> replication of what openstack-ansible does. >> >> Appreciate any feedback or help :) >> >> Best regards >> Tobias >> >> [1] >> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh >> [2] http://paste.openstack.org/show/732483/ >> [3] http://paste.openstack.org/show/732486/ >> [4] http://paste.openstack.org/show/732487/ >> >> On 10/20/2018 01:53 AM, Michael Johnson wrote: >>> Hi Erik, >>> >>> Sorry to hear you are still having certificate issues. >>> >>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant >>> network for the VIP, one of the first steps after the worker connects >>> to the amphora agent is finishing the required configuration of the >>> VIP interface inside the network namespace on the amphroa. >>> > Thanks for the hint on the workflow of this. I hadn't gotten deep > enough into the code to find that yet, but I suspected it was blocking > since the namespace never got created either. Thanks > >>> If I remember correctly, you are attempting to configure Octavia with >>> the dual CA option (which is good for non-development use). 
>>> >>> This is what I have for notes: >>> >>> [certificates] gets the following: >>> cert_generator = local_cert_generator >>> ca_certificate = server CA's "server.pem" file >>> ca_private_key = server CA's "server.key" file >>> ca_private_key_passphrase = pass phrase for ca_private_key >>> [controller_worker] >>> client_ca = Client CA's ca_cert file >>> [haproxy_amphora] >>> client_cert = Client CA's client.pem file (I think with it's key >>> concatenated is what rm_work said the other day) >>> server_ca = Server CA's ca_cert file >>> > This is all very helpful. It's a bit difficult to know what goes where > the way the documentation is written presently. For something that's > going to be the defacto standard for loadbalancing, we as a community > need to do a better job of documenting how to set up, configure, and > manage this in production. I'm trying to capture my lessons learned > and processes as I go to help with that if I can. > > -Erik > >>> That said, I can probably run through this and write something up next >>> week that is more step-by-step/detailed. >>> >>> Michael >>> >>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick >>> wrote: >>>> Apologies for cross-posting, but in the event that these might be >>>> worth filing as bugs, I wanted the Octavia devs to see it as well... >>>> >>>> I've been wrestling with getting Octavia up and running and have >>>> become stuck on two issues. I'm hoping someone has run into these >>>> before. My google foo has come up empty. >>>> >>>> Issue 1: >>>> When the Octavia controller tries to poll the amphora instance, it >>>> tries repeatedly and eventually fails. The error on the controller >>>> side is: >>>> >>>> 2018-10-19 14:17:39.181 26 ERROR >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >>>> retries (currently set to 300) exhausted. The amphora is unavailable. >>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >>>> SSLError(SSLError("bad handshake: Error([('rsa routines', >>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>>> 'tls_process_server_certificate', 'certificate verify >>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>>> 'tls_process_server_certificate', 'certificate verify >>>> failed')],)",),)) >>>> >>>> On the amphora side I see: >>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >>>> failure (_ssl.c:1754) >>>> >>>> I've generated certificates both with the script in the Octavia git >>>> repo, and with the Openstack Ansible playbook. I can see that they are >>>> present in /etc/octavia/certs. >>>> >>>> I'm using the Kolla (Queens) containers for the control plane so I'm >>>> sure I've satisfied all the python library constraints. 
>>>> >>>> Issue 2: >>>> I"m not sure how it gets configured, but the tenant network interface >>>> (ens6) never comes up. I can spawn other instances on that network >>>> with no issue, and I can see that Neutron has the port attached to the >>>> instance. However, in the instance this is all I get: >>>> >>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >>>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >>>> group default qlen 1 >>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >>>> inet 127.0.0.1/8 scope host lo >>>> valid_lft forever preferred_lft forever >>>> inet6 ::1/128 scope host >>>> valid_lft forever preferred_lft forever >>>> 2: ens3: mtu 9000 qdisc pfifo_fast >>>> state UP group default qlen 1000 >>>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >>>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >>>> valid_lft forever preferred_lft forever >>>> inet6 fe80::f816:3eff:fe30:c460/64 scope link >>>> valid_lft forever preferred_lft forever >>>> 3: ens6: mtu 1500 qdisc noop state DOWN group >>>> default qlen 1000 >>>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >>>> >>>> There's no evidence of the interface anywhere else including udev rules. >>>> >>>> Any help with either or both issues would be greatly appreciated. >>>> >>>> Cheers, >>>> Erik >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Tue Oct 23 14:30:23 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 23 Oct 2018 10:30:23 -0400 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> Message-ID: On 10/23/18 9:58 AM, Matt Riedemann wrote: > On 10/23/2018 8:09 AM, Ben Nemec wrote: >> Can't we just add a noop command like we are for the services that >> don't currently need upgrade checks? > > We could, but I was also hoping that for most projects we will actually > be able to replace the noop / placeholder check with *something* useful > in Stein. > Yeah, but part of the reason for placeholders was consistency across all of the services. I guess if there are never going to be upgrade checks in adjutant then I could see skipping it, but otherwise I would prefer to at least get the framework in place. 
From miguel at mlavalle.com Tue Oct 23 14:59:37 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 23 Oct 2018 09:59:37 -0500 Subject: [openstack-dev] Neutron stadium project Tempest plugins Message-ID: Dear Neutron Stadium projects, In a QA session during the recent PTG in Denver, it was suggested that the Stadium projects should move their Tempest plugins to a repository of their own or add them to the Neutron Tempest plugin repository ( https://github.com/openstack/neutron-tempest-plugin). The purpose of this message is to start a conversation for the Stadium projects to indicate what their preference is. Please respond to this thread indicating how you want to move forward. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From scarvalhojr at gmail.com Tue Oct 23 15:37:30 2018 From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.) Date: Tue, 23 Oct 2018 16:37:30 +0100 Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata" In-Reply-To: References: Message-ID: I tested a code change that essentially reverts https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py
If I understand correctly, metadata is > fetched in separate queries as the instance object is > created. Everything seems to work just fine, and I've considerably > reduced the amount of data fetched from the database, as well as > reduced the average response time of API requests. > > Given how simple it is and the results I'm getting, I don't see any > reason not to patch my clusters with this change. > > Do you guys see any other impact this change could have? Anything that > it could potentially break? This is probably fine as a bandage fix, but it's not the right one for upstream, IMHO. By doing what you did, you cause two RPC round-trips to fetch the instance and then the metadata every single time the metadata API is hit (not including the cache). By converting the DB load to do the two-step, we still hit the DB twice, but only one RPC round-trip, which will be much more efficient especially at load/scale. --Dan From cdent+os at anticdent.org Tue Oct 23 16:28:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 23 Oct 2018 17:28:05 +0100 (BST) Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> Message-ID: On Mon, 22 Oct 2018, Chris Dent wrote: > Thus far I'm not hearing any volunteers. If that continues to be the > case, I'll just keep it on bitbucket as that's the minimal change. As there was some noise that suggested "if you make it use git I might help", I put it on github: https://github.com/cdent/paste I'm now in the process of getting it somewhat sane for modern python, however test coverage isn't that great so additional work is required. Once it seems mostly okay, I'll push out a new version to pypi. I welcome assistance from any and all. And, rather importantly, we also need to take over pastedeploy as well, as the functionality there is also important. I've started that ball rolling. If having it live in my github proves a problem we can easily move it along somewhere else, but this was the shortest hop. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From scarvalhojr at gmail.com Tue Oct 23 16:34:50 2018 From: scarvalhojr at gmail.com (Sergio A. de Carvalho Jr.) Date: Tue, 23 Oct 2018 17:34:50 +0100 Subject: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata" In-Reply-To: References: Message-ID: Make sense, Dan. Thanks so much for your help. Sergio On Tue, Oct 23, 2018 at 5:01 PM Dan Smith wrote: > > I tested a code change that essentially reverts > > https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py > > > > In other words, with this change metadata tables are not fetched by > > default in API requests. If I understand correctly, metadata is > > fetched in separate queries as the instance object is > > created. Everything seems to work just fine, and I've considerably > > reduced the amount of data fetched from the database, as well as > > reduced the average response time of API requests. > > > > Given how simple it is and the results I'm getting, I don't see any > > reason not to patch my clusters with this change. > > > > Do you guys see any other impact this change could have? Anything that > > it could potentially break? > > This is probably fine as a bandage fix, but it's not the right one for > upstream, IMHO. 
By doing what you did, you cause two RPC round-trips to > fetch the instance and then the metadata every single time the metadata > API is hit (not including the cache). By converting the DB load to do > the two-step, we still hit the DB twice, but only one RPC round-trip, > which will be much more efficient especially at load/scale. > > --Dan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Oct 23 17:20:02 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 23 Oct 2018 13:20:02 -0400 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> Message-ID: On Tue, Oct 23, 2018 at 10:20 AM Tobias Urdin wrote: > > Hello Erik, > > Could you specify the DNs you used for all certificates just so that I > can rule it out on my side. > You can redact anything sensitive with some to just get the feel on how > it's configured. > > Best regards > Tobias > I'm not actually using anything special or custom. For right now I just let it use the default www.example.com stuff. These are the settings in the playbook which I distilled from OSA octavia_cert_key_length_server: '4096' # key length octavia_cert_cipher_server: 'aes256' octavia_cert_cipher_client: 'aes256' octavia_cert_key_length_client: '4096' # key length octavia_cert_server_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real octavia_cert_client_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real octavia_cert_client_req_common_name: 'www.example.com' # change this to something more real octavia_cert_client_req_country_name: 'US' octavia_cert_client_req_state_or_province_name: 'Denial' octavia_cert_client_req_locality_name: 'Nowhere' octavia_cert_client_req_organization_name: 'Dis' octavia_cert_validity_days: 1825 # 5 years -Erik > On 10/22/2018 04:47 PM, Erik McCormick wrote: > > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: > >> Hello, > >> > >> I've been having a lot of issues with SSL certificates myself, on my > >> second trip now trying to get it working. > >> > >> Before I spent a lot of time walking through every line in the DevStack > >> plugin and fixing my config options, used the generate > >> script [1] and still it didn't work. > >> > >> When I got the "invalid padding" issue it was because of the DN I used > >> for the CA and the certificate IIRC. > >> > >> > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > >> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa > >> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > >> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > >> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > >> > 19:47 < tobias-urdin> after a quick google "The problem was that my > >> CA DN was the same as the certificate DN." > >> > >> IIRC I think that solved it, but then again I wouldn't remember fully > >> since I've been at so many different angles by now. > >> > >> Here is my IRC logs history from the #openstack-lbaas channel, perhaps > >> it can help you out > >> http://paste.openstack.org/show/732575/ > >> > > Tobias, I owe you a beer. 
This was precisely the issue. I'm deploying > > Octavia with kolla-ansible. It only deploys a single CA. After hacking > > the templates and playbook to incorporate a separate server CA, the > > amphorae now load and provision the required namespace. I'm adding a > > kolla tag to the subject of this in hopes that someone might want to > > take on changing this behavior in the project. Hopefully after I get > > through Upstream Institute in Berlin I'll be able to do it myself if > > nobody else wants to do it. > > > > For certificate generation, I extracted the contents of > > octavia_certs_install.yml (which sets up the directory structure, > > openssl.cnf, and the client CA), and octavia_certs.yml (which creates > > the server CA and the client certificate) and mashed them into a > > separate playbook just for this purpose. At the end I get: > > > > ca_01.pem - Client CA Certificate > > ca_01.key - Client CA Key > > ca_server_01.pem - Server CA Certificate > > cakey.pem - Server CA Key > > client.pem - Concatenated Client Key and Certificate > > > > If it would help to have the playbook, I can stick it up on github > > with a huge "This is a hack" disclaimer on it. > > > >> ----- > >> > >> Sorry for hijacking the thread but I'm stuck as well. > >> > >> I've in the past tried to generate the certificates with [1] but now > >> moved on to using the openstack-ansible way of generating them [2] > >> with some modifications. > >> > >> Right now I'm just getting: Could not connect to instance. Retrying.: > >> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > >> from the amphoras; I haven't got any further, but I've eliminated a lot of > >> stuff in the middle. > >> > >> Tried deploying Octavia on Ubuntu with python3 to just make sure there > >> wasn't an issue with CentOS and OpenSSL versions since it tends to lag > >> behind. > >> Checking the amphora with openssl s_client [3] gives the same error; the > >> verification itself is successful, but I don't understand what the > >> bad signature > >> part is about. From browsing some OpenSSL code it seems to be related to > >> RSA signatures somehow. > >> > >> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > >> signature:s3_clnt.c:2032: > >> > >> So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS > >> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > >> back to something related to the certificates, or the communication between the endpoints, or what > >> actually responds inside the amphora (gunicorn IIUC?). Based on the > >> "verify" functions actually causing that bad signature error, I would > >> assume it's the generated certificate that the amphora presents that is > >> causing it.
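A quick external check for that DN overlap (a sketch only: <amphora-ip> is a placeholder, 9443 is the amphora agent's usual listening port, and the CA file names follow Erik's list above, so adjust to your own layout):

$ echo | openssl s_client -connect <amphora-ip>:9443 2>/dev/null | openssl x509 -noout -subject -issuer
$ openssl x509 -in ca_server_01.pem -noout -subject
$ openssl x509 -in ca_01.pem -noout -subject

If the subject the amphora presents is identical to a CA's subject, that is the same-DN trap described earlier in the thread.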
> >> > >> Appreciate any feedback or help :) > >> > >> Best regards > >> Tobias > >> > >> [1] > >> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > >> [2] http://paste.openstack.org/show/732483/ > >> [3] http://paste.openstack.org/show/732486/ > >> [4] http://paste.openstack.org/show/732487/ > >> > >> On 10/20/2018 01:53 AM, Michael Johnson wrote: > >>> Hi Erik, > >>> > >>> Sorry to hear you are still having certificate issues. > >>> > >>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > >>> network for the VIP, one of the first steps after the worker connects > >>> to the amphora agent is finishing the required configuration of the > >>> VIP interface inside the network namespace on the amphroa. > >>> > > Thanks for the hint on the workflow of this. I hadn't gotten deep > > enough into the code to find that yet, but I suspected it was blocking > > since the namespace never got created either. Thanks > > > >>> If I remember correctly, you are attempting to configure Octavia with > >>> the dual CA option (which is good for non-development use). > >>> > >>> This is what I have for notes: > >>> > >>> [certificates] gets the following: > >>> cert_generator = local_cert_generator > >>> ca_certificate = server CA's "server.pem" file > >>> ca_private_key = server CA's "server.key" file > >>> ca_private_key_passphrase = pass phrase for ca_private_key > >>> [controller_worker] > >>> client_ca = Client CA's ca_cert file > >>> [haproxy_amphora] > >>> client_cert = Client CA's client.pem file (I think with it's key > >>> concatenated is what rm_work said the other day) > >>> server_ca = Server CA's ca_cert file > >>> > > This is all very helpful. It's a bit difficult to know what goes where > > the way the documentation is written presently. For something that's > > going to be the defacto standard for loadbalancing, we as a community > > need to do a better job of documenting how to set up, configure, and > > manage this in production. I'm trying to capture my lessons learned > > and processes as I go to help with that if I can. > > > > -Erik > > > >>> That said, I can probably run through this and write something up next > >>> week that is more step-by-step/detailed. > >>> > >>> Michael > >>> > >>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > >>> wrote: > >>>> Apologies for cross-posting, but in the event that these might be > >>>> worth filing as bugs, I wanted the Octavia devs to see it as well... > >>>> > >>>> I've been wrestling with getting Octavia up and running and have > >>>> become stuck on two issues. I'm hoping someone has run into these > >>>> before. My google foo has come up empty. > >>>> > >>>> Issue 1: > >>>> When the Octavia controller tries to poll the amphora instance, it > >>>> tries repeatedly and eventually fails. The error on the controller > >>>> side is: > >>>> > >>>> 2018-10-19 14:17:39.181 26 ERROR > >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > >>>> retries (currently set to 300) exhausted. The amphora is unavailable. 
> > This is all very helpful. It's a bit difficult to know what goes where > > the way the documentation is written presently. For something that's > > going to be the de facto standard for load balancing, we as a community > > need to do a better job of documenting how to set up, configure, and > > manage this in production. I'm trying to capture my lessons learned > > and processes as I go to help with that if I can. > > > > -Erik > > > >>> That said, I can probably run through this and write something up next > >>> week that is more step-by-step/detailed. > >>> > >>> Michael > >>> > >>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > >>> wrote: > >>>> Apologies for cross-posting, but in the event that these might be > >>>> worth filing as bugs, I wanted the Octavia devs to see it as well... > >>>> > >>>> I've been wrestling with getting Octavia up and running and have > >>>> become stuck on two issues. I'm hoping someone has run into these > >>>> before. My google foo has come up empty. > >>>> > >>>> Issue 1: > >>>> When the Octavia controller tries to poll the amphora instance, it > >>>> tries repeatedly and eventually fails. The error on the controller > >>>> side is: > >>>> > >>>> 2018-10-19 14:17:39.181 26 ERROR > >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > >>>> retries (currently set to 300) exhausted. The amphora is unavailable. > >>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > >>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > >>>> SSLError(SSLError("bad handshake: Error([('rsa routines', > >>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >>>> 'tls_process_server_certificate', 'certificate verify > >>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > >>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > >>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > >>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >>>> 'tls_process_server_certificate', 'certificate verify > >>>> failed')],)",),)) > >>>> > >>>> On the amphora side I see: > >>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > >>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > >>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > >>>> failure (_ssl.c:1754) > >>>> > >>>> I've generated certificates both with the script in the Octavia git > >>>> repo, and with the OpenStack Ansible playbook. I can see that they are > >>>> present in /etc/octavia/certs. > >>>> > >>>> I'm using the Kolla (Queens) containers for the control plane so I'm > >>>> sure I've satisfied all the python library constraints. > >>>> > >>>> Issue 2: > >>>> I'm not sure how it gets configured, but the tenant network interface > >>>> (ens6) never comes up. I can spawn other instances on that network > >>>> with no issue, and I can see that Neutron has the port attached to the > >>>> instance. However, in the instance this is all I get: > >>>> > >>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > >>>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >>>> group default qlen 1 > >>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >>>> inet 127.0.0.1/8 scope host lo > >>>> valid_lft forever preferred_lft forever > >>>> inet6 ::1/128 scope host > >>>> valid_lft forever preferred_lft forever > >>>> 2: ens3: mtu 9000 qdisc pfifo_fast > >>>> state UP group default qlen 1000 > >>>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > >>>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > >>>> valid_lft forever preferred_lft forever > >>>> inet6 fe80::f816:3eff:fe30:c460/64 scope link > >>>> valid_lft forever preferred_lft forever > >>>> 3: ens6: mtu 1500 qdisc noop state DOWN group > >>>> default qlen 1000 > >>>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > >>>> > >>>> There's no evidence of the interface anywhere else including udev rules. > >>>> > >>>> Any help with either or both issues would be greatly appreciated.
> >>>> > >>>> Cheers, > >>>> Erik > >>>> > >>>> __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From chris at openstack.org Tue Oct 23 18:17:20 2018 From: chris at openstack.org (Chris Hoge) Date: Tue, 23 Oct 2018 11:17:20 -0700 Subject: [openstack-dev] OpenStack Foundation Community Meeting - October 24 - StarlingX In-Reply-To: References: Message-ID: <92E23EF2-4815-4553-A9FF-C316D771FF64@openstack.org> On Wednesday, October 24 we will host our next Foundation community meeting at 8:00 PT / 15:00 UTC. This meeting will focus on an update on StarlingX, one of the projects in the Edge Computing Strategic Focus Area. The full agenda is here: https://etherpad.openstack.org/p/openstack-community-meeting Do you have something you'd like to discuss or share with the community? Please share them with me so that I can schedule them for future meetings. Thanks, Chris -------------- next part -------------- A non-text attachment was scrubbed... Name: Mail Attachment.ics Type: text/calendar Size: 3160 bytes Desc: not available URL: From sean.mcginnis at gmx.com Tue Oct 23 18:41:24 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 23 Oct 2018 13:41:24 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> Message-ID: <20181023184123.GA20743@sm-workstation> On Tue, Oct 23, 2018 at 10:30:23AM -0400, Ben Nemec wrote: > > > On 10/23/18 9:58 AM, Matt Riedemann wrote: > > On 10/23/2018 8:09 AM, Ben Nemec wrote: > > > Can't we just add a noop command like we are for the services that > > > don't currently need upgrade checks? > > > > We could, but I was also hoping that for most projects we will actually > > be able to replace the noop / placeholder check with *something* useful > > in Stein. > > > > Yeah, but part of the reason for placeholders was consistency across all of > the services. I guess if there are never going to be upgrade checks in > adjutant then I could see skipping it, but otherwise I would prefer to at > least get the framework in place. > +1 Even if there is nothing to check at this point, I think having the facility there is a benefit for projects and scripts that are going to be consuming these checks. 
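For reference, the consumer-facing pattern already exists in nova, so the contract these scripts would rely on is easy to see; a deployment tool could drive every service uniformly once each one grows its own $SERVICE-status command (a sketch):

$ nova-status upgrade check
$ echo $?   # 0 = all checks passed, 1 = warning(s), 2 = at least one failed check

With a noop placeholder in place, the command simply exits 0 until real checks land.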
Having nothing to check, but having the status check there, is going to be better than everything needing to keep a list of which projects to run the checks on and which not. From skaplons at redhat.com Tue Oct 23 20:08:11 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 23 Oct 2018 22:08:11 +0200 Subject: [openstack-dev] Neutron stadium project Tempest plugins In-Reply-To: References: Message-ID: <60857DF6-9E5D-4BCB-80DC-EEF49072C61A at redhat.com> Hi, Thx Miguel for raising this. The list of tempest plugins is on https://docs.openstack.org/tempest/latest/plugin-registry.html - if the URL for your plugin is the same as your main repo, you should move your tempest plugin code. > Message written by Miguel Lavalle on 23.10.2018 at 16:59: > > Dear Neutron Stadium projects, > > In a QA session during the recent PTG in Denver, it was suggested that the Stadium projects should move their Tempest plugins to a repository of their own or add them to the Neutron Tempest plugin repository (https://github.com/openstack/neutron-tempest-plugin). The purpose of this message is to start a conversation for the Stadium projects to indicate what their preference is. Please respond to this thread indicating how you want to move forward. > > Best regards > > Miguel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From mriedemos at gmail.com Tue Oct 23 21:40:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 23 Oct 2018 16:40:59 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: <20181023184123.GA20743 at sm-workstation> References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7 at catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb at nemebean.com> <20181023184123.GA20743 at sm-workstation> Message-ID: <6b09f693-98d9-9a84-8e47-c22669efd586 at gmail.com> On 10/23/2018 1:41 PM, Sean McGinnis wrote: >> Yeah, but part of the reason for placeholders was consistency across all of >> the services. I guess if there are never going to be upgrade checks in >> adjutant then I could see skipping it, but otherwise I would prefer to at >> least get the framework in place. >> > +1 > > Even if there is nothing to check at this point, I think having the facility > there is a benefit for projects and scripts that are going to be consuming > these checks. Having nothing to check, but having the status check there, is > going to be better than everything needing to keep a list of which projects to > run the checks on and which not. > Sure, that works for me as well. I'm not against adding placeholder/noop checks knowing that nothing is immediately obvious to replace those in Stein, but could be done later when the opportunity arises. If it's debatable on a per-project basis, then I'd defer to the core team for the project. -- Thanks, Matt From sorrison at gmail.com Tue Oct 23 23:54:31 2018 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 24 Oct 2018 10:54:31 +1100 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas Message-ID: <929B1777-594D-440A-9847-84EE66A7146F at gmail.com> Hi nova devs, Have been having a good look into cellsv2 and how we migrate to them (we're still on cellsv1 and about to upgrade to queens and still run cells v1 for now).
One of the problems I have is that now all our nova cell database servers need to respond to API requests. With cellsv1 our architecture was to have a big powerful DB cluster (3 physical servers) at the API level to handle the API cell and then a smallish non HA DB server (usually just a VM) for each of the compute cells. This architecture won't work with cells V2 and we'll now need to have a lot of highly available and responsive DB servers for all the cells. It will also mean that our nova-apis, which reside in Melbourne, Australia, will now need to talk to database servers in Auckland, New Zealand. The biggest issue we have is when a cell is down. We sometimes have cells go down for an hour or so, planned or unplanned, and with cellsv1 this does not affect other cells. Looks like some good work going on here https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell But what about quota? If a cell goes down then it would seem that a user all of a sudden would regain some quota from the instances that are in the down cell? Just wondering if anyone has thought about this? Cheers, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 24 01:04:25 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 24 Oct 2018 10:04:25 +0900 Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <166a399ec62.1113fc2f675975.1294718744916787308 at ghanshyammann.com> Hi All, The TC office hour has started on the #openstack-tc channel. Feel free to reach out to us for anything you want to discuss, or for any input/feedback/help from the TC. -gmann From adriant at catalyst.net.nz Wed Oct 24 01:55:33 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 24 Oct 2018 14:55:33 +1300 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb at nemebean.com> References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7 at catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb at nemebean.com> Message-ID: <8efde991-a756-bad3-caa8-9bd67e380e81 at catalyst.net.nz> On 24/10/18 2:09 AM, Ben Nemec wrote: > > > On 10/22/18 5:40 PM, Matt Riedemann wrote: >> On 10/22/2018 4:35 PM, Adrian Turjak wrote: >>>> The one other open question I have is about the Adjutant change [2]. I >>>> know Adjutant is very new and I'm not sure what upgrades look like for >>>> that project, so I don't really know how valuable adding the upgrade >>>> check framework is to that project. Is it like Horizon where it's >>>> mostly stateless and fed off plugins? Because we don't have an upgrade >>>> check CLI for Horizon for that reason. >>>> >>>> [1] >>>> https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) >>>> >>>> [2]https://review.openstack.org/#/c/611812/ >>>> >>> Adjutant's codebase is also going to be a bit unstable for the next few >>> cycles while we refactor some internals (we're not marking it 1.0 yet). >>> Once the current set of ugly refactors planned for late Stein are done I >>> may look at building some upgrade checking, once we also work out what >>> our upgrade checking should look like. Probably mostly checking config >>> changes, database migration states, and plugin compatibility.
>>> >>> Adjutant already has a concept of startup checks at least, which while >>> not anywhere near as extensive as they should be, mostly amount to >>> making sure your config file looks 'mostly' sane regarding plugins >>> before starting up the service, and we do intend to expand on that, >>> plus >>> we can reuse a large chunk of that for upgrade checking. >> >> OK it seems there is not really any point in trying to satisfy the >> upgrade checkers goal for Adjutant in Stein then. Should we just >> abandon the change? >> > > Can't we just add a noop command like we are for the services that > don't currently need upgrade checks? I mostly was responding to this in the review itself rather than on here. We are probably going to have reason for an upgrade check in Adjutant, my main gripe is, Adjutant is Django based and there isn't a good point in adding a separate cli when we already expose 'adjutant-api' as a proxy to manage.py and as such we should just register the upgrade check as a custom Django admin command. More so because all of the logic needed to actually run the check in future will require Django settings to be configured. We don't actually use any oslo libraries yet so the current code for the check doesn't actually make sense in context. I'm fine with a noop check, but we have to make it fit. From manuel.sb at garvan.org.au Wed Oct 24 02:44:08 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Wed, 24 Oct 2018 02:44:08 +0000 Subject: [openstack-dev] [KOLLA] error deploying openstack -- TASK [keystone : Creating default user role] keystone is accessible and urllib3 and chardet libraries up to date Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAD0CDC@MXDB2.ad.garvan.unsw.edu.au> Dear Kolla-ansible team, I am trying to deploy openstack pike using kolla-ansible 6.1.0 without success. I am not a python developer so I was wondering whether someone could help troubleshooting. 
[root at openstack-deployment ~]# pip show kolla-ansible Name: kolla-ansible Version: 6.1.0 Summary: Ansible Deployment of Kolla containers Home-page: https://docs.openstack.org/kolla-ansible/latest/ Author: OpenStack Author-email: openstack-dev at lists.openstack.org License: Apache License, Version 2.0 Location: /usr/lib/python2.7/site-packages Requires: PyYAML, setuptools, oslo.utils, Jinja2, cryptography, docker, netaddr, six, pbr, oslo.config Required-by: This is the ansible output TASK [keystone : Creating default user role] **************************************************************************************************************************************************************************************************************** task path: /usr/share/kolla-ansible/ansible/roles/keystone/tasks/register.yml:10 ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907 `" && echo ansible-tmp-1540346152.6-54138515670907="` echo /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1540346152.6-54138515670907=/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907\n', '') Using module file /usr/share/kolla-ansible/ansible/library/kolla_toolbox.py PUT /root/.ansible/tmp/ansible-local-10970L49VmL/tmpFspLOR TO /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 '[test-openstack-controller]' (0, 'sftp> put /root/.ansible/tmp/ansible-local-10970L49VmL/tmpFspLOR /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py\n', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/ /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py && sleep 0'"'"'' (0, '', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 -tt test-openstack-controller '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py && sleep 0'"'"'' (1, '/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24) or chardet (2.2.1) doesn\'t match a supported version!\r\n RequestsDependencyWarning)\r\nTraceback (most recent call last):\r\n File "/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py", line 113, in \r\n _ansiballz_main()\r\n File "/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py", line 48, in invoke_module\r\n imp.load_module(\'__main__\', mod, module, MOD_DESC)\r\n File "/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py", line 155, in \r\n File "/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py", line 133, in main\r\n File "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 19, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File "/usr/lib/python2.7/site-packages/docker/api/exec_api.py", line 165, in exec_start\r\n return self._read_from_socket(res, stream, tty)\r\n File "/usr/lib/python2.7/site-packages/docker/api/client.py", line 377, in _read_from_socket\r\n return six.binary_type().join(gen)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 75, in frames_iter\r\n n = next_frame_size(socket)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 62, in next_frame_size\r\n data = read_exactly(socket, 8)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 47, in read_exactly\r\n next_data = read(socket, n - len(data))\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 31, in read\r\n return socket.recv(n)\r\nsocket.timeout: timed out\r\n', 'Shared connection to test-openstack-controller closed.\r\n') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|success` use `result is success`. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. 
FAILED - RETRYING: Creating default user role (10 retries left).Result was: { "attempts": 1, "changed": false, "module_stderr": "Shared connection to test-openstack-controller closed.\r\n", "module_stdout": "/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24) or chardet (2.2.1) doesn't match a supported version!\r\n RequestsDependencyWarning)\r\nTraceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py\", line 113, in \r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py\", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py\", line 48, in invoke_module\r\n imp.load_module('__main__', mod, module, MOD_DESC)\r\n File \"/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py\", line 155, in \r\n File \"/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py\", line 133, in main\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/decorators.py\", line 19, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \"/usr/lib/python2.7/site-packages/docker/api/exec_api.py\", line 165, in exec_start\r\n return self._read_from_socket(res, stream, tty)\r\n File \"/usr/lib/python2.7/site-packages/docker/api/client.py\", line 377, in _read_from_socket\r\n return six.binary_type().join(gen)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 75, in frames_iter\r\n n = next_frame_size(socket)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 62, in next_frame_size\r\n data = read_exactly(socket, 8)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 47, in read_exactly\r\n next_data = read(socket, n - len(data))\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 31, in read\r\n return socket.recv(n)\r\nsocket.timeout: timed out\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1, "retries": 11 } ... 
ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'echo ~ && sleep 0'"'"'' (0, '/root\n', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279 `" && echo ansible-tmp-1540346807.21-23787866279279="` echo /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279 `" ) && sleep 0'"'"'' (0, 'ansible-tmp-1540346807.21-23787866279279=/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279\n', '') Using module file /usr/share/kolla-ansible/ansible/library/kolla_toolbox.py PUT /root/.ansible/tmp/ansible-local-10970L49VmL/tmpHcMzMC TO /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 '[test-openstack-controller]' (0, 'sftp> put /root/.ansible/tmp/ansible-local-10970L49VmL/tmpHcMzMC /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py\n', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/ /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py && sleep 0'"'"'' (0, '', '') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 -tt test-openstack-controller '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py && sleep 0'"'"'' (1, '/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24) or chardet (2.2.1) doesn\'t match a supported version!\r\n RequestsDependencyWarning)\r\nTraceback (most recent call last):\r\n File "/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py", line 113, in \r\n _ansiballz_main()\r\n File "/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File 
"/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py", line 48, in invoke_module\r\n imp.load_module(\'__main__\', mod, module, MOD_DESC)\r\n File "/tmp/ansible_kolla_toolbox_payload_9nPmOp/__main__.py", line 155, in \r\n File "/tmp/ansible_kolla_toolbox_payload_9nPmOp/__main__.py", line 133, in main\r\n File "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 19, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File "/usr/lib/python2.7/site-packages/docker/api/exec_api.py", line 165, in exec_start\r\n return self._read_from_socket(res, stream, tty)\r\n File "/usr/lib/python2.7/site-packages/docker/api/client.py", line 377, in _read_from_socket\r\n return six.binary_type().join(gen)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 75, in frames_iter\r\n n = next_frame_size(socket)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 62, in next_frame_size\r\n data = read_exactly(socket, 8)\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 47, in read_exactly\r\n next_data = read(socket, n - len(data))\r\n File "/usr/lib/python2.7/site-packages/docker/utils/socket.py", line 31, in read\r\n return socket.recv(n)\r\nsocket.timeout: timed out\r\n', 'Shared connection to test-openstack-controller closed.\r\n') ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/ > /dev/null 2>&1 && sleep 0'"'"'' (0, '', '') fatal: [test-openstack-controller]: FAILED! 
=> { "attempts": 10, "changed": false, "module_stderr": "Shared connection to test-openstack-controller closed.\r\n", "module_stdout": "/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24) or chardet (2.2.1) doesn't match a supported version!\r\n RequestsDependencyWarning)\r\nTraceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py\", line 113, in \r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py\", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1540346807.21-23787866279279/AnsiballZ_kolla_toolbox.py\", line 48, in invoke_module\r\n imp.load_module('__main__', mod, module, MOD_DESC)\r\n File \"/tmp/ansible_kolla_toolbox_payload_9nPmOp/__main__.py\", line 155, in \r\n File \"/tmp/ansible_kolla_toolbox_payload_9nPmOp/__main__.py\", line 133, in main\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/decorators.py\", line 19, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \"/usr/lib/python2.7/site-packages/docker/api/exec_api.py\", line 165, in exec_start\r\n return self._read_from_socket(res, stream, tty)\r\n File \"/usr/lib/python2.7/site-packages/docker/api/client.py\", line 377, in _read_from_socket\r\n return six.binary_type().join(gen)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 75, in frames_iter\r\n n = next_frame_size(socket)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 62, in next_frame_size\r\n data = read_exactly(socket, 8)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 47, in read_exactly\r\n next_data = read(socket, n - len(data))\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/socket.py\", line 31, in read\r\n return socket.recv(n)\r\nsocket.timeout: timed out\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } I checked keystone container is up and accessible through the floating IP (kolla_internal_vip_address): [root at test-openstack-controller ~]# docker ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 78e3ddb3016a kolla/centos-binary-memcached:pike "kolla_start" 16 minutes ago Restarting (71) 53 seconds ago memcached bc0126a27698 kolla/centos-binary-keystone:pike "kolla_start" About an hour ago Up 24 minutes keystone 3cca9652501a kolla/centos-binary-rabbitmq:pike "kolla_start" About an hour ago Up 24 minutes rabbitmq 5f5640baa9c9 kolla/centos-binary-mariadb:pike "kolla_start" About an hour ago Up 25 minutes mariadb e74475e46374 kolla/centos-binary-keepalived:pike "kolla_start" About an hour ago Up 25 minutes keepalived 5353cc97349e kolla/centos-binary-haproxy:pike "kolla_start" About an hour ago Up 25 minutes haproxy 8ecee7fd3f7f kolla/centos-binary-cron:pike "kolla_start" About an hour ago Up 25 minutes cron 02c3ab218420 kolla/centos-binary-kolla-toolbox:pike "kolla_start" About an hour ago Up 25 minutes kolla_toolbox e54a657552be kolla/centos-binary-fluentd:pike "kolla_start" About an hour ago Up 25 minutes fluentd [root at test-openstack-controller ~]# telnet 192.168.1.51 35357 Trying 192.168.1.51... Connected to 192.168.1.51. Escape character is '^]'. ^CConnection closed by foreign host. [root at test-openstack-controller ~]# telnet 192.168.1.51 5000 Trying 192.168.1.51... Connected to 192.168.1.51. Escape character is '^]'. 
^CConnection closed by foreign host. Checking the python libraries urllib3 and chardet versions: [root at test-openstack-controller ~]# pip show urllib3 Name: urllib3 Version: 1.24 Summary: HTTP library with thread-safe connection pooling, file post, and more. Home-page: https://urllib3.readthedocs.io/ Author: Andrey Petrov Author-email: andrey.petrov at shazow.net License: MIT Location: /usr/lib/python2.7/site-packages Requires: Required-by: requests [root at test-openstack-controller ~]# pip show chardet Name: chardet Version: 3.0.4 Summary: Universal encoding detector for Python 2 and 3 Home-page: https://github.com/chardet/chardet Author: Daniel Blanchard Author-email: dan.blanchard at gmail.com License: LGPL Location: /usr/lib/python2.7/site-packages Requires: Required-by: requests Any idea?
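One thing worth ruling out first (a guess, not a confirmed fix): the RequestsDependencyWarning in the traceback fires on the remote host, where requests is paired with urllib3 1.24 and chardet 2.2.1, versions outside the ranges the requests releases of that era declared. Aligning the trio on test-openstack-controller, along these lines, would at least eliminate that variable (the bounds shown are the commonly-declared ones from that period; check your requests version's setup.py to be exact):

$ pip install 'urllib3>=1.21.1,<1.24' 'chardet>=3.0.2,<3.1.0'
$ python -c 'import requests, urllib3, chardet; print(requests.__version__, urllib3.__version__, chardet.__version__)'

The socket.timeout itself may simply mean the docker exec against the kolla_toolbox container took too long, so the version mismatch may not be the whole story.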
Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Wed Oct 24 05:54:09 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 24 Oct 2018 01:54:09 -0400 Subject: [openstack-dev] [cyborg] [weekly-meeting] Message-ID: The weekly meeting will be held tomorrow at the usual time, 10AM EST / 10PM BJ time. Please provide input on Sundar's docs if you have the chance before the meeting: https://docs.google.com/spreadsheets/d/179Q8J9qIJNOiVm86K7bWPxo7otTsU18XVCI32V77JaU/edit#gid=0 Let's make our final decision on the naming. -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschadin at sbcloud.ru Wed Oct 24 04:53:33 2018 From: aschadin at sbcloud.ru (Alexander Chadin) Date: Wed, 24 Oct 2018 04:53:33 +0000 Subject: [openstack-dev] [watcher] today's meeting is cancelled Message-ID: I won't be able to handle the meeting at 8:00 am, so I'd propose to meet at 10:30 am UTC on the regular openstack-watcher channel if that's suitable for you. Alex Chadin From melwittt at gmail.com Wed Oct 24 05:01:08 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 23 Oct 2018 22:01:08 -0700 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <929B1777-594D-440A-9847-84EE66A7146F at gmail.com> References: <929B1777-594D-440A-9847-84EE66A7146F at gmail.com> Message-ID: <4f38170f-75c2-8e8a-92fc-92e4ec47f356 at gmail.com> On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote: > Hi nova devs, > > Have been having a good look into cellsv2 and how we migrate to them > (we're still on cellsv1 and about to upgrade to queens and still run > cells v1 for now). > > One of the problems I have is that now all our nova cell database > servers need to respond to API requests. > With cellsv1 our architecture was to have a big powerful DB cluster (3 > physical servers) at the API level to handle the API cell and then a > smallish non HA DB server (usually just a VM) for each of the compute > cells. > > This architecture won't work with cells V2 and we'll now need to have a > lot of highly available and responsive DB servers for all the cells. > > It will also mean that our nova-apis which reside in Melbourne, > Australia will now need to talk to database servers in Auckland, New > Zealand. > > The biggest issue we have is when a cell is down. We sometimes have > cells go down for an hour or so planned or unplanned and with cellsv1 > this does not affect other cells. > Looks like some good work going on here > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell > > But what about quota? If a cell goes down then it would seem that a user > all of a sudden would regain some quota from the instances that are in > the down cell? > Just wondering if anyone has thought about this? Yes, we've discussed it quite a bit. The current plan is to offer a policy-driven behavior as part of the "down" cell handling which will control whether nova will: a) Reject a server create request if the user owns instances in "down" cells b) Go ahead and count quota usage "as-is" if the user owns instances in "down" cells and allow the quota limit to be potentially exceeded We would like to know if you think this plan will work for you. Further down the road, if we're able to come to an agreement on a consumer type/owner or partitioning concept in placement (to be certain we are counting usage that our instance of nova owns, as placement is a shared service), we could count quota usage from placement instead of querying cells. Cheers, -melanie From tony at bakeyournoodle.com Wed Oct 24 05:08:26 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 24 Oct 2018 16:08:26 +1100 Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed In-Reply-To: References: Message-ID: <20181024050825.GA28200 at thor.bakeyournoodle.com> On Wed, Oct 24, 2018 at 03:23:53AM +0000, zuul at openstack.org wrote: > Build failed. > > - release-openstack-python3 http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/ : POST_FAILURE in 2m 18s So this failed because pypi thinks there was a name collision[1]: HTTPError: 400 Client Error: File already exists. See https://pypi.org/help/#file-name-reuse for url: https://upload.pypi.org/legacy/ AFAICT the upload was successful: shade-1.27.2-py2-none-any.whl : 2018-10-24T03:20:00 d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a shade-1.27.2-py2.py3-none-any.whl : 2018-10-24T03:20:11 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792 shade-1.27.2.tar.gz : 2018-10-24T03:20:04 ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
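(For the record, those per-file upload times and sha256 digests can be cross-checked against what PyPI itself records via its JSON API; a sketch, with jq assumed available:

$ curl -s https://pypi.org/pypi/shade/1.27.2/json | jq -r '.urls[] | "\(.filename) \(.upload_time) \(.digests.sha256)"'
)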
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jean-philippe at evrard.me Wed Oct 24 08:22:14 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 24 Oct 2018 10:22:14 +0200 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update In-Reply-To: <6b09f693-98d9-9a84-8e47-c22669efd586@gmail.com> References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> <20181023184123.GA20743@sm-workstation> <6b09f693-98d9-9a84-8e47-c22669efd586@gmail.com> Message-ID: <9f991b1594f9590c7b4aab63e5f1861aa49fa67c.camel@evrard.me> On Tue, 2018-10-23 at 16:40 -0500, Matt Riedemann wrote: > On 10/23/2018 1:41 PM, Sean McGinnis wrote: > > > Yeah, but part of the reason for placeholders was consistency > > > across all of > > > the services. I guess if there are never going to be upgrade > > > checks in > > > adjutant then I could see skipping it, but otherwise I would > > > prefer to at > > > least get the framework in place. > > > > > +1 > > > > Even if there is nothing to check at this point, I think having the > > facility > > there is a benefit for projects and scripts that are going to be > > consuming > > these checks. Having nothing to check, but having the status check > > there, is > > going to be better than everything needing to keep a list of which > > projects to > > run the checks on and which not. > > > > Sure, that works for me as well. I'm not against adding > placeholder/noop > checks knowing that nothing is immediately obvious to replace those > in > Stein, but could be done later when the opportunity arises. If it's > debatable on a per-project basis, then I'd defer to the core team > for > the project. > +1 on what Ben, Matt, and Sean said there. From jean-philippe at evrard.me Wed Oct 24 08:29:01 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 24 Oct 2018 10:29:01 +0200 Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> Message-ID: <8e8bef0d2f8c25f1c995200859b1d4606309bdc9.camel@evrard.me> On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote: > Also, doesn't bitbucket have a git interface now too (optionally)? > It does :) But I think it requires a new repo, so it means that could as well move to somewhere else like github or openstack infra :p From rico.lin.guanyu at gmail.com Wed Oct 24 09:09:14 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 24 Oct 2018 17:09:14 +0800 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com> <20181009062143.GA32130@rcp.sl.cloud9.ibm.com> Message-ID: Hi all, I'm glad to notify you all that our forum session has been accepted [1] and that forum time schedule (Thursday, November 15, 9:50am-10:30am) should be stable by now. So please save your schedule for it!! Any feedback are welcome! 
[1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22753/autoscaling-integration-improvement-and-feedback On Tue, Oct 9, 2018 at 3:07 PM Rico Lin wrote: > a reminder for all, please put your ideas/thoughts/suggested actions in our > etherpad [1], > which we're going to use for further discussion in the Forum, or at the PTG if we get no > forum for it. > So we won't be missing anything. > > > > [1] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback > > On Tue, Oct 9, 2018 at 2:22 PM Qiming Teng wrote: > >> > >One approach would be to switch the underlying Heat AutoScalingGroup >> > >implementation to use Senlin and then deprecate the AutoScalingGroup >> > >resource type in favor of the Senlin resource type over several >> > >cycles. >> > >> > The hard part (or one hard part, at least) of that is migrating the >> existing >> > data. >> >> Agreed. In an ideal world, we can transparently transplant the "scaling >> group" resource implementation onto something (e.g. a library or an >> interface). This sounds like an option for both teams to brainstorm >> together. >> >> - Qiming >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Oct 24 10:30:43 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 24 Oct 2018 12:30:43 +0200 Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed In-Reply-To: <20181024050825.GA28200 at thor.bakeyournoodle.com> References: <20181024050825.GA28200 at thor.bakeyournoodle.com> Message-ID: Tony Breeds wrote: > AFAICT the upload was successful: > > shade-1.27.2-py2-none-any.whl : 2018-10-24T03:20:00 d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a > shade-1.27.2-py2.py3-none-any.whl : 2018-10-24T03:20:11 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792 > shade-1.27.2.tar.gz : 2018-10-24T03:20:04 ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf Yes the release is up on Pypi and on releases.o.o, so I think we are good. > The strange thing is that the tar.gz was uploaded *before* the wheel > even though our publish jobs explicitly do it in the other order, and the > timestamp of the tar.gz doesn't match the error message. The timestamps don't match, but in the logs the tar.gz is uploaded last, as designed... Where did you get the timestamps from? If from Pypi, maybe their clocks are off or they do some kind of processing that affects the timestamp... -- Thierry Carrez (ttx) From aspiers at suse.com Wed Oct 24 12:25:54 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 24 Oct 2018 13:25:54 +0100 Subject: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0) In-Reply-To: <20181024082154.a3gyn4cgtu5o3sj3 at redhat.com> References: <20181024082154.a3gyn4cgtu5o3sj3 at redhat.com> Message-ID: <20181024122554.xilfw3xx6igcg5e6 at pacific.linksys.moosehall> [cross-posting to openstack-dev] Oyvind Albrigtsen wrote: >ClusterLabs is happy to announce resource-agents v4.2.0.
>Source code is available at: >https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0 > >The most significant enhancements in this release are: >- new resource agents: [snipped] > - openstack-cinder-volume > - openstack-floating-ip > - openstack-info That's an interesting development. By popular demand from the community, in Oct 2015 the canonical location for OpenStack-specific resource agents became: https://git.openstack.org/cgit/openstack/openstack-resource-agents/ as announced here: http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html However I have to admit I have done a terrible job of maintaining it since then. Since OpenStack RAs are now beginning to creep into ClusterLabs/resource-agents, now seems a good time to revisit this and decide a coherent strategy. I'm not religious either way, although I do have a fairly strong preference for picking one strategy which both ClusterLabs and OpenStack communities can align on, so that all OpenStack RAs are in a single place. I'll kick the bikeshedding off: Pros of hosting OpenStack RAs on ClusterLabs -------------------------------------------- - ClusterLabs developers get the GitHub code review and Travis CI experience they expect. - Receive all the same maintenance attention as other RAs - any changes to coding style, utility libraries, Pacemaker APIs, refactorings etc. which apply to all RAs would automatically get applied to the OpenStack RAs too. - Documentation gets built in the same way as other RAs. - Unit tests get run in the same way as other RAs (although does ocf-tester even get run by the CI currently?) - Doesn't get maintained by me ;-) Pros of hosting OpenStack RAs on OpenStack infrastructure --------------------------------------------------------- - OpenStack developers get the Gerrit code review and Zuul CI experience they expect. - Releases and stable/foo branches could be made to align with OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...) - Automated testing could in the future spin up a full cloud and do integration tests by simulating failure scenarios, as discussed here: https://storyboard.openstack.org/#!/story/2002129 That said, that is still very much work in progress, so it remains to be seen when that could come to fruition. No doubt I've missed some pros and cons here. At this point personally I'm slightly leaning towards keeping them in openstack-resource-agents - but that's assuming I can either hand off maintainership to someone with more time, or somehow find the time myself to do a better job. What does everyone else think? All opinions are very welcome, obviously. From sombrafam at gmail.com Wed Oct 24 12:40:23 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 24 Oct 2018 09:40:23 -0300 Subject: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance In-Reply-To: References: Message-ID: I think that there's a chance that AppArmor is blocking the access. Have you checked the dmesg messages related to apparmor? On Fri, Oct 19, 2018 at 09:38, Neil Jerram wrote: > Wracking my brains over this one, would appreciate any pointers... > > Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu > Bionic. The first compute node is an NFS server for > /var/lib/nova/instances, and the other compute nodes mount that as NFS > clients.
> > Problem: Sometimes, when launching an instance which is scheduled to one > of the client nodes, nova-compute (in imagebackend.py) gets Permission > Denied (errno 13) when calling utime to touch the timestamp on the instance > file. > > Through various bits of debugging and hackery, I've established that: > > - it looks like the problem never occurs when this is the call that > bootstraps the privsep setup; but it does occur quite frequently on later > calls > > - when the problem occurs, retrying doesn't help (5 times, with 0.5s in > between) > > - the instance file does exist, and is owned by root with read/write > permission for root > > - the privsep helper is running as root > > - the privsep helper receives and executes the request - so it's not a > problem with communication between nova-compute and the helper > > - root is uid 0 on both NFS server and client > > - NFS setup does not have the root_squash option > > - there is some AppArmor setup, on both client and server, and I haven't > yet worked out whether that might be relevant. > > Any ideas? > > Many thanks, > Neil > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Wed Oct 24 12:41:20 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 24 Oct 2018 09:41:20 -0300 Subject: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance In-Reply-To: References: Message-ID: PS. Don't forget that if you change or disable AppArmor you will have to reboot the host so the kernel gets reloaded. On Wed, Oct 24, 2018 at 09:40, Erlon Cruz wrote: > I think that there's a chance that AppArmor is blocking the access. Have > you checked the dmesg messages related to apparmor? > > On Fri, Oct 19, 2018 at 09:38, Neil Jerram wrote: >> Wracking my brains over this one, would appreciate any pointers... >> >> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu >> Bionic. The first compute node is an NFS server for >> /var/lib/nova/instances, and the other compute nodes mount that as NFS >> clients. >> >> Problem: Sometimes, when launching an instance which is scheduled to one >> of the client nodes, nova-compute (in imagebackend.py) gets Permission >> Denied (errno 13) when calling utime to touch the timestamp on the instance >> file. >> >> Through various bits of debugging and hackery, I've established that: >> >> - it looks like the problem never occurs when this is the call that >> bootstraps the privsep setup; but it does occur quite frequently on later >> calls >> >> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in >> between) >> >> - the instance file does exist, and is owned by root with read/write >> permission for root >> >> - the privsep helper is running as root >> >> - the privsep helper receives and executes the request - so it's not a >> problem with communication between nova-compute and the helper >> >> - root is uid 0 on both NFS server and client >> >> - NFS setup does not have the root_squash option >> >> - there is some AppArmor setup, on both client and server, and I haven't >> yet worked out whether that might be relevant. >> >> Any ideas?
From jungleboyj at gmail.com Wed Oct 24 13:22:09 2018
From: jungleboyj at gmail.com (Jay S. Bryant)
Date: Wed, 24 Oct 2018 08:22:09 -0500
Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend
In-Reply-To: <20181023140140.rs3sf6huy4czgtak@exaesuubou5k>
References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com> <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com> <20181023140140.rs3sf6huy4czgtak@exaesuubou5k>
Message-ID: <791626eb-07a1-c244-5c39-8686f1946db4@gmail.com>

On 10/23/2018 9:01 AM, Jon Bernard wrote:
> * melanie witt wrote:
>> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
>>> I created a new vm and a new volume with type 'ceph'[So that the volume
>>> will be created on one of two hosts. I assume that the volume created on
>>> host dev at rbd-1#ceph this time]. Next step is to attach the volume to the
>>> vm. At last I want to migrate the volume from host dev at rbd-1#ceph to
>>> host dev at rbd-2#ceph, but it failed with the exception
>>> 'NotImplementedError(_("Swap only supports host devices")'.
>>>
>>> So that, my real problem is that is there any work to migrate
>>> volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
>>> in the same ceph cluster?
>>> The difference between the spec[2] with my scope is only one is
>>> *available*(the spec) and another is *in-use*(my scope).
>>>
>>> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>> [2] https://review.openstack.org/#/c/296150
>> Ah, I think I understand now, thank you for providing all of those details.
>> And I think you explained it in your first email, that cinder supports
>> migration of ceph volumes if they are 'available' but not if they are
>> 'in-use'. Apologies that I didn't get your meaning the first time.
>>
>> I see now the code you were referring to is this [3]:
>>
>>     if volume.status not in ('available', 'retyping', 'maintenance'):
>>         LOG.debug('Only available volumes can be migrated using backend '
>>                   'assisted migration. Falling back to generic migration.')
>>         return refuse_to_migrate
>>
>> So because your volume is not 'available', 'retyping', or 'maintenance',
>> it's falling back to generic migration, which will end up with an error in
>> nova because the source_path is not set in the volume config.
>>
>> Can anyone from the cinder team chime in about whether the ceph volume
>> migration could be expanded to allow migration of 'in-use' volumes? Is there
>> a reason not to allow migration of 'in-use' volumes?
> Generally speaking, Nova must facilitate the migration of a live (or
> in-use) volume. A volume attached to a running instance requires code
> in the I/O path to correctly route traffic to the correct location - so
> Cinder must refuse (or defer) a migrate operation if the volume is
> attached. Until somewhat recently Qemu and Libvirt did not support the
> migration to non-block (RBD) targets which is the reason for lack of
> support. I believe we now have all of the pieces to perform this
> operation successfully, but I suspect it will require a setup with
> correct versions of all the related software. I will try to verify this
> during the current release cycle and report back.

Jon,

Thanks for the explanation and investigation!

Jay
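The gate Jon describes is exactly the snippet melanie quoted. Condensed into its surrounding context it looks roughly like the sketch below - only the quoted status check is verbatim; the rest is illustrative of cinder's RBD driver, not a copy of it:

    # Sketch of the RBD driver's migrate_volume() gate; only the status
    # check is verbatim from the code quoted above. In cinder this is a
    # method of the RBD volume driver class.
    import logging

    LOG = logging.getLogger(__name__)

    def migrate_volume(self, context, volume, host):
        # (moved, model_update): returning (False, None) tells the
        # volume manager that the driver did not move anything, so
        # cinder falls back to generic migration - the path that then
        # trips nova's "Swap only supports host devices" error for an
        # in-use RBD volume.
        refuse_to_migrate = (False, None)

        if volume.status not in ('available', 'retyping', 'maintenance'):
            LOG.debug('Only available volumes can be migrated using backend '
                      'assisted migration. Falling back to generic migration.')
            return refuse_to_migrate

        # ... backend-assisted path: move the RBD image to the
        # destination pool within the same ceph cluster ...

Allowing 'in-use' volumes through this check would require nova's cooperation, for the I/O path reasons Jon gives above.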
From neil at tigera.io Wed Oct 24 13:41:07 2018
From: neil at tigera.io (Neil Jerram)
Date: Wed, 24 Oct 2018 14:41:07 +0100
Subject: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance
In-Reply-To:
References:
Message-ID:

Thanks so much for these hints, Erlon. I will look closer at AppArmor.

Neil

On Wed, Oct 24, 2018 at 1:41 PM Erlon Cruz wrote:
>
> PS. Don't forget that if you change or disable AppArmor you will have to
> reboot the host so the kernel gets reloaded.
>
> On Wed, Oct 24, 2018 at 09:40, Erlon Cruz wrote:
>>
>> I think that there's a chance that AppArmor is blocking the access. Have
>> you checked the dmesg messages related with apparmor?
>>
>> On Fri, Oct 19, 2018 at 09:38, Neil Jerram wrote:
>>>
>>> Wracking my brains over this one, would appreciate any pointers...
>>>
>>> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu
>>> Bionic. The first compute node is an NFS server for
>>> /var/lib/nova/instances, and the other compute nodes mount that as NFS
>>> clients.
>>>
>>> Problem: Sometimes, when launching an instance which is scheduled to one
>>> of the client nodes, nova-compute (in imagebackend.py) gets Permission
>>> Denied (errno 13) when calling utime to touch the timestamp on the
>>> instance file.
>>>
>>> Through various bits of debugging and hackery, I've established that:
>>>
>>> - it looks like the problem never occurs when this is the call that
>>> bootstraps the privsep setup; but it does occur quite frequently on
>>> later calls
>>>
>>> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in
>>> between)
>>>
>>> - the instance file does exist, and is owned by root with read/write
>>> permission for root
>>>
>>> - the privsep helper is running as root
>>>
>>> - the privsep helper receives and executes the request - so it's not a
>>> problem with communication between nova-compute and the helper
>>>
>>> - root is uid 0 on both NFS server and client
>>>
>>> - NFS setup does not have the root_squash option
>>>
>>> - there is some AppArmor setup, on both client and server, and I haven't
>>> yet worked out whether that might be relevant.
>>>
>>> Any ideas?
>>>
>>> Many thanks,
>>> Neil
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From openstack at nemebean.com Wed Oct 24 13:53:28 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 24 Oct 2018 09:53:28 -0400
Subject: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update
In-Reply-To: <8efde991-a756-bad3-caa8-9bd67e380e81@catalyst.net.nz>
References: <6b6ee75b-82c3-711c-f2c6-b6d6f145f3a7@catalyst.net.nz> <28463cba-869b-a7a7-2ef8-c66ce6d3e9cb@nemebean.com> <8efde991-a756-bad3-caa8-9bd67e380e81@catalyst.net.nz>
Message-ID:

On 10/23/18 9:55 PM, Adrian Turjak wrote:
>
> On 24/10/18 2:09 AM, Ben Nemec wrote:
>>
>> On 10/22/18 5:40 PM, Matt Riedemann wrote:
>>> On 10/22/2018 4:35 PM, Adrian Turjak wrote:
>>>>> The one other open question I have is about the Adjutant change [2]. I
>>>>> know Adjutant is very new and I'm not sure what upgrades look like for
>>>>> that project, so I don't really know how valuable adding the upgrade
>>>>> check framework is to that project. Is it like Horizon where it's
>>>>> mostly stateless and fed off plugins? Because we don't have an upgrade
>>>>> check CLI for Horizon for that reason.
>>>>>
>>>>> [1] https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged)
>>>>>
>>>>> [2] https://review.openstack.org/#/c/611812/
>>>>>
>>>> Adjutant's codebase is also going to be a bit unstable for the next few
>>>> cycles while we refactor some internals (we're not marking it 1.0 yet).
>>>> Once the current set of ugly refactors planned for late Stein are done I
>>>> may look at building some upgrade checking, once we also work out what
>>>> our upgrade checking should look like. Probably mostly checking config
>>>> changes, database migration states, and plugin compatibility.
>>>>
>>>> Adjutant already has a concept of startup checks at least, which while
>>>> not anywhere near as extensive as they should be, mostly amount to
>>>> making sure your config file looks 'mostly' sane regarding plugins
>>>> before starting up the service, and we do intend to expand on that, plus
>>>> we can reuse a large chunk of that for upgrade checking.
>>>
>>> OK it seems there is not really any point in trying to satisfy the
>>> upgrade checkers goal for Adjutant in Stein then. Should we just
>>> abandon the change?
>>>
>>
>> Can't we just add a noop command like we are for the services that
>> don't currently need upgrade checks?
>
> I mostly was responding to this in the review itself rather than on here.
>
> We are probably going to have reason for an upgrade check in Adjutant;
> my main gripe is that Adjutant is Django based and there isn't a good
> point in adding a separate cli when we already expose 'adjutant-api' as
> a proxy to manage.py, and as such we should just register the upgrade
> check as a custom Django admin command.
>
> More so because all of the logic needed to actually run the check in
> future will require Django settings to be configured.
>
> We don't actually use any oslo libraries yet so the current code for
> the check doesn't actually make sense in context.
>
> I'm fine with a noop check, but we have to make it fit.

What I'm trying to avoid is creating any snowflake upgrade processes. It may not make sense for Adjutant to do this in isolation, but Adjutant doesn't exist in isolation.

Also, if I understand correctly, you're proposing to add startup checks instead of upgrade checks. The downside I see there is that you have to have already restarted the service before the check runs, so if there's a problem you now have downtime. With a standalone upgrade check you can run the check while the old version of the code is still running. If problems are found you fix them before doing the restart.

That said, I don't particularly care how the upgrade check is implemented. If 'adjutant-status upgrade check' just calls 'adjutant-api --check' or something else that returns 0 or non-0 appropriately, that satisfies me. I don't want to cross the line into foolish consistency either. :-)

-Ben
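For what it's worth, the Django admin command Adrian describes is small to wire up - a hedged sketch, not actual Adjutant code; the module path adjutant/management/commands/upgrade_check.py and the command name are hypothetical:

    # upgrade_check.py - a noop upgrade check as a custom Django admin
    # command, giving the 0 / non-0 exit contract described above.
    from django.core.management.base import BaseCommand, CommandError


    class Command(BaseCommand):
        help = 'Run pre-upgrade sanity checks (noop for now)'

        def handle(self, *args, **options):
            # Real config / migration / plugin compatibility checks
            # would populate this list later.
            failures = []
            if failures:
                # CommandError makes manage.py exit non-zero.
                raise CommandError('\n'.join(failures))
            self.stdout.write('All upgrade checks passed')

Deployment tooling could then invoke it through manage.py (or the adjutant-api wrapper) before restarting onto the new code.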
From fungi at yuggoth.org Wed Oct 24 14:05:54 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 24 Oct 2018 14:05:54 +0000
Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed
In-Reply-To: <20181024050825.GA28200@thor.bakeyournoodle.com>
References: <20181024050825.GA28200@thor.bakeyournoodle.com>
Message-ID: <20181024140554.i53t4v7scgtt56le@yuggoth.org>

On 2018-10-24 16:08:26 +1100 (+1100), Tony Breeds wrote:
> On Wed, Oct 24, 2018 at 03:23:53AM +0000, zuul at openstack.org wrote:
> > Build failed.
> >
> > - release-openstack-python3 http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/ : POST_FAILURE in 2m 18s
>
> So this failed because pypi thinks there was a name collision[1]:
> HTTPError: 400 Client Error: File already exists. See https://pypi.org/help/#file-name-reuse for url: https://upload.pypi.org/legacy/
>
> AFAICT the upload was successful:
>
> shade-1.27.2-py2-none-any.whl : 2018-10-24T03:20:00 d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
> shade-1.27.2-py2.py3-none-any.whl : 2018-10-24T03:20:11 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
> shade-1.27.2.tar.gz : 2018-10-24T03:20:04 ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
[...]

I think PyPI is right. Note the fact that there are not two but *three* artifacts there. We shouldn't be building both a py2 and py2.py3 wheel. The log you linked uploaded shade-1.27.2-py2.py3-none-any.whl and tried (but failed) to upload shade-1.27.2.tar.gz. So where did shade-1.27.2-py2-none-any.whl come from then? Hold onto your hats folks:

http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python/f38f2b9/job-output.txt.gz#_2018-10-24_03_20_02_134223

I suppose we don't expect a project to run both the release-openstack-python and release-openstack-python3 jobs on the same tags.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:
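Jeremy's three-artifact observation is easy to double-check from outside the CI system, since PyPI exposes per-release file metadata through its public JSON API - a small sketch (assumes the requests library is installed):

    # List the files PyPI holds for shade 1.27.2, confirming the py2,
    # py2.py3 and sdist artifacts discussed above.
    import requests

    resp = requests.get('https://pypi.org/pypi/shade/1.27.2/json')
    resp.raise_for_status()

    for f in resp.json()['urls']:
        print('%s  %s' % (f['upload_time'], f['filename']))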
From sean.mcginnis at gmail.com Wed Oct 24 14:11:45 2018
From: sean.mcginnis at gmail.com (Sean McGinnis)
Date: Wed, 24 Oct 2018 09:11:45 -0500
Subject: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed
In-Reply-To: <20181024050825.GA28200@thor.bakeyournoodle.com>
References: <20181024050825.GA28200@thor.bakeyournoodle.com>
Message-ID:

On Wed, Oct 24, 2018 at 12:08 AM Tony Breeds wrote:
> On Wed, Oct 24, 2018 at 03:23:53AM +0000, zuul at openstack.org wrote:
> > Build failed.
> >
> > - release-openstack-python3 http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/ : POST_FAILURE in 2m 18s
>
> So this failed because pypi thinks there was a name collision[1]:
> HTTPError: 400 Client Error: File already exists. See https://pypi.org/help/#file-name-reuse for url: https://upload.pypi.org/legacy/
>
> AFAICT the upload was successful:
>
> shade-1.27.2-py2-none-any.whl : 2018-10-24T03:20:00 d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
> shade-1.27.2-py2.py3-none-any.whl : 2018-10-24T03:20:11 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
> shade-1.27.2.tar.gz : 2018-10-24T03:20:04 ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
>
> The strange thing is that the tar.gz was uploaded *before* the wheel
> even though our publish jobs explicitly do it in the other order and the
> timestamp of the tar.gz doesn't match the error message.
>
> So I think we have a bug somewhere, more digging tomorrow
>
> Yours Tony.

Looks like this was another case of conflicting jobs. This still has both release-openstack-python3 and release-openstack-python jobs running, so I think it ended up being a race between the two of which one got to pypi first.

I think the "fix" is to get the release-openstack-python out of there now that we are able to run the Python 3 version.

On the plus side, all of the subsequent jobs passed, so the package is published, the announcement went out, and the requirements update patch was generated.

> [1] http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/job-output.txt.gz#_2018-10-24_03_20_15_264676
> _______________________________________________
> Release-job-failures mailing list
> Release-job-failures at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dalvarez at redhat.com Wed Oct 24 14:40:17 2018
From: dalvarez at redhat.com (Daniel Alvarez Sanchez)
Date: Wed, 24 Oct 2018 16:40:17 +0200
Subject: [openstack-dev] [TripleO][OVN] Switching the default network backend to ML2/OVN
Message-ID:

Hi Stackers!

The purpose of this email is to share with the community the intention of switching the default network backend in TripleO from ML2/OVS to ML2/OVN by changing the mechanism driver from openvswitch to ovn. This doesn't mean that ML2/OVS will be dropped but users deploying OpenStack without explicitly specifying a network driver will get ML2/OVN by default.

OVN in Short
==========

Open Virtual Network is managed under the OVS project, and was created by the original authors of OVS. It is an attempt to re-do the ML2/OVS control plane, using lessons learned throughout the years. It is intended to be used in projects such as OpenStack and Kubernetes.
OVN has a different architecture, moving us away from Python agents communicating with the Neutron API service via RabbitMQ to daemons written in C communicating via OpenFlow and OVSDB.

OVN is built with a modern architecture that offers better foundations for a simpler and more performant solution. What does this mean? For example, at Red Hat we executed some preliminary testing during the Queens cycle and found significant CPU savings due to OVN not using RabbitMQ (CPU utilization during a Rally scenario using ML2/OVS [0] or ML2/OVN [1]). Also, we tested API performance and found out that most of the operations are significantly faster with ML2/OVN. Please see more details in the FAQ section.

Here are a few useful links about OpenStack's integration of OVN:

* OpenStack Boston Summit talk on OVN [2]
* OpenStack networking-ovn documentation [3]
* OpenStack networking-ovn code repository [4]

How?
====

The goal is to merge this patch [5] during the Stein cycle, which entails the following actions:

1. Switch the default mechanism driver from openvswitch to ovn.
2. Adapt all jobs so that they use ML2/OVN as the network backend.
3. Create a legacy environment file for ML2/OVS to allow deployments based on it.
4. Flip the scenario007 job from ML2/OVN to ML2/OVS so that we continue testing it.
5. Continue using ML2/OVS in the undercloud.
6. Ensure that updates/upgrades from ML2/OVS don't break and don't switch automatically to the new default. As some parity gaps exist right now, we don't want to change the network backend automatically. Instead, if the user wants to migrate from ML2/OVS to ML2/OVN, we'll provide an ansible based tool that will perform the operation. More info and code at [6].

Reviews, comments and suggestions are really appreciated :)

FAQ
===

Can you talk about the advantages of OVN over ML2/OVS?
------------------------------------------------------

If asked to describe the ML2/OVS control plane (OVS, L3, DHCP and metadata agents using the messaging bus to sync with the Neutron API service) one would not tend to use the term 'simple'. There is liberal use of a smattering of Linux networking technologies such as:

* iptables
* network namespaces
* ARP manipulation
* different forms of NAT
* keepalived, radvd, haproxy, dnsmasq
* source based routing
* ... and of course OVS flows.

OVN simplifies this to a single process running on compute nodes, and another process running on centralized nodes, communicating via OVSDB and OpenFlow, ultimately setting OVS flows.

The simplified, new architecture allows us to re-do features like DVR and L3 HA in more efficient and elegant ways. For example, L3 HA failover is faster: it doesn't use keepalived, rather OVN monitors neighbor tunnel endpoints. OVN supports enabling both DVR and L3 HA simultaneously, something we never supported with ML2/OVS.

We also found out that not depending on RPC messages for agent communication brings a lot of benefits. From our experience, RabbitMQ sometimes represents a bottleneck, and it can be very resource-intensive.

What about the undercloud?
--------------------------

ML2/OVS will still be used in the undercloud, as OVN mainly has some limitations with regards to baremetal provisioning (keep reading about the parity gaps). We aim to convert the undercloud to ML2/OVN to provide the operator a more consistent experience as soon as possible.
It would be possible, however, to use the Neutron DHCP agent in the short term to solve this limitation, but in the long term we intend to implement support for baremetal provisioning in the OVN built-in DHCP server.

What about CI?
--------------

* networking-ovn has:
  * Devstack based Tempest (API, scenario from Tempest and Neutron Tempest plugin) against the latest released OVS version, and against OVS master (thus also OVN master)
  * Devstack based Rally
  * Grenade
  * A multinode, container based TripleO job that installs and issues a basic VM connectivity scenario test
  * Support for both Python 3 and 2
* TripleO currently has OVN enabled in one quickstart featureset (fs30).

Are there any known parity issues with ML2/OVS?
-----------------------------------------------

* OVN supports VLAN provider networks, but not VLAN tenant networks. This will be addressed and is being tracked in RHBZ 1561880 [7]
* SRIOV: A limitation exists for this scenario where OVN needs to support VLAN tenant networks and the Neutron DHCP agent has to be deployed. The goal is to include support in OVN to get rid of the Neutron DHCP agent. [8]
* QoS: Lack of support for DSCP marking and egress bandwidth limiting RHBZ 1503494 [9]
* OVN does not presently support the new Security Groups logging API RHBZ 1619266 [10]
* OVN does not correctly support Jumbo frames for North/South traffic RHBZ 1547074 [11]
* The OVN built-in DHCP server currently cannot be used to provision baremetal nodes (RHBZ 1622154 [12]) (this affects the undercloud and overcloud's baremetal-to-tenant use case).
* End-to-end encryption support in TripleO (RHBZ 1601926 [13])

More info at [14].

What does the performance look like?
------------------------------------

We have carried out different performance tests. Overall, ML2/OVN outperforms ML2/OVS in most of the operations, as this graph [15] shows. Only creating networks and listing ports are slower, which is mostly due to the fact that ML2/OVN creates an extra port (for metadata) upon network creation, so the amount of ports listed for the same rally task is 2x for the ML2/OVN case.

Also, resource utilization is lower in ML2/OVN [16] vs ML2/OVS [17], mainly due to the lack of agents and not using RPC.

OVN only supports VLAN and Geneve (tunneled) networks, while ML2/OVS uses
VXLAN. What, if any, is the impact? What about hardware offload?
-----------------------------------------------------------------

Good question! We asked this ourselves, and research showed that this is not a problem. Normally, NICs that support VXLAN also support Geneve hardware offload. Interestingly, even in the cases where they don't, performance was found to be better using Geneve due to other optimizations that Geneve benefits from. More information can be found in Russell Bryant's blog [18]; he did extensive work in this space.
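A related practical question is how to tell, after the default flip, which backend a given deployment actually ended up with. One quick check is the agent list - a hedged sketch using openstacksdk, where 'mycloud' is a placeholder clouds.yaml entry, and which assumes networking-ovn's agent API shim reports an 'ovn-controller' binary, as recent releases do:

    # Inspect neutron's agent list to infer the ML2 backend in use.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name
    binaries = {agent.binary for agent in conn.network.agents()}

    if 'ovn-controller' in binaries:
        print('This cloud is running ML2/OVN')
    elif 'neutron-openvswitch-agent' in binaries:
        print('This cloud is running ML2/OVS')
    else:
        print('Unrecognised backend; agent binaries: %s' % sorted(binaries))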
Links
====

[0] https://imgur.com/a/oOmuAqj
[1] https://imgur.com/a/N9jrIXV
[2] https://www.youtube.com/watch?v=sgc7myiX6ts
[3] https://docs.openstack.org/networking-ovn/queens/admin/index.html
[4] https://github.com/openstack/networking-ovn
[5] https://review.openstack.org/#/c/593056/
[6] https://github.com/openstack/networking-ovn/tree/master/migration
[7] https://bugzilla.redhat.com/show_bug.cgi?id=1561880
[8] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html
[9] https://bugzilla.redhat.com/show_bug.cgi?id=1503494
[10] https://bugzilla.redhat.com/show_bug.cgi?id=1619266
[11] https://bugzilla.redhat.com/show_bug.cgi?id=1547074
[12] https://bugzilla.redhat.com/show_bug.cgi?id=1622154
[13] https://bugzilla.redhat.com/show_bug.cgi?id=1601926
[14] https://wiki.openstack.org/wiki/Networking-ovn
[15] https://imgur.com/a/4QtaN6b
[16] https://imgur.com/a/N9jrIXV
[17] https://imgur.com/a/oOmuAqj
[18] https://blog.russellbryant.net/2017/05/30/ovn-geneve-vs-vxlan-does-it-matter/

Thanks!
Daniel Alvarez

From zbitter at redhat.com Wed Oct 24 14:54:24 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:54:24 -0400
Subject: [openstack-dev] [horizon][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <46fa3beb-27cb-1017-6059-326fdb054096@redhat.com>

Greetings, Horizon team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

We have said that other kinds of user interface, e.g. the CLI, are out of scope for the document (though not for OpenStack, of course!). After some discussion, we decided that Horizon being a service was more important to its categorisation than it being a user interface, so I wrote the Graphical User Interface design goal to ensure that it is covered. However, I'm sure y'all have spent much more time thinking about what Horizon contributes to OpenStack than I, so your feedback and suggestions are needed.

That is not the only way in which I think this document is relevant to the Horizon team: one of my goals with the exercise is to encourage the service projects to make sure their APIs make all of the operationally-relevant information available and legible to applications. That would include e.g. surfacing events, which I know is something that Horizon has wanted for a long time, and hopefully this will lead to easier ways to build a GUI without as much polling.
If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 14:54:42 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:54:42 -0400
Subject: [openstack-dev] [magnum][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <81dd0009-7298-f9de-05da-11e7e5c6f0a8@redhat.com>

Greetings, Magnum team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

Magnum would fall under the 'Plays Well With Others' design goal, as it's one way of integrating OpenStack with Kubernetes, ensuring that OpenStack users have access to container orchestration tools. And it's also an example (along with Sahara and Trove) of the 'Abstract Specialised Operations' goal, since it allows operators to have a centralised team of Kubernetes cluster operators to serve multiple tenants.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.
From zbitter at redhat.com Wed Oct 24 14:55:04 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:55:04 -0400
Subject: [openstack-dev] [trove][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <9ca6eb4c-0236-b1e1-1311-0fc051659bdb@redhat.com>

Greetings, Trove team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

I wrote the 'Abstract Specialised Operations' design goal specifically to cover Trove (and Sahara). (As you can see, I was really struggling to find a good, generic name for the principle; better suggestions are welcome.) This is the best explanation I could come up with for why it's important to have a DBaaS in OpenStack, even if it only scales at a coarse granularity (as opposed to a DynamoDB-style service like MagnetoDB was, which would be a natural fit for the 'Infinite, Continuous Scaling' design goal). However, the Trove team might well have a different perspective on why Trove is important to OpenStack, so I would very much like to hear your feedback and suggestions.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 14:55:57 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:55:57 -0400
Subject: [openstack-dev] [sahara][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <5fd5d3c8-058c-85bb-5ce5-7333ce25f1ea@redhat.com>

Greetings, Sahara team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.
You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

I wrote the 'Abstract Specialised Operations' design goal specifically to cover Sahara and Trove. (As you can see, I was really struggling to find a good, generic name for the principle; better suggestions are welcome.) I think this is a decent explanation for why Hadoop-as-a-Service should be in OpenStack, but I am by no means an expert, so I would really like to hear the Sahara team's perspective on it.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 14:56:58 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:56:58 -0400
Subject: [openstack-dev] [murano][tc] Seeking feedback on the OpenStack cloud vision
Message-ID:

Greetings, Murano team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.
To be honest, nothing in the document we have so far really captures the scope and ambition of the original vision behind Murano. You could say that it fulfils a similar role to Heat in meeting the Customisable Integration goal by being one of the components that users can use to wire the various services that OpenStack offers together into a coherent application, and functionally that would be a pretty accurate description. But nothing in there suggests that we want OpenStack to produce a standard packaging format for cloud application components or a marketplace where they can be published.

Is that still part of the vision for Murano after the closure of the application catalog? Is it something that should be explicitly part of the vision for OpenStack clouds? If so, what should that look like?

The sections on Interoperability and Bidirectional Compatibility formalise what are already important design considerations for Murano, since one goal of its packaging format is obviously to provide interoperability across clouds.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 14:58:03 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:58:03 -0400
Subject: [openstack-dev] [freezer][tc] Seeking feedback on the OpenStack cloud vision
Message-ID:

Greetings, Freezer team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

For the purposes of this document we can largely ignore the Freezer guest agent, because we're only looking at cloud services. (To be clear, this doesn't mean the guest agent is outside the scope of OpenStack, just that it doesn't need to be covered by the vision document.)
It appears to me that the Freezer API is targeting the 'Built-in Reliability and Durability' design goal: it provides a way to e.g. reliably trigger and generally manage the backup process, and by making it a cloud service the cost of providing that can be spread across multiple tenants. But it may be that we should also say something more specific about OpenStack's role in data protection. Perhaps y'all could work with the Karbor team to figure out what.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 14:59:05 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 10:59:05 -0400
Subject: [openstack-dev] [solum][tc] Seeking feedback on the OpenStack cloud vision
Message-ID:

Greetings, Solum team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

As I understand it, Solum's goal is to provide native OpenStack integration for PaaS layers, so it would be covered by the 'Plays Well With Others' design goal.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)
many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 15:01:01 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:01:01 -0400
Subject: [openstack-dev] [masakari][tc] Seeking feedback on the OpenStack cloud vision
Message-ID:

Greetings, Masakari team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

In my view, Masakari's role in terms of the design goals is to augment Nova (which obviously fits in the Basic Physical Data Center Management and Hardware Virtualisation goals) to improve its compliance with the section on Application Control of the infrastructure. Without Masakari there's no good way for an application to be notified about events like failure of a VM or hypervisor, and no way to perform some of the recovery actions.

The section on Customisable Integration states that we place a lot of value on allowing users and applications to configure how they want to handle events (including events like failures) rather than acting automatically, because every application's requirements are unique. This is probably going to be a valuable thing to keep in mind when making design decisions in Masakari.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 15:01:21 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:01:21 -0400
Subject: [openstack-dev] [heat][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <433b825b-2814-02e3-e4d4-a3b3531ceace@redhat.com>

Greetings, Heat team!
As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

I think the most relevant design goal here for Heat is the one on Customisable Integration. This definitely has implications for how Heat designs things - for example, Heat follows these guidelines with its autoscaling implementation, by providing a webhook URL that can be used for scaling up and down and allowing users to wire it to either Aodh, Monasca, or some other thing (possibly of their own design). But beyond that, Heat is the service that actually provides the wiring, not only for itself but for all of OpenStack. When users want to connect resources from different services together, much of the time they'll be doing so using the declarative model of a Heat template.

The sections on Interoperability and Bidirectional Compatibility should also be important considerations when making design decisions, since Heat templates should help provide interoperability across clouds.

The Cross-Project Dependencies section is also likely of interest, since several projects rely on Heat, and in fact in the distant past the TC used to require this, but that is no longer the case either in practice or in the document as proposed.

Finally, the section on Application Control mentions the importance of allowing applications to authenticate securely to the cloud, which is something Heat has put a lot of work into and run into a lot of problems with. My hope is that this document will help to spread that focus further in other parts of OpenStack so that this kind of thing gets easier over time.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.
From zbitter at redhat.com Wed Oct 24 15:01:41 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:01:41 -0400
Subject: [openstack-dev] [mistral][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <262d89ce-c48c-14d6-529b-c8e51ba7dc68@redhat.com>

Greetings, Mistral team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

I see Mistral contributing to two of the design goals. First, it helps with Customisable Integration by enabling application developers to incorporate glue logic between cloud services or between the application and cloud services, and host it in the cloud without the need to pre-allocate a VM for it. Secondly, it also contributes to the Built-in Reliability and Durability goal by providing applications with a highly-reliable way of maintaining workflow state without the need for the application itself to do it.

The sections on Bidirectional Compatibility and Interoperability will probably be relevant to design decisions in Mistral, since workbooks are one of the artifact types that I'd expect to help with interoperability across clouds.

The Cross-Project Dependencies section may also be of special interest to review, since Mistral is a service that many other OpenStack services could potentially rely on.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 15:02:00 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:02:00 -0400
Subject: [openstack-dev] [telemetry][aodh][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <070304ec-22e3-9dfc-ea43-b3b7340accca@redhat.com>

Greetings, Telemetry team!
As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

The scope of the document (which doesn't attempt to cover the whole scope of OpenStack) is user-facing services, so within the Telemetry stable I think that means mostly just Aodh at this point?

The most relevant design goal is probably 'Customisable Integration'. This section emphasises the importance of allowing users to connect alarms to whatever they wish - from other OpenStack services to something application-specific. With its support for arbitrary webhooks and optional trust-token authentication on outgoing alarms, Aodh is already doing a very good job with this.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.

Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks,
Zane.

From zbitter at redhat.com Wed Oct 24 15:03:44 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:03:44 -0400
Subject: [openstack-dev] [senlin][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <768c00e1-31cf-0986-7e60-c66e23e22904@redhat.com>

Greetings, Senlin team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions.

You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.
The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. As for Heat, I think the most important of the design goals for Senlin is the Customisable Integration one. Senlin is already designed around this concept, with Receivers that have webhook URLs allowing users to wire alarms for any source together with autoscaling in whatever way they like. However, even more important than that is the way that Senlin helps the other services deliver on the 'Application Control' pillar, by helping applications manage their own infrastructure from within the cloud itself. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:05:19 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:05:19 -0400 Subject: [openstack-dev] [zaqar][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Zaqar team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. The two design goals that Zaqar contributes to are 'Infinite, Continuous Scaling' and 'Built-in Reliability and Durability'. It allows application developers to do asynchronous messaging and have the scaling handled by the cloud, so they can send as many or as few messages as they need without having to scale in VM-sized chunks. 
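To make that concrete, posting a message is a single authenticated HTTP call against Zaqar's v2 API (the endpoint, queue name and payload below are invented for illustration):

    # Post one message with a five-minute TTL to an application queue.
    # Zaqar requires a Client-ID header to correlate a client's requests.
    curl -X POST "$ZAQAR_ENDPOINT/v2/queues/app-events/messages" \
         -H "Content-Type: application/json" \
         -H "X-Auth-Token: $OS_TOKEN" \
         -H "Client-ID: $(uuidgen)" \
         -d '{"messages": [{"ttl": 300, "body": {"event": "backup.complete"}}]}'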
And it offers reliable at-least-once delivery, so application developers can rely on the cloud to provide that, simplifying the fault tolerance requirements for the rest of the application. Of course Zaqar can also fulfill a valuable role carrying messages from the OpenStack services to the application. This capability will be critical to achieving the ideals outlined in the 'Application Control' section, since delivery of event notifications from the cloud services to the application should be both asynchronous (the cloud can't wait for a user application) and reliable (so some sort of queuing is required).

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks, Zane.

From zbitter at redhat.com Wed Oct 24 15:05:50 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:05:50 -0400
Subject: [openstack-dev] [blazar][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: 

Greetings, Blazar team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

Blazar is one of the most interesting projects when it comes to defining a vision for OpenStack clouds, because it has a really well-defined set of goals around energy efficiency and capacity planning that we've so far failed to capture in the document. In the 'Self-Service' section we talk about aligning user charges with operators' opportunity costs, which hints at the leasing concept but seems incomplete without a discussion about capacity planning. Similarly, we talk in various places about reducing costs to users by sharing resources across tenants, but not about how to physically pack those resources to minimise the costs to operators.
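To ground that a little: the leasing model means a user can ask the cloud for capacity in a given window, and the operator gets the advance information needed to plan for it. A rough sketch with the blazar CLI (dates, lease name and sizing are invented for illustration):

    # Reserve one compute host for a fixed window
    blazar lease-create --physical-reservation min=1,max=1 \
        --start-date "2018-11-01 10:00" --end-date "2018-11-02 10:00" \
        capacity-test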
I would really value the Blazar team's input on where and how best to introduce these concepts into the vision. As far as what we have already goes, I think the compute host reservation part of Blazar definitely qualifies as part of 'Basic Physical Data Center Management' since it's about optimally managing physical resources in the data center. Arguably the VM reservation part could too, as something that effectively augments Nova, but it's more of a stretch which makes me wonder if there's something missing. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:06:34 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:06:34 -0400 Subject: [openstack-dev] [nova][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <459292d9-8fa5-895f-420e-00d5be51fd33@redhat.com> Greetings, Nova team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. The 'Basic Physical Data Center Management' goal was written to acknowledge Nova's central role in OpenStack, and emphasize that OpenStack differs from projects like Kubernetes in that we don't expect something else to manage the physical data center for us; we expect OpenStack to be the thing that does that for other projects. Obviously Nova is also covered by the 'Hardware Virtualisation' design goal. The last paragraph of the 'Plays Well With Others' design goal was prompted by discussions in Cinder. I don't think the topic of other systems using parts of Nova standalone has ever really come up, but if it did this might be somewhere to look for guidance. (Note that it's phrased as completely optional.) A couple of the other sections are also (I think) worthy of close attention. 
The principles in the 'Application Control' section of the cloud pillars remain important. Nova is a bit unusual in that there are a number of auxiliary services that provide functionality here (I'm thinking of e.g. Masakari) - which is good, but it means more things to think about. Not only whether any given functionality is needed, but whether it is best provided by Nova or some other project, and if the latter how Nova can provide affordances for that project to integrate with it.

The Partitioning section was suggested by Jay. It highlights the known mismatch between the concept of Availability Zones as borrowed from other clouds and the way operators use OpenStack, and offers a long-term design direction without being prescriptive.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks, Zane.

From zbitter at redhat.com Wed Oct 24 15:07:22 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:07:22 -0400
Subject: [openstack-dev] [zun][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: 

Greetings, Zun team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

Zun seems to fit nicely with the 'Infinite, Continuous Scaling' design goal, since it allows users to scale their applications and share physical resources at a more fine-grained level than a VM. I'm not actually up to date with the details under the hood, but from reading the docs it looks like it would also be doing Basic Physical Data Center Management - effectively doing what Nova does except with containers instead of VMs. And the future plans to integrate with Kubernetes also fit with the 'Plays Well With Others' design goal.
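For anyone who hasn't tried it, the user experience looks roughly like this (container name, network and image are invented; assuming the python-zunclient plugin for openstackclient):

    # Launch a container directly on the cloud - no VM to provision first
    openstack appcontainer run --name web --net network=$NET_ID nginx

    # Containers are first-class resources alongside servers and volumes
    openstack appcontainer list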
I'm looking forward to your feedback on all of those areas, and I hope that the rest of the principles articulated in the vision will prove helpful to you as you make design decisions. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:07:57 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:07:57 -0400 Subject: [openstack-dev] [qinling][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <6dd31741-0ac0-60ec-e09d-2adcf6712454@redhat.com> Greetings, Qinling team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Qinling offers perhaps the ultimate in 'Infinite, Continuous Scaling' for compute resources, by offering extremely fine-grained variation in the capacity utilized; by not reserving any capacity at all but sharing it in real time across tenants; and by at least in principle not having an upper bound for how big an application can scale without modifying its architecture. It also 'Plays Well With Others' by tightly integrating the backend components of a FaaS into OpenStack. Qinling also has a role to play in the 'Customisable Integration' goal, since it offers a way for application developers to deploy some glue logic in the cloud itself without needing to either pre-allocate a chunk of resources (i.e. a VM) to it or to host it outside of the cloud. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. 
You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:08:43 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:08:43 -0400 Subject: [openstack-dev] [neutron][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <5372d663-e45b-35b7-3603-06f577aa289e@redhat.com> Greetings, Neutron team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Neutron pretty obviously falls under the goals of 'Basic Physical Data Center Management' and 'Hardware Virtualisation'. The last paragraph of the 'Plays Well With Others' design goal (about offering standalone layers) was prompted by discussions in Cinder. My sense is that this is less relevant to Neutron because of the existence of OpenDaylight, but it might be something to pay particular attention to when reviewing the document. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. 
From zbitter at redhat.com Wed Oct 24 15:09:21 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:09:21 -0400 Subject: [openstack-dev] [octavia][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Octavia team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. I think the main design goal that applies to Octavia is the 'Hardware Virtualisation' one, since Octavia provides an API and abstraction layer over hardware (and software) load balancers. The 'Customisable Integration' goal plays a role too though, because even when a software load balancer is used, one advantage of having an OpenStack API for it is to allow integration with other OpenStack services (like autoscaling). If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From jaypipes at gmail.com Wed Oct 24 15:10:12 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 24 Oct 2018 11:10:12 -0400 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? Message-ID: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Nova's API has the ability to create "quota classes", which are basically limits for a set of resource types. There is something called the "default quota class" which corresponds to the limits in the CONF.quota section. Quota classes are basically templates of limits to be applied if the calling project doesn't have any stored project-specific limits. Has anyone ever created a quota class that is different from "default"? 
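For anyone who hasn't come across it, the interface looks roughly like this (the "gold" class and its values are invented for illustration; the CLI wraps GET/PUT on /os-quota-class-sets/{class}):

    # Show the limits recorded for the default quota class
    nova quota-class-show default

    # Create or update a hypothetical "gold" class with different limits
    nova quota-class-update gold --instances 50 --cores 200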
I'd like to propose deprecating this API and getting rid of this functionality since it conflicts with the new Keystone /limits endpoint, is highly coupled with RAX's turnstile middleware and I can't seem to find anyone who has ever used it. Deprecating this API and functionality would make the transition to a saner quota management system much easier and straightforward. Also, I'm apparently blocked now from the operators ML so could someone please forward this there? Thanks, -jay From zbitter at redhat.com Wed Oct 24 15:10:30 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:10:30 -0400 Subject: [openstack-dev] [designate][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <78d108ac-8380-2f81-e5e7-b72bfb3c9d48@redhat.com> Greetings, Designate team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. I wrote DNS in as a late addition to the list of systems OpenStack needs to interface with for the 'Basic Physical Data Center Management' goal, because on reflection it seems essential to any basic physical data center that things outside the data center need some way of addressing resources running within it. If there's a more generic way of expressing that, or if you think Designate would be a better fit with some other design goal (whether it's already on the list or not), please let us know. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:11:40 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:11:40 -0400 Subject: [openstack-dev] [ironic][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Ironic team! 
As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. I'd say that Ironic definitely contributes to the 'Basic Physical Data Center Management' goal, since it manages physical resources in the data center and allows users to access them. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:12:27 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:12:27 -0400 Subject: [openstack-dev] [cyborg][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Cyborg team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. 
Cyborg is very obviously a major contributor to the 'Hardware Virtualisation' design goal. There's no attempt to make an exhaustive list of the types of hardware we want to virtualise, but if anything is obviously missing then please suggest it. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:13:09 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:13:09 -0400 Subject: [openstack-dev] [swift][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <174a84ef-1ca3-d79e-b333-01f812695b56@redhat.com> Greetings, Swift team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. The vision puts Swift firmly in-scope as the provider of Infinite, Continuous Scaling for data storage. And of course Swift is also part of the 'Built-in Reliability and Durability' goal, since it provides extremely durable storage and spreads the cost across multiple tenants. This is clearly a critical aspect of any cloud, and I'm hopeful this exercise will help put to rest a lot of the pointless speculation about whether Swift 'really' belongs in OpenStack. I know y'all have a very data-centric viewpoint on cloud that is probably unique in the OpenStack community, so I'm particularly interested in any insights you might have to offer on the vision as a whole from that perspective. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. 
You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks, Zane.

From zbitter at redhat.com Wed Oct 24 15:13:33 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:13:33 -0400
Subject: [openstack-dev] [cinder][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: <56ef1a51-c95c-f36b-69c9-a8a3ee62e471@redhat.com>

Greetings, Cinder team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

Clearly Cinder is an integral part of meeting the 'Basic Physical Data Center Management' design goal, and also contributes to the 'Hardware Virtualisation' goal. The last paragraph in the 'Plays Well With Others' goal, about providing a standalone backend abstraction layer independently of the higher-level API (that might include e.g. scheduling and integration with other OpenStack services) was added with Cinder in mind, as I know that this is something the Cinder community has discussed, and it might also be applicable to other projects. Of course this is by no means mandatory, but it might be an interesting area to continue exploring.

The Partitioning section highlights the known mismatch between the concept of Availability Zones as borrowed from other clouds and the way operators use OpenStack, and offers a long-term design direction that Cinder might want to pursue in conjunction with Nova.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time.
Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:14:03 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:14:03 -0400 Subject: [openstack-dev] [manila][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <4df1c910-ffec-dff8-ff13-c8184bbc4173@redhat.com> Greetings, Manila team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. I think that, like Cinder, Manila would qualify as contributing to the 'Basic Physical Data Center Management' goal, since it also allows users to access external storage providers through a standardised API. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:14:26 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:14:26 -0400 Subject: [openstack-dev] [keystone][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <72e23233-b650-c984-22bc-97a40c17efe0@redhat.com> Greetings, Keystone team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Identity management is specifically called out as a key aspect of the 'Basic Physical Data Center Management' design goal, so obviously Keystone fits in there. However, there are other parts of the document that can also help provide guidance. One is the last paragraph of the 'Customisable Integration' goal, which talks about which combinations of interactions need to be possible (needs that are currently met by a combination of application credentials and trusts), and the importance of least-privilege access and credential rotation. Another is the section on 'Application Control'. All of this is stuff we have talked about in the past so there should be no surprises, but hopefully this helps situate it all in the context of the bigger picture. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:15:03 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:15:03 -0400 Subject: [openstack-dev] [glance][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <99498aa3-d22f-ce58-ffdc-694c48fe4da8@redhat.com> Greetings, Glance team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. 
The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. There's not a lot to say about Glance specifically in the document. Obviously a disk image management service is a fairly fundamental component of 'Basic Physical Data Center Management', so it certainly fits with the vision. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From openstack at fried.cc Wed Oct 24 15:16:10 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 24 Oct 2018 10:16:10 -0500 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: Forwarding to openstack-operators per Jay. On 10/24/18 10:10, Jay Pipes wrote: > Nova's API has the ability to create "quota classes", which are > basically limits for a set of resource types. There is something called > the "default quota class" which corresponds to the limits in the > CONF.quota section. Quota classes are basically templates of limits to > be applied if the calling project doesn't have any stored > project-specific limits. > > Has anyone ever created a quota class that is different from "default"? > > I'd like to propose deprecating this API and getting rid of this > functionality since it conflicts with the new Keystone /limits endpoint, > is highly coupled with RAX's turnstile middleware and I can't seem to > find anyone who has ever used it. Deprecating this API and functionality > would make the transition to a saner quota management system much easier > and straightforward. > > Also, I'm apparently blocked now from the operators ML so could someone > please forward this there? > > Thanks, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Wed Oct 24 15:16:13 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:16:13 -0400 Subject: [openstack-dev] [barbican][tc] Seeking feedback on the OpenStack cloud vision Message-ID: <2d1f4240-5e59-340e-91a2-48307e58f601@redhat.com> Greetings, Barbican team! 
As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Barbican provides an abstraction over HSMs and software equivalents (like Vault), so the immediate design goal that it meets is the 'Hardware Virtualisation' one. However, the most interesting part of the document for the Barbican team is probably the section on cross-project dependencies. In discussions at the PTG, the TC concluded that we shouldn't force projects to adopt hard dependencies on other services (like Barbican), but recommend that they do so when there is a benefit to the user. The challenge here I think is that not duplicating security-sensitive code such as secret storage is well known to be something that is both of great benefit to the user and highly tempting to take a shortcut on. Your feedback on whether we have got the right balance is important. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:16:40 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:16:40 -0400 Subject: [openstack-dev] [searchlight][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Searchlight team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Searchlight is one of the trickier projects to categorise. It's difficult to point to any of the listed 'Design Goals' in the document and say that Searchlight is contributing directly, although it does contribute a search capability to Horizon so arguably you could say it's a part of the GUI goal. But I think it is definitely contributing indirectly by helping the projects that do fulfill those design goals to better meet the requirements laid out in the preceding sections - in particular the one about Application Control. As such, I don't think there's any danger of this document appearing to exclude Searchlight from OpenStack, but it might be the case that we can learn from Searchlight and document more explicitly the things that it brings to the table as things that OpenStack should be striving for. I'd be interested in your thoughts on whether anything is missing. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From zbitter at redhat.com Wed Oct 24 15:17:06 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 11:17:06 -0400 Subject: [openstack-dev] [karbor][tc] Seeking feedback on the OpenStack cloud vision Message-ID: Greetings, Karbor team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205 We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over. 
The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds. The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals.

Karbor is one of the most difficult projects when it comes to describing where it fits in the design goals, which may be an indication that we're missing something from the vision about the role OpenStack has to play in data protection. If that's the case, I'd be very interested in hearing what you think that should look like. For now perhaps the closest match is with the 'Basic Physical Data Center Management' goal, since Karbor is an abstraction for its various plugins, some of which must interact with the physical data center to accomplish their work.

Of the other sections, the Interoperability one is probably worth paying attention to. Any project which provides access to a lot of different vendor plugins always has to balance the desire to expose as much functionality as possible with the need to ensure that applications can be ported between OpenStack clouds running different sets of plugins. OpenStack places a high value on interoperability, so this is something to keep in mind when designing.

If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205

If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :)

many thanks, Zane.

From zbitter at redhat.com Wed Oct 24 15:17:30 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 24 Oct 2018 11:17:30 -0400
Subject: [openstack-dev] [monasca][tc] Seeking feedback on the OpenStack cloud vision
Message-ID: 

Greetings, Monasca team!

As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help guide decisions. You can read the latest draft here: https://review.openstack.org/592205

We're trying to get feedback from as many people as possible - in many ways the value is in the process of coming together to figure out what we're trying to achieve as a community with OpenStack and how we can work together to build it. The document is there to help us remember what we decided so we don't have to do it all again over and over.

The vision is structured with two sections that apply broadly to every project in OpenStack - describing the principles that we believe are essential to every cloud, and the ones that make OpenStack different from some other clouds.
The third section is a list of design goals that we want OpenStack as a whole to be able to meet - ideally each project would be contributing toward one or more of these design goals. Monasca is a project that has both user-facing and operator-facing functions, so it straddles the border of the scope of the vision document (which, to be clear, is not the same as the scope of OpenStack itself). The user-facing part is covered by the vision, and would probably fit under the 'Customisable Integration' design goal. I think the design principle for Monasca to be aware of here, as I mentioned at the PTG, is that alarms should work in such a way that it is up to the user where to direct them to - it could be autoscaling in Heat, autoscaling in Senlin, or something else that is completely application-specific. If you would like me or another TC member to join one of your team IRC meetings to discuss further what the vision means for your team, please reply to this thread to set it up. You are also welcome to bring up any questions in the TC IRC channel, #openstack-tc - there's more of us around during Office Hours (https://governance.openstack.org/tc/#office-hours), but you can talk to us at any time. Feedback can also happen either in this thread or on the review https://review.openstack.org/592205 If the team is generally happy with the vision as it is and doesn't have any specific feedback, that's cool but I'd like to request that at least the PTL leave a vote on the review. It's important to know whether we are actually developing a consensus in the community or just talking to ourselves :) many thanks, Zane. From kendall at openstack.org Wed Oct 24 16:56:11 2018 From: kendall at openstack.org (Kendall Waters) Date: Wed, 24 Oct 2018 11:56:11 -0500 Subject: [openstack-dev] Registration Prices Increase Today - OpenStack Summit Berlin Message-ID: <04574EB7-40F3-473B-9577-0F53FF5825A3@openstack.org> Hi everyone, Friendly reminder that the ticket price for the OpenStack Summit Berlin increases today, October 24 at 11:59pm PDT (October 25 at 6:59 UTC). Also, ALL registration codes (sponsor, speaker, ATC, AUC) will expire on November 2. Register now before the price increases!  Once you have registered, make sure to download the mobile app and plan your personal Summit schedule . Don’t forget to RSVP to intensive trainings as this is the only way you will be guaranteed a spot in the room! If you have any Summit related questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From anteaya at anteaya.info Wed Oct 24 17:02:15 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 24 Oct 2018 13:02:15 -0400 Subject: [openstack-dev] Use of capslock on the mailing lists Message-ID: Hello Gentle Reader: I'm writing to share my thoughts on how I feel when I open my inbox on my account subscribed to OpenStack mailing lists. I've been subscribed to various lists for some time and have accommodated my consumption style to suit the broadcast nature of the specific lists; use of filters etcetera. I have noticed a new habit on some of the mailing lists and I find the effect of it to feel rather aggressive to me. I am used to copious amounts of emails and it is my responsibility as consumer to filter out and reply to the ones that affect me. I'm not comfortable with the recent trend of using capslock. I'm feeling yelled at by my inbox. 
This is having the effect of me giving as little attention as possible to anyone using capslock. I wanted to give capslock aficionados that feedback. If you are using it to aggressively distance yourself from me as a consumer, it is highly successful. Thank you for reading, Anita From Tim.Bell at cern.ch Wed Oct 24 17:22:04 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 24 Oct 2018 17:22:04 +0000 Subject: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0) In-Reply-To: <20181024122554.xilfw3xx6igcg5e6@pacific.linksys.moosehall> References: <20181024082154.a3gyn4cgtu5o3sj3@redhat.com> <20181024122554.xilfw3xx6igcg5e6@pacific.linksys.moosehall> Message-ID: Adam, Personally, I would prefer the approach where the OpenStack resource agents are part of the repository in which they are used. This is also the approach taken in other open source projects such as Kubernetes and avoids the inconsistency where, for example, Azure resource agents are in the Cluster Labs repository but OpenStack ones are not. This can mean that people miss there is OpenStack integration available. This does not reflect, in any way, the excellent efforts and results made so far. I don't think it would negate the possibility of including testing in the OpenStack gate since there are other examples where code is pulled in from other sources. Tim -----Original Message----- From: Adam Spiers Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 24 October 2018 at 14:29 To: "developers at clusterlabs.org" , openstack-dev mailing list Subject: Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0) [cross-posting to openstack-dev] Oyvind Albrigtsen wrote: >ClusterLabs is happy to announce resource-agents v4.2.0. >Source code is available at: >https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0 > >The most significant enhancements in this release are: >- new resource agents: [snipped] > - openstack-cinder-volume > - openstack-floating-ip > - openstack-info That's an interesting development. By popular demand from the community, in Oct 2015 the canonical location for OpenStack-specific resource agents became: https://git.openstack.org/cgit/openstack/openstack-resource-agents/ as announced here: http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html However I have to admit I have done a terrible job of maintaining it since then. Since OpenStack RAs are now beginning to creep into ClusterLabs/resource-agents, now seems a good time to revisit this and decide a coherent strategy. I'm not religious either way, although I do have a fairly strong preference for picking one strategy which both ClusterLabs and OpenStack communities can align on, so that all OpenStack RAs are in a single place. I'll kick the bikeshedding off: Pros of hosting OpenStack RAs on ClusterLabs -------------------------------------------- - ClusterLabs developers get the GitHub code review and Travis CI experience they expect. - Receive all the same maintenance attention as other RAs - any changes to coding style, utility libraries, Pacemaker APIs, refactorings etc. which apply to all RAs would automatically get applied to the OpenStack RAs too. - Documentation gets built in the same way as other RAs. - Unit tests get run in the same way as other RAs (although does ocf-tester even get run by the CI currently?)
- Doesn't get maintained by me ;-) Pros of hosting OpenStack RAs on OpenStack infrastructure --------------------------------------------------------- - OpenStack developers get the Gerrit code review and Zuul CI experience they expect. - Releases and stable/foo branches could be made to align with OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...) - Automated testing could in the future spin up a full cloud and do integration tests by simulating failure scenarios, as discussed here: https://storyboard.openstack.org/#!/story/2002129 That said, that is still very much work in progress, so it remains to be seen when that could come to fruition. No doubt I've missed some pros and cons here. At this point personally I'm slightly leaning towards keeping them in the openstack-resource-agents - but that's assuming I can either hand off maintainership to someone with more time, or somehow find the time myself to do a better job. What does everyone else think? All opinions are very welcome, obviously. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Wed Oct 24 17:38:11 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 24 Oct 2018 18:38:11 +0100 (BST) Subject: [openstack-dev] [all] [tc] [api] Paste Maintenance In-Reply-To: <8e8bef0d2f8c25f1c995200859b1d4606309bdc9.camel@evrard.me> References: <4BE29D94-AF0D-4D29-B530-AD7818C34561@leafe.com> <6850a16b-a0f3-edeb-12c6-749745a32491@openstack.org> <1e2ebe5c-b355-6833-d2bb-b2a28d2b4657@debian.org> <8e8bef0d2f8c25f1c995200859b1d4606309bdc9.camel@evrard.me> Message-ID: On Wed, 24 Oct 2018, Jean-Philippe Evrard wrote: > On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote: >> Also, doesn't bitbucket have a git interface now too (optionally)? >> > It does :) > But I think it requires a new repo, so it means that could as well move > to somewhere else like github or openstack infra :p Right, so that combined with bitbucket oozing surveys and assorted other annoyances over me has meant that I've moved paste to github: https://github.com/cdent/paste I merged some of the outstanding patches, forced Zane to fix up a few more Python 3.7 related things, fixed up some of the docs and released a new version (3.0.0) to pypi: https://pypi.org/p/Paste And I published the docs (linked from the new release and the repo) to a new URL on RTD, as older versions of the docs were not something I was able to adopt: https://pythonpaste.readthedocs.io And some travis-ci stuff. I didn't bother to bring Paste into OpenDev infra because that felt like that was indicating a longer and more engaged commitment than it feels responses here indicated should happen. We want to encourage migration away. As Morgan stated elsewhere in the thread [1] work is in progress to make using something else easier for people. If you want to help with Paste, make some issues and pull requests in the repo above. Thanks. Next step? paste.deploy (which is a separate repo). 
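For readers who haven't touched Paste directly: OpenStack services have historically assembled their WSGI applications from an INI-style pipeline that paste.deploy loads at startup, which is why maintenance of both libraries matters even while projects migrate away. A minimal sketch of that pattern - the config path, pipeline name and file contents below are illustrative placeholders, not taken from any particular project:

    # Hypothetical api-paste.ini, for illustration:
    #
    #   [pipeline:main]
    #   pipeline = request_id faultwrap apiapp
    #
    from paste.deploy import loadapp
    from wsgiref.simple_server import make_server

    # 'config:' URIs point at an INI file; name= selects a
    # [pipeline:...] or [app:...] section inside it.
    app = loadapp('config:/etc/example/api-paste.ini', name='main')

    # Serve the composed WSGI application (illustration only).
    make_server('127.0.0.1', 8080, app).serve_forever()
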
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135937.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Wed Oct 24 18:57:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 24 Oct 2018 13:57:05 -0500 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> On 10/24/2018 10:10 AM, Jay Pipes wrote: > I'd like to propose deprecating this API and getting rid of this > functionality since it conflicts with the new Keystone /limits endpoint, > is highly coupled with RAX's turnstile middleware and I can't seem to > find anyone who has ever used it. Deprecating this API and functionality > would make the transition to a saner quota management system much easier > and straightforward. I was trying to do this before it was cool: https://review.openstack.org/#/c/411035/ I think it was the Pike PTG in ATL where people said, "meh, let's just wait for unified limits from keystone and let this rot on the vine". I'd be happy to restore and update that spec. -- Thanks, Matt From melwittt at gmail.com Wed Oct 24 19:29:12 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 12:29:12 -0700 Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <20181023140140.rs3sf6huy4czgtak@exaesuubou5k> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com> <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com> <20181023140140.rs3sf6huy4czgtak@exaesuubou5k> Message-ID: On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote: > * melanie witt wrote: >> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote: >>> I created a new vm and a new volume with type 'ceph'[So that the volume >>> will be created on one of two hosts. I assume that the volume created on >>> host dev at rbd-1#ceph this time]. Next step is to attach the volume to the >>> vm. At last I want to migrate the volume from host dev at rbd-1#ceph to >>> host dev at rbd-2#ceph, but it failed with the exception >>> 'NotImplementedError(_("Swap only supports host devices")'. >>> >>> So that, my real problem is that is there any work to migrate >>> volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool) >>> in the same ceph cluster? >>> The difference between the spec[2] with my scope is only one is >>> *available*(the spec) and another is *in-use*(my scope). >>> >>> >>> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/ >>> [2] https://review.openstack.org/#/c/296150 >> >> Ah, I think I understand now, thank you for providing all of those details. >> And I think you explained it in your first email, that cinder supports >> migration of ceph volumes if they are 'available' but not if they are >> 'in-use'. Apologies that I didn't get your meaning the first time. >> >> I see now the code you were referring to is this [3]: >> >> if volume.status not in ('available', 'retyping', 'maintenance'): >> LOG.debug('Only available volumes can be migrated using backend ' >> 'assisted migration. 
Falling back to generic migration.') >> return refuse_to_migrate >> >> So because your volume is not 'available', 'retyping', or 'maintenance', >> it's falling back to generic migration, which will end up with an error in >> nova because the source_path is not set in the volume config. >> >> Can anyone from the cinder team chime in about whether the ceph volume >> migration could be expanded to allow migration of 'in-use' volumes? Is there >> a reason not to allow migration of 'in-use' volumes? > > Generally speaking, Nova must facilitate the migration of a live (or > in-use) volume. A volume attached to a running instance requires code > in the I/O path to correctly route traffic to the correct location - so > Cinder must refuse (or defer) a migrate operation if the volume is > attached. Until somewhat recently Qemu and Libvirt did not support the > migration to non-block (RBD) targets, which is the reason for the lack of > support. I believe we now have all of the pieces to perform this > operation successfully, but I suspect it will require a setup with > correct versions of all the related software. I will try to verify this > during the current release cycle and report back. OK, thanks for this info, Jon. I'll be interested in your findings. Cheers, -melanie From zbitter at redhat.com Wed Oct 24 19:34:27 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 24 Oct 2018 15:34:27 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: Message-ID: <663883be-97fb-d813-a2e9-7882eae717bd@redhat.com> There seems to be agreement that this is broadly a good direction to pursue, so I proposed a TC resolution. Let's shift discussion to the review: https://review.openstack.org/613145 cheers, Zane. On 19/10/18 11:17 AM, Zane Bitter wrote: > There hasn't been a Python 2 release in 8 years, and during that time > we've gotten used to the idea that that's the way things go. However, > with the switch to Python 3 looming (we will drop support for Python 2 > in the U release[1]), history is no longer a good guide: Python 3 > releases drop as often as every year. We are already feeling the pain > from this, as Linux distros have largely already completed the shift to > Python 3, and those that have are on versions newer than the py35 we > currently have in gate jobs. > > We have traditionally held to the principle that we want each release to > support the latest release of CentOS and the latest LTS release of > Ubuntu, as they existed at the beginning of the release cycle.[2] > Currently this means in practice one version of py2 and one of py3, but > in the future it will mean two, usually different, versions of py3. > > There are two separate issues that we need to address: unit tests (we'll > define this as code tested in isolation, within or spawned from within > the testing process), and integration tests (we'll define this as code > running in its own process, tested from the outside). I have two > separate but related proposals for how to handle those. > > I'd like to avoid discussing which versions of things we think should be > supported in Stein in this thread. Let's come up with a process that we > think is a good one to take into T and beyond, and then retroactively > apply it to Stein. Competing proposals are of course welcome, in > addition to feedback on this one. > > Unit Tests > ---------- > > For unit tests, the most important thing is to test on the versions of > Python we target.
It's less important to be using the exact distro that > we want to target, because unit tests generally won't interact with > stuff outside of Python. > > I'd like to propose that we handle this by setting up a unit test > template in openstack-zuul-jobs for each release. So for Stein we'd have > openstack-python3-stein-jobs. This template would contain: > > * A voting gate job for the highest minor version of py3 we want to > support in that release. > * A voting gate job for the lowest minor version of py3 we want to > support in that release. > * A periodic job for any interim minor releases. > * (Starting late in the cycle) a non-voting check job for the highest > minor version of py3 we want to support in the *next* release (if > different), on the master branch only. > > So, for example, (and this is still under active debate) for Stein we > might have gating jobs for py35 and py37, with a periodic job for py36. > The T jobs might only have voting py36 and py37 jobs, but late in the T > cycle we might add a non-voting py38 job on master so that people who > haven't switched to the U template yet can see what, if anything, > they'll need to fix. > > We'll run the unit tests on any distro we can find that supports the > version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian > unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a > particular Python version before trying to test it. > > Before the start of each cycle, the TC would determine which range of > versions we want to support, on the basis of the latest one we can find > in any distro and the earliest one we're likely to need in one of the > supported Linux distros. There will be a project-wide goal to switch the > testing template from e.g. openstack-python3-stein-jobs to > openstack-python3-treasure-jobs for every repo before the end of the > cycle. We'll have goal champions as usual following up and helping teams > with the process. We'll know where the problem areas are because we'll > have added non-voting jobs for any new Python versions to the previous > release's template. > > Integration Tests > ----------------- > > Integration tests do test, amongst other things, integration with > non-openstack-supplied things in the distro, so it's important that we > test on the actual distros we have identified as popular.[2] It's also > important that every project be testing on the same distro at the end of > a release, so we can be sure they all work together for users. > > When a new release of CentOS or a new LTS release of Ubuntu comes out, > the TC will create a project-wide goal for the *next* release cycle to > switch all integration tests over to that distro. It's up to individual > projects to make the switch for the tests that they own (e.g. it'd be > the QA team for Tempest, but other individual projects for their own > jobs). Again, there'll be a goal champion to monitor and follow up. 
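To make the unit-test half of this concrete, the per-release template being proposed would look roughly like the sketch below, in Zuul's project-template syntax. This is only an illustration of the idea under discussion - the exact job names and the version pairings for Stein were still being debated at the time:

    # Sketch of a hypothetical per-release unit test template.
    - project-template:
        name: openstack-python3-stein-jobs
        check:
          jobs:
            - openstack-tox-py35   # lowest py3 targeted for Stein
            - openstack-tox-py37   # highest py3 targeted for Stein
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        periodic:
          jobs:
            - openstack-tox-py36   # interim minor version, periodic only

A repo would then opt in by listing openstack-python3-stein-jobs in its templates, and the project-wide goal each cycle would amount to a one-line change over to the next release's template.
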
> > > [1] > https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html > > [2] > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > From ildiko.vancsa at gmail.com Wed Oct 24 19:44:27 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 24 Oct 2018 21:44:27 +0200 Subject: [openstack-dev] Announcing the First Release of StarlingX, an open source edge computing platform Message-ID: Hi, You may have heard, StarlingX[1] is a new independent, top-level, open source pilot project that's supported by the OpenStack Foundation. StarlingX joins other pilot projects hosted at OpenStack Foundation[2], including Airship, Kata Containers and Zuul. Today the first release of StarlingX is here! We invite you to participate in getting the word out that the release is ready and that we’re eager to welcome more contributors to this project. Learn more about it: • Read more about the project at starlingx.io • Listen to a recording of the onboarding webinar[3] • On-boarding slide deck[4] • Overview document[5] Some things you can share: • A blog on starlingx.io[6] • Social sharing: Announcements on Twitter[7] Want to get involved in the community? • Mailing Lists[8] • Weekly Calls[9] • Freenode IRC: #starlingx channel[10] Ready to dive into the code? • You can get download the first release at git.starlingx.io • StarlingX Install Guide[11] • StarlingX Developer Guide[12] If you’re at the Berlin Summit November 13-15[13]: Tuesday 11/13 • StarlingX – Project update – 6 months in the life of a new Open Source project with Brent Rowsell & Dean Troyer • StarlingX CI, from zero to Zuul with Hazzim Anaya & Elio Martinez Wednesday 11/14 • Keynote spotlight on the main stage with Ian Jolliffe & Dean Troyer • MVP (Minimum Viable Product) architecture for edge - Forum session • "Ask Me Anything" about StarlingX - Forum session • StarlingX Enhancements for Edge Networking presentation with Kailun Qin, Ruijing Guo & Dan Chen • Project Onboarding session with Greg Waines • Integrating IOT Device Management with the Edge Cloud - Forum session Thursday 11/15 • Containerized Applications' Requirements on Kubernetes Cluster at the Edge - Forum session Check out the materials to learn about the project, try out the software and join the community! We hope to see many of you in Berlin! Ildikó [1] https://www.starlingx.io/ [2] https://www.openstack.org/foundation/ [3] https://www.youtube.com/watch?v=G9uwGnKD6tM&t=232s [4] https://www.starlingx.io/collateral/StarlingX-Onboarding-Deck-Web.pdf [5] https://www.starlingx.io/collateral/StarlingX_OnePager_Web-102318pdf/ [6] https://www.starlingx.io/blog/starlingx-initial-release.html [7] https://twitter.com/starlingx [8] http://lists.starlingx.io/cgi-bin/mailman/listinfo [9] https://wiki.openstack.org/wiki/Starlingx/Meetings [10] https://freenode.net/ [11] https://docs.starlingx.io/installation_guide/index.html [12] https://docs.starlingx.io/developer_guide/index.html [13] https://www.openstack.org/summit/berlin-2018 From jaypipes at gmail.com Wed Oct 24 19:49:23 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 24 Oct 2018 15:49:23 -0400 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? 
In-Reply-To: <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On 10/24/2018 02:57 PM, Matt Riedemann wrote: > On 10/24/2018 10:10 AM, Jay Pipes wrote: >> I'd like to propose deprecating this API and getting rid of this >> functionality since it conflicts with the new Keystone /limits >> endpoint, is highly coupled with RAX's turnstile middleware and I >> can't seem to find anyone who has ever used it. Deprecating this API >> and functionality would make the transition to a saner quota >> management system much easier and straightforward. > > I was trying to do this before it was cool: > > https://review.openstack.org/#/c/411035/ > > I think it was the Pike PTG in ATL where people said, "meh, let's just > wait for unified limits from keystone and let this rot on the vine". > > I'd be happy to restore and update that spec. ++ I think partly things have stalled out because maybe each side (keystone + nova) thinks the other is working on something but isn't? I'm currently working on cleaning up the quota system and would be happy to deprecate the os-quota-classes API along with the patch series that does that cleanup. -jay From melwittt at gmail.com Wed Oct 24 19:54:00 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 12:54:00 -0700 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote: > On 10/24/2018 10:10 AM, Jay Pipes wrote: >> I'd like to propose deprecating this API and getting rid of this >> functionality since it conflicts with the new Keystone /limits endpoint, >> is highly coupled with RAX's turnstile middleware and I can't seem to >> find anyone who has ever used it. Deprecating this API and functionality >> would make the transition to a saner quota management system much easier >> and straightforward. > I was trying to do this before it was cool: > > https://review.openstack.org/#/c/411035/ > > I think it was the Pike PTG in ATL where people said, "meh, let's just > wait for unified limits from keystone and let this rot on the vine". > > I'd be happy to restore and update that spec.
And I was just thinking, if we added a project_id column to the quota_classes table and correspondingly added it to the os-quota-class-sets API, we could pretty simply implement quota by flavor, which is a feature operators like Oath need. An operator could create a quota class limit per project_id and then decorate flavors with quota_class to enforce them per flavor. I recognize that maybe it would be too confusing to solve use cases with quota classes given that we're going to migrate to united limits. At the same time, I'm hesitant to close the door on a possibility before we have some idea about how we'll solve them without quota classes. Has anyone thought about how we can solve the use cases with unified limits for things like preemptible instances and quota by flavor? Cheers, -melanie [1] https://review.openstack.org/569011 From lbragstad at gmail.com Wed Oct 24 20:06:24 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 24 Oct 2018 15:06:24 -0500 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, Oct 24, 2018 at 2:49 PM Jay Pipes wrote: > On 10/24/2018 02:57 PM, Matt Riedemann wrote: > > On 10/24/2018 10:10 AM, Jay Pipes wrote: > >> I'd like to propose deprecating this API and getting rid of this > >> functionality since it conflicts with the new Keystone /limits > >> endpoint, is highly coupled with RAX's turnstile middleware and I > >> can't seem to find anyone who has ever used it. Deprecating this API > >> and functionality would make the transition to a saner quota > >> management system much easier and straightforward. > > > > I was trying to do this before it was cool: > > > > https://review.openstack.org/#/c/411035/ > > > > I think it was the Pike PTG in ATL where people said, "meh, let's just > > wait for unified limits from keystone and let this rot on the vine". > > > > I'd be happy to restore and update that spec. > > ++ > > I think partly things have stalled out because maybe each side (keystone > + nova) think the other is working on something but isn't? > I have a Post-it on my montior to follow up with what we talked about at the PTG. AFAIK, the next steps were to use the examples we went through and apply them to nova [0] using oslo.limit. We were hoping this would do two things. First, it would expose any remaining gaps we have in oslo.limit that need to get closed before other services start using the library. Second, we could iterate on the example in gerrit as a nova review and making it easier to merge when it's working. Is that still the case and if so, how can I help? [0] https://gist.github.com/lbragstad/69d28dca8adfa689c00b272d6db8bde7 > > I'm currently working on cleaning up the quota system and would be happy > to deprecate the os-quota-classes API along with the patch series that > does that cleanup. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From honza at redhat.com Wed Oct 24 22:31:11 2018 From: honza at redhat.com (Honza Pokorny) Date: Wed, 24 Oct 2018 19:31:11 -0300 Subject: [openstack-dev] [tripleo][ui][tempest][oooq] Refreshing plugins from git In-Reply-To: References: <20181018001743.swmj3icwzlezoqdd@localhost.localdomain> <4b61614b-fbd3-3da0-3f60-c9a8516c3844@redhat.com> Message-ID: <20181024223111.7z7h3kqwscpoltci@localhost.localdomain> Here is an etherpad with all of the open patches that Chandan and I have been working on. https://etherpad.openstack.org/p/selenium-testing-ci On 2018-10-22 18:25, Chandan kumar wrote: > Hello Honza, > > On Thu, Oct 18, 2018 at 6:15 PM Bogdan Dobrelya wrote: > > > > On 10/18/18 2:17 AM, Honza Pokorny wrote: > > > Hello folks, > > > > > > I'm working on the automated ui testing blueprint[1], and I think we > > > need to change the way we ship our tempest tests. > > > > > > Here is where things stand at the moment: > > > > > > * We have a kolla image for tempest > > > * This image contains the tempest rpm, and the openstack-tempest-all rpm > > > * The openstack-tempest-all package in turn contains all of the > > > openstack tempest plugins > > > * Each of the plugins is shipped as an rpm > > > > > > So, in order for a new test in tempest-tripleo-ui to appear in CI we > > > have to go through at least the following steps: > > > > > > * New tempest-tripleo-ui rpm > > > * New openstack-tempest-all rpm > > > * New tempest kolla image > > > > > > This could easily take a week, if not more. > > > > > > What I would like to build is something like the following: > > > > > > * Add an option to the tempest-setup.sh script in tripleo-quickstart to > > > refresh all tempest plugins from git before running any tests > > > * Optionally specify a zuul change for any of the plugins being > > > refreshed > > > * Hook up the test job to patches in tripleo-ui (which tests in > > > tempest-tripleo-ui are testing) so that I can run a fix and its test > > > in a single CI job > I have added a patch in the TripleO Quickstart extras Validate-tempest > role: https://review.openstack.org/#/c/612377/ to install any tempest > plugin from git; zuul will pick up the specific change in the gates. > Here is a patch showing how to test it with a featureset (FS): https://review.openstack.org/612386 > Basically, in any FS we can add the following lines:
>
>     tempest_format: venv
>     tempest_plugins_git:
>       - 'https://git.openstack.org/openstack/tempest-tripleo-ui.git'
>
> The respective FS-related job will install the tempest plugin, and we > can also use test_white_regex: to > trigger the tempest tests. > > I think it will solve the problem.
> > Thanks > > Chandan Kumar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sorrison at gmail.com Wed Oct 24 23:55:15 2018 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 25 Oct 2018 10:55:15 +1100 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> Message-ID: <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> > On 24 Oct 2018, at 4:01 pm, melanie witt wrote: > > On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote: >> Hi nova devs, >> Have been having a good look into cellsv2 and how we migrate to them (we’re still on cellsv1 and about to upgrade to queens and still run cells v1 for now). >> One of the problems I have is that now all our nova cell database servers need to respond to API requests. >> With cellsv1 our architecture was to have a big powerful DB cluster (3 physical servers) at the API level to handle the API cell and then a smallish non HA DB server (usually just a VM) for each of the compute cells. >> This architecture won’t work with cells V2 and we’ll now need to have a lot of highly available and responsive DB servers for all the cells. >> It will also mean that our nova-apis which reside in Melbourne, Australia will now need to talk to database servers in Auckland, New Zealand. >> The biggest issue we have is when a cell is down. We sometimes have cells go down for an hour or so planned or unplanned and with cellsv1 this does not affect other cells. >> Looks like some good work going on here https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell >> But what about quota? If a cell goes down then it would seem that a user all of a sudden would regain some quota from the instances that are in the down cell? >> Just wondering if anyone has thought about this? > > Yes, we've discussed it quite a bit. The current plan is to offer a policy-driven behavior as part of the "down" cell handling which will control whether nova will: > > a) Reject a server create request if the user owns instances in "down" cells > > b) Go ahead and count quota usage "as-is" if the user owns instances in "down" cells and allow quota limit to be potentially exceeded > > We would like to know if you think this plan will work for you. > > Further down the road, if we're able to come to an agreement on a consumer type/owner or partitioning concept in placement (to be certain we are counting usage our instance of nova owns, as placement is a shared service), we could count quota usage from placement instead of querying cells. OK great, always good to know other people are thinking for you :-), I don’t really like a or b but the idea about using placement sounds like a good one to me. I guess our architecture is pretty unique in a way but I wonder if other people are also a little scared about all DB servers needing to be up to serve API requests? I’ve been thinking of some hybrid cellsv1/v2 thing where we’d still have the top level api cell DB but the API would only ever read from it. Nova-api would only write to the compute cell DBs.
Then keep the nova-cells processes just doing instance_update_at_top to keep the nova-cell-api db up to date. We’d still have syncing issues, but we have those with placement now, and they are more frequent there than with nova-cells-v1 for us. Cheers, Sam > > Cheers, > -melanie > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From soulxu at gmail.com Thu Oct 25 00:57:23 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 25 Oct 2018 08:57:23 +0800 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: So FYI, in case people missed this spec: there is a spec from John, https://review.openstack.org/#/c/602201/3/specs/stein/approved/unified-limits-stein.rst at 170. The roadmap in this spec also calls for deprecating the quota-class API. On Thu, Oct 25, 2018 at 3:54 AM melanie witt wrote: > On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote: > > On 10/24/2018 10:10 AM, Jay Pipes wrote: > >> I'd like to propose deprecating this API and getting rid of this > >> functionality since it conflicts with the new Keystone /limits endpoint, > >> is highly coupled with RAX's turnstile middleware and I can't seem to > >> find anyone who has ever used it. Deprecating this API and functionality > >> would make the transition to a saner quota management system much easier > >> and straightforward. > > I was trying to do this before it was cool: > > > > https://review.openstack.org/#/c/411035/ > > > > I think it was the Pike PTG in ATL where people said, "meh, let's just > > wait for unified limits from keystone and let this rot on the vine". > > > > I'd be happy to restore and update that spec. > > Yeah, we were thinking the presence of the API and code isn't harming > anything and sometimes we talk about situations where we could use them. > > Quota classes come up occasionally whenever we talk about preemptible > instances. Example: we could create and use a quota class "preemptible" > and decorate preemptible flavors with that quota_class in order to give > them unlimited quota. There's also talk of quota classes in the "Count > quota based on resource class" spec [1] where we could have leveraged > quota classes to create and enforce quota limits per custom resource > class. But I think the consensus there was to hold off on quota by > custom resource class until we migrate to unified limits and oslo.limit. > > So, I think my concern in removing the internal code that is capable of > enforcing quota limit per quota class is the preemptible instance use > case. I don't have my mind wrapped around if/how we could solve it using > unified limits yet. > > And I was just thinking, if we added a project_id column to the > quota_classes table and correspondingly added it to the > os-quota-class-sets API, we could pretty simply implement quota by > flavor, which is a feature operators like Oath need. An operator could > create a quota class limit per project_id and then decorate flavors with > quota_class to enforce them per flavor. > > I recognize that maybe it would be too confusing to solve use cases with > quota classes given that we're going to migrate to unified limits.
At the > same time, I'm hesitant to close the door on a possibility before we > have some idea about how we'll solve them without quota classes. Has > anyone thought about how we can solve the use cases with unified limits > for things like preemptible instances and quota by flavor? > > Cheers, > -melanie > > [1] https://review.openstack.org/569011 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From klmitch at mit.edu Thu Oct 25 02:35:08 2018 From: klmitch at mit.edu (Kevin L. Mitchell) Date: Wed, 24 Oct 2018 21:35:08 -0500 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: > On 10/24/18 10:10, Jay Pipes wrote: > > Nova's API has the ability to create "quota classes", which are > > basically limits for a set of resource types. There is something called > > the "default quota class" which corresponds to the limits in the > > CONF.quota section. Quota classes are basically templates of limits to > > be applied if the calling project doesn't have any stored > > project-specific limits. For the record, my original concept in creating quota classes is that you'd be able to set quotas per tier of user and easily be able to move users from one tier to another. This was just a neat idea I had, and AFAIK, Rackspace never used it, so you can call it YAGNI as far as I'm concerned :) > > Has anyone ever created a quota class that is different from "default"? > > > > I'd like to propose deprecating this API and getting rid of this > > functionality since it conflicts with the new Keystone /limits endpoint, > > is highly coupled with RAX's turnstile middleware I didn't intend it to be highly coupled, but it's been a while since I wrote it, and of course I've matured as a developer since then, so *shrug*. I also don't think Rackspace has ever used turnstile. > > and I can't seem to > > find anyone who has ever used it. Deprecating this API and functionality > > would make the transition to a saner quota management system much easier > > and straightforward. I'm fine with that plan, speaking as the original developer; as I say, I don't think Rackspace ever utilized the functionality anyway, and if no one else pipes up saying that they're using it, I'd be all over deprecating the quota classes in favor of the new hotness. -- Kevin L. Mitchell -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 220 bytes Desc: This is a digitally signed message part URL: From dinesh.bhor at linecorp.com Thu Oct 25 05:12:51 2018 From: dinesh.bhor at linecorp.com (Bhor Dinesh) Date: Thu, 25 Oct 2018 14:12:51 +0900 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: Hi All,
We have a custom code in production which tracks the quota for such instances separately and for the same reason we have *rich_instances* custom quota class same as *instances* quota class. I discussed this thing pretty recently with sean-k-mooney I hope he remembers it. ボーアディネシュ Bhor Dinesh Verda2チーム 〒160-0022 東京都新宿区新宿4-1-6 JR新宿ミライナタワー 23階 Mobile 08041289520 Fax 03-4316-2116 Email dinesh.bhor at linecorp.com ​ -----Original Message----- From: "Kevin L. Mitchell" To: "OpenStack Development Mailing List (not for usage questions)"; "openstack-operators at lists.openstack.org"; Cc: Sent: Oct 25, 2018 (Thu) 11:35:08 Subject: Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? > On 10/24/18 10:10, Jay Pipes wrote: > > Nova's API has the ability to create "quota classes", which are > > basically limits for a set of resource types. There is something called > > the "default quota class" which corresponds to the limits in the > > CONF.quota section. Quota classes are basically templates of limits to > > be applied if the calling project doesn't have any stored > > project-specific limits. For the record, my original concept in creating quota classes is that you'd be able to set quotas per tier of user and easily be able to move users from one tier to another. This was just a neat idea I had, and AFAIK, Rackspace never used it, so you can call it YAGNI as far as I'm concerned :) > > Has anyone ever created a quota class that is different from "default"? > > > > I'd like to propose deprecating this API and getting rid of this > > functionality since it conflicts with the new Keystone /limits endpoint, > > is highly coupled with RAX's turnstile middleware I didn't intend it to be highly coupled, but it's been a while since I wrote it, and of course I've matured as a developer since then, so *shrug*. I also don't think Rackspace has ever used turnstile. > > and I can't seem to > > find anyone who has ever used it. Deprecating this API and functionality > > would make the transition to a saner quota management system much easier > > and straightforward. I'm fine with that plan, speaking as the original developer; as I say, I don't think Rackspace ever utilized the functionality anyway, and if no one else pipes up saying that they're using it, I'd be all over deprecating the quota classes in favor of the new hotness. -- Kevin L. Mitchell __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Oct 25 06:10:12 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 23:10:12 -0700 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? 
In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <78efc5ab-036d-3a74-de43-d83d543bb849@gmail.com> Message-ID: On Wed, 24 Oct 2018 12:54:00 -0700, Melanie Witt wrote: > On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote: >> On 10/24/2018 10:10 AM, Jay Pipes wrote: >>> I'd like to propose deprecating this API and getting rid of this >>> functionality since it conflicts with the new Keystone /limits endpoint, >>> is highly coupled with RAX's turnstile middleware and I can't seem to >>> find anyone who has ever used it. Deprecating this API and functionality >>> would make the transition to a saner quota management system much easier >>> and straightforward. >> I was trying to do this before it was cool: >> >> https://review.openstack.org/#/c/411035/ >> >> I think it was the Pike PTG in ATL where people said, "meh, let's just >> wait for unified limits from keystone and let this rot on the vine". >> >> I'd be happy to restore and update that spec. > > Yeah, we were thinking the presence of the API and code isn't harming > anything and sometimes we talk about situations where we could use them. > > Quota classes come up occasionally whenever we talk about preemptible > instances. Example: we could create and use a quota class "preemptible" > and decorate preemptible flavors with that quota_class in order to give > them unlimited quota. There's also talk of quota classes in the "Count > quota based on resource class" spec [1] where we could have leveraged > quota classes to create and enforce quota limits per custom resource > class. But I think the consensus there was to hold off on quota by > custom resource class until we migrate to unified limits and oslo.limit. > > So, I think my concern in removing the internal code that is capable of > enforcing quota limit per quota class is the preemptible instance use > case. I don't have my mind wrapped around if/how we could solve it using > unified limits yet. > > And I was just thinking, if we added a project_id column to the > quota_classes table and correspondingly added it to the > os-quota-class-sets API, we could pretty simply implement quota by > flavor, which is a feature operators like Oath need. An operator could > create a quota class limit per project_id and then decorate flavors with > quota_class to enforce them per flavor. > > I recognize that maybe it would be too confusing to solve use cases with > quota classes given that we're going to migrate to unified limits. At the > same time, I'm hesitant to close the door on a possibility before we > have some idea about how we'll solve them without quota classes. Has > anyone thought about how we can solve the use cases with unified limits > for things like preemptible instances and quota by flavor? > > [1] https://review.openstack.org/569011 After I sent this, I realized that I _have_ thought about how to solve these use cases with unified limits before and commented about it on the "Count quota based on resource class" spec some months ago. For preemptible instances, we could leverage registered limits in keystone [2] (registered limits span across all projects) by creating a limit with resource_name='preemptible', for example. Then we could decorate a flavor with quota_resource_name='preemptible' which would designate a preemptible instance type. Then we use the quota_resource_name from the flavor to check the quota for the corresponding registered limit in keystone.
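As a concrete illustration of that idea, creating such a registered limit is one call to keystone's unified limits API - a rough sketch using requests, where the endpoint, token, service ID and the chosen default value are all placeholders:

    # Sketch: registering a default limit in keystone (illustrative values).
    import requests

    KEYSTONE = 'http://keystone.example.com/identity/v3'  # placeholder
    TOKEN = 'gAAAAA...'  # admin-scoped token (placeholder)

    body = {
        'registered_limits': [{
            'service_id': '<nova service id>',   # placeholder
            'resource_name': 'preemptible',      # matches the flavor decoration
            'default_limit': 1000,               # effectively "huge" quota
        }]
    }
    resp = requests.post(KEYSTONE + '/registered_limits',
                         headers={'X-Auth-Token': TOKEN}, json=body)
    resp.raise_for_status()
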
This way, preemptible instances can be assigned their own special quota (probably unlimited). And for quota by flavor, same concept. I think we could use registered limits and project limits [3] by creating limits with resource_name='flavorX', for example. We could decorate flavors with quota_resource_name='flavorX' and check quota for special quota for flavorX. Unified limits provide all of the same ability as quota classes, as far as I can tell. Given that, I think we are OK to deprecate quota classes. Cheers, -melanie [2] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits [3] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-limits From melwittt at gmail.com Thu Oct 25 06:14:40 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 23:14:40 -0700 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: On Thu, 25 Oct 2018 14:12:51 +0900, Bhor Dinesh wrote: > We had a similar use case to *Preemptible Instances*, called *Rich-VMs*, which > > are high in resources and are deployed one per hypervisor. We have custom code in > > production which tracks the quota for such instances separately and for > the same reason > > we have a *rich_instances* custom quota class alongside the *instances* quota class. Please see the last reply I recently sent on this thread. I have been thinking the same as you about how we could use quota classes to implement the quota piece of preemptible instances. I think we can achieve the same thing using unified limits, specifically registered limits [1], which span across all projects. So, I think we are covered moving forward with migrating to unified limits and deprecation of quota classes. Let me know if you spot any issues with this idea. Cheers, -melanie [1] https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits From melwittt at gmail.com Thu Oct 25 06:29:18 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 24 Oct 2018 23:29:18 -0700 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> Message-ID: <2a76c0c0-fefd-82ab-9a4f-16aa1a55238d@gmail.com> On Thu, 25 Oct 2018 10:55:15 +1100, Sam Morrison wrote: > > >> On 24 Oct 2018, at 4:01 pm, melanie witt wrote: >> >> On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote: >>> Hi nova devs, >>> Have been having a good look into cellsv2 and how we migrate to them (we’re still on cellsv1 and about to upgrade to queens and still run cells v1 for now). >>> One of the problems I have is that now all our nova cell database servers need to respond to API requests. >>> With cellsv1 our architecture was to have a big powerful DB cluster (3 physical servers) at the API level to handle the API cell and then a smallish non HA DB server (usually just a VM) for each of the compute cells. >>> This architecture won’t work with cells V2 and we’ll now need to have a lot of highly available and responsive DB servers for all the cells.
>>> It will also mean that our nova-apis which reside in Melbourne, Australia will now need to talk to database servers in Auckland, New Zealand. >>> The biggest issue we have is when a cell is down. We sometimes have cells go down for an hour or so planned or unplanned and with cellsv1 this does not affect other cells. >>> Looks like some good work going on here https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell >>> But what about quota? If a cell goes down then it would seem that a user all of a sudden would regain some quota from the instances that are in the down cell? >>> Just wondering if anyone has thought about this? >> >> Yes, we've discussed it quite a bit. The current plan is to offer a policy-driven behavior as part of the "down" cell handling which will control whether nova will: >> >> a) Reject a server create request if the user owns instances in "down" cells >> >> b) Go ahead and count quota usage "as-is" if the user owns instances in "down" cells and allow quota limit to be potentially exceeded >> >> We would like to know if you think this plan will work for you. >> >> Further down the road, if we're able to come to an agreement on a consumer type/owner or partitioning concept in placement (to be certain we are counting usage our instance of nova owns, as placement is a shared service), we could count quota usage from placement instead of querying cells. > > OK great, always good to know other people are thinking for you :-), I don’t really like a or b but the idea about using placement sounds like a good one to me. Your honesty is appreciated. :) We do want to get to where we can use placement for quota usage. There is too much higher-priority placement-related work in flight right now (getting nested resource providers working end-to-end, for one) for it to receive adequate attention at this moment. We've been discussing it on the spec [1] the past few days, if you're interested. > I guess our architecture is pretty unique in a way but I wonder if other people are also a little scared about all DB servers needing to be up to serve API requests? You are not alone. At CERN, they are experiencing the same challenges. They too have an architecture where they had deployed less powerful database servers in cells and also have cell sites that are located geographically far away. They have been driving the "handling of a down cell" work. > I’ve been thinking of some hybrid cellsv1/v2 thing where we’d still have the top level api cell DB but the API would only ever read from it. Nova-api would only write to the compute cell DBs. > Then keep the nova-cells processes just doing instance_update_at_top to keep the nova-cell-api db up to date. > > We’d still have syncing issues but we have that with placement now and that is more frequent than nova-cells-v1 is for us. I have had similar thoughts, but keep ending up at the syncing/racing issues, like you said. I think it's something we'll need to discuss and explore more, to see if we can come up with a reasonable way to address the increased demand on cell databases as it's been a considerable pain point for deployments like yours and CERN's.
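To make the down-cell quota problem concrete: counting usage today means querying every cell database, and a cell that doesn't answer simply drops out of the count. A simplified sketch of the failure mode, loosely modeled on nova's scatter-gather helper rather than copied from nova's actual quota code (_count_in_cell is a placeholder):

    # Simplified illustration of why a down cell skews quota counts.
    import nova.context as nova_context

    def _count_in_cell(cctxt, project_id):
        return 0  # placeholder; really a count of rows in the cell DB

    def count_instances(ctxt, project_id):
        results = nova_context.scatter_gather_all_cells(
            ctxt, _count_in_cell, project_id)
        total = 0
        for cell_uuid, result in results.items():
            if result in (nova_context.did_not_respond_sentinel,
                          nova_context.raised_exception_sentinel):
                # The "down" cell contributes nothing, so the user
                # appears to regain the quota consumed there.
                continue
            total += result
        return total

This is exactly the gap the policy-driven behavior described above (reject vs. count as-is) is meant to paper over until usage can be counted from placement instead.
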
Cheers, -melanie [1] https://review.openstack.org/509042 From tobias.rydberg at citynetwork.eu Thu Oct 25 09:20:04 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 25 Oct 2018 11:20:04 +0200 Subject: [openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG Message-ID: <1e16eefd-142b-9a3b-4812-baf463e1fa03@citynetwork.eu> Hi everyone, Time for a new meeting for PCWG - today 1400 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From majopela at redhat.com Thu Oct 25 09:22:18 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Thu, 25 Oct 2018 11:22:18 +0200 Subject: [openstack-dev] [TripleO][OVN] Switching the default network backend to ML2/OVN In-Reply-To: References: Message-ID: Daniel, thank you very much for the extensive and detailed email. The plan looks good to me and it makes sense; the OVS option will still be tested, and available when selected. On Wed, Oct 24, 2018 at 4:41 PM Daniel Alvarez Sanchez wrote: > Hi Stackers! > > The purpose of this email is to share with the community the intention > of switching the default network backend in TripleO from ML2/OVS to > ML2/OVN by changing the mechanism driver from openvswitch to ovn. This > doesn’t mean that ML2/OVS will be dropped but users deploying > OpenStack without explicitly specifying a network driver will get > ML2/OVN by default. > > OVN in Short > ========== > > Open Virtual Network is managed under the OVS project, and was created > by the original authors of OVS. It is an attempt to re-do the ML2/OVS > control plane, using lessons learned throughout the years. It is > intended to be used in projects such as OpenStack and Kubernetes. Also oVirt / RHEV. >
As some parity gaps exist > right now, we don’t want to change the network backend automatically. > Instead, if the user wants to migrate from ML2/OVS to ML2/OVN, we’ll > provide an ansible based tool that will perform the operation. > More info and code at [6]. > > Reviews, comments and suggestions are really appreciated :) > > > FAQ > === > > Can you talk about the advantages of OVN over ML2/OVS? > > ------------------------------------------------------------------------------- > > If asked to describe the ML2/OVS control plane (OVS, L3, DHCP and > metadata agents using the messaging bus to sync with the Neutron API > service) one would not tend to use the term ‘simple’. There is liberal > use of a smattering of Linux networking technologies such as: > * iptables > * network namespaces > * ARP manipulation > * Different forms of NAT > * keepalived, radvd, haproxy, dnsmasq > * Source based routing, > * … and of course OVS flows. > > OVN simplifies this to a single process running on compute nodes, and > another process running on centralized nodes, communicating via OVSDB > and OpenFlow, ultimately setting OVS flows. > > The simplified, new architecture allows us to re-do features like DVR > and L3 HA in more efficient and elegant ways. For example, L3 HA > failover is faster: It doesn’t use keepalived, rather OVN monitors > neighbor tunnel endpoints. OVN supports enabling both DVR and L3 HA > simultaneously, something we never supported with ML2/OVS. > > We also found out that not depending on RPC messages for agents > communication brings a lot of benefits. From our experience, RabbitMQ > sometimes represents a bottleneck and it can be very intense when it > comes to resources utilization. > > > What about the undercloud? > -------------------------------------- > > ML2/OVS will be still used in the undercloud as OVN has some > limitations with regards to baremetal provisioning mainly (keep > reading about the parity gaps). We aim to convert the undercloud to > ML2/OVN to provide the operator a more consistent experience as soon > as possible. > > It would be possible however to use the Neutron DHCP agent in the > short term to solve this limitation but in the long term we intend to > implement support for baremetal provisioning in the OVN built-in DHCP > server. > > > What about CI? > --------------------- > > * networking-ovn has: > * Devstack based Tempest (API, scenario from Tempest and Neutron > Tempest plugin) against the latest released OVS version, and against > OVS master (thus also OVN master) > * Devstack based Rally > * Grenade > * A multinode, container based TripleO job that installs and issues a > basic VM connectivity scenario test > * Supports Python 3 and 2 > * TripleO has currently OVN enabled in one quickstart featureset (fs30). > > Are there any known parity issues with ML2/OVS? > ------------------------------------------------------------------- > > * OVN supports VLAN provider networks, but not VLAN tenant networks. > This will be addressed and is being tracked in RHBZ 1561880 [7] > * SRIOV: A limitation exists for this scenario where OVN needs to > support VLAN tenant networks and Neutron DHCP Agent has to be > deployed. The goal is to include support in OVN to get rid of Neutron > DHCP agent. 
[8] > * QoS: Lack of support for DSCP marking and egress bandwidth limiting > RHBZ 1503494 [9] > * OVN does not presently support the new Security Groups logging API > RHBZ 1619266 [10] > * OVN does not correctly support Jumbo frames for North/South traffic > RHBZ 1547074 [11] > * OVN built-in DHCP server currently can not be used to provision > baremetal nodes (RHBZ 1622154 [12]) (this affects the undercloud and > overcloud’s baremetal-to-tenant use case). > * End-to-end encryption support in TripleO (RHBZ 1601926 [13]) > > More info at [14]. > > > How does the performance look like? > ------------------------------------------------- > > We have carried out different performance tests. Overall, ML2/OVN > outperforms ML2/OVS in most of the operations as this graph [15] > shows. > Only creating networks and listing ports are slower which is mostly > due to the fact that ML2/OVN creates an extra port (for metadata) upon > network creation so the amount of ports listed for the same rally task > is 2x for the ML2/OVN case. > > Also, the resources utilization is lower in ML2/OVN [16] vs ML2/OVS > [17] mainly due to the lack of agents and not using RPC. > > OVN only supports VLAN and Geneve (tunneled) networks, while ML2/OVS > uses VXLAN. What, if any, is the impact? What about hardware offload? > > ----------------------------------------------------------------------------------------------------- > > Good question! We asked this ourselves, and research showed that this > is not a problem. Normally, NICs that support VXLAN also support > Geneve hardware offload. Interestingly, even in the cases where they > don’t, performance was found to be better using Geneve due to other > optimizations that Geneve benefits from. More information can be found > in Russell’s Bryant blog [18], who did extensive work in this space. > > > Links > ==== > > [0] https://imgur.com/a/oOmuAqj > [1] https://imgur.com/a/N9jrIXV > [2] https://www.youtube.com/watch?v=sgc7myiX6ts > [3] https://docs.openstack.org/networking-ovn/queens/admin/index.html > [4] https://github.com/openstack/networking-ovn > [5] https://review.openstack.org/#/c/593056/ > [6] https://github.com/openstack/networking-ovn/tree/master/migration > [7] https://bugzilla.redhat.com/show_bug.cgi?id=1561880 > [8] > https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html > [9] https://bugzilla.redhat.com/show_bug.cgi?id=1503494 > [10] https://bugzilla.redhat.com/show_bug.cgi?id= 1619266 > [11] https://bugzilla.redhat.com/show_bug.cgi?id= 1547074 > [12] https://bugzilla.redhat.com/show_bug.cgi?id= 1622154 > [13] https://bugzilla.redhat.com/show_bug.cgi?id= 1601926 > [14] https://wiki.openstack.org/wiki/Networking-ovn > [15] https://imgur.com/a/4QtaN6b > [16] https://imgur.com/a/N9jrIXV > [17] https://imgur.com/a/oOmuAqj > [18] > https://blog.russellbryant.net/2017/05/30/ovn-geneve-vs-vxlan-does-it-matter/ > > > Thanks! > Daniel Alvarez > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Miguel Ángel Ajo OSP / Networking DFG, OVN Squad Engineering -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bxzhu_5355 at 163.com Thu Oct 25 09:23:03 2018 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Thu, 25 Oct 2018 17:23:03 +0800 (GMT+08:00) Subject: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend In-Reply-To: <20181023140140.rs3sf6huy4czgtak@exaesuubou5k> References: <73dab45e.1a3d.1668a62f317.Coremail.bxzhu_5355@163.com> <8f8b257a-5219-60e1-8760-d905f5bb48ff@gmail.com> <47653a27.59b1.1668cea5d9b.Coremail.bxzhu_5355@163.com> <40297115-571b-c07c-88b4-9e0d24b7301b@gmail.com> <8e1eba3.1f61.16699e1111d.Coremail.bxzhu_5355@163.com> <2f610cad-7a08-78cf-5ca4-63e0ea59d661@gmail.com> <20181023140140.rs3sf6huy4czgtak@exaesuubou5k> Message-ID: <67b36dd5.48bf.166aa88ca3a.Coremail.bxzhu_5355@163.com> Great, Jon. Thanks for your reply. I am looking forward to your report. Cheers, Boxiang On 10/23/2018 22:01, Jon Bernard wrote: * melanie witt wrote: On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote: I created a new vm and a new volume with type 'ceph' [so the volume will be created on one of the two hosts; I assume the volume was created on host dev at rbd-1#ceph this time]. The next step is to attach the volume to the vm. Finally, I want to migrate the volume from host dev at rbd-1#ceph to host dev at rbd-2#ceph, but it failed with the exception 'NotImplementedError(_("Swap only supports host devices"))'. So my real problem is: is there any work to migrate an *in-use* volume (*ceph rbd*) from one host (pool) to another host (pool) in the same ceph cluster? The only difference between the spec [2] and my case is that one is *available* (the spec) and the other is *in-use* (my case). [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/ [2] https://review.openstack.org/#/c/296150 Ah, I think I understand now, thank you for providing all of those details. And I think you explained it in your first email, that cinder supports migration of ceph volumes if they are 'available' but not if they are 'in-use'. Apologies that I didn't get your meaning the first time. I see now the code you were referring to is this [3]:

    if volume.status not in ('available', 'retyping', 'maintenance'):
        LOG.debug('Only available volumes can be migrated using backend '
                  'assisted migration. Falling back to generic migration.')
        return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance', it's falling back to generic migration, which will end up with an error in nova because the source_path is not set in the volume config. Can anyone from the cinder team chime in about whether the ceph volume migration could be expanded to allow migration of 'in-use' volumes? Is there a reason not to allow migration of 'in-use' volumes? Generally speaking, Nova must facilitate the migration of a live (or in-use) volume. A volume attached to a running instance requires code in the I/O path to correctly route traffic to the correct location - so Cinder must refuse (or defer) a migrate operation if the volume is attached. Until somewhat recently, Qemu and Libvirt did not support migration to non-block (RBD) targets, which is the reason for the lack of support. I believe we now have all of the pieces to perform this operation successfully, but I suspect it will require a setup with correct versions of all the related software. I will try to verify this during the current release cycle and report back.
-- Jon [3] https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621 Cheers, -melanie __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Thu Oct 25 09:38:55 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 25 Oct 2018 18:38:55 +0900 Subject: [openstack-dev] [Searchlight] Team meeting today reminder Message-ID: Hi team, It's a kind reminder that we will have the team meeting at 12:00 UTC today on the #openstack-meeting-4 channel. Hope to see you there :) -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Oct 25 10:05:03 2018 From: neil at tigera.io (Neil Jerram) Date: Thu, 25 Oct 2018 11:05:03 +0100 Subject: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance In-Reply-To: References: Message-ID: I'm still seeing the same problem after disabling AppArmor, so I think it must be some other root problem. On Wed, Oct 24, 2018 at 2:41 PM Neil Jerram wrote: > > Thanks so much for these hints, Erlon. I will look closer at AppArmor. > > Neil > > On Wed, Oct 24, 2018 at 1:41 PM Erlon Cruz wrote: > > > > PS. Don't forget that if you change or disable AppArmor you will have to reboot the host so the kernel gets reloaded. > > > > On Wed, Oct 24, 2018 at 09:40, Erlon Cruz wrote: > >> > >> I think that there's a chance that AppArmor is blocking the access. Have you checked the dmesg messages related to apparmor? > >> > >> On Fri, Oct 19, 2018 at 09:38, Neil Jerram wrote: > >>> > >>> Wracking my brains over this one, would appreciate any pointers... > >>> > >>> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu Bionic. The first compute node is an NFS server for /var/lib/nova/instances, and the other compute nodes mount that as NFS clients. > >>> > >>> Problem: Sometimes, when launching an instance which is scheduled to one of the client nodes, nova-compute (in imagebackend.py) gets Permission Denied (errno 13) when calling utime to touch the timestamp on the instance file.
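To be clear about exactly what is failing: the call boils down to nothing more than the following. This is a minimal sketch -- the path is illustrative, and in nova the call is made via the privsep helper rather than directly:

    import os

    # Illustrative path on the NFS-mounted share; the real file is the
    # instance disk/timestamp file under /var/lib/nova/instances/
    path = '/var/lib/nova/instances/INSTANCE_UUID/disk'
    os.utime(path, None)  # intermittently raises PermissionError (errno 13)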
> >>> > >>> Through various bits of debugging and hackery, I've established that: > >>> > >>> - it looks like the problem never occurs when this is the call that bootstraps the privsep setup; but it does occur quite frequently on later calls > >>> > >>> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in between) > >>> > >>> - the instance file does exist, and is owned by root with read/write permission for root > >>> > >>> - the privsep helper is running as root > >>> > >>> - the privsep helper receives and executes the request - so it's not a problem with communication between nova-compute and the helper > >>> > >>> - root is uid 0 on both NFS server and client > >>> > >>> - NFS setup does not have the root_squash option > >>> > >>> - there is some AppArmor setup, on both client and server, and I haven't yet worked out whether that might be relevant. > >>> > >>> Any ideas? > >>> > >>> Many thanks, > >>> Neil > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dmccowan at cisco.com Thu Oct 25 12:46:44 2018 From: dmccowan at cisco.com (Dave McCowan (dmccowan)) Date: Thu, 25 Oct 2018 12:46:44 +0000 Subject: [openstack-dev] [barbican][tc] Seeking feedback on the OpenStack cloud vision In-Reply-To: <2d1f4240-5e59-340e-91a2-48307e58f601@redhat.com> References: <2d1f4240-5e59-340e-91a2-48307e58f601@redhat.com> Message-ID: Hello Zane-- Yes, this vision is consistent with the Barbican team's vision. Barbican provides an abstraction layer over HSMs and other secret storage services. We have a plugin architecture to enable this abstraction over a variety of backends. Vault is a recent addition to our supported options. Barbican uses Keystone for authentication and oslo.policy for RBAC, which allows for multi-tenancy and makes secret storage consistent with other OpenStack services. The topic of cross-project dependencies is one we've been wrestling with for a while. At the Pike PTG[1], we had discussions with the Architecture Working Group on how to address this. We concluded that the cross-project requirement should not be on Barbican, but on a "Castellan compatible secret store". At the time, Barbican was the only choice, but we wanted to encourage new development. We shifted ownership of Castellan (a python key manager abstraction layer) from the Barbican team to the Oslo team. The idea was that people would write Castellan plugins for key managers other than Barbican. Later that year, a Castellan plugin for Vault was contributed.[2] At this time, the direct-to-vault plugin does not use Keystone for authentication or oslo.policy for RBAC. Users can configure the Barbican-to-Vault architecture if they need to meet those requirements. tl;dr: This vision looks good. The Castellan and Barbican software provides abstraction for either the key storage or the key manager, so the cross project dependency can be "a key manager", instead of specifically Barbican. 
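To illustrate what that abstraction buys a consuming project, this is roughly what Castellan usage looks like. A sketch only -- the backend (Barbican or Vault) is chosen by oslo.config, and credential/context handling is elided here:

    from castellan.common.objects import passphrase
    from castellan import key_manager
    from oslo_context import context

    # key_manager.API() returns whichever Castellan plugin is configured
    # ([key_manager] in oslo.config); the consuming code is identical
    # whether that backend is Barbican or Vault.
    manager = key_manager.API()
    ctxt = context.RequestContext()  # real callers carry auth info here

    secret_id = manager.store(ctxt, passphrase.Passphrase('super-secret'))
    retrieved = manager.get(ctxt, secret_id)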
--Dave [1] https://etherpad.openstack.org/p/barbican-pike-ptg-barbican-discussion [2] https://review.openstack.org/#/c/483080/ On 10/24/18, 11:16 AM, "Zane Bitter" wrote: >Greetings, Barbican team! >As you may be aware, I've been working with other folks in the community >on documenting a vision for OpenStack clouds (formerly known as the >'Technical Vision') - essentially to interpret the mission statement in >long-form, in a way that we can use to actually help guide decisions. >You can read the latest draft here: https://review.openstack.org/592205 > >We're trying to get feedback from as many people as possible - in many >ways the value is in the process of coming together to figure out what >we're trying to achieve as a community with OpenStack and how we can >work together to build it. The document is there to help us remember >what we decided so we don't have to do it all again over and over. > >The vision is structured with two sections that apply broadly to every >project in OpenStack - describing the principles that we believe are >essential to every cloud, and the ones that make OpenStack different >from some other clouds. The third section is a list of design goals that >we want OpenStack as a whole to be able to meet - ideally each project >would be contributing toward one or more of these design goals. > >Barbican provides an abstraction over HSMs and software equivalents >(like Vault), so the immediate design goal that it meets is the >'Hardware Virtualisation' one. However, the most interesting part of the >document for the Barbican team is probably the section on cross-project >dependencies. In discussions at the PTG, the TC concluded that we >shouldn't force projects to adopt hard dependencies on other services >(like Barbican), but recommend that they do so when there is a benefit >to the user. The challenge here I think is that not duplicating >security-sensitive code such as secret storage is well known to be >something that is both of great benefit to the user and highly tempting >to take a shortcut on. Your feedback on whether we have got the right >balance is important. > >If you would like me or another TC member to join one of your team IRC >meetings to discuss further what the vision means for your team, please >reply to this thread to set it up. You are also welcome to bring up any >questions in the TC IRC channel, #openstack-tc - there's more of us >around during Office Hours >(https://governance.openstack.org/tc/#office-hours), but you can talk to >us at any time. > >Feedback can also happen either in this thread or on the review >https://review.openstack.org/592205 > >If the team is generally happy with the vision as it is and doesn't have >any specific feedback, that's cool but I'd like to request that at least >the PTL leave a vote on the review. It's important to know whether we >are actually developing a consensus in the community or just talking to >ourselves :) > >many thanks, >Zane. 
> >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tobias.urdin at binero.se Thu Oct 25 14:00:06 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 25 Oct 2018 16:00:06 +0200 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> Message-ID: <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se> Might as well throw it out here. After a lot of troubleshooting we were able to narrow our issue down to our test environment running qemu virtualization, we moved our compute node to hardware and used kvm full virtualization instead. We could properly reproduce the issue where generating a CSR from a private key and then trying to verify the CSR would fail complaining about "Signature did not match the certificate request" We suspect qemu floating point emulation caused this, the same OpenSSL function that validates a CSR is the one used when validating the SSL handshake which caused our issue. After going through the whole stack, we have Octavia working flawlessly without any issues at all. Best regards Tobias On 10/23/2018 04:31 PM, Tobias Urdin wrote: > Hello Erik, > > Could you specify the DNs you used for all certificates just so that I > can rule it out on my side. > You can redact anything sensitive with some to just get the feel on how > it's configured. > > Best regards > Tobias > > On 10/22/2018 04:47 PM, Erik McCormick wrote: >> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: >>> Hello, >>> >>> I've been having a lot of issues with SSL certificates myself, on my >>> second trip now trying to get it working. >>> >>> Before I spent a lot of time walking through every line in the DevStack >>> plugin and fixing my config options, used the generate >>> script [1] and still it didn't work. >>> >>> When I got the "invalid padding" issue it was because of the DN I used >>> for the CA and the certificate IIRC. >>> >>> > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >>> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa >>> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), >>> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), >>> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) >>> > 19:47 < tobias-urdin> after a quick google "The problem was that my >>> CA DN was the same as the certificate DN." >>> >>> IIRC I think that solved it, but then again I wouldn't remember fully >>> since I've been at so many different angles by now. >>> >>> Here is my IRC logs history from the #openstack-lbaas channel, perhaps >>> it can help you out >>> http://paste.openstack.org/show/732575/ >>> >> Tobias, I owe you a beer. This was precisely the issue. I'm deploying >> Octavia with kolla-ansible. It only deploys a single CA. After hacking >> the templates and playbook to incorporate a separate server CA, the >> amphorae now load and provision the required namespace. 
I'm adding a >> kolla tag to the subject of this in hopes that someone might want to >> take on changing this behavior in the project. Hopefully after I get >> through Upstream Institute in Berlin I'll be able to do it myself if >> nobody else wants to do it. >> >> For certificate generation, I extracted the contents of >> octavia_certs_install.yml (which sets up the directory structure, >> openssl.cnf, and the client CA), and octavia_certs.yml (which creates >> the server CA and the client certificate) and mashed them into a >> separate playbook just for this purpose. At the end I get: >> >> ca_01.pem - Client CA Certificate >> ca_01.key - Client CA Key >> ca_server_01.pem - Server CA Certificate >> cakey.pem - Server CA Key >> client.pem - Concatenated Client Key and Certificate >> >> If it would help to have the playbook, I can stick it up on github >> with a huge "This is a hack" disclaimer on it. >> >>> ----- >>> >>> Sorry for hijacking the thread but I'm stuck as well. >>> >>> I've in the past tried to generate the certificates with [1] but now >>> moved on to using the openstack-ansible way of generating them [2] >>> with some modifications. >>> >>> Right now I'm just getting: Could not connect to instance. Retrying.: >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) >>> from the amphoras, haven't got any further but I've eliminated a lot of >>> stuck in the middle. >>> >>> Tried deploying Ocatavia on Ubuntu with python3 to just make sure there >>> wasn't an issue with CentOS and OpenSSL versions since it tends to lag >>> behind. >>> Checking the amphora with openssl s_client [3] it gives the same one, >>> but the verification is successful just that I don't understand what the >>> bad signature >>> part is about, from browsing some OpenSSL code it seems to be related to >>> RSA signatures somehow. >>> >>> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad >>> signature:s3_clnt.c:2032: >>> >>> So I've basicly ruled out Ubuntu (openssl-1.1.0g) and CentOS >>> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm >>> back to something related >>> to the certificates or the communication between the endpoints, or what >>> actually responds inside the amphora (gunicorn IIUC?). Based on the >>> "verify" functions actually causing that bad signature error I would >>> assume it's the generated certificate that the amphora presents that is >>> causing it. >>> >>> I'll have to continue the troubleshooting to the inside of the amphora, >>> I've used the test-only amphora image before but have now built my own >>> one that is >>> using the amphora-agent from the actual stable branch, but same issue >>> (bad signature). >>> >>> For verbosity this is the config options set for the certificates in >>> octavia.conf and which file it was copied from [4], same here, a >>> replication of what openstack-ansible does. >>> >>> Appreciate any feedback or help :) >>> >>> Best regards >>> Tobias >>> >>> [1] >>> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh >>> [2] http://paste.openstack.org/show/732483/ >>> [3] http://paste.openstack.org/show/732486/ >>> [4] http://paste.openstack.org/show/732487/ >>> >>> On 10/20/2018 01:53 AM, Michael Johnson wrote: >>>> Hi Erik, >>>> >>>> Sorry to hear you are still having certificate issues. >>>> >>>> Issue #2 is probably caused by issue #1. 
Since we hot-plug the tenant >>>> network for the VIP, one of the first steps after the worker connects >>>> to the amphora agent is finishing the required configuration of the >>>> VIP interface inside the network namespace on the amphroa. >>>> >> Thanks for the hint on the workflow of this. I hadn't gotten deep >> enough into the code to find that yet, but I suspected it was blocking >> since the namespace never got created either. Thanks >> >>>> If I remember correctly, you are attempting to configure Octavia with >>>> the dual CA option (which is good for non-development use). >>>> >>>> This is what I have for notes: >>>> >>>> [certificates] gets the following: >>>> cert_generator = local_cert_generator >>>> ca_certificate = server CA's "server.pem" file >>>> ca_private_key = server CA's "server.key" file >>>> ca_private_key_passphrase = pass phrase for ca_private_key >>>> [controller_worker] >>>> client_ca = Client CA's ca_cert file >>>> [haproxy_amphora] >>>> client_cert = Client CA's client.pem file (I think with it's key >>>> concatenated is what rm_work said the other day) >>>> server_ca = Server CA's ca_cert file >>>> >> This is all very helpful. It's a bit difficult to know what goes where >> the way the documentation is written presently. For something that's >> going to be the defacto standard for loadbalancing, we as a community >> need to do a better job of documenting how to set up, configure, and >> manage this in production. I'm trying to capture my lessons learned >> and processes as I go to help with that if I can. >> >> -Erik >> >>>> That said, I can probably run through this and write something up next >>>> week that is more step-by-step/detailed. >>>> >>>> Michael >>>> >>>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick >>>> wrote: >>>>> Apologies for cross-posting, but in the event that these might be >>>>> worth filing as bugs, I wanted the Octavia devs to see it as well... >>>>> >>>>> I've been wrestling with getting Octavia up and running and have >>>>> become stuck on two issues. I'm hoping someone has run into these >>>>> before. My google foo has come up empty. >>>>> >>>>> Issue 1: >>>>> When the Octavia controller tries to poll the amphora instance, it >>>>> tries repeatedly and eventually fails. The error on the controller >>>>> side is: >>>>> >>>>> 2018-10-19 14:17:39.181 26 ERROR >>>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection >>>>> retries (currently set to 300) exhausted. The amphora is unavailable. 
>>>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries >>>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by >>>>> SSLError(SSLError("bad handshake: Error([('rsa routines', >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>>>> 'tls_process_server_certificate', 'certificate verify >>>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', >>>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 >>>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', >>>>> 'tls_process_server_certificate', 'certificate verify >>>>> failed')],)",),)) >>>>> >>>>> On the amphora side I see: >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from >>>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake >>>>> failure (_ssl.c:1754) >>>>> >>>>> I've generated certificates both with the script in the Octavia git >>>>> repo, and with the Openstack Ansible playbook. I can see that they are >>>>> present in /etc/octavia/certs. >>>>> >>>>> I'm using the Kolla (Queens) containers for the control plane so I'm >>>>> sure I've satisfied all the python library constraints. >>>>> >>>>> Issue 2: >>>>> I"m not sure how it gets configured, but the tenant network interface >>>>> (ens6) never comes up. I can spawn other instances on that network >>>>> with no issue, and I can see that Neutron has the port attached to the >>>>> instance. However, in the instance this is all I get: >>>>> >>>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a >>>>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >>>>> group default qlen 1 >>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >>>>> inet 127.0.0.1/8 scope host lo >>>>> valid_lft forever preferred_lft forever >>>>> inet6 ::1/128 scope host >>>>> valid_lft forever preferred_lft forever >>>>> 2: ens3: mtu 9000 qdisc pfifo_fast >>>>> state UP group default qlen 1000 >>>>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff >>>>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 >>>>> valid_lft forever preferred_lft forever >>>>> inet6 fe80::f816:3eff:fe30:c460/64 scope link >>>>> valid_lft forever preferred_lft forever >>>>> 3: ens6: mtu 1500 qdisc noop state DOWN group >>>>> default qlen 1000 >>>>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff >>>>> >>>>> There's no evidence of the interface anywhere else including udev rules. >>>>> >>>>> Any help with either or both issues would be greatly appreciated. 
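Looping back to this original report with what we found above: the signature verification that breaks under qemu can be exercised completely outside Octavia. With pyca/cryptography, generating a CSR from a fresh key and checking its own signature is just the following (key size and subject are arbitrary; the OpenSSL routine this exercises is the one that failed for us under qemu and passes on kvm/hardware):

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, u'amphora-test')]
    )).sign(key, hashes.SHA256(), default_backend())

    # verifies the CSR's signature against its own public key
    print(csr.is_signature_valid)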
>>>>> >>>>> Cheers, >>>>> Erik From dms at danplanet.com Thu Oct 25 14:42:37 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 25 Oct 2018 07:42:37 -0700 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> (Sam Morrison's message of "Thu, 25 Oct 2018 10:55:15 +1100") References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> Message-ID: > I guess our architecture is pretty unique in a way but I wonder if > other people are also a little scared about the whole all DB servers > need to be up to serve API requests? When we started down this path, we acknowledged that this would create a different access pattern which would require ops to treat the cell databases differently. The input we were getting at the time was that the benefits outweighed the costs here, and that we'd work on caching to deal with performance issues if/when that became necessary. > I've been thinking of some hybrid cellsv1/v2 thing where we'd still > have the top level api cell DB but the API would only ever read from > it. Nova-api would only write to the compute cell DBs. > Then keep the nova-cells processes just doing instance_update_at_top to keep the nova-cell-api db up to date. I'm definitely not in favor of doing more replication in python to address this. What was there in cellsv1 was lossy, even for the subset of things it actually supported (which didn't cover all nova features at the time and hasn't kept pace with features added since, obviously). About a year ago, I proposed that we add another "read only mirror" field to the cell mapping, which nova would use if and only if the primary cell database wasn't reachable, and only for read operations.
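In code terms, the fallback would be roughly this (a sketch only -- that proposal never merged, and the field and helper names here are illustrative):

    # 'read_only_mirror' is the proposed extra field on the cell mapping;
    # connect() stands in for the real DB connection layer.
    def cell_connection(cell_mapping, connect):
        try:
            return connect(cell_mapping.database_connection), False
        except ConnectionError:
            mirror = getattr(cell_mapping, 'read_only_mirror', None)
            if mirror:
                # fall back to the one-way replicated copy, reads only
                return connect(mirror), True
            raise

The returned flag tells the caller it must treat the result as read-only, which is fine for listing instances or counting quota usage, but never for writes.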
The ops, if they wanted to use this, would configure plain old one-way mysql replication of the cell databases to a highly-available server (probably wherever the api_db is) and nova could use that as a read-only cache for things like listing instances and calculating quotas. The reaction was (very surprisingly to me) negative to this option. It seems very low-effort, high-gain, and proper re-use of existing technologies to me, without us having to replicate a replication engine (hah) in python. So, I'm curious: does that sound more palatable to you? --Dan From amal.kammoun.2 at gmail.com Thu Oct 25 15:10:18 2018 From: amal.kammoun.2 at gmail.com (amal kammoun) Date: Thu, 25 Oct 2018 17:10:18 +0200 Subject: [openstack-dev] [Monasca] Where Monasca stores collected Data? Message-ID: Hello Monasca team, I'm experiencing several problems with monasca. I have Monasca running with OpenStack. I added alarms to my system but I cannot see where monasca stores the collected data. With Grafana I cannot see any datapoints from the influxdb datasource. In influxdb I have all the tables for the measurements that monasca collects, but with no values. How can I fix this problem? Thank you! Amal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Oct 25 15:15:39 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 25 Oct 2018 17:15:39 +0200 Subject: [openstack-dev] [TripleO] easily identifying how services are configured Message-ID: <67a8a99e-6d65-6798-6863-73cc7afca4ec@redhat.com> On 10/19/18 8:04 PM, Alex Schultz wrote: > On Fri, Oct 19, 2018 at 10:53 AM James Slagle wrote: >> >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: >> > Additionally I took a stab at combining the puppet/docker service >> > definitions for the aodh services in a similar structure to start >> > reducing the overhead we've had from maintaining the docker/puppet >> > implementations separately. You can see the patch >> > https://review.openstack.org/#/c/611188/ for an additional example of >> > this. >> >> That patch takes the approach of removing baremetal support. Is that >> what we agreed to do? >> > > Since it's deprecated since Queens[0], yes? I think it is time to stop > continuing this method of installation. Given that I'm not even sure My point and concern remain as before: unless we have fully dropped the docker support for Queens (and the downstream LTS released for it), we should not modify the t-h-t directory tree, due to the maintenance complexity it adds for backports. > the upgrade process even works anymore with baremetal, I don't think > there's a reason to keep it as it directly impacts the time it takes > to perform deployments and also contributes to increased complexity > all around. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html > >> I'm not specifically opposed, as I'm pretty sure the baremetal >> implementations are no longer tested anywhere, but I know that Dan had >> some concerns about that last time around. >> >> The alternative we discussed was using jinja2 to include common >> data/tasks in both the puppet/docker/ansible implementations. That >> would also result in reducing the number of Heat resources in these >> stacks and hopefully reduce the amount of time it takes to >> create/update the ServiceChain stacks.
>> > > I'd rather we officially get rid of the one of the two methods and > converge on a single method without increasing the complexity via > jinja to continue to support both. If there's an improvement to be had > after we've converged on a single structure for including the base > bits, maybe we could do that then? > > Thanks, > -Alex -- Best regards, Bogdan Dobrelya, Irc #bogdando From aschultz at redhat.com Thu Oct 25 15:24:36 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 25 Oct 2018 09:24:36 -0600 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: <67a8a99e-6d65-6798-6863-73cc7afca4ec@redhat.com> References: <67a8a99e-6d65-6798-6863-73cc7afca4ec@redhat.com> Message-ID: On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya wrote: > > > On 10/19/18 8:04 PM, Alex Schultz wrote: > > On Fri, Oct 19, 2018 at 10:53 AM James Slagle wrote: > >> > >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: > >> > Additionally I took a stab at combining the puppet/docker service > >> > definitions for the aodh services in a similar structure to start > >> > reducing the overhead we've had from maintaining the docker/puppet > >> > implementations seperately. You can see the patch > >> > https://review.openstack.org/#/c/611188/ for an additional example of > >> > this. > >> > >> That patch takes the approach of removing baremetal support. Is that > >> what we agreed to do? > >> > > > > Since it's deprecated since Queens[0], yes? I think it is time to stop > > continuing this method of installation. Given that I'm not even sure > > My point and concern retains as before, unless we fully dropped the > docker support for Queens (and downstream LTS released for it), we > should not modify the t-h-t directory tree, due to associated > maintenance of backports complexity reasons > This is why we have duplication of things in THT. For environment files this is actually an issue due to the fact they are the end user interface. But these service files should be internal and where they live should not matter. We already have had this in the past and have managed to continue to do backports so I don't think this as a reason not to do this clean up. It feels like we use this as a reason not to actually move forward on cleanup and we end up carrying the tech debt. By this logic, we'll never be able to cleanup anything if we can't handle moving files around. I think there are some patches to do soft links (dprince might be able to provide the patches) which could at least handle this backward compatibility around locations, but I think we need to actually move forward on the simplification of the service definitions unless there's a blocking technical issue with this effort. Thanks, -Alex > > the upgrade process even works anymore with baremetal, I don't think > > there's a reason to keep it as it directly impacts the time it takes > > to perform deployments and also contributes to increased complexity > > all around. > > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html > > > >> I'm not specifically opposed, as I'm pretty sure the baremetal > >> implementations are no longer tested anywhere, but I know that Dan had > >> some concerns about that last time around. > >> > >> The alternative we discussed was using jinja2 to include common > >> data/tasks in both the puppet/docker/ansible implementations. 
That > >> would also result in reducing the number of Heat resources in these > >> stacks and hopefully reduce the amount of time it takes to > >> create/update the ServiceChain stacks. > >> > > > > I'd rather we officially get rid of the one of the two methods and > > converge on a single method without increasing the complexity via > > jinja to continue to support both. If there's an improvement to be had > > after we've converged on a single structure for including the base > > bits, maybe we could do that then? > > > > Thanks, > > -Alex > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From witold.bedyk at est.fujitsu.com Thu Oct 25 15:50:38 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Thu, 25 Oct 2018 15:50:38 +0000 Subject: [openstack-dev] [Monasca] Where Monasca stores collected Data? In-Reply-To: References: Message-ID: <401f2672c92048e08309b32e394734cc@R01UKEXCASM126.r01.fujitsu.local> Hi Amal, please check if you collect and persist the measurements correctly. Check agent forwarder logs. Check the logs of API, with DEBUG log level you should see all the messages sent to Kafka. Check persister logs. You should see information about consuming messages from `metrics` topic. Hope it helps Witek From: amal kammoun Sent: Donnerstag, 25. Oktober 2018 17:10 To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Monasca] Where Monasca stores collected Data? Hello Monasca team, I'm experiencing several problems with monasca. I have Monasca running with OpenStack. I added alarms to my system but I cannot see where monasca stores the collected data. With Grafana I cannot see any datapoints from the influxdb datasource. In the influxdb I have all the tables with the measurments that monasca will collect but with no vlues. How can I fix this problem? Thank you! Amal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Oct 25 15:58:32 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 25 Oct 2018 10:58:32 -0500 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> Message-ID: <2d1ae8ec-0bf4-2761-3781-151e3fc62303@gmail.com> On 10/24/2018 6:55 PM, Sam Morrison wrote: > I’ve been thinking of some hybrid cellsv1/v2 thing where we’d still have the top level api cell DB but the API would only ever read from it. Nova-api would only write to the compute cell DBs. > Then keep the nova-cells processes just doing instance_update_at_top to keep the nova-cell-api db up to date. There was also the "read from searchlight" idea [1] but that died in Boston. 
[1] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/list-instances-using-searchlight.html -- Thanks, Matt From aspiers at suse.com Thu Oct 25 17:09:22 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 25 Oct 2018 18:09:22 +0100 Subject: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0) In-Reply-To: References: <20181024082154.a3gyn4cgtu5o3sj3@redhat.com> <20181024122554.xilfw3xx6igcg5e6@pacific.linksys.moosehall> Message-ID: <20181025170922.i3e53tvw5yqtpe4a@pacific.linksys.moosehall> Hi Tim, Tim Bell wrote: >Adam, > >Personally, I would prefer the approach where the OpenStack resource agents are part of the repository in which they are used. Thanks for chipping in. Just checking - by this you mean the resource-agents rather than openstack-resource-agents, right? Obviously the agents aren't usable as standalone components in either context, without a cloud's worth of dependencies including Pacemaker. >This is also the approach taken in other open source projects such as Kubernetes and avoids the inconsistency where, for example, Azure resource agents are in the Cluster Labs repository but OpenStack ones are not. Right. I suspect there's no clearly defined scope for the resource-agents repository at the moment, so it's probably hard to say "agent X belongs here but agent Y doesn't". Although has been alluded to elsewhere in this thread, that in itself could be problematic in terms of the repository constantly growing. >This can mean that people miss there is OpenStack integration available. Yes, discoverability is important, although I think we can make more impact on that via better documentation (another area I am struggling to make time for ...) >This does not reflect, in any way, the excellent efforts and results made so far. I don't think it would negate the possibility to include testing in the OpenStack gate since there are other examples where code is pulled in from other sources. There are a number of technical barriers, or at very least inconveniences, here - because the resource-agents repository is hosted on GitHub, therefore none of the normal processes based around Gerrit apply. I guess it's feasible that since Zuul v3 gained GitHub support, it could orchestrate running OpenStack CI on GitHub pull requests, although it would have to make sure to only run on PRs which affect the OpenStack RAs, and none of the others. Additionally, we'd probably need tags / releases corresponding to each OpenStack release, which means polluting a fundamentally non-OpenStack-specific repository with OpenStack-specific metadata. I think either way we go, there is ugliness. Personally I'm still leaning towards continued use of openstack-resource-agents, but I'm happy to go with the majority consensus if we can get a semi-respectable number of respondees :-) From edmondsw at us.ibm.com Thu Oct 25 17:29:09 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Thu, 25 Oct 2018 13:29:09 -0400 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: melanie witt wrote on 10/25/2018 02:14:40 AM: > On Thu, 25 Oct 2018 14:12:51 +0900, ボーアディネシュ[bhor Dinesh] wrote: > > We were having a similar use case like *Preemptible Instances* called as > > *Rich-VM’s* which > > > > are high in resources and are deployed each per hypervisor. 
We have a > > custom code in > > > > production which tracks the quota for such instances separately and for > > the same reason > > > > we have *rich_instances* custom quota class same as *instances* quota class. > > Please see the last reply I recently sent on this thread. I have been > thinking the same as you about how we could use quota classes to > implement the quota piece of preemptible instances. I think we can > achieve the same thing using unified limits, specifically registered > limits [1], which span across all projects. So, I think we are covered > moving forward with migrating to unified limits and deprecation of quota > classes. Let me know if you spot any issues with this idea. And we could finally close https://bugs.launchpad.net/nova/+bug/1602396 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Oct 25 17:34:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 25 Oct 2018 12:34:06 -0500 Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling Message-ID: Hello OSA/TripleO people, A plan/checklist was put in place at the Stein PTG for extracting placement from nova [1]. The first item in that list is done in grenade [2], which is the devstack-based upgrade project in the integrated gate. That should serve as a template for the necessary upgrade steps in deployment projects. The related devstack change for extracted placement on the master branch (Stein) is [3]. Note that change has some dependencies. The second point in the plan from the PTG was getting extracted placement upgrade tooling support in a deployment project, notably TripleO (and/or OpenStackAnsible). Given the grenade change is done and passing tests, TripleO/OSA should be able to start coding up and testing an upgrade step when going from Rocky to Stein. My question is who can we name as an owner in either project to start this work? Because we really need to be starting this as soon as possible to flush out any issues before they are too late to correct in Stein. So if we have volunteers or better yet potential patches that I'm just not aware of, please speak up here so we know who to contact about status updates and if there are any questions with the upgrade. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html [2] https://review.openstack.org/#/c/604454/ [3] https://review.openstack.org/#/c/600162/ -- Thanks, Matt From edmondsw at us.ibm.com Thu Oct 25 17:38:06 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Thu, 25 Oct 2018 13:38:06 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> Message-ID: Zane Bitter wrote on 10/22/2018 03:12:46 PM: > On 22/10/18 10:33 AM, Thomas Goirand wrote: > > On 10/19/18 5:17 PM, Zane Bitter wrote: > >> Integration Tests > >> ----------------- > >> > >> Integration tests do test, amongst other things, integration with > >> non-openstack-supplied things in the distro, so it's important that we > >> test on the actual distros we have identified as popular.[2] It's also > >> important that every project be testing on the same distro at the end of > >> a release, so we can be sure they all work together for users. 
> > > > I find it very disturbing to see the project leaning toward only these > > 2 distributions. Why not SuSE & Debian? > > The bottom line is it's because targeting those two catches 88% of our > users. (For once I did not make this statistic up.) > > Also note that in practice I believe almost everything is actually > tested on Ubuntu LTS, and only TripleO is testing on CentOS. It's > difficult to imagine how to slot another distro into the mix without > doubling up on jobs. I think you meant 78%, assuming you were looking at the latest User Survey results [1], page 55. Still a hefty number. It is important to note that the User Survey lumps all versions of a given OS together, whereas the TC reference [2] only considers the latest LTS/stable version. If the User Survey split out latest LTS/stable versions vs. others (e.g. Ubuntu 16.04 LTS), I expect we'd see Ubuntu 18.04 LTS + CentOS 7 adding up to much less than 78%. [1] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf [2] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Oct 25 17:38:32 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 25 Oct 2018 11:38:32 -0600 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: On 10/24/2018 9:10 AM, Jay Pipes wrote: > Nova's API has the ability to create "quota classes", which are > basically limits for a set of resource types. There is something called > the "default quota class" which corresponds to the limits in the > CONF.quota section. Quota classes are basically templates of limits to > be applied if the calling project doesn't have any stored > project-specific limits. > > Has anyone ever created a quota class that is different from "default"? The Compute API specifically says: "Only 'default' quota class is valid and used to set the default quotas, all other quota class would not be used anywhere." What this API does provide is the ability to set new default quotas for *all* projects at once rather than individually specifying new defaults for each project. Chris From jaypipes at gmail.com Thu Oct 25 18:00:08 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 25 Oct 2018 14:00:08 -0400 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> Message-ID: <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> On 10/25/2018 01:38 PM, Chris Friesen wrote: > On 10/24/2018 9:10 AM, Jay Pipes wrote: >> Nova's API has the ability to create "quota classes", which are >> basically limits for a set of resource types. There is something >> called the "default quota class" which corresponds to the limits in >> the CONF.quota section. Quota classes are basically templates of >> limits to be applied if the calling project doesn't have any stored >> project-specific limits. >> >> Has anyone ever created a quota class that is different from "default"? > > The Compute API specifically says: > > "Only 'default' quota class is valid and used to set the default quotas, > all other quota class would not be used anywhere." > > What this API does provide is the ability to set new default quotas for > *all* projects at once rather than individually specifying new defaults > for each project.
> > What this API does provide is the ability to set new default quotas for > *all* projects at once rather than individually specifying new defaults > for each project. It's a "defaults template", yes. The alternative is, you know, to just set the default values in CONF.quota, which is what I said above. Or, if you want project X to have different quota limits from those CONF-driven defaults, then set the quotas for the project to some different values via the os-quota-sets API (or better yet, just use Keystone's /limits API when we write the "limits driver" into Nova). The issue is that the os-quota-classes API is currently blocking *me* writing that "limits driver" in Nova because I don't want to port nova-specific functionality (like quota classes) to a limits driver when the Keystone /limits endpoint doesn't have that functionality and nobody I know of has ever used it. Chris, are you advocating for *keeping* the os-quota-classes API? Best, -jay From melwittt at gmail.com Thu Oct 25 18:44:23 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 25 Oct 2018 11:44:23 -0700 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> Message-ID: On Thu, 25 Oct 2018 14:00:08 -0400, Jay Pipes wrote: > On 10/25/2018 01:38 PM, Chris Friesen wrote: >> On 10/24/2018 9:10 AM, Jay Pipes wrote: >>> Nova's API has the ability to create "quota classes", which are >>> basically limits for a set of resource types. There is something >>> called the "default quota class" which corresponds to the limits in >>> the CONF.quota section. Quota classes are basically templates of >>> limits to be applied if the calling project doesn't have any stored >>> project-specific limits. >>> >>> Has anyone ever created a quota class that is different from "default"? >> >> The Compute API specifically says: >> >> "Only ‘default’ quota class is valid and used to set the default quotas, >> all other quota class would not be used anywhere." >> >> What this API does provide is the ability to set new default quotas for >> *all* projects at once rather than individually specifying new defaults >> for each project. > > It's a "defaults template", yes. > > The alternative is, you know, to just set the default values in > CONF.quota, which is what I said above. Or, if you want project X to > have different quota limits from those CONF-driven defaults, then set > the quotas for the project to some different values via the > os-quota-sets API (or better yet, just use Keystone's /limits API when > we write the "limits driver" into Nova). The issue is that the > os-quota-classes API is currently blocking *me* writing that "limits > driver" in Nova because I don't want to port nova-specific functionality > (like quota classes) to a limits driver when the Keystone /limits > endpoint doesn't have that functionality and nobody I know of has ever > used it. When you say it's blocking you from writing the "limits driver" in nova, are you saying you're picking up John's unified limits spec [1]? It's been in -W mode and hasn't been updated in 4 weeks. In the spec, migration from quota classes => registered limits and deprecation of the existing quota API and quota classes is described. 
Cheers, -melanie [1] https://review.openstack.org/602201 From chris.friesen at windriver.com Thu Oct 25 19:55:46 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 25 Oct 2018 13:55:46 -0600 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> Message-ID: <00144681-ec2f-fbba-7014-7e22ca16d46d@windriver.com> On 10/25/2018 12:00 PM, Jay Pipes wrote: > On 10/25/2018 01:38 PM, Chris Friesen wrote: >> On 10/24/2018 9:10 AM, Jay Pipes wrote: >>> Nova's API has the ability to create "quota classes", which are >>> basically limits for a set of resource types. There is something >>> called the "default quota class" which corresponds to the limits in >>> the CONF.quota section. Quota classes are basically templates of >>> limits to be applied if the calling project doesn't have any stored >>> project-specific limits. >>> >>> Has anyone ever created a quota class that is different from "default"? >> >> The Compute API specifically says: >> >> "Only ‘default’ quota class is valid and used to set the default >> quotas, all other quota class would not be used anywhere." >> >> What this API does provide is the ability to set new default quotas >> for *all* projects at once rather than individually specifying new >> defaults for each project. > > It's a "defaults template", yes. > > > Chris, are you advocating for *keeping* the os-quota-classes API? Nope. I had two points: 1) It's kind of irrelevant whether anyone has created a quota class other than "default" because nova wouldn't use it anyways. 2) The main benefit (as I see it) of the quota class API is to allow dynamic adjustment of the default quotas without restarting services. I totally agree that keystone limits should replace it. I just didn't want the discussion to be focused on the non-default class portion because it doesn't matter. Chris From mriedemos at gmail.com Thu Oct 25 20:06:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 25 Oct 2018 15:06:38 -0500 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? In-Reply-To: <00144681-ec2f-fbba-7014-7e22ca16d46d@windriver.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> <00144681-ec2f-fbba-7014-7e22ca16d46d@windriver.com> Message-ID: <771484d8-1ef6-e2a7-038b-11e142c04831@gmail.com> On 10/25/2018 2:55 PM, Chris Friesen wrote: > 2) The main benefit (as I see it) of the quota class API is to allow > dynamic adjustment of the default quotas without restarting services. I could be making this up, but I want to say back at the Pike PTG people were also complaining that not having an API to change this, and only do it via config, was not good. But if the keystone limits API solves that then it's a non-issue. -- Thanks, Matt From melwittt at gmail.com Thu Oct 25 20:15:33 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 25 Oct 2018 13:15:33 -0700 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova? 
In-Reply-To: <771484d8-1ef6-e2a7-038b-11e142c04831@gmail.com> References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> <00144681-ec2f-fbba-7014-7e22ca16d46d@windriver.com> <771484d8-1ef6-e2a7-038b-11e142c04831@gmail.com> Message-ID: <9d3ee0ba-43f2-7aad-9cb8-1cc5faef2efe@gmail.com> On Thu, 25 Oct 2018 15:06:38 -0500, Matt Riedemann wrote: > On 10/25/2018 2:55 PM, Chris Friesen wrote: >> 2) The main benefit (as I see it) of the quota class API is to allow >> dynamic adjustment of the default quotas without restarting services. > > I could be making this up, but I want to say back at the Pike PTG people > were also complaining that not having an API to change this, and only do > it via config, was not good. But if the keystone limits API solves that > then it's a non-issue. Right, the default limits are "registered limits" [1] in the keystone API. And "project limits" can be set to override "registered limits". So the keystone limits API does solve that case. -melanie [1] https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html#registered-limits From zbitter at redhat.com Thu Oct 25 20:43:09 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 25 Oct 2018 16:43:09 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> Message-ID: On 25/10/18 1:38 PM, William M Edmonds wrote: > Zane Bitter wrote on 10/22/2018 03:12:46 PM: > > On 22/10/18 10:33 AM, Thomas Goirand wrote: > > > On 10/19/18 5:17 PM, Zane Bitter wrote: > > > > > >> Integration Tests > > >> ----------------- > > >> > > >> Integration tests do test, amongst other things, integration with > > >> non-openstack-supplied things in the distro, so it's important that we > > >> test on the actual distros we have identified as popular.[2] It's also > > >> important that every project be testing on the same distro at the > end of > > >> a release, so we can be sure they all work together for users. > > > > > > I find very disturbing to see the project only leaning toward these > only > > > 2 distributions. Why not SuSE & Debian? > > > > The bottom line is it's because targeting those two catches 88% of our > > users. (For once I did not make this statistic up.) > > > > Also note that in practice I believe almost everything is actually > > tested on Ubuntu LTS, and only TripleO is testing on CentOS. It's > > difficult to imagine how to slot another distro into the mix without > > doubling up on jobs. > > I think you meant 78%, assuming you were looking at the latest User > Survey results [1], page 55. Still a hefty number. I never know how to read those weird 3-way bar charts they have in the user survey, but that actually adds up to 91% by the looks of it (I believe you forgot to count RHEL). The numbers were actually slightly lower in the full-year data for 2017 that I used (from https://www.openstack.org/analytics - I can't give you a direct link because Javascript ). > It is important to note that the User Survey lumps all versions of a > given OS together, whereas the TC reference [2] only considers the > latest LTS/stable version. If the User Survey split out latests > LTS/stable versions vs. others (e.g. Ubuntu 16.04 LTS), I expect we'd > see Ubuntu 18.04 LTS + Centos 7 adding up to much less than 78%. This is true, although we don't know by how much. 
(FWIW I can almost guarantee that virtually all of the CentOS/RHEL users are on 7, but I'm sure the same is not the case for Ubuntu 16.04.) > [1] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf > [2] > https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dprince at redhat.com Thu Oct 25 21:43:06 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 25 Oct 2018 17:43:06 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <67a8a99e-6d65-6798-6863-73cc7afca4ec@redhat.com> Message-ID: On Thu, Oct 25, 2018 at 11:26 AM Alex Schultz wrote: > > On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya wrote: > > > > > > On 10/19/18 8:04 PM, Alex Schultz wrote: > > > On Fri, Oct 19, 2018 at 10:53 AM James Slagle wrote: > > >> > > >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz wrote: > > >> > Additionally I took a stab at combining the puppet/docker service > > >> > definitions for the aodh services in a similar structure to start > > >> > reducing the overhead we've had from maintaining the docker/puppet > > >> > implementations seperately. You can see the patch > > >> > https://review.openstack.org/#/c/611188/ for an additional example of > > >> > this. > > >> > > >> That patch takes the approach of removing baremetal support. Is that > > >> what we agreed to do? > > >> > > > > > > Since it's deprecated since Queens[0], yes? I think it is time to stop > > > continuing this method of installation. Given that I'm not even sure > > > > My point and concern retains as before, unless we fully dropped the > > docker support for Queens (and downstream LTS released for it), we > > should not modify the t-h-t directory tree, due to associated > > maintenance of backports complexity reasons > > > > This is why we have duplication of things in THT. For environment > files this is actually an issue due to the fact they are the end user > interface. But these service files should be internal and where they > live should not matter. We already have had this in the past and have > managed to continue to do backports so I don't think this as a reason > not to do this clean up. It feels like we use this as a reason not to > actually move forward on cleanup and we end up carrying the tech debt. > By this logic, we'll never be able to cleanup anything if we can't > handle moving files around. Yeah. The environment files would contain some level of duplication until we refactor our plan storage mechanism to use a plain old tarball (stored in Swift still) instead of storing files in the expanded format. Swift does not support softlinks, but a tarball would and thus would allow us to de-dup things in the future. The patch is here but it needs some love: https://review.openstack.org/#/c/581153/ Dan > > I think there are some patches to do soft links (dprince might be able > to provide the patches) which could at least handle this backward > compatibility around locations, but I think we need to actually move > forward on the simplification of the service definitions unless > there's a blocking technical issue with this effort. 
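(For the curious, the refactor sketched above really is just "store one object instead of thousands of expanded files", something like:

  tar -czf plan.tar.gz -C tripleo-heat-templates/ .
  openstack object create overcloud plan.tar.gz

at which point softlinks inside the tarball come for free. Commands are illustrative of the idea, not the actual implementation in the review.)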
> Thanks, > -Alex > > > > the upgrade process even works anymore with baremetal, I don't think > > > there's a reason to keep it as it directly impacts the time it takes > > > to perform deployments and also contributes to increased complexity > > > all around. > > > > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html > > > > > >> I'm not specifically opposed, as I'm pretty sure the baremetal > > >> implementations are no longer tested anywhere, but I know that Dan had > > >> some concerns about that last time around. > > >> > > >> The alternative we discussed was using jinja2 to include common > > >> data/tasks in both the puppet/docker/ansible implementations. That > > >> would also result in reducing the number of Heat resources in these > > >> stacks and hopefully reduce the amount of time it takes to > > >> create/update the ServiceChain stacks. > > >> > > > > > > I'd rather we officially get rid of one of the two methods and > > > converge on a single method without increasing the complexity via > > > jinja to continue to support both. If there's an improvement to be had > > > after we've converged on a single structure for including the base > > > bits, maybe we could do that then? > > > > > > Thanks, > > > -Alex > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
From dprince at redhat.com Thu Oct 25 22:05:05 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 25 Oct 2018 18:05:05 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: On Wed, Oct 17, 2018 at 11:15 AM Alex Schultz wrote: > > Time to resurrect this thread. > > On Thu, Jul 5, 2018 at 12:14 PM James Slagle wrote: > > > > On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince wrote: > > > Last week I was tinkering with my docker configuration a bit and was a > > > bit surprised that puppet/services/docker.yaml no longer used puppet to > > > configure the docker daemon. It now uses Ansible [1] which is very cool > > > but brings up the question of how should we clearly indicate to > > > developers and users that we are using Ansible vs Puppet for > > > configuration? > > > > > > TripleO has been around for a while now, has supported multiple > > > configuration and service types over the years: os-apply-config, > > > puppet, containers, and now Ansible. In the past we've used rigid > > > directory structures to identify which "service type" was used. More > > > recently we mixed things up a bit more even by extending one service > > > type from another ("docker" services all initially extended the > > > "puppet" services to generate config files and provide an easy upgrade > > > path). > > > > > > Similarly we now use Ansible all over the place for other things in > > > many of our docker and puppet services for things like upgrades. That is > > > all good too. I guess the thing I'm getting at here is just a way to > > > cleanly identify which services are configured via Puppet vs. Ansible. > > > And how can we do that in the least destructive way possible so as not > > > to confuse ourselves and our users in the process. > > > > > > Also, I think it's worth keeping in mind that TripleO was once a multi- > > > vendor project with vendors that had different preferences on service > > > configuration. Also having the ability to support multiple > > > configuration mechanisms in the future could once again present itself > > > (thinking of Kubernetes as an example). Keeping in mind there may be a > > > conversion period that could well last more than a release or two. > > > > > > I suggested a 'services/ansible' directory with mixed responses in our > > > #tripleo meeting this week. Any other thoughts on the matter? > > > > I would almost rather see us organize the directories by service > > name/project instead of implementation. > > > > Instead of: > > > > puppet/services/nova-api.yaml > > puppet/services/nova-conductor.yaml > > docker/services/nova-api.yaml > > docker/services/nova-conductor.yaml > > > > We'd have: > > > > services/nova/nova-api-puppet.yaml > > services/nova/nova-conductor-puppet.yaml > > services/nova/nova-api-docker.yaml > > services/nova/nova-conductor-docker.yaml > > > > (or perhaps even another level of directories to indicate > > puppet/docker/ansible?) > > > > Personally, such an organization is something I'm more used to. It > > feels more similar to how most would expect a puppet module or ansible > > role to be organized, where you have the abstraction (service > > configuration) at a higher directory level than specific > > implementations. > > > > It would also lend itself more easily to adding implementations only > > for specific services, and address the question of if a new top level > > implementation directory needs to be created. For example, adding a > > services/nova/nova-api-chef.yaml seems a lot less contentious than > > adding a top level chef/services/nova-api.yaml. > > > > It'd also be nice if we had a way to mark the default within a given > > service's directory. Perhaps services/nova/nova-api-default.yaml, > > which would be a new template that just consumes the default? Or > > perhaps a symlink, although it was pointed out symlinks don't work in > > swift containers. Still, that could possibly be addressed in our plan > > upload workflows. Then the resource-registry would point at > > nova-api-default.yaml. One could easily tell which is the default > > without having to cross reference with the resource-registry. > > > > So since I'm adding a new ansible service, I thought I'd try and take > a stab at this naming thing. I've taken James's idea and proposed an > implementation here: > https://review.openstack.org/#/c/588111/ > > The idea would be that the THT code for the service deployment would > end up in something like: > > deployment/<service>/<service-name>-<implementation>.yaml A matter of preference but I can live with this. > > Additionally I took a stab at combining the puppet/docker service > definitions for the aodh services in a similar structure to start > reducing the overhead we've had from maintaining the docker/puppet > implementations separately. You can see the patch > https://review.openstack.org/#/c/611188/ for an additional example of > this. > > Please let me know what you think. I'm okay with it in that it consolidates some things (which we greatly need to do).
It does address my initial concern in that people are now putting Ansible services into the puppet/services directory, albeit a bit heavy-handed in that it changes everything (rather than just the new Ansible services). Understood that you are also eliminating the "inheritance" from the docker/services to the puppet/services files... by simply eliminating the Puppet variants. I had hoped to implement a baremetal Packstack-like installer (still somewhat popular) with t-h-t but there doesn't appear to be anyone in the Packstack camp speaking up for the idea here. So much for consolidation. There doesn't seem to be anyone else in TripleO (aside from myself) who wants baremetal so I guess be gone with it. Opportunity to test on baremetal lost! Like we discussed, we could have easily kept both and used jinja templates to avoid the "inheritance" across the baremetal(puppet) and docker/services and thus also minimized our Heat resources, which have grown. There *are* ways to implement this that address all concerns and keep both features. We are choosing not to do this, however. I see people are already +2'ing the patches so it appears to have been decided. But are we going to commit to trying to move all the existing services to this new format within a release? There are 200 or so t-h-t patches up for review that would likely need to be rebased. And yes, this will make backports more difficult as well. All in all, we are changing our entire directory structure but not really removing any dependencies. I suppose if we are going to do the work I'd rather see us drop some dependencies in the process too :/. Where destructive meets productive is our best investment I think. Dan > > Thanks, > -Alex > > > > > > -- > > -- James Slagle > > -- > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
From dangtrinhnt at gmail.com Fri Oct 26 00:15:19 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 26 Oct 2018 09:15:19 +0900 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: <2d1ae8ec-0bf4-2761-3781-151e3fc62303@gmail.com> References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> <2d1ae8ec-0bf4-2761-3781-151e3fc62303@gmail.com> Message-ID: Hi Matt, The Searchlight team decided to revive the required feature in Stein [1] and we're working with Kevin to review the patch this weekend. If the Nova team needs some help, just let me know. [1] https://review.openstack.org/#/c/453352/ Bests, On Fri, Oct 26, 2018 at 12:58 AM Matt Riedemann wrote: > On 10/24/2018 6:55 PM, Sam Morrison wrote: > > I've been thinking of some hybrid cellsv1/v2 thing where we'd still have > the top level api cell DB but the API would only ever read from it. > Nova-api would only write to the compute cell DBs. > > Then keep the nova-cells processes just doing instance_update_at_top to > keep the nova-cell-api db up to date.
> > There was also the "read from searchlight" idea [1] but that died in > Boston. > > [1] > > https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/list-instances-using-searchlight.html > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From johnsomor at gmail.com Fri Oct 26 00:34:02 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 25 Oct 2018 17:34:02 -0700 Subject: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface In-Reply-To: <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se> References: <8138b9f3-ae41-43af-1be1-2182a6c6777d@binero.se> <3f27e1b3-1bce-dd31-d81a-5352ca900ccc@binero.se> <94c3392c-a4b3-6cfa-4b14-83818807f25a@binero.se> Message-ID: FYI, I took some time out this afternoon and wrote a detailed certificate configuration guide. Hopefully this will help. https://review.openstack.org/613454 Reviews would be welcome! Michael On Thu, Oct 25, 2018 at 7:00 AM Tobias Urdin wrote: > > Might as well throw it out here. > > After a lot of troubleshooting we were able to narrow our issue down to > our test environment running qemu virtualization; we moved our compute > node to hardware and > used kvm full virtualization instead. > > We could reliably reproduce the issue: generating a CSR from a > private key and then trying to verify the CSR would fail, complaining about > "Signature did not match the certificate request" > > We suspect qemu floating point emulation caused this; the same OpenSSL > function that validates a CSR is the one used when validating the SSL > handshake, which is what caused our issue. > After going through the whole stack, we have Octavia working flawlessly > without any issues at all. > > Best regards > Tobias > > On 10/23/2018 04:31 PM, Tobias Urdin wrote: > > Hello Erik, > > > > Could you specify the DNs you used for all certificates just so that I > > can rule it out on my side. > > You can redact anything sensitive with some to just get the feel on how > > it's configured. > > > > Best regards > > Tobias > > > > On 10/22/2018 04:47 PM, Erik McCormick wrote: > >> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin wrote: > >>> Hello, > >>> > >>> I've been having a lot of issues with SSL certificates myself, on my > >>> second trip now trying to get it working. > >>> > >>> Before this, I spent a lot of time walking through every line in the DevStack > >>> plugin and fixing my config options, used the generate > >>> script [1], and still it didn't work. > >>> > >>> When I got the "invalid padding" issue it was because of the DN I used > >>> for the CA and the certificate IIRC. > >>> > >>> > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING > >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > >>> to instance.
Retrying.: SSLError: ("bad handshake: Error([('rsa > >>> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'), > >>> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'), > >>> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",) > >>> > 19:47 < tobias-urdin> after a quick google "The problem was that my > >>> CA DN was the same as the certificate DN." > >>> > >>> IIRC I think that solved it, but then again I wouldn't remember fully > >>> since I've been at so many different angles by now. > >>> > >>> Here is my IRC logs history from the #openstack-lbaas channel, perhaps > >>> it can help you out > >>> http://paste.openstack.org/show/732575/ > >>> > >> Tobias, I owe you a beer. This was precisely the issue. I'm deploying > >> Octavia with kolla-ansible. It only deploys a single CA. After hacking > >> the templates and playbook to incorporate a separate server CA, the > >> amphorae now load and provision the required namespace. I'm adding a > >> kolla tag to the subject of this in hopes that someone might want to > >> take on changing this behavior in the project. Hopefully after I get > >> through Upstream Institute in Berlin I'll be able to do it myself if > >> nobody else wants to do it. > >> > >> For certificate generation, I extracted the contents of > >> octavia_certs_install.yml (which sets up the directory structure, > >> openssl.cnf, and the client CA), and octavia_certs.yml (which creates > >> the server CA and the client certificate) and mashed them into a > >> separate playbook just for this purpose. At the end I get: > >> > >> ca_01.pem - Client CA Certificate > >> ca_01.key - Client CA Key > >> ca_server_01.pem - Server CA Certificate > >> cakey.pem - Server CA Key > >> client.pem - Concatenated Client Key and Certificate > >> > >> If it would help to have the playbook, I can stick it up on github > >> with a huge "This is a hack" disclaimer on it. > >> > >>> ----- > >>> > >>> Sorry for hijacking the thread but I'm stuck as well. > >>> > >>> I've in the past tried to generate the certificates with [1] but have now > >>> moved on to using the openstack-ansible way of generating them [2] > >>> with some modifications. > >>> > >>> Right now I'm just getting: Could not connect to instance. Retrying.: > >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579) > >>> from the amphoras; I haven't got any further, but I've eliminated a lot > >>> in the middle. > >>> > >>> Tried deploying Octavia on Ubuntu with python3 to just make sure there > >>> wasn't an issue with CentOS and OpenSSL versions since it tends to lag > >>> behind. > >>> Checking the amphora with openssl s_client [3] it gives the same one, > >>> but the verification is successful; it's just that I don't understand what the > >>> bad signature > >>> part is about, from browsing some OpenSSL code it seems to be related to > >>> RSA signatures somehow. > >>> > >>> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad > >>> signature:s3_clnt.c:2032: > >>> > >>> So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS > >>> (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm > >>> back to something related > >>> to the certificates or the communication between the endpoints, or what > >>> actually responds inside the amphora (gunicorn IIUC?). Based on the > >>> "verify" functions actually causing that bad signature error I would > >>> assume it's the generated certificate that the amphora presents that is > >>> causing it.
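(As an aside for anyone else chasing this class of failure: the CSR self-check described earlier in the thread is easy to reproduce by hand on a suspect host; a minimal sketch, file names illustrative:

  openssl genrsa -out test.key 2048
  openssl req -new -key test.key -subj "/CN=padding-test" -out test.csr
  openssl req -in test.csr -noout -verify

On a healthy host the last command prints "verify OK"; on a host hitting the signature problem, it fails the same way the amphora handshake does.)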
> >>> > >>> I'll have to continue the troubleshooting to the inside of the amphora, > >>> I've used the test-only amphora image before but have now built my own > >>> one that is > >>> using the amphora-agent from the actual stable branch, but same issue > >>> (bad signature). > >>> > >>> For verbosity this is the config options set for the certificates in > >>> octavia.conf and which file it was copied from [4], same here, a > >>> replication of what openstack-ansible does. > >>> > >>> Appreciate any feedback or help :) > >>> > >>> Best regards > >>> Tobias > >>> > >>> [1] > >>> https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh > >>> [2] http://paste.openstack.org/show/732483/ > >>> [3] http://paste.openstack.org/show/732486/ > >>> [4] http://paste.openstack.org/show/732487/ > >>> > >>> On 10/20/2018 01:53 AM, Michael Johnson wrote: > >>>> Hi Erik, > >>>> > >>>> Sorry to hear you are still having certificate issues. > >>>> > >>>> Issue #2 is probably caused by issue #1. Since we hot-plug the tenant > >>>> network for the VIP, one of the first steps after the worker connects > >>>> to the amphora agent is finishing the required configuration of the > >>>> VIP interface inside the network namespace on the amphroa. > >>>> > >> Thanks for the hint on the workflow of this. I hadn't gotten deep > >> enough into the code to find that yet, but I suspected it was blocking > >> since the namespace never got created either. Thanks > >> > >>>> If I remember correctly, you are attempting to configure Octavia with > >>>> the dual CA option (which is good for non-development use). > >>>> > >>>> This is what I have for notes: > >>>> > >>>> [certificates] gets the following: > >>>> cert_generator = local_cert_generator > >>>> ca_certificate = server CA's "server.pem" file > >>>> ca_private_key = server CA's "server.key" file > >>>> ca_private_key_passphrase = pass phrase for ca_private_key > >>>> [controller_worker] > >>>> client_ca = Client CA's ca_cert file > >>>> [haproxy_amphora] > >>>> client_cert = Client CA's client.pem file (I think with it's key > >>>> concatenated is what rm_work said the other day) > >>>> server_ca = Server CA's ca_cert file > >>>> > >> This is all very helpful. It's a bit difficult to know what goes where > >> the way the documentation is written presently. For something that's > >> going to be the defacto standard for loadbalancing, we as a community > >> need to do a better job of documenting how to set up, configure, and > >> manage this in production. I'm trying to capture my lessons learned > >> and processes as I go to help with that if I can. > >> > >> -Erik > >> > >>>> That said, I can probably run through this and write something up next > >>>> week that is more step-by-step/detailed. > >>>> > >>>> Michael > >>>> > >>>> On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick > >>>> wrote: > >>>>> Apologies for cross-posting, but in the event that these might be > >>>>> worth filing as bugs, I wanted the Octavia devs to see it as well... > >>>>> > >>>>> I've been wrestling with getting Octavia up and running and have > >>>>> become stuck on two issues. I'm hoping someone has run into these > >>>>> before. My google foo has come up empty. > >>>>> > >>>>> Issue 1: > >>>>> When the Octavia controller tries to poll the amphora instance, it > >>>>> tries repeatedly and eventually fails. 
The error on the controller > >>>>> side is: > >>>>> > >>>>> 2018-10-19 14:17:39.181 26 ERROR > >>>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection > >>>>> retries (currently set to 300) exhausted. The amphora is unavailable. > >>>>> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries > >>>>> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by > >>>>> SSLError(SSLError("bad handshake: Error([('rsa routines', > >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >>>>> 'tls_process_server_certificate', 'certificate verify > >>>>> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112', > >>>>> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15 > >>>>> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines', > >>>>> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines', > >>>>> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding > >>>>> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines', > >>>>> 'tls_process_server_certificate', 'certificate verify > >>>>> failed')],)",),)) > >>>>> > >>>>> On the amphora side I see: > >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Error processing SSL request. > >>>>> [2018-10-19 17:52:54 +0000] [1331] [DEBUG] Invalid request from > >>>>> ip=::ffff:10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake > >>>>> failure (_ssl.c:1754) > >>>>> > >>>>> I've generated certificates both with the script in the Octavia git > >>>>> repo, and with the Openstack Ansible playbook. I can see that they are > >>>>> present in /etc/octavia/certs. > >>>>> > >>>>> I'm using the Kolla (Queens) containers for the control plane so I'm > >>>>> sure I've satisfied all the python library constraints. > >>>>> > >>>>> Issue 2: > >>>>> I"m not sure how it gets configured, but the tenant network interface > >>>>> (ens6) never comes up. I can spawn other instances on that network > >>>>> with no issue, and I can see that Neutron has the port attached to the > >>>>> instance. However, in the instance this is all I get: > >>>>> > >>>>> ubuntu at amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a > >>>>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > >>>>> group default qlen 1 > >>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > >>>>> inet 127.0.0.1/8 scope host lo > >>>>> valid_lft forever preferred_lft forever > >>>>> inet6 ::1/128 scope host > >>>>> valid_lft forever preferred_lft forever > >>>>> 2: ens3: mtu 9000 qdisc pfifo_fast > >>>>> state UP group default qlen 1000 > >>>>> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff > >>>>> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3 > >>>>> valid_lft forever preferred_lft forever > >>>>> inet6 fe80::f816:3eff:fe30:c460/64 scope link > >>>>> valid_lft forever preferred_lft forever > >>>>> 3: ens6: mtu 1500 qdisc noop state DOWN group > >>>>> default qlen 1000 > >>>>> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff > >>>>> > >>>>> There's no evidence of the interface anywhere else including udev rules. > >>>>> > >>>>> Any help with either or both issues would be greatly appreciated. 
> >>>>> > >>>>> Cheers, > >>>>> Erik > >>>>> > >>>>> __________________________________________________________________________ > >>>>> OpenStack Development Mailing List (not for usage questions) > >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Fri Oct 26 01:07:38 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 26 Oct 2018 10:07:38 +0900 Subject: [openstack-dev] [Searchlight] Weekly report - Stein R-25 Message-ID: Hi team, It's a little late but finally, I can find some time to write the report for last week [1]. Just a reminder that at the end of this week we will release Stein-1 for all of the Searchlight projects. It's not required but I would like to do it to evaluate our team's effort. Happy Searching!!! :) [1] https://www.dangtrinh.com/2018/10/searchlight-weekly-report-stein-r-25.html -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From donghm at vn.fujitsu.com Fri Oct 26 03:13:54 2018 From: donghm at vn.fujitsu.com (Ha Manh, Dong) Date: Fri, 26 Oct 2018 03:13:54 +0000 Subject: [openstack-dev] [Kolla] Implement upgrade/rolling-upgrade for services in Kolla-Ansible Message-ID: <6b8a4145cffc42ca9559fc70c19ab3dd@G07SGEXCMSGPS05.g07.fujitsu.local> Hi all, I'm working on the bp about implementing upgrade/rolling-upgrade for services in Kolla [1]. And there're three patch-sets remaining about developing the rolling upgrade logic: Apply Swift rolling upgrade at https://review.openstack.org/#/c/582103/ Apply Neutron rolling upgrade logic https://review.openstack.org/#/c/407922/ The reviewing for two patch-sets above is nearly done, so could other core reviewers help to review them. And the third patch-set is about rolling upgrade logic for Heat https://review.openstack.org/#/c/555199/ . This patch-set is under review by Eduardo. But any review is welcome. 
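For reviewers who want to exercise the new logic end to end, the entry point is unchanged; assuming a standard deployment it should just be, for example (inventory name and tag illustrative):

  kolla-ansible -i multinode upgrade --tags neutron

which should now follow the rolling-upgrade path for neutron rather than the plain stop/start upgrade.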
Thx a lot in advance for any help :) [1] https://blueprints.launchpad.net/kolla-ansible/+spec/apply-service-upgrade-procedure -- Dong Ha-Manh Fujitsu -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From wjstk16 at gmail.com Fri Oct 26 07:33:44 2018 From: wjstk16 at gmail.com (Won) Date: Fri, 26 Oct 2018 16:33:44 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi Ifat, I'm sorry for the late reply. I solved the problem of the Prometheus alarm not updating. Alarms with the same Prometheus alarm name are recognized as the same alarm in vitrage. ------- alert.rules.yml groups: - name: alert.rules rules: - alert: InstanceDown expr: up == 0 for: 60s labels: severity: warning annotations: description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 30 seconds.' summary: Instance {{ $labels.instance }} down ------ This is the contents of the alert.rules.yml file before I modified it. This is a yml file that generates an alarm when cAdvisor stops (instance down). An alarm is triggered depending on which instance is down, but all alarms have the same name, 'instance down'. Vitrage recognizes all of these alarms as the same alarm. Thus, until all 'instance down' alarms were cleared, the 'instance down' alarm was recognized as unresolved and the alarm was never cleared. ------alert.rules.yml(modified) groups: - name: alert.rules rules: - alert: InstanceDown on Apigateway expr: up{instance="192.168.12.164:31121"} == 0 for: 5s labels: severity: warning annotations: description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 30 seconds.' summary: Instance {{ $labels.instance }} down - alert: InstanceDown on Signup expr: up{instance="192.168.12.164:31122"} == 0 for: 5s labels: severity: warning annotations: description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 30 seconds.' summary: Instance {{ $labels.instance }} down . . . --------------- By modifying the rules as above, the problem has been solved. > Can you please show me where you saw the 2001 timestamp? I didn't find it > in the log. > [image: image.png] The timestamp is recorded correctly in the logs (vitrage-graph, collector, etc.), but in vitrage-dashboard it is marked 2001-01-01. However, it seems the timestamp is recognized correctly internally, because the alarm can be resolved and is recorded correctly in the log. Let me make sure I understand the problem. > When you create a new vm in Nova, does it immediately appear in the entity > graph? > When you delete a vm, it remains? does it remain in a multi-node > environment and deleted in a single node environment? > [image: image.png] Host name ubuntu is my main server. I installed OpenStack all-in-one on this server, and I installed a compute node on host compute1. When I create a new VM in nova (compute1), it immediately appears in the entity graph. But it does not disappear from the entity graph when I delete the VM. No matter how long I wait, it doesn't disappear. After I execute the 'vitrage-purge-data' command and reboot OpenStack (execute the reboot command on the OpenStack server, host name ubuntu), it disappears. Executing 'vitrage-purge-data' alone does not work; it needs a reboot to disappear. When I create a new VM in nova (ubuntu), there is no problem. I implemented a web service with a microservice architecture and applied the RCA. The attached picture shows the structure of the web service I have implemented.
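(One more thought on the rules above: instead of duplicating a rule per instance, it may be possible to keep a single rule and let Alertmanager split the notifications per instance, assuming Vitrage is fed from the Alertmanager webhook; an untested sketch for alertmanager.yml:

route:
  receiver: vitrage
  group_by: ['alertname', 'instance']

I have not verified whether Vitrage distinguishes alarms on anything other than the alert name, so this is an idea only.)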
I wonder what data I will receive and what I can do when I link vitrage with kubernetes. As far as I know, the vitrage graph does not present information about containers or pods inside the VM. If that is correct, I would like to make the pod-level information appear on the entity graph. I followed the steps at ( https://docs.openstack.org/vitrage/latest/contributor/k8s_datasource.html ). I attached the vitrage.conf file and the kubeconfig file. The contents of the kubeconfig file are copied from the contents of the admin.conf file on the master node. I want to check that my settings are right and that it is connected, but I don't know how. It would be very much appreciated if you let me know how. Br, Won 2018년 10월 10일 (수) 오후 11:52, Ifat Afek 님이 작성: > Hi Won, > > On Wed, Oct 10, 2018 at 11:58 AM Won wrote: > >> >> my prometheus version : 2.3.2 and alertmanager version : 0.15.2 and I >> attached files.(vitrage collector,graph logs and apache log and >> prometheus.yml alertmanager.yml alarm rule file etc..) >> I think the problem that the resolved alarm does not disappear is the time >> stamp problem of the alarm. >> >> -gray alarm info >> severity:PAGE >> vitrage id: c6a94386-3879-499e-9da0-2a5b9d3294b8 , >> e2c5eae9-dba9-4f64-960b-b964f1c01dfe , 3d3c903e-fe09-4a6f-941f-1a2adb09feca >> , 8c6e7906-9e66-404f-967f-40037a6afc83 , >> e291662b-115d-42b5-8863-da8243dd06b4 , 8abd2a2f-c830-453c-a9d0-55db2bf72d46 >> ---------- >> >> The alarms marked with the blue circle are already resolved. However, they >> do not disappear from the entity graph and alarm list. >> There were seven more gray alarms at the top screenshot in active alarms >> like entity graph. They disappeared after deleting the gray alarms from the >> vitrage-alarms table in the DB or changing the end timestamp value to an >> earlier time than the current time. >> > I checked the files that you sent, and it appears that the connection > between Prometheus and Vitrage works well. I see in vitrage-graph log that > Prometheus notified both on alert firing and on alert resolved statuses. > I still don't understand why the alarms were not removed from Vitrage, > though. Can you please send me the output of 'vitrage topology show' CLI > command? > Also, did you happen to restart vitrage-graph or vitrage-collector during > your tests? > > >> At the log, it seems that the first problem is that the timestamp value >> from vitrage comes to 2001-01-01, even though the starting value in the >> Prometheus alarm information has the correct value. >> When the alarm is resolved, the end timestamp value is not updated, so the >> alarm does not disappear from the alarm list. >> > Can you please show me where you saw the 2001 timestamp? I didn't find it > in the log. > >> The second problem is that even if the timestamp problem is solved, the >> entity graph problem will not be solved. Gray alarm information is not in >> the vitrage-collector log but in the vitrage graph and apache log. >> I want to know how to forcefully delete an entity from the vitrage graph. >> > You shouldn't do it :-) there is no API for deleting entities, and messing > with the database may cause unexpected results. > The only thing that you can safely do is to stop all Vitrage services, > execute 'vitrage-purge-data' command, and start the services again. This > will cause rebuilding of the entity graph. > >> Regarding the multi nodes, I mean, 1 control node(pc1) & 1 compute >> node(pc2). So one openstack. >> >> The test VM in the picture is an instance on a compute node that has >> already been deleted. I waited for hours and checked nova.conf but it was >> not removed. >> This did not occur in the queens version; in the rocky version,
I waited for hours and checked nova.conf but it was >> not removed. >> This was not the occur in the queens version; in the rocky version, >> multinode environment, there seems to be a bug in VM creation on multi node. >> The same situation occurred in multi-node environments that were >> configured with different PCs. >> > > Let me make sure I understand the problem. > When you create a new vm in Nova, does it immediately appear in the entity > graph? > When you delete a vm, it remains? does it remain in a multi-node > environment and deleted in a single node environment? > > Br, > Ifat > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42202 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11247 bytes Desc: not available URL: -------------- next part -------------- apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1Ea3dPREUzTURneE1Gb1hEVEk0TURrd05URTNNRGd4TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTENnCktWR3NLbis5Tlk1WTM0MkhTQzFmYjNsWVVWV1BvYXd4U01oV1FFbktHVFRpbG9XRHRMMVFVUG9VQlRwUGJuTm8KU2IzaTJ3WjlMZDQyTUE5SlNsS1dxTk5HeTZPRUthWHFoRHN6c1RtNU1sckxvKy9tNFZCOWVjY0hhc25pb1Z6QwpLU1BxcndEd2tPaE5CMGRvemQxL09zbXZRNHd2Q3BHWTlsOUJOSFVWbWszUVBTVW9Gc25ncW5LR2pXZFROWFU0Ci9vMnFFWFFEcU15YVI2bnQ0S2JXcFdOMlEwL29MNnk2SGxzQUw4MS9rT2dUWXE2NmhWU1pnTTREWE5UM1gzTlcKV0tELzRJV0VPcDRqL3pCaG41eTNTMWd5SWpUcTVkeTgvMEpDT1R4VHBVV2Zmc3FhemxyWHFYQ0wzeUNmYWZ4dwpJQzVJaFZSNzJGSmpVSkt0OFBrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLd3AvQm9VYURjZTk0YmpDZDkyMFQyMDgrODcKZ1hpeTBqaGp1UE5OWFJlcjRyZkcySm1ENXBmQzhFdjJJNG5EQzVnaktJQVhxMjlGSW1xY1pLU1doKzVYZHQvYwpxT1l3SGlBOGtKL1VKeDRDWmQrUmRMZEM5RmZiWmM0akpvWXJIei9xcHRxVEl4aExGMVphc01YNVdZZ0FDV3dvCmtLUzU4NTVBbk55blNMNUZNSHp6VWRoQWNmOGJpbFBSNGlBR1pHMTZiL01CTmFmc1hoSS9rQU9neGgybHNySzYKWWtnMkIxcDdPaG5SUnExamE1c1UvSTQwSTFJeVpEcWJldW91ZUZjS2p5MmRIb2JnOEVoVXNUSWF1eVZnMEhIUApmN1BCVU1BdTMvaDF3ZkFXaDNzL09BVDhSN0tabHJob1ZhMnV3MmhRZTYyYm4vUEZFZWNwb2FDSXF5VT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://192.168.12.164:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin at kubernetes current-context: kubernetes-admin at kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: 
-------------- next part -------------- A non-text attachment was scrubbed... Name: micro service architecture.png Type: image/png Size: 361336 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage.conf Type: application/octet-stream Size: 1196 bytes Desc: not available URL: 
From zigo at debian.org Fri Oct 26 09:09:00 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 26 Oct 2018 11:09:00 +0200 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> Message-ID: <094360b0-ddc4-b9a7-9950-77ea42cad310@debian.org> On 10/22/18 9:12 PM, Zane Bitter wrote: > On 22/10/18 10:33 AM, Thomas Goirand wrote: >> This can only happen if we have supporting distribution packages for it. >> IMO, this is a call for using Debian Testing or even Sid in the gate. > > It depends on which versions we choose to support, but if necessary yes. If what we want is early detection of problems with the latest versions of Python, then there are not many alternatives. I don't really understand why you're writing that it "depends on which version we choose to support". That's the kind of answer I find very frustrating when I submit a bug and get the reply "we don't support this version". My reasoning is: the earlier we detect and fix problems, the better, and that's orthogonal to which version of Python we want to support. Delaying bugfixes and compatibility with the latest Python version leads nowhere; it is best to test with it if possible (even in a non-voting mode). Cheers, Thomas Goirand (zigo) 
From sean.mcginnis at gmail.com Fri Oct 26 11:46:23 2018 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Fri, 26 Oct 2018 06:46:23 -0500 Subject: [openstack-dev] [Release-job-failures] Release of openstack/python-apmecclient failed In-Reply-To: References: Message-ID: On Fri, Oct 26, 2018 at 4:42 AM wrote: > Build failed. > > - release-openstack-python3 > http://logs.openstack.org/d3/d39466cf752f2a20a3047b9ca537b2b6adccb154/release/release-openstack-python3/345a591/ > : POST_FAILURE in 3m 57s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > > Release failed for this due to openstackci not being properly configured for the pypi package upload. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From thierry at openstack.org Fri Oct 26 12:07:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 26 Oct 2018 14:07:19 +0200 Subject: [openstack-dev] [all][release] Document deliverables without release management Message-ID: <0e5506d0-e5d9-c64a-e08a-16029c92b7c6@openstack.org> Hi, The Release Management team is handling release management for deliverables from all official OpenStack teams. However, there are a number of exceptions: - deliverables that are not using tags or branches for their 'release', or that are directly published (docs, specs, cookiecutters...) - deployment deliverables that are released on a specific marketplace (Chef supermarket, Charm store...)
and that are not relying on the OpenStack release management team help To facilitate tracking the intention of the teams for their deliverables, I proposed the following change: https://review.openstack.org/#/c/613268/ It introduces a "release-management:" key to the deliverable entries in the governance projects.yaml file that lists official repos. By default (if the key is not present), the deliverable would be handled by the OpenStack Release Management team using the openstack/release repository. This change applies the key to already-known exceptions: please review those and suggest any other exception that I missed! Cheers, -- Thierry Carrez (ttx) From fungi at yuggoth.org Fri Oct 26 13:24:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 26 Oct 2018 13:24:47 +0000 Subject: [openstack-dev] [Release-job-failures] Release of openstack/python-apmecclient failed In-Reply-To: References: Message-ID: <20181026132447.r3q2gs5ki2xcjuqq@yuggoth.org> On 2018-10-26 06:46:23 -0500 (-0500), Sean McGinnis wrote: [...] > Release failed for this due to openstackci not being properly configured > for the pypi package upload. If whoever "pineunity" is can add the "openstackci" account as another maintainer for https://pypi.org/project/python-apmecclient/ then I'm happy to reenqueue that tag into Zuul. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Fri Oct 26 13:37:29 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 26 Oct 2018 14:37:29 +0100 (BST) Subject: [openstack-dev] [placement] update 18-43 Message-ID: HTML: https://anticdent.org/placement-update-18-43.html A placement update for you. # Most Important Same as last week: The major factors that need attention are managing database migrations and associated tooling and getting the ball rolling on properly producing documentation. More on both of these things in the extraction section below. Matt has sent out [an email](http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html) seeking volunteers from OpenStack Ansible or TripleO to get placement upgrade tooling in one of those projects. # Bugs I guess it is because of various people doing upgrades, and some of the downstream projects starting to take more advantage of placement, but there's been a raft of interesting bugs recently. Many related to some of the more esoteric aspects of the ProviderTree handling in the resource tracker, the SQL in the placement service, or management of global state in WSGI servers. Initially this is a bit frustrating, but it's also a good thing: Finding and fixing bugs is the beating heart of an open source project. So thanks to everyone finding and fixing them. * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 15. -1. * [In progress placement bugs](https://goo.gl/vzGGDQ) 11. # Specs The spec review sprint happened and managed to get some specs merged, so this list should have shrunk some. 
* Account for host agg allocation ratio in placement (Still in rocky/) * Add subtree filter for GET /resource_providers * Resource provider - request group mapping in allocation candidate * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * Modelling passthrough devices for report to placement * Spec: allocation candidates in tree * [WIP] generic device discovery policy * Nova Cyborg interaction specification. * supporting virtual NVDIMM devices * Spec: Support filtering by forbidden aggregate * Proposes NUMA topology with RPs * Count quota based on resource class * WIP: High Precision Event Timer (HPET) on x86 guests * Add support for emulated virtual TPM * Adds spec for instance live resize * Provider config YAML file # Main Themes ## Making Nested Useful Work on getting nova's use of nested resource providers happy and fixing bugs discovered in placement in the process. This is creeping ahead. We recently confirmed that end-to-end success with nested providers is priority one for resource provider related work. * There's a topic for reshaper that still has some open patches: * ## Extraction There continue to be three main tasks in regard to placement extraction: 1. upgrade and integration testing 2. database schema migration and management 3. documentation publishing There's been some good progress here. The [grenade job works](https://review.openstack.org/#/c/604454/) and is ready to merge independent of other things. The related [devstack change](https://review.openstack.org/#/c/600162/) is still waiting on the database management that's part of (2). As mentioned above, volunteers from OSA or TripleO are being recruited. That db management is making some good headway with a [working alembic setup](https://review.openstack.org/#/c/611441/) but the tooling to use it needs to be formalized. The [command line hack](https://review.openstack.org/#/c/600161/) has been updated to use the alembic setup. We have work in progress to tune up the documentation but we are not yet publishing documentation (3). The plan here is to incrementally improve things as we have attention and discover things. One goal with this is to keep the process moving and use followups to avoid nitpicking each other too much. # Other Various placement changes out in the world. * Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.) * Improve handling of default allocation ratios * Neutron minimum bandwidth implementation * Add OWNERSHIP $SERVICE traits * Puppet: Initial cookiecutter and import from nova::placement * zun: Use placement for unified resource management * Update allocation ratio when config changes * Deal with root_id None in resource provider * Use long rpc timeout in select_destinations * Cleanups for scheduler code * Bandwidth Resource Providers! * Harden placement init under wsgi * Using gabbi-tempest for integration tests. * Make tox -ereleasenotes work * placement: Add a doc describing a quick live environment # End It's tired around here. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Fri Oct 26 13:55:00 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 26 Oct 2018 09:55:00 -0400 Subject: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?
In-Reply-To: References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> Message-ID: <42478f21-7547-5597-bcc6-c645c9531f59@gmail.com> On 10/25/2018 02:44 PM, melanie witt wrote: > On Thu, 25 Oct 2018 14:00:08 -0400, Jay Pipes wrote: >> On 10/25/2018 01:38 PM, Chris Friesen wrote: >>> On 10/24/2018 9:10 AM, Jay Pipes wrote: >>>> Nova's API has the ability to create "quota classes", which are >>>> basically limits for a set of resource types. There is something >>>> called the "default quota class" which corresponds to the limits in >>>> the CONF.quota section. Quota classes are basically templates of >>>> limits to be applied if the calling project doesn't have any stored >>>> project-specific limits. >>>> >>>> Has anyone ever created a quota class that is different from "default"? >>> >>> The Compute API specifically says: >>> >>> "Only ‘default’ quota class is valid and used to set the default quotas, >>> all other quota class would not be used anywhere." >>> >>> What this API does provide is the ability to set new default quotas for >>> *all* projects at once rather than individually specifying new defaults >>> for each project. >> >> It's a "defaults template", yes. >> >> The alternative is, you know, to just set the default values in >> CONF.quota, which is what I said above. Or, if you want project X to >> have different quota limits from those CONF-driven defaults, then set >> the quotas for the project to some different values via the >> os-quota-sets API (or better yet, just use Keystone's /limits API when >> we write the "limits driver" into Nova). The issue is that the >> os-quota-classes API is currently blocking *me* writing that "limits >> driver" in Nova because I don't want to port nova-specific functionality >> (like quota classes) to a limits driver when the Keystone /limits >> endpoint doesn't have that functionality and nobody I know of has ever >> used it. > > When you say it's blocking you from writing the "limits driver" in nova, > are you saying you're picking up John's unified limits spec [1]? It's > been in -W mode and hasn't been updated in 4 weeks. In the spec, > migration from quota classes => registered limits and deprecation of the > existing quota API and quota classes is described. > > Cheers, > -melanie > > [1] https://review.openstack.org/602201 Actually, I wasn't familiar with John's spec. I'll review it today. I was referring to my own attempts to clean up the quota system and remove all the limits-related methods from the QuotaDriver class... 
Best, -jay From openstack at nemebean.com Fri Oct 26 14:27:01 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 26 Oct 2018 09:27:01 -0500 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> Message-ID: On 10/25/18 3:43 PM, Zane Bitter wrote: > On 25/10/18 1:38 PM, William M Edmonds wrote: >> Zane Bitter wrote on 10/22/2018 03:12:46 PM: >>  > On 22/10/18 10:33 AM, Thomas Goirand wrote: >>  > > On 10/19/18 5:17 PM, Zane Bitter wrote: >> >> >> >>  > >> Integration Tests >>  > >> ----------------- >>  > >> >>  > >> Integration tests do test, amongst other things, integration with >>  > >> non-openstack-supplied things in the distro, so it's important >> that we >>  > >> test on the actual distros we have identified as popular.[2] >> It's also >>  > >> important that every project be testing on the same distro at >> the end of >>  > >> a release, so we can be sure they all work together for users. >>  > > >>  > > I find very disturbing to see the project only leaning toward >> these only >>  > > 2 distributions. Why not SuSE & Debian? >>  > >>  > The bottom line is it's because targeting those two catches 88% of our >>  > users. (For once I did not make this statistic up.) >>  > >>  > Also note that in practice I believe almost everything is actually >>  > tested on Ubuntu LTS, and only TripleO is testing on CentOS. It's >>  > difficult to imagine how to slot another distro into the mix without >>  > doubling up on jobs. >> >> I think you meant 78%, assuming you were looking at the latest User >> Survey results [1], page 55. Still a hefty number. > > I never know how to read those weird 3-way bar charts they have in the > user survey, but that actually adds up to 91% by the looks of it (I > believe you forgot to count RHEL). The numbers were actually slightly > lower in the full-year data for 2017 that I used (from > https://www.openstack.org/analytics - I can't give you a direct link > because Javascript ). > >> It is important to note that the User Survey lumps all versions of a >> given OS together, whereas the TC reference [2] only considers the >> latest LTS/stable version. If the User Survey split out latests >> LTS/stable versions vs. others (e.g. Ubuntu 16.04 LTS), I expect we'd >> see Ubuntu 18.04 LTS + Centos 7 adding up to much less than 78%. > > This is true, although we don't know by how much. (FWIW I can almost > guarantee that virtually all of the CentOS/RHEL users are on 7, but I'm > sure the same is not the case for Ubuntu 16.04.) In this context I don't think the version matters though. The original question was why we are focusing our test efforts on Ubuntu and CentOS, and the answer is that ~90% of our users are on those platforms. The specific version they're on right now doesn't really matter - even if they're on an older one, chances are eventually they'll move to a newer release of that same OS. 
>> [1] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf >> [2] https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Fri Oct 26 15:52:58 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 26 Oct 2018 09:52:58 -0600 Subject: [openstack-dev] [tripleo] retirement of instack-undercloud Message-ID: We have officially moved off of the instack-undercloud deployment process in Rocky and have officially removed its support from python-tripleoclient in Stein. In order to prevent confusion I have proposed a patch to start the retirement of instack-undercloud[0]. We will continue to support the stable branches for their life but we don't want any further patches to instack-undercloud going forward. Please let me know if there are any issues. Thanks, -Alex [0] https://review.openstack.org/#/c/613621/ From dmsimard at redhat.com Fri Oct 26 16:53:17 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 26 Oct 2018 12:53:17 -0400 Subject: [openstack-dev] [all] Update to flake8 and failures despite # flake8: noqa Message-ID: Hi openstack-dev, I stumbled on odd and sudden pep8 failures with ARA recently and brought it up in #openstack-infra [1]. It was my understanding that appending " # flake8: noqa" to a line of code would have flake8 ignore this line if it happened to violate any linting rules. It turns out that, at least according to the flake8 release notes [2], "flake8: noqa" is actually meant to ignore the linting on an entire file. The correct way to ignore a specific line appears to be to append " # noqa" to the line... without "flake8: ". Looking at codesearch [3], there are a lot of projects using the "flake8: noqa" approach with the intent of ignoring a specific line. It would be important to fix that in order to make sure we're only ignoring the specific lines we're interested in ignoring and prevent upcoming failures in the jobs. [1]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-10-26.log.html#t2018-10-26T16:18:38 [2]: http://flake8.pycqa.org/en/latest/release-notes/3.6.0.html [3]: http://codesearch.openstack.org/?q=flake8%3A%20noqa&i=nope&files=&repos= David Moreau Simard dmsimard = [irc, github, twitter] From zbitter at redhat.com Fri Oct 26 17:11:21 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 26 Oct 2018 13:11:21 -0400 Subject: [openstack-dev] Proposal for a process to keep up with Python releases In-Reply-To: <094360b0-ddc4-b9a7-9950-77ea42cad310@debian.org> References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> <094360b0-ddc4-b9a7-9950-77ea42cad310@debian.org> Message-ID: On 26/10/18 5:09 AM, Thomas Goirand wrote: > On 10/22/18 9:12 PM, Zane Bitter wrote: >> On 22/10/18 10:33 AM, Thomas Goirand wrote: >>> This can only happen if we have supporting distribution packages for it.
>>> IMO, this is a call for using Debian Testing or even Sid in the gate. >> >> It depends on which versions we choose to support, but if necessary yes. > > If what we want is early detection of problems with the latest > versions of Python, then there are not many alternatives. I think a lot depends on the relative timing of the Python release, the various distro release cycles, and the OpenStack release cycle. We established that for 3.7 that's the only way we could have done it in Rocky; for 3.8, who knows. > I don't really understand why you're writing that it "depends on which > versions we choose to support". The current version of the resolution[1] says that we'll choose the latest released version "we can feasibly use for testing", while making clear that availability in an Ubuntu LTS release is *not* a requirement for feasibility. But it doesn't require the TC to choose the latest version available from python.org if we're not able to build an image that we can successfully use for testing in time before the beginning of the release cycle. [1] https://review.openstack.org/613145 > That's the kind of answer I find > very frustrating when I submit a bug and am told "we don't > support this version". My reasoning is that the earlier we detect and fix > problems, the better, and that is orthogonal to what version of Python > we want to support. Delaying bug fixes and compatibility with the latest > Python version leads nowhere; it is best to test with it if possible (even in a > non-voting mode). I agree that bugs with future versions of Python are always worth fixing ASAP, whether or not we are able to test them in the gate. cheers, Zane. From sean.mcginnis at gmx.com Fri Oct 26 18:22:58 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 26 Oct 2018 13:22:58 -0500 Subject: [openstack-dev] [all] Update to flake8 and failures despite # flake8: noqa In-Reply-To: References: Message-ID: <20181026182258.GA18550@sm-workstation> On Fri, Oct 26, 2018 at 12:53:17PM -0400, David Moreau Simard wrote: > Hi openstack-dev, > > I stumbled on odd and sudden pep8 failures with ARA recently and > brought it up in #openstack-infra [1]. > > It was my understanding that appending " # flake8: noqa" to a line of > code would have flake8 ignore this line if it happened to violate any > linting rules. > It turns out that, at least according to the flake8 release notes [2], > "flake8: noqa" is actually meant to ignore the linting on an entire > file. > > The correct way to ignore a specific line appears to be to append " # > noqa" to the line... without "flake8: ". > Looking at codesearch [3], there is a lot of projects using the > "flake8: noqa" approach with the intent of ignoring a specific line. > > It would be important to fix that in order to make sure we're only > ignoring the specific lines we're interested in ignoring and prevent > upcoming failures in the jobs. > > [1]: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-10-26.log.html#t2018-10-26T16:18:38 > [2]: http://flake8.pycqa.org/en/latest/release-notes/3.6.0.html > [3]: http://codesearch.openstack.org/?q=flake8%3A%20noqa&i=nope&files=&repos= > > David Moreau Simard > dmsimard = [irc, github, twitter] > Thanks for raising this. We have a few of these in python-cinderclient, and after correcting the usage, it was indeed suppressing some valid errors by skipping the entire file.
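To make the file-level vs. line-level behaviour concrete, here is a small, invented Python module illustrating the flake8 3.x semantics described above (the module and import are placeholders for the example):

    import json  # noqa: F401
    # "# noqa" (optionally qualified with error codes, as above) only
    # suppresses violations on that single line.

    # flake8: noqa
    # A bare "flake8: noqa" comment like the one above tells flake8 3.x
    # to skip this ENTIRE file, silently hiding every other violation
    # in it.

So a file that was using "# flake8: noqa" on one line to quiet a single warning was, in effect, opting the whole file out of linting.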
For those looking at fixing this in your repos - this may help:

    for file in $(grep -r -l "flake8.*noqa" cinderclient/*); do
        sed -i 's/flake8.*noqa/noqa/g' "$file"
    done

Of course check the git diff and run any linting jobs before accepting the changes from that. Sean From aschultz at redhat.com Fri Oct 26 18:29:42 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 26 Oct 2018 12:29:42 -0600 Subject: [openstack-dev] [puppet][tripleo] puppet 5.5.7 breaks a bunch of stuff Message-ID: Just a heads up. I've been battling with some unit test issues with the latest version of puppet 5.5. I've proposed some fixes[0][1], but it appears that there is a larger issue with legacy functions which affects the stable branches. I've reported the issues[2][3] upstream to Puppetlabs, but it'll likely be some time before we have any resolution. In the meantime I would recommend pinning to 5.5.6 if possible. Thanks, -Alex [0] https://bugs.launchpad.net/puppet-nova/+bug/1799757 [1] https://bugs.launchpad.net/tripleo/+bug/1799786 [2] https://tickets.puppetlabs.com/browse/PUP-9270 [3] https://tickets.puppetlabs.com/browse/PUP-9271 From mriedemos at gmail.com Fri Oct 26 18:44:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 26 Oct 2018 13:44:27 -0500 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-24 Update Message-ID: <82e9f774-d7b1-9520-bfb7-0d493f665be6@gmail.com> There isn't much news this week except some of the base framework changes being proposed to projects are getting merged which is nice to see. https://storyboard.openstack.org/#!/story/2003657 https://review.openstack.org/#/q/topic:upgrade-checkers+status:merged And there are a lot of patches that are ready for review: https://review.openstack.org/#/q/topic:upgrade-checkers+status:open -- Thanks, Matt From sundar.nadathur at intel.com Fri Oct 26 22:05:18 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 26 Oct 2018 15:05:18 -0700 Subject: [openstack-dev] [cyborg] [nova] Poll: Name for VARs In-Reply-To: References: Message-ID: <90579df9-5474-3a60-b00b-f55ad7a9f957@intel.com> Thanks to all who participated in the discussion and/or voted. The most votes, such as there were, went to the name 'Accelerator Requests', abbreviated ARQs. The specs will be updated over the next couple of days. Have a good weekend. Best Regards, Sundar On 10/22/2018 11:37 AM, Nadathur, Sundar wrote: > Hi, > The name VAR (Virtual Accelerator Request) is introduced in > https://review.openstack.org/#/c/603955/. It came up during the Stein > PTG and is being used by default, but some folks have said they find > the name VAR to be confusing. I would like to resolve this to > completion, so that whatever name we choose is not subject to > recurrent debates in the future. > > Here is a poll for Cyborg and Nova developers to indicate their > preferences for existing or proposed options: > https://docs.google.com/spreadsheets/d/179Q8J9qIJNOiVm86K7bWPxo7otTsU18XVCI32V77JaU/edit?usp=sharing > > > 1. Please add your name, if not already listed, and please feel free > to propose additional options as you see fit. > 2. The voting is by rank -- 1 indicates most preferred. > 3. If you strongly oppose a term, you may say 'No' and justify with a > comment. >    (Comments are added by pressing Ctrl-Alt-M on a cell.) > > I'll keep this open for a minimum of two days and possibly for a week > depending on feedback. > > Regards, > Sundar -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gkotton at vmware.com Sat Oct 27 16:36:42 2018 From: gkotton at vmware.com (Gary Kotton) Date: Sat, 27 Oct 2018 16:36:42 +0000 Subject: [openstack-dev] [heat][nova] Does Heat support Nova micro versions Message-ID: <03F2A3FD-8369-45CC-ABE9-1678280F6A90@vmware.com> Hi, Does heat support os-compute-api-version? Say for example I am using queens but have a use case via heat that requires an API parameter that was capped in the 2.56 microversion. Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Sun Oct 28 14:38:10 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Sun, 28 Oct 2018 15:38:10 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Catch-up report 8 October - 28 October 2018 Message-ID: <1540737490.2443421.1557308552.65EDF5FE@webmail.messagingengine.com> # Keystone Team Update - Catch-up report 8 October - 28 October 2018 It's been a few weeks since I've been able to get one of these out, so here's a summary of what's been happening in that time. ## News ### Community Goals Status Mutable config: Kristi has a patch under review[1]. Python3-first: Keystone has python3 functional tests completed, working our way through the remainder of our repositories. Upgrade status checks: Scaffolding is in place[2] but we need to decide what checks should be included. [1] https://review.openstack.org/585417 [2] https://review.openstack.org/608785 ### Flask conversion complete The last patch to migrate keystone to Flask has merged[3]. Thanks Morgan for pushing all this through! There is still some work to be done to migrate keystonemiddleware away from the Webob implementation. With the migration to Flask, some users have noticed that the healthcheck middleware no longer works the same way[4]. Custom middleware is also no longer possible, but there are workarounds[5]. [3] https://review.openstack.org/609839 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135696.html [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T13:53:12 ### Oath federation examples Oath open-sourced their custom auth plugin for Athenz[6], which we may want to model our federated shadow user implementation after. We're analyzing the difference and collaborating on a path to move in this direction in keystone[7]. [6] https://github.com/yahoo/openstack-collab/tree/master/keystone-federation-ocata [7] https://etherpad.openstack.org/p/keystone-shadow-mapping-athenz-delta ### Outreachy I submitted two projects for Outreachy and there has been a lot of interest in both of them. Applicants now need to log a contribution so they can be eligible to apply for the project, so you may see a lot of new faces before the November 6 deadline. If you have ideas for beginner-friendly tasks, let me know so I can hand them out to our newcomers. ## Open Specs Search query: https://bit.ly/2Pi6dGj In addition to the three Stein specs that have been open for a while, we opened and closed another to allow for explicit domain IDs upon domain creation[8].
There are also a number of "ongoing" specs proposed that need attention[9] [8] https://bit.ly/2OyDLTh [9] https://review.openstack.org/611201 ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From sorrison at gmail.com Mon Oct 29 00:05:09 2018 From: sorrison at gmail.com (Sam Morrison) Date: Mon, 29 Oct 2018 11:05:09 +1100 Subject: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas In-Reply-To: References: <929B1777-594D-440A-9847-84EE66A7146F@gmail.com> <4f38170f-75c2-8e8a-92fc-92e4ec47f356@gmail.com> <48AF63AB-8FDD-4653-938D-8B3144FE8507@gmail.com> Message-ID: > On 26 Oct 2018, at 1:42 am, Dan Smith wrote: > >> I guess our architecture is pretty unique in a way but I wonder if >> other people are also a little scared about the whole all DB servers >> need to up to serve API requests? > > When we started down this path, we acknowledged that this would create a > different access pattern which would require ops to treat the cell > databases differently. The input we were getting at the time was that > the benefits outweighed the costs here, and that we'd work on caching to > deal with performance issues if/when that became necessary. > >> I've been thinking of some hybrid cellsv1/v2 thing where we'd still >> have the top level api cell DB but the API would only ever read from >> it. Nova-api would only write to the compute cell DBs. >> Then keep the nova-cells processes just doing instance_update_at_top to keep the nova-cell-api db up to date. > > I'm definitely not in favor of doing more replication in python to > address this. What was there in cellsv1 was lossy, even for the subset > of things it actually supported (which didn't cover all nova features at > the time and hasn't kept pace with features added since, obviously). > > About a year ago, I proposed that we add another "read only mirror" > field to the cell mapping, which nova would use if and only if the > primary cell database wasn't reachable, and only for read > operations. The ops, if they wanted to use this, would configure plain > old one-way mysql replication of the cell databases to a > highly-available server (probably wherever the api_db is) and nova could > use that as a read-only cache for things like listing instances and > calculating quotas. The reaction was (very surprisingly to me) negative > to this option. It seems very low-effort, high-gain, and proper re-use > of existing technologies to me, without us having to replicate a > replication engine (hah) in python. So, I'm curious: does that sound > more palatable to you? Yeah, I think that could work for us; so far I can't think of anything better. Thanks, Sam > > --Dan From ramishra at redhat.com Mon Oct 29 07:18:11 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 29 Oct 2018 12:48:11 +0530 Subject: [openstack-dev] [heat][nova] Does Heat support Nova micro versions In-Reply-To: <03F2A3FD-8369-45CC-ABE9-1678280F6A90@vmware.com> References: <03F2A3FD-8369-45CC-ABE9-1678280F6A90@vmware.com> Message-ID: On Sat, Oct 27, 2018 at 10:08 PM Gary Kotton wrote: > Hi, > > Does heat support os-compute-api-version? Say for example I am using > queens but have a use case via heat that requires an API parameter that was > capped in the 2.56 > There isn't a way to specify compute microversion in the template.
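(For illustration only, since the question keeps coming up: a compute API microversion can be pinned when talking to nova directly with python-novaclient, just not from a heat template. This is a minimal sketch; the endpoint and credentials below are placeholders.)

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    # The version string selects the compute API microversion used for
    # every request made by this client.
    nova = client.Client('2.56', session=sess)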
Till queens, heat client plugin for nova was using the base api version[1], unless a specific feature/property requires a higher microversion[2]. So features capped in newer api versions should be usable without any change. However, we moved to use the "max api microversion supported" as the default[3] since rocky to support new features without many changes and not to have too many versioned clients. As a side-effect, features/properties capped by new nova api microversions can't be used any more. We probably have to look for a better way to handle this in the future. [1] https://github.com/openstack/heat/blob/stable/queens/heat/engine/clients/os/nova.py#L61 [2] https://github.com/openstack/heat/blob/stable/queens/heat/engine/resources/openstack/nova/server.py#L838 [3] https://github.com/openstack/heat/blob/stable/rocky/heat/engine/clients/os/nova.py#L100 > microversion. > > Thanks > > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Oct 29 14:58:02 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 29 Oct 2018 15:58:02 +0100 Subject: [openstack-dev] [ironic] Team gathering at the Forum? In-Reply-To: References: Message-ID: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> Hi folks! This is your friendly reminder to vote on the day. Even if you're fine with all days, please leave a vote, so that we know how many people are coming. We will need to make a reservation, and we may not be able to accommodate more people than voted! Dmitry On 10/22/18 6:06 PM, Dmitry Tantsur wrote: > Hi ironicers! :) > > We are trying to plan an informal Ironic team gathering in Berlin. If you care > about Ironic and would like to participate, please fill in > https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, > also depending on how many people sign up. > > Dmitry From mriedemos at gmail.com Mon Oct 29 15:25:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 29 Oct 2018 10:25:00 -0500 Subject: [openstack-dev] [nova] Volunteer needed to write reshaper FFU hook Message-ID: Given the outstanding results of my recruiting job last week [1] I have been tasked with recruiting one of our glorious and most talented contributors to work on the fast-forward-upgrade script changes needed for the reshape-provider-tree blueprint. The work item is nicely detailed in the spec [2]. A few things to keep in mind: 1. There are currently no virt drivers which run the reshape routine. However, patches are up for review for libvirt [3] and xen [4].
The usual: fame, fortune, the respect of your peers, etc. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html [2] https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/reshape-provider-tree.html#offline-upgrade-script [3] https://review.openstack.org/#/c/599208/ [4] https://review.openstack.org/#/c/521041/ [5] https://github.com/openstack/nova/blob/a0eacbf7f/nova/tests/functional/test_servers.py#L1839 [6] https://github.com/openstack/nova/blob/a0eacbf7f/nova/compute/resource_tracker.py#L917-L940 -- Thanks, Matt From doug at doughellmann.com Mon Oct 29 15:47:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 29 Oct 2018 11:47:06 -0400 Subject: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk In-Reply-To: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> References: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> Message-ID: Monty Taylor writes: > Heya, > > Tobias and I were chatting at OpenStack Days Nordic about the Public > Cloud Working Group potentially taking over as custodians of the vendor > profile information [0][1] we keep in openstacksdk (and previously in > os-client-config) > > I think this is a fine idea, but we've got some dancing to do I think. > > A few years ago Dean and I talked about splitting the vendor data into > its own repo. We decided not to at the time because it seemed like extra > unnecessary complication. But I think we may have reached that time. > > We should split out a new repo to hold the vendor data json files. We > can manage this repo pretty much how we manage the > service-types-authority [2] data now. Also similar to that (and similar > to tzdata) these are files that contain information that is true > currently and is not release specific - so it should be possible to > update to the latest vendor files without updating to the latest > openstacksdk. > > If nobody objects, I'll start working through getting a couple of new > repos created. I'm thinking openstack/vendor-profile-data, owned/managed > by Public Cloud WG, with the json files, docs, json schema, etc, and a > second one, openstack/os-vendor-profiles - owned/managed by the > openstacksdk team that's just like os-service-types [3] and is a > tiny/thin library that exposes the files to python (so there's something > to depend on) and gets proposed patches from zuul when new content is > landed in openstack/vendor-profile-data. > > How's that sound? I understand the benefit of separating the data files from the SDK, but what is the benefit of separating the data files from the code that reads them? Doug > > Thanks! > Monty > > [0] > http://git.openstack.org/cgit/openstack/openstacksdk/tree/openstack/config/vendors > [1] > https://docs.openstack.org/openstacksdk/latest/user/config/vendor-support.html > [2] http://git.openstack.org/cgit/openstack/service-types-authority > [3] http://git.openstack.org/cgit/openstack/os-service-types > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ifatafekn at gmail.com Mon Oct 29 16:27:24 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Mon, 29 Oct 2018 18:27:24 +0200 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. 
In-Reply-To: References: Message-ID: Hi, On Fri, Oct 26, 2018 at 10:34 AM Won wrote: > > I solved the problem of not updating the Prometheus alarm. > Alarms with the same Prometheus alarm name are recognized as the same alarm in > vitrage. >
> ------- alert.rules.yml
> groups:
> - name: alert.rules
>   rules:
>   - alert: InstanceDown
>     expr: up == 0
>     for: 60s
>     labels:
>       severity: warning
>     annotations:
>       description: '{{ $labels.instance }} of job {{ $labels.job }} has been down
>         for more than 30 seconds.'
>       summary: Instance {{ $labels.instance }} down
> ------
> This is the contents of the alert.rules.yml file before I modify it. > This is a yml file that generates an alarm when the cardvisor > stops(instance down). Alarm is triggered depending on which instance is > down, but all alarms have the same name as 'instance down'. Vitrage > recognizes all of these alarms as the same alarm. Thus, until all 'instance > down' alarms were cleared, the 'instance down' alarm was recognized as > unresolved and the alarm was not extinguished. > This is strange. I would expect your original definition to work as well, since the alarm key in Vitrage is defined by a combination of the alert name and the instance. We will check it again. BTW, we solved a different bug related to Prometheus alarms not being cleared [1]. Could it be related? > Can you please show me where you saw the 2001 timestamp? I didn't find it >> in the log. >> > > [image: image.png] > The time stamp is recorded well in log(vitrage-graph,collect etc), but in > vitrage-dashboard it is marked 2001-01-01. > However, it seems that the time stamp is recognized well internally > because the alarm can be resolved and is recorded well in log. > Does the wrong timestamp appear if you run 'vitrage alarm list' cli command? please try running 'vitrage alarm list --debug' and send me the output. > [image: image.png] > Host name ubuntu is my main server. I install openstack all in one in this > server and i install compute node in host name compute1. > When i create a new vm in nova(compute1) it immediately appear in the > entity graph. But in does not disappear in the entity graph when i delete > the vm. No matter how long i wait, it doesn't disappear. > Afther i execute 'vitrage-purge-data' command and reboot the > Openstack(execute reboot command in openstack server(host name ubuntu)), it > disappear. Only execute 'vitrage-purge-data' does not work. It need a > reboot to disappear. > When i create a new vm in nova(ubuntu) there is no problem. > Please send me vitrage-collector.log and vitrage-graph.log from the time that the problematic vm was created and deleted. Please also create and delete a vm on your 'ubuntu' server, so I can check the differences in the log. I implemented the web service of the micro service architecture and applied > the RCA. Attached file picture shows the structure of the web service I > have implemented. I wonder what data I receive and what can i do when I > link vitrage with kubernetes. > As i know, the vitrage graph does not present information about containers > or pods inside the vm. If that is correct, I would like to make the > information of the pod level appear on the entity graph. > > I follow ( > https://docs.openstack.org/vitrage/latest/contributor/k8s_datasource.html) > this step. I attached the vitage.conf file and the kubeconfig file. The > contents of the Kubeconconfig file are copied from the contents of the admin.conf file on the master node.
> I want to check my settings are right and connected, but I don't know how. > It would be very much appreciated if you let me know how. Unfortunately, Vitrage does not hold pods and containers information at the moment. We discussed the option of adding it in the Stein release, but I'm not sure we will get to do it. Br, Ifat [1] https://review.openstack.org/#/c/611258/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42202 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11247 bytes Desc: not available URL: From aschultz at redhat.com Mon Oct 29 16:29:46 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 29 Oct 2018 10:29:46 -0600 Subject: [openstack-dev] [tripleo] retirement of instack Message-ID: With the proposed retirement of instack-undercloud[0], we will no longer be supporting future development of the instack project as well. As with instack-undercloud, we will continue to support the stable branches of instack for their life but will not be doing any future development. Please let me know if there are any issues. Thanks, -Alex [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136098.html From fungi at yuggoth.org Mon Oct 29 16:53:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 29 Oct 2018 16:53:47 +0000 Subject: [openstack-dev] [all] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> Message-ID: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. See my previous notice[1] for details. For those wondering, we have 127 subscribers so far on openstack-discuss with 3 weeks to go before it will be put into use (and 5 weeks now before the old lists are closed down for good). [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaypipes at gmail.com Mon Oct 29 16:58:09 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 29 Oct 2018 12:58:09 -0400 Subject: [openstack-dev] [all] We're combining the lists!
In-Reply-To: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> Message-ID: I'm not willing to subscribe with a password over a non-TLS connection... -jay On 10/29/2018 12:53 PM, Jeremy Stanley wrote: > REMINDER: The openstack, openstack-dev, openstack-sigs and > openstack-operators mailing lists (to which this is being sent) will > be replaced by a new openstack-discuss at lists.openstack.org mailing > list. The new list is open for subscriptions[0] now, but is not yet > accepting posts until Monday November 19 and it's strongly > recommended to subscribe before that date so as not to miss any > messages posted there. The old lists will be configured to no longer > accept posts starting on Monday December 3, but in the interim posts > to the old lists will also get copied to the new list so it's safe > to unsubscribe from them any time after the 19th and not miss any > messages. See my previous notice[1] for details. > > For those wondering, we have 127 subscribers so far on > openstack-discuss with 3 weeks to go before it will be put into use > (and 5 weeks now before the old lists are closed down for good). > > [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Mon Oct 29 17:02:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 29 Oct 2018 12:02:28 -0500 Subject: [openstack-dev] [nova] FYI: change in semantics for virt driver update_provider_tree() Message-ID: <843dc414-d045-8c29-bd64-97926151592b@gmail.com> This is a notice to any out of tree virt driver implementors of the ComputeDriver.update_provider_tree() interface that they now need to set the allocation_ratio and reserved amounts for VCPU, MEMORY_MB and DISK_GB inventory from the update_provider_tree() method assuming [1] merges. The patch below that one in the series shows how it's implemented for the libvirt driver. [1] https://review.openstack.org/#/c/613991/ -- Thanks, Matt From mrhillsman at gmail.com Mon Oct 29 17:28:50 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 29 Oct 2018 12:28:50 -0500 Subject: [openstack-dev] [user-committee] UC Meeting Reminder Message-ID: UC meeting in #openstack-uc at 1800UTC -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Oct 29 17:50:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 29 Oct 2018 17:50:43 +0000 Subject: [openstack-dev] [all] We're combining the lists! In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> Message-ID: <20181029174426.wwa7r52ngm5gm7rm@yuggoth.org> On 2018-10-29 12:58:09 -0400 (-0400), Jay Pipes wrote: > I'm not willing to subscribe with a password over a non-TLS > connection... [...] Up to you, certainly. 
You don't actually need to enter anything in the password fields on the subscription page (as the instructions on it also indicate). You can alternatively subscribe by sending E-mail to openstack-discuss-request at lists.openstack.org with a subject line of "subscribe" if you prefer. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mordred at inaugust.com Mon Oct 29 18:27:23 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 29 Oct 2018 13:27:23 -0500 Subject: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk In-Reply-To: References: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> Message-ID: <066c9d51-a617-ae3d-f59b-c9b500cde038@inaugust.com> On 10/29/18 10:47 AM, Doug Hellmann wrote: > Monty Taylor writes: > >> Heya, >> >> Tobias and I were chatting at OpenStack Days Nordic about the Public >> Cloud Working Group potentially taking over as custodians of the vendor >> profile information [0][1] we keep in openstacksdk (and previously in >> os-client-config) >> >> I think this is a fine idea, but we've got some dancing to do I think. >> >> A few years ago Dean and I talked about splitting the vendor data into >> its own repo. We decided not to at the time because it seemed like extra >> unnecessary complication. But I think we may have reached that time. >> >> We should split out a new repo to hold the vendor data json files. We >> can manage this repo pretty much how we manage the >> service-types-authority [2] data now. Also similar to that (and similar >> to tzdata) these are files that contain information that is true >> currently and is not release specific - so it should be possible to >> update to the latest vendor files without updating to the latest >> openstacksdk. >> >> If nobody objects, I'll start working through getting a couple of new >> repos created. I'm thinking openstack/vendor-profile-data, owned/managed >> by Public Cloud WG, with the json files, docs, json schema, etc, and a >> second one, openstack/os-vendor-profiles - owned/managed by the >> openstacksdk team that's just like os-service-types [3] and is a >> tiny/thin library that exposes the files to python (so there's something >> to depend on) and gets proposed patches from zuul when new content is >> landed in openstack/vendor-profile-data. >> >> How's that sound? > > I understand the benefit of separating the data files from the SDK, but > what is the benefit of separating the data files from the code that > reads them? I'd say primarily so that the same data files can be used from other languages. (similar to having the service-types-authority data exist separate from the python library that consumes it.) Also - there is a separation of concerns, potentially. The review team for a vendor-data repo could just be public cloud sig folks - and what they are reviewing is the accuracy of the data. The python code to consume that and interpret it is likely a different set of humans. 
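For anyone who hasn't looked at them, the files under discussion are small and declarative. A rough sketch of the shape one takes follows; the cloud name, URL and values here are invented for the example, so check the real files under openstack/config/vendors in openstacksdk before relying on any particular keys:

    {
      "name": "examplecloud",
      "profile": {
        "auth": {
          "auth_url": "https://identity.examplecloud.net/v3"
        },
        "identity_api_version": "3",
        "regions": ["region-a", "region-b"]
      }
    }

Being plain JSON with no python in sight is what makes the split into a language-neutral data repo plausible in the first place.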
From doug at doughellmann.com Mon Oct 29 18:41:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 29 Oct 2018 14:41:18 -0400 Subject: [openstack-dev] [publiccloud-wg][sdk][osc][tc] Extracting vendor profiles from openstacksdk In-Reply-To: <066c9d51-a617-ae3d-f59b-c9b500cde038@inaugust.com> References: <6365d18b-5550-ed93-8296-d74661cbe104@inaugust.com> <066c9d51-a617-ae3d-f59b-c9b500cde038@inaugust.com> Message-ID: Monty Taylor writes: > On 10/29/18 10:47 AM, Doug Hellmann wrote: >> Monty Taylor writes: >> >>> Heya, >>> >>> Tobias and I were chatting at OpenStack Days Nordic about the Public >>> Cloud Working Group potentially taking over as custodians of the vendor >>> profile information [0][1] we keep in openstacksdk (and previously in >>> os-client-config) >>> >>> I think this is a fine idea, but we've got some dancing to do I think. >>> >>> A few years ago Dean and I talked about splitting the vendor data into >>> its own repo. We decided not to at the time because it seemed like extra >>> unnecessary complication. But I think we may have reached that time. >>> >>> We should split out a new repo to hold the vendor data json files. We >>> can manage this repo pretty much how we manage the >>> service-types-authority [2] data now. Also similar to that (and similar >>> to tzdata) these are files that contain information that is true >>> currently and is not release specific - so it should be possible to >>> update to the latest vendor files without updating to the latest >>> openstacksdk. >>> >>> If nobody objects, I'll start working through getting a couple of new >>> repos created. I'm thinking openstack/vendor-profile-data, owned/managed >>> by Public Cloud WG, with the json files, docs, json schema, etc, and a >>> second one, openstack/os-vendor-profiles - owned/managed by the >>> openstacksdk team that's just like os-service-types [3] and is a >>> tiny/thin library that exposes the files to python (so there's something >>> to depend on) and gets proposed patches from zuul when new content is >>> landed in openstack/vendor-profile-data. >>> >>> How's that sound? >> >> I understand the benefit of separating the data files from the SDK, but >> what is the benefit of separating the data files from the code that >> reads them? > > I'd say primarily so that the same data files can be used from other > languages. (similar to having the service-types-authority data exist > separate from the python library that consumes it.) > > Also - there is a separation of concerns, potentially. The review team > for a vendor-data repo could just be public cloud sig folks - and what > they are reviewing is the accuracy of the data. The python code to > consume that and interpret it is likely a different set of humans. The argument about languages is more convincing but I'll accept both answers. The plan makes sense to me now. Doug From rb560u at att.com Mon Oct 29 18:56:14 2018 From: rb560u at att.com (BARTRA, RICK) Date: Mon, 29 Oct 2018 18:56:14 +0000 Subject: [openstack-dev] [qa][patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core In-Reply-To: <7D5E803080EF7047850D309B333CB94E22EC70F3@GAALPA1MSGUSRBI.ITServices.sbc.com> References: <7D5E803080EF7047850D309B333CB94E22EBA021@GAALPA1MSGUSRBI.ITServices.sbc.com> <1669e0a173d.c3a7d12643083.6400498436645570362@ghanshyammann.com> <7D5E803080EF7047850D309B333CB94E22EC70F3@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: +1 for both of them as well.
On 10/29/18, 2:54 PM, "MONTEIRO, FELIPE C" wrote: -----Original Message----- From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] Sent: Monday, October 22, 2018 7:09 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [qa][patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core +1 for both of them. They have been doing great work in Patrole and will be good addition in team. -gmann ---- On Tue, 23 Oct 2018 03:34:51 +0900 MONTEIRO, FELIPE C wrote ---- > > Hi, > > I would like to nominate Sergey Vilgelm and Mykola Yakovliev for Patrole core as they have both done excellent work the past cycle in improving the Patrole framework as well as increasing Neutron Patrole test coverage, which includes various Neutron plugins/extensions as well like fwaas. I believe they will both make an excellent addition to the Patrole core team. > > Please vote with a +1/-1 for the nomination, which will stay open for one week. > > Felipe > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Mon Oct 29 20:30:40 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 29 Oct 2018 20:30:40 +0000 (GMT) Subject: [openstack-dev] [qa] [api] [all] gabbi-tempest for integration tests Message-ID: Earlier this month I produced a blog post on something I was working on to combine gabbi (the API tester used in placement, gnocchi, heat, and a few other projects) with tempest to create a simple two-step process for adding purely YAML-driven, HTTP API-based testing to any project that can test with tempest. That blog posting is at: https://anticdent.org/gabbi-in-the-gate.html I've got it working now and the necessary patches have merged in tempest, and gabbi-tempest is now part of OpenStack's infra. A pending patch in nova shows how it can work: https://review.openstack.org/#/c/613386/ The two steps are: * Add a new job in .zuul.yaml with a parent of 'gabbi-tempest'. * Create some gabbi YAML files containing tests in a directory named in that zuul job. * Profit. There are a few different pieces that have come together to make this possible: * The magic of zuul v3, local job config and job inheritance. * gabbi: https://gabbi.readthedocs.io/ * gabbi-tempest: https://gabbi-tempest.readthedocs.io/ and https://git.openstack.org/cgit/openstack/gabbi-tempest and the specific gabbi-tempest zuul job: https://git.openstack.org/cgit/openstack/gabbi-tempest/tree/.zuul.yaml#n11 * tempest plugins and other useful ways of getting placement to run in different ways I hope this is useful for people.
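To make the two steps concrete, a rough sketch follows. The job name and directory are invented, the variable wiring is from memory, and the exact way gabbi-tempest maps test URLs onto the service catalog should be checked in its docs rather than copied from here:

    # .zuul.yaml (step 1): a job inheriting from gabbi-tempest
    - job:
        name: myproject-gabbi
        parent: gabbi-tempest
        vars:
          # assumed job variable naming the directory of gabbi YAML files
          gabbi_tempest_path: gabbits

    # gabbits/example.yaml (step 2): a gabbi test file
    tests:
      - name: list flavors
        GET: /flavors
        status: 200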
Using gabbi is a great way to make sure that your HTTP API is usable with lots of different clients and without maintaining a lot of state. Let me know if you have any questions or if you are interested in helping to make gabbi-tempest more complete and well documented. I've been approving my own code the past few patches and that feels a bit dirty. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From manuel.sb at garvan.org.au Mon Oct 29 23:20:52 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Mon, 29 Oct 2018 23:20:52 +0000 Subject: [openstack-dev] [NOVA] nova GPU support and find GPU type Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAD7B79@MXDB2.ad.garvan.unsw.edu.au> Dear Nova community, This is the first time I work with GPUs. I have a Dell C4140 with 4x Nvidia Tesla V100 SXM2 16GB that I would like to set up on OpenStack Rocky. I checked the documentation and I have 2 questions I would like to ask: 1. Docs (1) say: "As of the Queens release, there is no upstream continuous integration testing with a hardware environment that has virtual GPUs and therefore this feature is considered experimental." Does it mean nova will stop supporting GPUs? Is GPU support being transferred to a different project? 2. I installed cuda-repo-rhel7-10-0-local-10.0.130-410.48-1.0-1.x86_64 on the physical host but I can't find the type of GPUs installed (2) (/sys/class/mdev_bus doesn't exist). What should I do then? What should I put in devices.enabled_vgpu_types? (1) - https://docs.openstack.org/nova/rocky/admin/virtual-gpu.html (2) - https://docs.openstack.org/nova/rocky/admin/virtual-gpu.html#how-to-discover-a-gpu-type Thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Tue Oct 30 03:29:39 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 30 Oct 2018 11:29:39 +0800 Subject: [openstack-dev] [nova] Volunteer needed to write reshaper FFU hook In-Reply-To: References: Message-ID: I would like to help since I have now finished all the downstream work, but I may need to take some time understanding all the background information. On Mon, Oct 29, 2018 at 11:25 PM Matt Riedemann wrote: > Given the outstanding results of my recruiting job last week [1] I have > been tasked with recruiting one of our glorious and most talented > contributors to work on the fast-forward-upgrade script changes needed > for the reshape-provider-tree blueprint. > > The work item is nicely detailed in the spec [2]. A few things to keep > in mind: > > 1. There are currently no virt drivers which run the reshape routine. > However, patches are up for review for libvirt [3] and xen [4].
From zhengzhenyulixi at gmail.com Tue Oct 30 03:29:39 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Tue, 30 Oct 2018 11:29:39 +0800
Subject: Re: [openstack-dev] [nova] Volunteer needed to write reshaper FFU hook
In-Reply-To:
References:
Message-ID:

I would like to help, since I have now finished all the downstream work. But I may need to take some time to understand all the background information.

On Mon, Oct 29, 2018 at 11:25 PM Matt Riedemann wrote:
> Given the outstanding results of my recruiting job last week [1] I have
> been tasked with recruiting one of our glorious and most talented
> contributors to work on the fast-forward-upgrade script changes needed
> for the reshape-provider-tree blueprint.
>
> The work item is nicely detailed in the spec [2]. A few things to keep
> in mind:
>
> 1. There are currently no virt drivers which run the reshape routine.
> However, patches are up for review for libvirt [3] and xen [4]. There
> are also functional tests which exercise the ResourceTracker code with a
> faked out virt driver interface to test reshaping [5].
>
> 2. The FFU entry point will mimic the reshape routine that will happen
> on nova-compute service startup in the ResourceTracker [6].
>
> 3. The FFU script will need to run per-compute service rather than
> globally (or per cell) since it actually needs to call the virt driver's
> update_provider_tree() interface which might need to inspect the
> hardware (like for GPUs).
>
> Given there is already a model to follow from the ResourceTracker this
> should not be too hard, the work will likely mostly be writing tests.
>
> What do you get if you volunteer? The usual: fame, fortune, the respect
> of your peers, etc.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html
> [2] https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/reshape-provider-tree.html#offline-upgrade-script
> [3] https://review.openstack.org/#/c/599208/
> [4] https://review.openstack.org/#/c/521041/
> [5] https://github.com/openstack/nova/blob/a0eacbf7f/nova/tests/functional/test_servers.py#L1839
> [6] https://github.com/openstack/nova/blob/a0eacbf7f/nova/compute/resource_tracker.py#L917-L940
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dangtrinhnt at gmail.com Tue Oct 30 05:09:18 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Tue, 30 Oct 2018 14:09:18 +0900
Subject: [openstack-dev] [goal][upgrade-checkers][Searchlight] Upgrade check command framework initial code
Message-ID:

Hi team,

The initial code for the upgrade-checker framework has been submitted to Searchlight [1]. Please take some time to review.

[1] https://review.openstack.org/#/c/613789/

Thanks,

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dangtrinhnt at gmail.com Tue Oct 30 05:36:34 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Tue, 30 Oct 2018 14:36:34 +0900
Subject: [openstack-dev] [Searchlight][release] Searchlight will release Stein-1
Message-ID:

Hi team,

I'm doing a release for the Searchlight projects (searchlight, searchlight-ui, python-searchlightclient) [1]. Please help review it and make sure everything is OK.

[1] https://review.openstack.org/#/c/614066/

Finally \m/ :D

Bests,

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tony at bakeyournoodle.com Tue Oct 30 05:40:25 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 30 Oct 2018 16:40:25 +1100
Subject: [openstack-dev] [all] Naming the T release of OpenStack -- Poll open
Message-ID: <20181030054024.GC2343@thor.bakeyournoodle.com>

Hi folks,

It is time again to cast your vote for the naming of the T release. As with last time we'll use a public polling option over per-user private URLs for voting. This means everybody should proceed to use the following URL to cast their vote:

https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e

We've selected a public poll to ensure that the whole community, not just gerrit change owners, get a vote.
Also, the size of our community has grown such that we can overwhelm CIVS if using private URLs. A public poll can mean that users behind NAT, proxy servers or firewalls may receive a message saying that your vote has already been lodged; if this happens, please try another IP.

Because this is a public poll, results will currently be only viewable by myself until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running.

The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be posted shortly after.

[1] https://governance.openstack.org/tc/reference/release-naming.html

---

According to the Release Naming Process, this poll is to determine the community preferences for the name of the T release of OpenStack. It is possible that the top choice is not viable for legal reasons, so the second or later community preference could wind up being the name.

Release Name Criteria
---------------------

Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.

The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens.

Exact Geographic Region
-----------------------

The geographic region from where names for the T release will come is Colorado.

Proposed Names
--------------

* Tarryall
* Teakettle
* Teller
* Telluride
* Thomas : the Tank Engine
* Thornton
* Tiger
* Tincup
* Timnath
* Timber
* Tiny Town
* Torreys
* Trail
* Trinidad
* Treasure
* Troublesome
* Trussville
* Turret
* Tyrone

Proposed Names that do not meet the criteria (accepted by the TC)
-----------------------------------------------------------------

* Train🚂 : Many attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From gmann at ghanshyammann.com Tue Oct 30 09:04:35 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 30 Oct 2018 18:04:35 +0900
Subject: [openstack-dev] [tc][all] TC office hour is starting now on #openstack-tc
Message-ID: <166c437afad.cfaea82b213892.2928695558974026338@ghanshyammann.com>

Hi All,

The TC office hour has started on the #openstack-tc channel.
Feel free to reach out to us for anything you want to discuss, or for any input/feedback/help you need from the TC.

-gmann

From akhil.jain at india.nec.com Tue Oct 30 09:59:26 2018
From: akhil.jain at india.nec.com (AKHIL Jain)
Date: Tue, 30 Oct 2018 09:59:26 +0000
Subject: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services
Message-ID:

Hi Matt and Telemetry Team,

I was going through the remaining projects that still need the upgrade-checkers placeholder framework implemented, and I would like to clarify which projects to cover under the telemetry tab.

According to my understanding from the link below, multiple projects come under telemetry:
https://wiki.openstack.org/wiki/Telemetry#Managed

Aodh, being the alarming service, triggers alarms when collected data breaks the configured rules. Aodh also works as a standalone project using any backend (ceilometer, gnocchi, etc.), so there are expected changes between releases.

Ceilometer, being the data collection service (which supports customer billing, resource tracking, and alarming), polls data from other projects, so there are opportunities to perform upgrade checks there.

Panko, being the indexing service, provides the ability to store and query event data. Changes related to indexing objects can be checked while upgrading.

So, should we add an upgrade-check command to each project, or should there be a single upgrade-checks command that reports the upgrade status of each service?

Thanks,
Akhil

From rico.lin.guanyu at gmail.com Tue Oct 30 11:15:21 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Tue, 30 Oct 2018 19:15:21 +0800
Subject: [openstack-dev] [openstack-sigs][all] Berlin Forum for `expose SIGs and WGs`
Message-ID:

Hi all

To continue our discussion from Denver, we will have a forum [1] in Berlin on *Wednesday, November 14, 11:50am-12:30pm, CityCube Berlin - Level 3 - M-Räume 8*.

We will host the forum in an open discussion format and try to capture actions from the forum, so we can keep pushing on what people need. So if you have any feedback or ideas, please join us. I created an etherpad for this forum so we can collect information, get feedback, and record actions:

*https://etherpad.openstack.org/p/expose-sigs-and-wgs *

*For those who don't know what `expose SIGs and WGs` is*

There is some earlier discussion on the ML [2] and from a PTG session [3]. The basic concept is to give users/ops a single window for turning important scenarios/use cases or issues into traceable tasks in a single story/place, and to ask developers to take responsibility (by changing mission or governance policy) for co-working on those tasks. SIGs/WGs are eager to get feedback and use cases, and so are project teams (I won't speak for all projects/SIGs/WGs, but we would certainly like to collect more ideas). Project teams would also get a central place to develop for specific user requirements, or to provide documentation for more general OpenStack information.

So we would like to have more discussion on how we can reach that goal through concrete actions: how can we change TC, UC, project, SIG, and WG policy to bridge from users/ops to developers?

[1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22750/expose-sigs-and-wgs
[2] http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000453.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html

--
May The Force of OpenStack Be With You,
*Rico Lin*
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kchamart at redhat.com Tue Oct 30 11:24:49 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 30 Oct 2018 12:24:49 +0100
Subject: [openstack-dev] [CFP] FOSDEM 2019 IaaS and Virt DevRoom
Message-ID: <20181030112449.GN22057@paraplu>

Dear OpenStack community,

FOSDEM 2019 will feature a Virtualization & IaaS DevRoom again. Here is the call for proposals. Please check it out if you would like to submit a talk.

Regards,
Kashyap

-----------------------------------------------------------------------

We are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2019, to be hosted on February 2nd 2019.

This year will mark FOSDEM's 19th anniversary as one of the longest-running free and open source software developer events, attracting thousands of developers and users from all over the world. FOSDEM will be held once again in Brussels, Belgium, on February 2nd & 3rd, 2019.

This devroom is a collaborative effort, and is organized by dedicated folks from projects such as OpenStack, Xen Project, oVirt, QEMU, KVM, and Foreman. We would like to invite all those who are involved in these fields to submit your proposals by December 1st, 2018.

Important Dates
---------------

Submission deadline: 1 December 2018
Acceptance notifications: 14 December 2018
Final schedule announcement: 21 December 2018
Devroom: 2nd February 2019

About the Devroom
-----------------

The Virtualization & IaaS devroom will feature session topics such as open source hypervisors and virtual machine managers such as Xen Project, KVM, bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as KubeVirt, Apache CloudStack, OpenStack, oVirt, QEMU and OpenNebula.

This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with multiple hypervisors; hyperconverged deployments; and scaling across hundreds or thousands of servers.

Presentations in this devroom will be aimed at developers working on these platforms who are looking to collaborate and improve shared infrastructure or solve common problems. We seek topics that encourage dialog between projects and continued work post-FOSDEM.

Submit Your Proposal
--------------------

All submissions must be made via the Pentabarf event planning site[1]. If you have not used Pentabarf before, you will need to create an account. If you submitted proposals for FOSDEM in previous years, you can use your existing account.

After creating the account, select Create Event to start the submission process. Make sure to select Virtualization and IaaS devroom from the Track list. Please fill out all the required fields, and provide a meaningful abstract and description of your proposed session.

Submission Guidelines
---------------------

We expect more proposals than we can possibly accept, so it is vitally important that you submit your proposal on or before the deadline. Late submissions are unlikely to be considered.

All presentation slots are 30 minutes, with 20 minutes planned for presentations, and 10 minutes for Q&A.

All presentations will be recorded and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded.
For example:

"If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, ."

In the Submission notes field, please also confirm that if your talk is accepted, you will be able to attend FOSDEM and deliver your presentation. We will not consider proposals from prospective speakers who are unsure whether they will be able to secure funds for travel and lodging to attend FOSDEM. (Sadly, we are not able to offer travel funding for prospective speakers.)

Speaker Mentoring Program
-------------------------

As a part of the rising efforts to grow our communities and encourage a diverse and inclusive conference ecosystem, we're happy to announce that we'll be offering mentoring for new speakers. Our mentors can help you with tasks such as reviewing your abstract, reviewing your presentation outline or slides, or practicing your talk with you.

You may apply to the mentoring program as a newcomer speaker if you:

* Never presented before, or
* Presented only lightning talks, or
* Presented full-length talks at small meetups (<50 ppl)

Submission Guidelines
---------------------

Mentored presentations will have 25-minute slots, where 20 minutes will include the presentation and 5 minutes will be reserved for questions.

The number of newcomer session slots is limited, so we will probably not be able to accept all applications.

You must submit your talk and abstract to apply for the mentoring program; our mentors are volunteering their time and will happily provide feedback, but won't write your presentation for you!

If you are experiencing problems with Pentabarf, the proposal submission interface, or have other questions, you can email our devroom mailing list[2] and we will try to help you.

How to Apply
------------

In addition to agreeing to video recording and confirming that you can attend FOSDEM in case your session is accepted, please write "speaker mentoring program application" in the "Submission notes" field, and list any prior speaking experience or other relevant information for your application.

Call for Mentors
----------------

Interested in mentoring newcomer speakers? We'd love to have your help! Please email iaas-virt-devroom at lists.fosdem.org with a short speaker biography and any specific fields of expertise (for example, KVM, OpenStack, storage, etc.) so that we can match you with a newcomer speaker from a similar field. Estimated time investment can be as low as 5-10 hours in total, usually distributed weekly or bi-weekly.

Never mentored a newcomer speaker but interested to try? Email Brian Proffitt[3], the mentoring program coordinator, and he will be happy to answer your questions!

Code of Conduct
---------------

Following the release of the updated code of conduct for FOSDEM, we'd like to remind all speakers and attendees that all of the presentations and discussions in our devroom are held under the guidelines set in the CoC, and we expect attendees, speakers, and volunteers to follow the CoC at all times.

If you submit a proposal and it is accepted, you will be required to confirm that you accept the FOSDEM CoC. If you have any questions about the CoC or wish to have one of the devroom organizers review your presentation slides or any other content for CoC compliance, please email us and we will do our best to assist you.

Call for Volunteers
-------------------

We are also looking for volunteers to help run the devroom.
We need assistance watching time for the speakers, and helping with video for the devroom. Please contact Brian Proffitt for more information.

Questions?
----------

If you have any questions about this devroom, please send your questions to our devroom mailing list. You can also subscribe to the list to receive updates about important dates, session announcements, and to connect with other attendees.

See you all at FOSDEM!

[1] https://penta.fosdem.org/submission/FOSDEM19
[2] iaas-virt-devroom at lists.fosdem.org
[3] bkp at redhat.com

-----------------------------------------------------------------------

From doug at doughellmann.com Tue Oct 30 12:33:24 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 30 Oct 2018 08:33:24 -0400
Subject: Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services
Message-ID:

AKHIL Jain writes:

> [question quoted in full above; snipped]

Each of those services has its own configuration file and database, and the code is in separate repositories, so it seems like we would want a separate upgrade check command for each one.

Doug
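For anyone picking up this work, the placeholder framework the goal uses is the oslo.upgradecheck library, so each per-service command is mostly boilerplate. A minimal sketch (the project name and the check itself are illustrative placeholders; real checks inspect configuration and database state):

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck


    class Checks(upgradecheck.UpgradeCommands):
        """Checks run by, e.g., an 'aodh-status upgrade check' command."""

        def _placeholder_check(self):
            # Nothing to verify yet; real checks return WARNING or
            # FAILURE with details when they find a problem.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS,
                                       'Nothing to check yet')

        _upgrade_checks = (('placeholder', _placeholder_check),)


    def main():
        return upgradecheck.main(cfg.CONF, project='aodh',
                                 upgrade_command=Checks())


    if __name__ == '__main__':
        sys.exit(main())

The main() function is typically wired up as the service's '<project>-status' console script, and the command exits non-zero if any check reports a warning or failure.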
From dmendiza at redhat.com Tue Oct 30 13:23:47 2018
From: dmendiza at redhat.com (Douglas Mendizabal)
Date: Tue, 30 Oct 2018 08:23:47 -0500
Subject: [openstack-dev] [barbican] Adjust weekly meeting time for US DST
Message-ID:

Hi openstack-dev@,

During the weekly meeting today, the topic of moving the weekly meeting forward by an hour to adjust for US Daylight Saving Time ending was brought up. All contributors in attendance unanimously voted for the move. [1]

If you would like to participate in the meetings and didn't have a chance to attend today, or are unable to make the new proposed time of Tuesdays at 1300 UTC, please respond to this thread and we can try to find a time that works for everyone. Otherwise we'll be meeting at the new proposed time next week.

Thanks,
- Douglas Mendizábal

[1] http://eavesdrop.openstack.org/meetings/barbican/2018/barbican.2018-10-30-12.01.txt

From akhil.jain at india.nec.com Tue Oct 30 13:30:32 2018
From: akhil.jain at india.nec.com (AKHIL Jain)
Date: Tue, 30 Oct 2018 13:30:32 +0000
Subject: Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services
Message-ID:

Thanks Doug for the quick response. I will start working accordingly.

Akhil

________________________________________
From: Doug Hellmann
Sent: Tuesday, October 30, 2018 6:03:24 PM
To: AKHIL Jain; openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [goals][upgrade-checkers] [telemetry] Upgrade checks for telemetry services

[earlier messages in the thread quoted in full; snipped]

From ianyrchoi at gmail.com Tue Oct 30 14:10:42 2018
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Tue, 30 Oct 2018 23:10:42 +0900
Subject: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group
Message-ID: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>

Hello,

I got involved in organizing & mentoring Korean people for OpenStack upstream contribution over about the last two months, and would like to share the results with community members.

A total of nine mentees started to learn OpenStack, contributed, and finally survived as volunteers for
 1) developing an OpenStack mobile app for better mobile user interfaces and experiences
    (inspired by https://github.com/stackerz/app which worked on the Juno release), and
 2) translating OpenStack official project artifacts, including documents and the Container Whitepaper
    ( https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).

Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, Sungjin Kang, and Andrew Yongjoon Kong) all helped to organize a total of 8 offline meetups + one mini-hackathon and mentored the attendees.
The following is a brief summary:

 - The "OpenStack Controller" Android app is available on the Play Store:
   https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
   (GitHub: https://github.com/kosslab-kr/openstack-controller )

 - Most high-priority projects (although it is not during a string freeze period) and documents are 100% translated into Korean: Horizon, OpenStack-Helm, the I18n Guide, and the Container Whitepaper.

 - A total of 18,695 words were translated into Korean by four contributors (confirmed through the Zanata API: https://translate.openstack.org/rest/stats/user/[Zanata ID]/2018-08-16..2018-10-25 ):

+------------+---------------+-----------------+
| Zanata ID  | Name          | Number of words |
+------------+---------------+-----------------+
| ardentpark | Soonyeul Park | 12517           |
+------------+---------------+-----------------+
| bnitech    | Dongbim Im    | 693             |
+------------+---------------+-----------------+
| csucom     | Sungwook Choi | 4397            |
+------------+---------------+-----------------+
| jaeho93    | Jaeho Cho     | 1088            |
+------------+---------------+-----------------+

 - The projects translated into Korean are:

+-------------------------------------+-----------------+
| Project                             | Number of words |
+-------------------------------------+-----------------+
| api-site                            | 20              |
+-------------------------------------+-----------------+
| cinder                              | 405             |
+-------------------------------------+-----------------+
| designate-dashboard                 | 4               |
+-------------------------------------+-----------------+
| horizon                             | 3226            |
+-------------------------------------+-----------------+
| i18n                                | 434             |
+-------------------------------------+-----------------+
| ironic                              | 4               |
+-------------------------------------+-----------------+
| Leveraging Containers and OpenStack | 5480            |
+-------------------------------------+-----------------+
| neutron-lbaas-dashboard             | 5               |
+-------------------------------------+-----------------+
| openstack-helm                      | 8835            |
+-------------------------------------+-----------------+
| trove-dashboard                     | 89              |
+-------------------------------------+-----------------+
| zun-ui                              | 193             |
+-------------------------------------+-----------------+

I would really like to thank all the co-mentors and participants in such a big event promoting OpenStack contribution. The venue and food were supported by the Korea Open Source Software Development Center ( https://kosslab.kr/ ).

With many thanks,

/Ian

From doug at doughellmann.com Tue Oct 30 14:40:32 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 30 Oct 2018 10:40:32 -0400
Subject: [openstack-dev] [all] linter jobs testing packaging data
Message-ID:

Earlier today I learned that we have a few repositories with linter jobs failing (or at least reporting warnings) because they are running "python setup.py check" to test that the packaging meta-data is OK. This method of testing has been deprecated in favor of using the command "twine check", which requires a bit of extra setup but performs multiple checks on the built packages.
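For anyone who wants to run the same validation locally before pushing, the manual equivalent is roughly the following (assuming a normal setuptools-based source tree; "twine check" validates the built metadata, including that the long description will render on PyPI):

    pip install twine wheel
    python setup.py sdist bdist_wheel
    twine check dist/*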
Luckily, the test-release-openstack-python3 job already runs "twine check". Since it is part of the publish-to-pypi-python3 template, any python-based projects that are releasing using the new job template (which should be all official projects now) have test-release-openstack-python3 configured to run when any files related to packaging are modified.

Therefore, rather than updating the failing linter jobs to perform the steps necessary to run twine, teams should simply remove the check and allow the existing test job to perform that check. In addition to avoiding redundancy, this means we will be able to update the job in one place instead of having to touch every repo when twine inevitably changes in the future.

Sean is working on a set of patches to fix up some of the repos that have issues, so please approve those quickly when they come in.

Doug

From john at johngarbutt.com Tue Oct 30 14:49:21 2018
From: john at johngarbutt.com (John Garbutt)
Date: Tue, 30 Oct 2018 14:49:21 +0000
Subject: Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?
In-Reply-To: <42478f21-7547-5597-bcc6-c645c9531f59@gmail.com>
References: <8492889a-abdb-bf4e-1f2f-785368795e0c@gmail.com> <0c7d005e-3fdd-2f63-8e40-789c83d7c58b@gmail.com> <42478f21-7547-5597-bcc6-c645c9531f59@gmail.com>
Message-ID:

Hi,

Basically we should kill quota classes. They required out-of-tree stuff that was never implemented, AFAIK.

When I checked with Kevin about this, my memory says the idea was that an out-of-tree authorization plugin would populate context.quota_class with something like "i_have_big_credit_limit" or "i_have_prepaid_loads_limit", and if blank fall back to the default. I don't believe anyone ever used that system. It gives you groups of pre-defined quota limits, rather than per-project overrides.

Either way, it should die, and now it's keystone's problem. I subscribe to the idea that downstream operational scripting is the currently preferred solution.

Thanks,
johnthetubaguy

PS Sorry, I've been busy on SKA architecture the last month or so; slowly getting back up to speed.

On Fri, 26 Oct 2018 at 14:55, Jay Pipes wrote:
>
> On 10/25/2018 02:44 PM, melanie witt wrote:
> > On Thu, 25 Oct 2018 14:00:08 -0400, Jay Pipes wrote:
> >> On 10/25/2018 01:38 PM, Chris Friesen wrote:
> >>> On 10/24/2018 9:10 AM, Jay Pipes wrote:
> >>>> Nova's API has the ability to create "quota classes", which are
> >>>> basically limits for a set of resource types. There is something
> >>>> called the "default quota class" which corresponds to the limits in
> >>>> the CONF.quota section. Quota classes are basically templates of
> >>>> limits to be applied if the calling project doesn't have any stored
> >>>> project-specific limits.
> >>>>
> >>>> Has anyone ever created a quota class that is different from "default"?
> >>>
> >>> The Compute API specifically says:
> >>>
> >>> "Only ‘default’ quota class is valid and used to set the default quotas,
> >>> all other quota class would not be used anywhere."
> >>>
> >>> What this API does provide is the ability to set new default quotas for
> >>> *all* projects at once rather than individually specifying new defaults
> >>> for each project.
> >>
> >> It's a "defaults template", yes.
> >>
> >> The alternative is, you know, to just set the default values in
> >> CONF.quota, which is what I said above.
> >> Or, if you want project X to
> >> have different quota limits from those CONF-driven defaults, then set
> >> the quotas for the project to some different values via the
> >> os-quota-sets API (or better yet, just use Keystone's /limits API when
> >> we write the "limits driver" into Nova). The issue is that the
> >> os-quota-classes API is currently blocking *me* writing that "limits
> >> driver" in Nova because I don't want to port nova-specific functionality
> >> (like quota classes) to a limits driver when the Keystone /limits
> >> endpoint doesn't have that functionality and nobody I know of has ever
> >> used it.
> >
> > When you say it's blocking you from writing the "limits driver" in nova,
> > are you saying you're picking up John's unified limits spec [1]? It's
> > been in -W mode and hasn't been updated in 4 weeks. In the spec,
> > migration from quota classes => registered limits and deprecation of the
> > existing quota API and quota classes is described.
> >
> > Cheers,
> > -melanie
> >
> > [1] https://review.openstack.org/602201
>
> Actually, I wasn't familiar with John's spec. I'll review it today.
>
> I was referring to my own attempts to clean up the quota system and
> remove all the limits-related methods from the QuotaDriver class...
>
> Best,
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
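For context, the "CONF.quota" route Jay mentions is just the [quota] section of nova.conf; something like the following (values are illustrative) raises or lowers the defaults for every project without touching quota classes at all:

    [quota]
    instances = 20
    cores = 40
    ram = 102400

Per-project exceptions are then handled through the os-quota-sets API (or, eventually, keystone's unified limits).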
From eumel at arcor.de Tue Oct 30 15:27:21 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Tue, 30 Oct 2018 16:27:21 +0100
Subject: Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group
In-Reply-To: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
Message-ID: <4b3bddc23f79663c0d8ea4188f13c3b7@arcor.de>

Hi Ian,

thanks for sharing. What a great user story about community work and contributing to OpenStack. I think you did a great job as mentor and organizer. I want to keep you with us.

Welcome, new contributors, and many thanks for the translation and programming. Hopefully you feel comfortable and have enough energy and fun to keep working on OpenStack.

kind regards
Frank

Am 2018-10-30 15:10, schrieb Ian Y. Choi:
> [original message quoted in full; snipped]

From openstack at nemebean.com Tue Oct 30 15:30:32 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 30 Oct 2018 10:30:32 -0500
Subject: [openstack-dev] [oslo] Project Update Etherpad
Message-ID:

Good news!
The Foundation found space for us to do a project update session, so now we need to figure out what to talk about. I've started an etherpad at https://etherpad.openstack.org/p/oslo-project-update-stein to list the possible topics. Please add or expand on the ones I've pre-populated if there's something you want to have covered. The current list is a five-minute, off-the-top-of-my-head thing, so don't assume it's complete. :-)

-Ben

From doug at doughellmann.com Tue Oct 30 15:41:41 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 30 Oct 2018 11:41:41 -0400
Subject: Re: [openstack-dev] Proposal for a process to keep up with Python releases
In-Reply-To: <4678c020-ba2d-4086-b1b5-f57286ae5a6d@redhat.com>
References: <4678c020-ba2d-4086-b1b5-f57286ae5a6d@redhat.com>
Message-ID:

Zane Bitter writes:

> On 19/10/18 11:17 AM, Zane Bitter wrote:
>> I'd like to propose that we handle this by setting up a unit test
>> template in openstack-zuul-jobs for each release. So for Stein we'd have
>> openstack-python3-stein-jobs. This template would contain:
>>
>> * A voting gate job for the highest minor version of py3 we want to
>> support in that release.
>> * A voting gate job for the lowest minor version of py3 we want to
>> support in that release.
>> * A periodic job for any interim minor releases.
>> * (Starting late in the cycle) a non-voting check job for the highest
>> minor version of py3 we want to support in the *next* release (if
>> different), on the master branch only.
>>
>> So, for example, (and this is still under active debate) for Stein we
>> might have gating jobs for py35 and py37, with a periodic job for py36.
>> The T jobs might only have voting py36 and py37 jobs, but late in the T
>> cycle we might add a non-voting py38 job on master so that people who
>> haven't switched to the U template yet can see what, if anything,
>> they'll need to fix.
>
> Just to make it easier to visualise, here is an example for how the Zuul
> config _might_ look now if we had adopted this proposal during Rocky:
>
> https://review.openstack.org/611947
>
> And instead of having a project-wide goal in Stein to add
> `openstack-python36-jobs` to the list that currently includes
> `openstack-python35-jobs` in each project's Zuul config[1], we'd have
> had a goal to change `openstack-python3-rocky-jobs` to
> `openstack-python3-stein-jobs` in each project's Zuul config.

If we set up the template before we branch stein for T, we could generate a patch as part of the branching process.

Doug

> - ZB
>
> [1] https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
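As a rough illustration of what Zane is proposing (the job names here are the existing openstack-tox-* unit test jobs, and the exact pipelines and Python versions were still under debate at the time), such a per-release template might look something like:

    - project-template:
        name: openstack-python3-stein-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        periodic:
          jobs:
            - openstack-tox-py36

Each project would then swap a single template name per cycle instead of editing individual job lists.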
From cboylan at sapwetik.org Tue Oct 30 16:03:37 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 30 Oct 2018 09:03:37 -0700
Subject: [openstack-dev] Zuul Queue backlogs and resource usage
Message-ID: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com>

Hello everyone,

A little while back I sent an email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this is still true and we should keep pushing to improve our testing to avoid gate resets.

Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul, which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1].

As with optimizing software, we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward.

Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time.

Also related to the long queue backlogs is this proposal [3] to change how Zuul prioritizes resource allocations to try to be more fair.

[0] https://review.openstack.org/#/c/613674/
[1] http://paste.openstack.org/show/733644/
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
[3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html

If you find any of this interesting and would like to help, feel free to reach out to myself or the infra team.

Thank you,
Clark

From johnsomor at gmail.com Tue Oct 30 16:39:16 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 30 Oct 2018 09:39:16 -0700
Subject: Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group
In-Reply-To: <4b3bddc23f79663c0d8ea4188f13c3b7@arcor.de>
References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> <4b3bddc23f79663c0d8ea4188f13c3b7@arcor.de>
Message-ID:

This is awesome Ian. Thanks for all of the work on this!

Michael

On Tue, Oct 30, 2018 at 8:28 AM Frank Kloeker wrote:
> [earlier messages quoted in full; snipped]
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zigo at debian.org Tue Oct 30 16:49:08 2018
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 30 Oct 2018 17:49:08 +0100
Subject: Re: [openstack-dev] Proposal for a process to keep up with Python releases
In-Reply-To:
References: <5cf5bee4-d780-57fb-6099-fa018e3ab097@debian.org> <33f9616c-4d81-e239-a1f2-8b1696b8ddd7@redhat.com> <094360b0-ddc4-b9a7-9950-77ea42cad310@debian.org>
Message-ID: <18408301-250f-b6bc-28ea-32f5d69c8b88@debian.org>

On 10/26/18 7:11 PM, Zane Bitter wrote:
> On 26/10/18 5:09 AM, Thomas Goirand wrote:
>> On 10/22/18 9:12 PM, Zane Bitter wrote:
>>> On 22/10/18 10:33 AM, Thomas Goirand wrote:
>>>> This can only happen if we have supporting distribution packages for it.
>>>> IMO, this is a call for using Debian Testing or even Sid in the gate.
>>>
>>> It depends on which versions we choose to support, but if necessary yes.
>>
>> If what we want is to have early detection of problems with latest
>> versions of Python, then there's not so many alternatives.
>
> I think a lot depends on the relative timing of the Python release, the
> various distro release cycles, and the OpenStack release cycle. We
> established that for 3.7 that's the only way we could have done it in
> Rocky; for 3.8, who knows.

No need for a crystal ball... Python 3.8 is scheduled to be released in summer 2019. As Buster is to be frozen early that year, Buster should be out before Python 3.8. So there's a good chance that Python 3.8 will be in Debian Sid/Bullseye before anywhere else again, probably just after the release of the OpenStack T release, meaning OpenStack most likely will be broken again in Debian Sid.

> I agree that bugs with future versions of Python are always worth fixing
> ASAP, whether or not we are able to test them in the gate.

:)

From doug at doughellmann.com Tue Oct 30 17:14:05 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 30 Oct 2018 13:14:05 -0400
Subject: Re: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group
In-Reply-To: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
Message-ID:

"Ian Y. Choi" writes:

> [original message quoted in full; snipped]
This is an excellent success story, Ian, thank you for sharing it and for leading the effort.

Doug

From openstack at nemebean.com Tue Oct 30 17:34:22 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 30 Oct 2018 12:34:22 -0500
Subject: Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage
In-Reply-To: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com>
References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com>
Message-ID: <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com>

Tagging with tripleo since my suggestion below is specific to that project.

On 10/30/18 11:03 AM, Clark Boylan wrote:
> [introductory portion of Clark's message snipped]
>
> Looking at the data you might notice that Tripleo is using many more node
> resources than our other projects. They are aware of this and have a plan [2]
> to reduce their resource consumption. We'll likely be using this report
> generator to check progress of this plan over time.

I know at one point we had discussed reducing the concurrency of the tripleo gate to help with this. Since tripleo is still using >50% of the resources, it seems like maybe we should revisit that, at least for the short term until the more major changes can be made? Looking through the merge history for tripleo projects I don't see a lot of cases (any, in fact) where more than a dozen patches made it through anyway*, so I suspect it wouldn't have a significant impact on gate throughput, but it would free up quite a few nodes for other uses.
> > Thank you, > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Oct 30 17:38:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 30 Oct 2018 13:38:25 -0400 Subject: [openstack-dev] [tc] agenda for TC meeting 1 Nov 1400 UTC Message-ID: TC members, The TC will be meeting on 1 Nov at 1400 UTC in #openstack-tc to discuss some of our ongoing initiatives. Here is the agenda for this week. * meeting procedures * discussion of topics for joint leadership meeting at Summit in Berlin * completing TC liaison assignments ** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams * documenting chair responsibilities ** https://etherpad.openstack.org/p/tc-chair-responsibilities * reviewing the health-check check list ** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Health_check_list * deciding next steps on technical vision statement ** https://review.openstack.org/592205 * deciding next steps on python 3 and distro versions for PTI ** https://review.openstack.org/610708 Add optional python3.7 unit test enablement to python3-first ** https://review.openstack.org/611010 Make Python 3 testing requirement less specific ** https://review.openstack.org/611080 Explicitly declare Stein supported runtimes ** https://review.openstack.org/613145 Resolution on keeping up with Python 3 releases * reviews needing attention ** https://review.openstack.org/613268 Indicate relmgt style for each deliverable ** https://review.openstack.org/613856 Remove Dragonflow from the official projects list If you have suggestions for topics for the next meeting (6 Dec), please add them to the wiki at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions Doug From mihalis68 at gmail.com Tue Oct 30 17:58:13 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 30 Oct 2018 13:58:13 -0400 Subject: [openstack-dev] Ops Meetups team meeting 2018-10-30 Message-ID: Brief meeting today on #openstack-operators, minutes below. If you are attending Berlin, please start contributing to the Forum by selecting sessions of interest and then adding to the etherpads (see https://wiki.openstack.org/wiki/Forum/Berlin2018). I hear there's going to be a really great one about ceph, for example. Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.txt Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.log.html Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Oct 30 17:42:10 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 30 Oct 2018 11:42:10 -0600 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: <1077343a-b708-4fa2-0f44-2575363419e9 at nemebean.com> References: <1540915417.449870.1559798696.1CFB3E75 at webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9 at nemebean.com> Message-ID: On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: > > Tagging with tripleo since my suggestion below is specific to that project.
> > On 10/30/18 11:03 AM, Clark Boylan wrote: > > Hello everyone, > > > > A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. > > > > As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. > > > > Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time. > > I know at one point we had discussed reducing the concurrency of the > tripleo gate to help with this. Since tripleo is still using >50% of the > resources it seems like maybe we should revisit that, at least for the > short-term until the more major changes can be made? Looking through the > merge history for tripleo projects I don't see a lot of cases (any, in > fact) where more than a dozen patches made it through anyway*, so I > suspect it wouldn't have a significant impact on gate throughput, but it > would free up quite a few nodes for other uses. > > It's the failures in gate and resets. At this point I think it would be a good idea to turn down the concurrency of the tripleo queue in the gate if possible. As of late it's been timeouts but we've been unable to track down why it's timing out specifically. I personally have a feeling it's the container download times since we do not have a local registry available and are only able to leverage the mirrors for some level of caching. Unfortunately we don't get the best information about this out of docker (or the mirrors) and it's really hard to determine what exactly makes things run a bit slower. I've asked about the status of moving the scenarios off of multinode to standalone which would halve the number of systems being run for these jobs. It's currently next on the list of things to tackle after we get a single fedora28 job up and running. Thanks, -Alex > *: I have no actual stats to back that up, I'm just looking through the > IRC backlog for merge bot messages. If such stats do exist somewhere we > should look at them instead. :-) > > > > > Also related to the long queue backlogs is this proposal [3] to change how Zuul prioritizes resource allocations to try to be more fair. > > > > [0] https://review.openstack.org/#/c/613674/ > > [1] http://paste.openstack.org/show/733644/ > > [2] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html > > [3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html > > > > If you find any of this interesting and would like to help feel free to reach out to myself or the infra team.
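(One way to put rough numbers on the download-time theory above, given that docker itself does not report much: wrap each pull in a timer from within the job. A minimal sketch; the image names are placeholders for whatever a job actually pulls:)

import subprocess
import time

# Placeholder image names; a real job would substitute its own list.
IMAGES = [
    "docker.io/tripleomaster/centos-binary-rsyslog:current-tripleo",
    "docker.io/tripleomaster/centos-binary-haproxy:current-tripleo",
]

total = 0.0
for image in IMAGES:
    start = time.monotonic()
    subprocess.run(["docker", "pull", image], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    elapsed = time.monotonic() - start
    total += elapsed
    print("%8.1fs  %s" % (elapsed, image))
print("%8.1fs  total spent downloading images" % total)

(Logged per job, this would at least show whether the timeouts correlate with slow pulls.)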
> > Thank you, > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mnaser at vexxhost.com Tue Oct 30 18:18:29 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 30 Oct 2018 19:18:29 +0100 Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling In-Reply-To: References: Message-ID: Hi there: We spoke about this today in the OpenStack Ansible meeting, and we've come up with the following steps: 1) Create a role for placement which will be called `os_placement` located in `openstack/openstack-ansible-os_placement` 2) Integrate that role with the OSA master and stop using the built-in placement service 3) Update the playbooks to handle upgrades and verify using our periodic upgrade jobs For #1, Guilherme from the OSA team will be taking care of creating the role initially; I'm hoping that maybe we can get it done sometime this week. I think it'll probably take another week to integrate it into the main repo. The difficult task really comes with the upgrade jobs. I really hope that we can get some help on this, as it probably already puts a bit of a load on Guilherme, so is anyone up for looking into that part when the first 2 are completed? :) Thanks, Mohammed On Thu, Oct 25, 2018 at 7:34 PM Matt Riedemann wrote: > > Hello OSA/TripleO people, > > A plan/checklist was put in place at the Stein PTG for extracting > placement from nova [1]. The first item in that list is done in grenade > [2], which is the devstack-based upgrade project in the integrated gate. > That should serve as a template for the necessary upgrade steps in > deployment projects. The related devstack change for extracted placement > on the master branch (Stein) is [3]. Note that change has some dependencies. > > The second point in the plan from the PTG was getting extracted > placement upgrade tooling support in a deployment project, notably > TripleO (and/or OpenStackAnsible). > > Given the grenade change is done and passing tests, TripleO/OSA should > be able to start coding up and testing an upgrade step when going from > Rocky to Stein. My question is who can we name as an owner in either > project to start this work? Because we really need to be starting this > as soon as possible to flush out any issues before they are too late to > correct in Stein. > > So if we have volunteers or better yet potential patches that I'm just > not aware of, please speak up here so we know who to contact about > status updates and if there are any questions with the upgrade.
> > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html > [2] https://review.openstack.org/#/c/604454/ > [3] https://review.openstack.org/#/c/600162/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From openstack-dev at dseven.org Tue Oct 30 18:25:02 2018 From: openstack-dev at dseven.org (iain macdonnell) Date: Tue, 30 Oct 2018 11:25:02 -0700 Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <20181030054024.GC2343 at thor.bakeyournoodle.com> References: <20181030054024.GC2343 at thor.bakeyournoodle.com> Message-ID: I must be losing it. On what planet is "Tiny Town" a single word, and "Troublesome" not more than 10 characters? ~iain On Mon, Oct 29, 2018 at 10:41 PM Tony Breeds wrote: > > Hi folks, > > It is time again to cast your vote for the naming of the T Release. > As with last time we'll use a public polling option over per user private URLs > for voting. This means, everybody should proceed to use the following URL to > cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e > > We've selected a public poll to ensure that the whole community, not just gerrit > change owners, get a vote. Also the size of our community has grown such that we > can overwhelm CIVS if using private urls. A public poll can mean that users > behind NAT, proxy servers or firewalls may receive a message saying > that your vote has already been lodged, if this happens please try > another IP. > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be > posted shortly after. > > [1] https://governance.openstack.org/tc/reference/release-naming.html > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the T release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > --------------------- > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process.
> > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > ----------------------- > > The Geographic Region from where names for the T release will come is Colorado > > Proposed Names > -------------- > > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas : the Tank Engine > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria (accepted by the TC) > ----------------------------------------------------------------- > > * Train : Many Attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Tue Oct 30 18:25:27 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 30 Oct 2018 11:25:27 -0700 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75 at webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9 at nemebean.com> Message-ID: <1540923927.492007.1559968608.2F968E3E at webmail.messagingengine.com> On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: > > > > Tagging with tripleo since my suggestion below is specific to that project. > > > > On 10/30/18 11:03 AM, Clark Boylan wrote: > > > Hello everyone, > > > > > > A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. > > > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. > > > > > > As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. > > > > > > Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption.
We'll likely be using this report generator to check progress of this plan over time. > > > > I know at one point we had discussed reducing the concurrency of the > > tripleo gate to help with this. Since tripleo is still using >50% of the > > resources it seems like maybe we should revisit that, at least for the > > short-term until the more major changes can be made? Looking through the > > merge history for tripleo projects I don't see a lot of cases (any, in > > fact) where more than a dozen patches made it through anyway*, so I > > suspect it wouldn't have a significant impact on gate throughput, but it > > would free up quite a few nodes for other uses. > > > > It's the failures in gate and resets. At this point I think it would > be a good idea to turn down the concurrency of the tripleo queue in > the gate if possible. As of late it's been timeouts but we've been > unable to track down why it's timing out specifically. I personally > have a feeling it's the container download times since we do not have > a local registry available and are only able to leverage the mirrors > for some levels of caching. Unfortunately we don't get the best > information about this out of docker (or the mirrors) and it's really > hard to determine what exactly makes things run a bit slower. We actually tried this not too long ago https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b but decided to revert it because it didn't decrease the check queue backlog significantly. We were still running at several hours behind most of the time. If we want to set up better monitoring and measuring and try it again we can do that. But we probably want to measure queue sizes with and without the change like that to better understand if it helps. As for container image download times can we quantify that via docker logs? Basically sum up the amount of time spent by a job downloading images so that we can see what the impact is but also measure if changes improve that? As for other ideas improving things seems like many of the images that tripleo use are quite large. I recall seeing a > 600MB image just for rsyslog. Wouldn't it be advantageous for both the gate and tripleo in the real world to trim the size of those images (which should improve download times). In any case quantifying the size of the downloads and trimming those if possible is likely also worthwhile. Clark From emilien at redhat.com Tue Oct 30 18:29:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 30 Oct 2018 14:29:12 -0400 Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling In-Reply-To: References: Message-ID: On the TripleO side, it sounds like Lee Yarwood is taking the lead with a first commit in puppet-placement: https://review.openstack.org/#/c/604182/ Lee, can you confirm that you and your team are working on it for Stein cycle? On Thu, Oct 25, 2018 at 1:34 PM Matt Riedemann wrote: > Hello OSA/TripleO people, > > A plan/checklist was put in place at the Stein PTG for extracting > placement from nova [1]. The first item in that list is done in grenade > [2], which is the devstack-based upgrade project in the integrated gate. > That should serve as a template for the necessary upgrade steps in > deployment projects. The related devstack change for extracted placement > on the master branch (Stein) is [3]. Note that change has some > dependencies. 
> > The second point in the plan from the PTG was getting extracted > placement upgrade tooling support in a deployment project, notably > TripleO (and/or OpenStackAnsible). > > Given the grenade change is done and passing tests, TripleO/OSA should > be able to start coding up and testing an upgrade step when going from > Rocky to Stein. My question is who can we name as an owner in either > project to start this work? Because we really need to be starting this > as soon as possible to flush out any issues before they are too late to > correct in Stein. > > So if we have volunteers or better yet potential patches that I'm just > not aware of, please speak up here so we know who to contact about > status updates and if there are any questions with the upgrade. > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html > [2] https://review.openstack.org/#/c/604454/ > [3] https://review.openstack.org/#/c/600162/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Oct 30 18:41:11 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 30 Oct 2018 19:41:11 +0100 Subject: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group In-Reply-To: References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> Message-ID: <6164E40D-80B7-4EDD-A848-9765DBE0E0F9@vexxhost.com> In echoing the words of everyone, it takes a tremendous amount of effort and patience to lead this effort. THANK YOU! Sent from my iPhone > On Oct 30, 2018, at 6:14 PM, Doug Hellmann wrote: > > "Ian Y. Choi" writes: > >> Hello, >> >> I got involved organizing & mentoring Korean people for OpenStack >> upstream contribution for about last two months, >> and would like to share with community members. >> >> Total nine mentees had started to learn OpenStack, contributed, and >> finally survived as volunteers for >> 1) developing OpenStack mobile app for better mobile user interfaces >> and experiences >> (inspired from https://github.com/stackerz/app which worked on Juno >> release), and >> 2) translating OpenStack official project artifacts including documents, >> and Container Whitepaper ( >> https://www.openstack.org/containers/leveraging-containers-and-openstack/ ). >> >> Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, >> Sungjin Kang, and Andrew Yongjoon Kong) >> all helped to organize total 8 offline meetups + one mini-hackathon and >> mentored to attendees. >> >> The followings are brief summary: >> - "OpenStack Controller" Android app is available on Play Store >> : >> https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller >> (GitHub: https://github.com/kosslab-kr/openstack-controller ) >> >> - Most high-priority projects (although it is not during string freeze >> period) and documents are >> 100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, >> and Container Whitepaper. 
>> >> - Total 18,695 words were translated into Korean by four contributors >> (confirmed through Zanata API: >> https://translate.openstack.org/rest/stats/user/[Zanata >> ID]/2018-08-16..2018-10-25 ): >> >> +------------+---------------+-----------------+ >> | Zanata ID | Name | Number of words | >> +------------+---------------+-----------------+ >> | ardentpark | Soonyeul Park | 12517 | >> +------------+---------------+-----------------+ >> | bnitech | Dongbim Im | 693 | >> +------------+---------------+-----------------+ >> | csucom | Sungwook Choi | 4397 | >> +------------+---------------+-----------------+ >> | jaeho93 | Jaeho Cho | 1088 | >> +------------+---------------+-----------------+ >> >> - The list of projects translated into Korean are described as: >> >> +-------------------------------------+-----------------+ >> | Project | Number of words | >> +-------------------------------------+-----------------+ >> | api-site | 20 | >> +-------------------------------------+-----------------+ >> | cinder | 405 | >> +-------------------------------------+-----------------+ >> | designate-dashboard | 4 | >> +-------------------------------------+-----------------+ >> | horizon | 3226 | >> +-------------------------------------+-----------------+ >> | i18n | 434 | >> +-------------------------------------+-----------------+ >> | ironic | 4 | >> +-------------------------------------+-----------------+ >> | Leveraging Containers and OpenStack | 5480 | >> +-------------------------------------+-----------------+ >> | neutron-lbaas-dashboard | 5 | >> +-------------------------------------+-----------------+ >> | openstack-helm | 8835 | >> +-------------------------------------+-----------------+ >> | trove-dashboard | 89 | >> +-------------------------------------+-----------------+ >> | zun-ui | 193 | >> +-------------------------------------+-----------------+ >> >> I would like to really appreciate all co-mentors and participants on >> such a big event for promoting OpenStack contribution. >> The venue and food were supported by Korea Open Source Software >> Development Center ( https://kosslab.kr/ ). >> >> >> With many thanks, >> >> /Ian >> >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community > > This is an excellent success story, Ian, thank you for sharing it and > for leading the effort. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Tue Oct 30 18:41:17 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 30 Oct 2018 18:41:17 +0000 (GMT) Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling In-Reply-To: References: Message-ID: On Tue, 30 Oct 2018, Mohammed Naser wrote: > We spoke about this today in the OpenStack Ansible meeting, we've come > up with the following steps: Great! Thank you, Guilherme, and Lee very much. 
> 1) Create a role for placement which will be called `os_placement` > located in `openstack/openstack-ansible-os_placement` > 2) Integrate that role with the OSA master and stop using the built-in > placement service > 3) Update the playbooks to handle upgrades and verify using our > periodic upgrade jobs Makes sense. > The difficult task really comes in the upgrade jobs, I really hope > that we can get some help on this as this probably puts a bit of a > load already on Guilherme, so anyone up to look into that part when > the first 2 are completed? :) The upgrade-nova script in https://review.openstack.org/#/c/604454/ has been written to make it pretty clear what each of the steps mean. With luck those steps can translate to both the ansible and tripleo environments. Please feel free to add me to any of the reviews and come calling in #openstack-placement with questions if there are any. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mnaser at vexxhost.com Tue Oct 30 18:59:56 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 30 Oct 2018 19:59:56 +0100 Subject: [openstack-dev] [Searchlight][release] Searchlight will release Stein-1 In-Reply-To: References: Message-ID: Yay! Congratulations on the first Stein release, well done with your work in looking after Searchlight so far. On Tue, Oct 30, 2018 at 6:37 AM Trinh Nguyen wrote: > > Hi team, > > I'm doing a release for Searchlight projects (searchlight, searchlight-ui, python-searchlightclient) [1]. Please help to review and make sure everything is ok. > > [1] https://review.openstack.org/#/c/614066/ > > Finally \m/ :D > > Bests, > > -- > Trinh Nguyen > www.edlab.xyz > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From openstack at nemebean.com Tue Oct 30 20:01:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 30 Oct 2018 15:01:19 -0500 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: <2aafb949-e76b-f1a9-19bf-890fc7d99179@nemebean.com> On 10/30/18 1:25 PM, Clark Boylan wrote: > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: >> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: >>> >>> Tagging with tripleo since my suggestion below is specific to that project. >>> >>> On 10/30/18 11:03 AM, Clark Boylan wrote: >>>> Hello everyone, >>>> >>>> A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. >>>> >>>> Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. 
Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. >>>> >>>> As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. >>>> >>>> Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time. >>> >>> I know at one point we had discussed reducing the concurrency of the >>> tripleo gate to help with this. Since tripleo is still using >50% of the >>> resources it seems like maybe we should revisit that, at least for the >>> short-term until the more major changes can be made? Looking through the >>> merge history for tripleo projects I don't see a lot of cases (any, in >>> fact) where more than a dozen patches made it through anyway*, so I >>> suspect it wouldn't have a significant impact on gate throughput, but it >>> would free up quite a few nodes for other uses. >>> >> >> It's the failures in gate and resets. At this point I think it would >> be a good idea to turn down the concurrency of the tripleo queue in >> the gate if possible. As of late it's been timeouts but we've been >> unable to track down why it's timing out specifically. I personally >> have a feeling it's the container download times since we do not have >> a local registry available and are only able to leverage the mirrors >> for some levels of caching. Unfortunately we don't get the best >> information about this out of docker (or the mirrors) and it's really >> hard to determine what exactly makes things run a bit slower. > > We actually tried this not too long ago https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b but decided to revert it because it didn't decrease the check queue backlog significantly. We were still running at several hours behind most of the time. I'm surprised to hear that. Counting the tripleo jobs in the gate at positions 11-20 right now, I see around 84 nodes tied up in long-running jobs and another 32 for shorter unit test jobs. The latter probably don't have much impact, but the former is a non-trivial amount. It may not erase the entire 2300+ job queue that we have right now, but it seems like it should help. > > If we want to set up better monitoring and measuring and try it again we can do that. But we probably want to measure queue sizes with and without the change like that to better understand if it helps. This seems like good information to start capturing, otherwise we are kind of just guessing. Is there something in infra already that we could use or would it need to be new tooling? > > As for container image download times can we quantify that via docker logs? Basically sum up the amount of time spent by a job downloading images so that we can see what the impact is but also measure if changes improve that? As for other ideas improving things seems like many of the images that tripleo use are quite large. I recall seeing a > 600MB image just for rsyslog. 
Wouldn't it be advantageous for both the gate and tripleo in the real world to trim the size of those images (which should improve download times). In any case quantifying the size of the downloads and trimming those if possible is likely also worthwhile. > > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Tue Oct 30 20:55:43 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 30 Oct 2018 16:55:43 -0400 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade In-Reply-To: References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> Message-ID: A bit of an update here: - We merged the patch in openstack/paunch that stop the Docker container if we try to start a Podman container. - We switched the undercloud upgrade job to test upgrades from Docker to Podman (for now containers are stopped in Docker and then started in Podman). - We are now looking how and where to remove the Docker containers once the upgrade finished. For that work, I started with the Undercloud and patched tripleoclient to run the post_upgrade_tasks which to me is a good candidate to run docker rm. Please look: - tripleoclient / run post_upgrade_tasks when upgrading standalone/undercloud: https://review.openstack.org/614349 - THT: prototype on how we would remove the Docker containers: https://review.openstack.org/611092 Note: for now we assume that Docker is still available on the host after the upgrade as we are testing things under centos7. I'm aware that this assumption can change in the future but we'll probably re-iterate. What I need from the upgrade team is feedback on this workflow, and see if we can re-use these bits originally tested on Undercloud / Standalone, for the Overcloud as well. Thanks for the feedback, On Fri, Oct 19, 2018 at 8:00 AM Emilien Macchi wrote: > On Fri, Oct 19, 2018 at 4:24 AM Giulio Fidente > wrote: > >> 1) create the podman systemd unit >> 2) delete the docker container >> > > We finally went with "stop the docker container" > > 3) start the podman container >> > > and 4) delete the docker container later in THT upgrade_tasks. > > And yes +1 to do the same in ceph-ansible if possible. > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Oct 30 21:00:19 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 30 Oct 2018 15:00:19 -0600 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan wrote: > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: > > > > > > Tagging with tripleo since my suggestion below is specific to that project. > > > > > > On 10/30/18 11:03 AM, Clark Boylan wrote: > > > > Hello everyone, > > > > > > > > A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. 
All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. > > > > > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. > > > > > > > > As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. > > > > > > > > Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time. > > > > > > I know at one point we had discussed reducing the concurrency of the > > > tripleo gate to help with this. Since tripleo is still using >50% of the > > > resources it seems like maybe we should revisit that, at least for the > > > short-term until the more major changes can be made? Looking through the > > > merge history for tripleo projects I don't see a lot of cases (any, in > > > fact) where more than a dozen patches made it through anyway*, so I > > > suspect it wouldn't have a significant impact on gate throughput, but it > > > would free up quite a few nodes for other uses. > > > > > > > It's the failures in gate and resets. At this point I think it would > > be a good idea to turn down the concurrency of the tripleo queue in > > the gate if possible. As of late it's been timeouts but we've been > > unable to track down why it's timing out specifically. I personally > > have a feeling it's the container download times since we do not have > > a local registry available and are only able to leverage the mirrors > > for some levels of caching. Unfortunately we don't get the best > > information about this out of docker (or the mirrors) and it's really > > hard to determine what exactly makes things run a bit slower. > > We actually tried this not too long ago https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b but decided to revert it because it didn't decrease the check queue backlog significantly. We were still running at several hours behind most of the time. > > If we want to set up better monitoring and measuring and try it again we can do that. But we probably want to measure queue sizes with and without the change like that to better understand if it helps. > > As for container image download times can we quantify that via docker logs? Basically sum up the amount of time spent by a job downloading images so that we can see what the impact is but also measure if changes improve that? As for other ideas improving things seems like many of the images that tripleo use are quite large. I recall seeing a > 600MB image just for rsyslog. Wouldn't it be advantageous for both the gate and tripleo in the real world to trim the size of those images (which should improve download times). 
In any case quantifying the size of the downloads and trimming those if possible is likely also worthwhile. > So it's not that simple, as we don't just download all the images in a distinct task, and there isn't any information provided around size/speed AFAIK. Additionally we aren't doing anything special with the images (it's mostly kolla built containers with a handful of tweaks) so that's just the size of the containers. I am currently working on reducing any tripleo specific dependencies (ie removal of instack-undercloud, etc) in hopes that we'll shave off some of the dependencies but it seems that there's a larger (bloat) issue around containers in general. I have no idea why the rsyslog container would be 600M, but yeah, that does seem excessive. > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Tue Oct 30 21:16:07 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 30 Oct 2018 14:16:07 -0700 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: <2aafb949-e76b-f1a9-19bf-890fc7d99179 at nemebean.com> References: <1540915417.449870.1559798696.1CFB3E75 at webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9 at nemebean.com> <1540923927.492007.1559968608.2F968E3E at webmail.messagingengine.com> <2aafb949-e76b-f1a9-19bf-890fc7d99179 at nemebean.com> Message-ID: <1540934167.1342260.1560169400.784F8BDF at webmail.messagingengine.com> On Tue, Oct 30, 2018, at 1:01 PM, Ben Nemec wrote: > > > On 10/30/18 1:25 PM, Clark Boylan wrote: > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > >> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: > >>> > >>> Tagging with tripleo since my suggestion below is specific to that project. > >>> > >>> On 10/30/18 11:03 AM, Clark Boylan wrote: > >>>> Hello everyone, > >>>> > >>>> A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. > >>>> > >>>> Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. > >>>> > >>>> As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. > >>>> > >>>> Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time. > >>> > >>> I know at one point we had discussed reducing the concurrency of the > >>> tripleo gate to help with this.
Since tripleo is still using >50% of the > >>> resources it seems like maybe we should revisit that, at least for the > >>> short-term until the more major changes can be made? Looking through the > >>> merge history for tripleo projects I don't see a lot of cases (any, in > >>> fact) where more than a dozen patches made it through anyway*, so I > >>> suspect it wouldn't have a significant impact on gate throughput, but it > >>> would free up quite a few nodes for other uses. > >>> > >> > >> It's the failures in gate and resets. At this point I think it would > >> be a good idea to turn down the concurrency of the tripleo queue in > >> the gate if possible. As of late it's been timeouts but we've been > >> unable to track down why it's timing out specifically. I personally > >> have a feeling it's the container download times since we do not have > >> a local registry available and are only able to leverage the mirrors > >> for some levels of caching. Unfortunately we don't get the best > >> information about this out of docker (or the mirrors) and it's really > >> hard to determine what exactly makes things run a bit slower. > > > > We actually tried this not too long ago https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b but decided to revert it because it didn't decrease the check queue backlog significantly. We were still running at several hours behind most of the time. > > I'm surprised to hear that. Counting the tripleo jobs in the gate at > positions 11-20 right now, I see around 84 nodes tied up in long-running > jobs and another 32 for shorter unit test jobs. The latter probably > don't have much impact, but the former is a non-trivial amount. It may > not erase the entire 2300+ job queue that we have right now, but it > seems like it should help. > > > > > If we want to set up better monitoring and measuring and try it again we can do that. But we probably want to measure queue sizes with and without the change like that to better understand if it helps. > > This seems like good information to start capturing, otherwise we are > kind of just guessing. Is there something in infra already that we could > use or would it need to be new tooling? Digging around in graphite, we currently track the mean resident time in pipelines. This is probably a reasonable metric to use for this specific case. Looking at the check queue, [3] shows the mean time enqueued in check during the rough period when the window floor was 10, and [4] shows it since then. The 26th and 27th are bigger peaks than previously seen (possibly due to losing inap temporarily) but otherwise a queue backlog of ~200 minutes was "normal" in both time periods. [3] http://graphite.openstack.org/render/?from=20181015&until=20181019&target=scale(stats.timers.zuul.tenant.openstack.pipeline.check.resident_time.mean,%200.00001666666) [4] http://graphite.openstack.org/render/?from=20181019&until=20181030&target=scale(stats.timers.zuul.tenant.openstack.pipeline.check.resident_time.mean,%200.00001666666) You should be able to change check to eg gate or other queue names and poke around more if you like. Note the scale factor scales from milliseconds to minutes.
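If poking at the render URLs by hand gets tedious, the same series can be fetched as JSON and scaled client side. A small sketch, assuming Graphite's standard format=json render output (a list of targets, each carrying [value, timestamp] datapoints):

import requests

# Same metric as [3]/[4]; instead of scale() the conversion from
# milliseconds to minutes is done here (1 minute = 60000 ms).
URL = "http://graphite.openstack.org/render"
PARAMS = {
    "target": "stats.timers.zuul.tenant.openstack.pipeline.check.resident_time.mean",
    "from": "20181019",
    "until": "20181030",
    "format": "json",
}
for series in requests.get(URL, params=PARAMS).json():
    minutes = [v / 60000.0 for v, _ in series["datapoints"] if v is not None]
    if minutes:
        print("%s: %d points, peak ~%.0f min, average ~%.0f min"
              % (series["target"], len(minutes), max(minutes),
                 sum(minutes) / len(minutes)))

Swapping check for gate in the target works the same way as with the render URLs above.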
Clark From sean.mcginnis at gmx.com Tue Oct 30 21:19:36 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 30 Oct 2018 16:19:36 -0500 Subject: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group In-Reply-To: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> Message-ID: <20181030211936.GA32756@sm-workstation> On Tue, Oct 30, 2018 at 11:10:42PM +0900, Ian Y. Choi wrote: > Hello, > > I got involved organizing & mentoring Korean people for OpenStack upstream > contribution for about last two months, > and would like to share with community members. > Very cool! Thanks for organizing this Ian. And thank you to all that contributed. Some really fun and useful stuff! Sean > Total nine mentees had started to learn OpenStack, contributed, and finally > survived as volunteers for >  1) developing OpenStack mobile app for better mobile user interfaces and > experiences >     (inspired from https://github.com/stackerz/app which worked on Juno > release), and >  2) translating OpenStack official project artifacts including documents, >      and Container Whitepaper ( > https://www.openstack.org/containers/leveraging-containers-and-openstack/ ). > > Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, > Sungjin Kang, and Andrew Yongjoon Kong) > all helped to organize total 8 offline meetups + one mini-hackathon and > mentored to attendees. > > The followings are brief summary: >  - "OpenStack Controller" Android app is available on Play Store >   : https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller >    (GitHub: https://github.com/kosslab-kr/openstack-controller ) > >  - Most high-priority projects (although it is not during string freeze > period) and documents are >    100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, and > Container Whitepaper. 
> >  - Total 18,695 words were translated into Korean by four contributors >   (confirmed through Zanata API: > https://translate.openstack.org/rest/stats/user/[Zanata > ID]/2018-08-16..2018-10-25 ): > > +------------+---------------+-----------------+ > | Zanata ID  | Name          | Number of words | > +------------+---------------+-----------------+ > | ardentpark | Soonyeul Park | 12517           | > +------------+---------------+-----------------+ > | bnitech    | Dongbim Im    | 693             | > +------------+---------------+-----------------+ > | csucom     | Sungwook Choi | 4397            | > +------------+---------------+-----------------+ > | jaeho93    | Jaeho Cho     | 1088            | > +------------+---------------+-----------------+ > >  - The list of projects translated into Korean are described as: > > +-------------------------------------+-----------------+ > | Project                             | Number of words | > +-------------------------------------+-----------------+ > | api-site                            | 20              | > +-------------------------------------+-----------------+ > | cinder                              | 405             | > +-------------------------------------+-----------------+ > | designate-dashboard                 | 4               | > +-------------------------------------+-----------------+ > | horizon                             | 3226            | > +-------------------------------------+-----------------+ > | i18n                                | 434             | > +-------------------------------------+-----------------+ > | ironic                              | 4               | > +-------------------------------------+-----------------+ > | Leveraging Containers and OpenStack | 5480            | > +-------------------------------------+-----------------+ > | neutron-lbaas-dashboard             | 5               | > +-------------------------------------+-----------------+ > | openstack-helm                      | 8835            | > +-------------------------------------+-----------------+ > | trove-dashboard                     | 89              | > +-------------------------------------+-----------------+ > | zun-ui                              | 193             | > +-------------------------------------+-----------------+ > > I would like to really appreciate all co-mentors and participants on such a > big event for promoting OpenStack contribution. > The venue and food were supported by Korea Open Source Software Development > Center ( https://kosslab.kr/ ). > > > With many thanks, > > /Ian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Tue Oct 30 21:39:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 30 Oct 2018 16:39:00 -0500 Subject: [openstack-dev] Zuul Queue backlogs and resource usage In-Reply-To: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> Message-ID: On 10/30/2018 11:03 AM, Clark Boylan wrote: > If you find any of this interesting and would like to help feel free to reach out to myself or the infra team. I find this interesting and thanks for providing the update to the mailing list. That's mostly what I wanted to say. 
FWIW I've still got https://review.openstack.org/#/c/606981/ and the related changes to drop the nova-multiattach job and enable the multiattach volume tests in the integrated gate, but am hung up on some test failures in the multi-node tempest job as a result of that (the nova-multiattach job is single-node). There must be something weird that tickles those tests in a multi-node configuration and I just haven't dug into it yet, but maybe one of our intrepid contributors can lend a hand and debug it. -- Thanks, Matt From juliaashleykreger at gmail.com Wed Oct 31 00:36:09 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 30 Oct 2018 17:36:09 -0700 Subject: [openstack-dev] Ironic integration CI jobs Message-ID: With the discussion of CI jobs and the fact that I have been finding myself checking job status several times a day so early in the cycle, I think it is time for ironic to revisit many of our CI jobs. The bottom line is that ironic is very resource-intensive to test. A lot of that is because of the underlying way we enroll/manage nodes and then execute the integration scenarios emulating bare metal. I think we can improve that with some ansible. In the meantime I created a quick chart[1] to try and make sense out of overall integration coverage and I think it makes sense to remove three of the jobs. ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This job is essentially the same as our grenade multinode job, the only difference being grenade. ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job essentially just duplicates the functionality already covered in other jobs, including the grenade job. ironic-tempest-dsvm-bfv - This presently non-voting job validates that the iPXE mode of the 'pxe' boot interface supports boot from volume. It was superseded by ironic-tempest-dsvm-ipxe-bfv which focuses on the use of the 'ipxe' boot interface. The underlying code is all the same deep down in all of the helper methods. I'll go ahead and put this up as a topic for our weekly meeting next week so we can discuss. Thanks, -Julia [1]: https://ethercalc.openstack.org/ces0z3xjb1ir -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Oct 31 00:48:59 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 31 Oct 2018 09:48:59 +0900 Subject: [openstack-dev] [Searchlight][release] Searchlight will release Stein-1 In-Reply-To: References: Message-ID: Thanks :) On Wed, Oct 31, 2018 at 4:00 AM Mohammed Naser wrote: > Yay! > > Congratulations on the first Stein release, well done with your work > in looking after Searchlight so far. > On Tue, Oct 30, 2018 at 6:37 AM Trinh Nguyen > wrote: > > > > Hi team, > > > > I'm doing a release for Searchlight projects (searchlight, > searchlight-ui, python-searchlightclient) [1]. Please help to review and > make sure everything is ok. > > > > [1] https://review.openstack.org/#/c/614066/ > > > > Finally \m/ :D > > > > Bests, > > > > -- > > Trinh Nguyen > > www.edlab.xyz > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W.
http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Oct 31 00:52:21 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 31 Oct 2018 09:52:21 +0900 Subject: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group In-Reply-To: <20181030211936.GA32756@sm-workstation> References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> <20181030211936.GA32756@sm-workstation> Message-ID: Awesome work, Ian!!!! \m/\m/\m/ On Wed, Oct 31, 2018 at 6:19 AM Sean McGinnis wrote: > On Tue, Oct 30, 2018 at 11:10:42PM +0900, Ian Y. Choi wrote: > > Hello, > > > > I got involved organizing & mentoring Korean people for OpenStack > upstream > > contribution for about last two months, > > and would like to share with community members. > > > > Very cool! Thanks for organizing this Ian. And thank you to all that > contributed. Some really fun and useful stuff! > > Sean > > > > Total nine mentees had started to learn OpenStack, contributed, and > finally > > survived as volunteers for > > 1) developing OpenStack mobile app for better mobile user interfaces and > > experiences > > (inspired from https://github.com/stackerz/app which worked on Juno > > release), and > > 2) translating OpenStack official project artifacts including documents, > > and Container Whitepaper ( > > > https://www.openstack.org/containers/leveraging-containers-and-openstack/ > ). > > > > Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, > > Sungjin Kang, and Andrew Yongjoon Kong) > > all helped to organize total 8 offline meetups + one mini-hackathon and > > mentored to attendees. > > > > The followings are brief summary: > > - "OpenStack Controller" Android app is available on Play Store > > : > https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller > > (GitHub: https://github.com/kosslab-kr/openstack-controller ) > > > > - Most high-priority projects (although it is not during string freeze > > period) and documents are > > 100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, and > > Container Whitepaper. 
-- 
*Trinh Nguyen*
*www.edlab.xyz *

From amy at demarco.com  Wed Oct 31 00:59:14 2018
From: amy at demarco.com (Amy Marrich)
Date: Tue, 30 Oct 2018 19:59:14 -0500
Subject: [openstack-dev] [openstack-community] Sharing upstream contribution mentoring result with Korea user group
In-Reply-To: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com>
Message-ID: 

Ian,

Great job by yourself, your mentees and, last but not least, your mentors!
Way to go!!!

Amy (spotz)

On Tue, Oct 30, 2018 at 9:10 AM, Ian Y. Choi wrote:

> Hello,
>
> I got involved organizing & mentoring Korean people for OpenStack
> upstream contribution for about the last two months,
> and would like to share with community members.
>
> [...]
>
> With many thanks,
>
> /Ian
>
> _______________________________________________
> Community mailing list
> Community at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community

From tony at bakeyournoodle.com  Wed Oct 31 01:01:31 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 31 Oct 2018 12:01:31 +1100
Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open
In-Reply-To: 
References: <20181030054024.GC2343@thor.bakeyournoodle.com>
Message-ID: <20181031010130.GE2343@thor.bakeyournoodle.com>

On Tue, Oct 30, 2018 at 11:25:02AM -0700, iain macdonnell wrote:
> I must be losing it. On what planet is "Tiny Town" a single word, and
> "Troublesome" not more than 10 characters?

Sorry for the mistake. Should either of these names win the popular vote,
clearly they would not be viable.

Yours Tony.

From dangtrinhnt at gmail.com  Wed Oct 31 01:03:37 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Wed, 31 Oct 2018 10:03:37 +0900
Subject: [openstack-dev] [Freezer][release] A reminder to release Freezer at Stein-1
Message-ID: 

Hi Geng and team,

This is just a reminder that we are in the probation period for keeping
Freezer as an official project (the deadline is Stein-2). So, we need to
release Freezer at Stein-1 this week (actually, it was due last week).
Even though it's not required anymore [1], we need to do this to evaluate
our effort to revive Freezer as we agreed.

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html

Bests,

-- 
*Trinh Nguyen*
*www.edlab.xyz *

From gmann at ghanshyammann.com  Wed Oct 31 01:10:22 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 31 Oct 2018 10:10:22 +0900
Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc
Message-ID: <166c7abe39b.e329087c242103.438436736870380641@ghanshyammann.com>

Hi All,

The TC office hour has started on the #openstack-tc channel. Feel free to
reach out to us with anything you want to discuss, or for any
input/feedback/help from the TC.

-gmann

From liliueecg at gmail.com  Wed Oct 31 03:37:59 2018
From: liliueecg at gmail.com (Li Liu)
Date: Tue, 30 Oct 2018 23:37:59 -0400
Subject: [openstack-dev] [cyborg] [weekly-meeting]
Message-ID: 

The weekly meeting will be held tomorrow at the usual time: 10AM EST /
10PM Beijing time.

Planned Agenda:

1. Status updates on patches:
https://review.openstack.org/#/q/status:open%20project:openstack/cyborg
https://review.openstack.org/#/q/project:openstack/cyborg-specs

2. Berlin Summit planning
Just opened an etherpad for tracking summit-related stuff:
https://etherpad.openstack.org/p/cyborg-berlin-summit-2018-plans

-- 
Thank you

Regards

Li

From wjstk16 at gmail.com  Wed Oct 31 08:58:11 2018
From: wjstk16 at gmail.com (Won)
Date: Wed, 31 Oct 2018 17:58:11 +0900
Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.
In-Reply-To: 
References: 
Message-ID: 

Hi,

> This is strange. I would expect your original definition to work as well,
> since the alarm key in Vitrage is defined by a combination of the alert
> name and the instance. We will check it again.
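The alarm identity described in the quoted reply — a combination of the
alert name and the instance — can be illustrated with a minimal Python
sketch. The function and payload shapes below are hypothetical and only
mirror that description; this is not Vitrage's actual implementation:

    def alarm_key(alert):
        """Build an identity for a Prometheus alert from its labels."""
        labels = alert.get('labels', {})
        return (labels.get('alertname'), labels.get('instance'))

    # Two InstanceDown alerts fired by different cadvisor instances:
    a1 = {'labels': {'alertname': 'InstanceDown', 'instance': '192.168.12.154:8080'}}
    a2 = {'labels': {'alertname': 'InstanceDown', 'instance': '192.168.12.166:8080'}}

    assert alarm_key(a1) != alarm_key(a2)  # distinct instances -> distinct alarms
    # Keying on labels.get('alertname') alone would collapse both alerts into
    # a single alarm, which matches the behaviour reported in this thread.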
> BTW, we solved a different bug related to Prometheus alarms not being
> cleared [1]. Could it be related?

Using the original definition, no matter how different the instances are,
the alarm names are recognized as the same alarm in Vitrage.
And I tried to install the Rocky version and the master version on a new
server and retest, but the problem was not solved. The latest bugfix seems
irrelevant.

> Does the wrong timestamp appear if you run the 'vitrage alarm list' cli
> command? please try running 'vitrage alarm list --debug' and send me the
> output.

I have attached 'vitrage-alarm-list.txt'.

> Please send me vitrage-collector.log and vitrage-graph.log from the time
> that the problematic vm was created and deleted. Please also create and
> delete a vm on your 'ubuntu' server, so I can check the differences in
> the log.

I have attached the 'vitrage_log_on_compute1.zip' and
'vitrage_log_on_ubuntu.zip' files.
When creating a vm on compute1, a vitrage-collector log occurred, but no
log occurred when it was removed.

Br,
Won

On Tue, Oct 30, 2018 at 1:28 AM, Ifat Afek wrote:

> Hi,
>
> On Fri, Oct 26, 2018 at 10:34 AM Won wrote:
>
>> I solved the problem of the Prometheus alarm not updating.
>> Alarms with the same Prometheus alarm name are recognized as the same
>> alarm in vitrage.
>>
>> ------- alert.rules.yml
>> groups:
>> - name: alert.rules
>>   rules:
>>   - alert: InstanceDown
>>     expr: up == 0
>>     for: 60s
>>     labels:
>>       severity: warning
>>     annotations:
>>       description: '{{ $labels.instance }} of job {{ $labels.job }} has
>>         been down for more than 30 seconds.'
>>       summary: Instance {{ $labels.instance }} down
>> ------
>>
>> This is the contents of the alert.rules.yml file before I modified it.
>> This is a yml file that generates an alarm when cadvisor stops
>> (instance down). An alarm is triggered depending on which instance is
>> down, but all the alarms have the same name, 'instance down'. Vitrage
>> recognizes all of these alarms as the same alarm. Thus, until all
>> 'instance down' alarms were cleared, the 'instance down' alarm was
>> recognized as unresolved and the alarm was not extinguished.
>
> This is strange. I would expect your original definition to work as well,
> since the alarm key in Vitrage is defined by a combination of the alert
> name and the instance. We will check it again.
> BTW, we solved a different bug related to Prometheus alarms not being
> cleared [1]. Could it be related?
>
>>> Can you please show me where you saw the 2001 timestamp? I didn't find
>>> it in the log.
>>
>> [image: image.png]
>> The time stamp is recorded well in the logs (vitrage-graph, collector,
>> etc.), but in vitrage-dashboard it is marked 2001-01-01.
>> However, it seems that the time stamp is recognized well internally,
>> because the alarm can be resolved and is recorded well in the log.
>
> Does the wrong timestamp appear if you run the 'vitrage alarm list' cli
> command? please try running 'vitrage alarm list --debug' and send me the
> output.
>
>> [image: image.png]
>> Host name ubuntu is my main server. I installed OpenStack all-in-one on
>> this server, and I installed a compute node with host name compute1.
>> When I create a new vm in nova (compute1), it immediately appears in
>> the entity graph. But it does not disappear from the entity graph when
>> I delete the vm. No matter how long I wait, it doesn't disappear.
>> After I execute the 'vitrage-purge-data' command and reboot
>> OpenStack (execute the reboot command on the OpenStack server, host
>> name ubuntu), it disappears. Only executing 'vitrage-purge-data' does
>> not work. It needs a reboot to disappear.
>> When I create a new vm in nova (ubuntu) there is no problem.
>
> Please send me vitrage-collector.log and vitrage-graph.log from the time
> that the problematic vm was created and deleted. Please also create and
> delete a vm on your 'ubuntu' server, so I can check the differences in
> the log.
>
>> I implemented a web service with a microservice architecture and
>> applied RCA to it. The attached picture shows the structure of the web
>> service I have implemented. I wonder what data I would receive and what
>> I could do when I link Vitrage with Kubernetes.
>> As far as I know, the Vitrage graph does not present information about
>> containers or pods inside the vm. If that is correct, I would like to
>> make the pod-level information appear on the entity graph.
>> I followed the steps in
>> https://docs.openstack.org/vitrage/latest/contributor/k8s_datasource.html
>> and attached the vitrage.conf file and the kubeconfig file. The
>> contents of the kubeconfig file are copied from the contents of the
>> admin.conf file on the master node.
>> I want to check that my settings are right and connected, but I don't
>> know how. It would be very much appreciated if you let me know how.
>
> Unfortunately, Vitrage does not hold pods and containers information at
> the moment. We discussed the option of adding it in the Stein release,
> but I'm not sure we will get to do it.
>
> Br,
> Ifat
>
> [1] https://review.openstack.org/#/c/611258/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From wjstk16 at gmail.com  Wed Oct 31 08:59:32 2018
From: wjstk16 at gmail.com (Won)
Date: Wed, 31 Oct 2018 17:59:32 +0900
Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.
In-Reply-To: 
References: 
Message-ID: 

On Wed, Oct 31, 2018 at 5:58 PM, Won wrote:

> Hi,
> [...]
u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:18 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:18 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 
'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:18 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:18 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 
u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:18 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 
14:33:18 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:18 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': 
False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:18.570 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:22 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:22.253 18905 DEBUG futurist.periodics [-] Submitting periodic callback 'vitrage.entity_graph.scheduler.get_changes_periodic' _process_scheduled /usr/local/lib/python2.7/dist-packages/futurist/periodics.py:639 Oct 10 14:33:22 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:22.253 18905 INFO vitrage.entity_graph.datasource_rpc [-] get_changes starting static Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.127 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.127 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.128 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.129 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'name': 'openstack.cluster', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 
05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': 'OpenStack Cluster', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'name': u'ubuntu', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'ubuntu', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'name': u'Apigateway', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'update_timestamp': '2018-10-10 05:32:49.639158+ Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', 'vitrage_is_deleted': False, 'name': u'GetList', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'is_real_vitrage_id': True}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'update_timestamp': u'0001-01-01T00:00:00Z', u'vitrage_category': u'ALARM', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_is_deleted': False, u'severity': u'page', u'name': u'InstanceDown', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'name': u'Kube-Master', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df7dde', 
'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': Oct 10 14:33:23 ubuntu vitrage-graph[18905]: True}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'vitrage_is_deleted': False, 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'nova', 'is_real_vitrage_id': True, 'name': Oct 10 14:33:23 ubuntu vitrage-graph[18905]: u'nova'}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'name': u'HighCpuUsage', 'severity': u'warning', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': 
u'9443a10af02945eca0d0c8e777b19dfb', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'is_real_vitrage_id': True, 'name': u'public'}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'name': u'Login', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'name': u'EditProduct', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder Oct 10 14:33:23 ubuntu vitrage-graph[18905]: ': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 
'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'warning', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_i Oct 10 14:33:23 ubuntu vitrage-graph[18905]: s_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f395 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 7', 
'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c Oct 10 14:33:23 ubuntu vitrage-graph[18905]: -4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 
'791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relati Oct 10 14:33:23 ubuntu vitrage-graph[18905]: onship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.129 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), 
('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), 
('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': 
'791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:23 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 
'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:23 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:23 ubuntu vitrage-graph[18905]: trage_resource_type': 
'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:23 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', 
'1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:23 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), 
('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.133 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.675 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.676 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.677 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.679 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'name': 'openstack.cluster', 'vitrage_category': 'RESOURCE', 
'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': 'OpenStack Cluster', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'name': u'ubuntu', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'ubuntu', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'name': u'Apigateway', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'update_timestamp': '2018-10-10 05:32:49.639158+ Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', 'vitrage_is_deleted': False, 'name': u'GetList', 
'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'update_timestamp': u'0001-01-01T00:00:00Z', u'vitrage_category': u'ALARM', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_is_deleted': False, u'severity': u'page', u'name': u'InstanceDown', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': 
u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'name': u'Kube-Master', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df7dde', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': Oct 10 14:33:23 ubuntu vitrage-graph[18905]: True}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'vitrage_is_deleted': False, 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'nova', 'is_real_vitrage_id': True, 'name': Oct 10 14:33:23 ubuntu vitrage-graph[18905]: u'nova'}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'name': u'HighCpuUsage', 'severity': u'warning', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 
'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'is_real_vitrage_id': True, 'name': u'public'}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'name': u'Login', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'name': u'EditProduct', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder Oct 10 14:33:23 ubuntu vitrage-graph[18905]: ': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'warning', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_i Oct 10 14:33:23 ubuntu vitrage-graph[18905]: s_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', 
'1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f395 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 7', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c Oct 10 14:33:23 ubuntu vitrage-graph[18905]: -4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), 
('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), [... four similar 'on'/'attached' edge tuples snipped ...] ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.680 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [... 28 vertex dicts snipped: the 'OpenStack Cluster', the 'nova' zone, the 'ubuntu' host, seven nova.instance vertices ('SignUp', 'Apigateway', 'GetList', 'Kube-Master', 'Login', 'EditProduct', 'Search'), seven neutron.port vertices, the 'public' neutron.network, and ten ALARM vertices (prometheus 'InstanceDown'/'HighCpuUsage' alarms plus one zabbix alarm; the resolved 'HighCpuUsage' alarm and the zabbix alarm carry 'vitrage_is_deleted': True) ...], edges [... snipped: 'contains' edges cluster -> zone -> host -> instances and network -> ports, 'attached' edges port -> instance, and 'on' edges alarm -> instance ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.685 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.263 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.263 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.265 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
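For anyone decoding the two create_predicate lines above: vitrage/graph/query.py appears to compile a query into a callable that is evaluated against each vertex's property dict. A minimal sketch of what the two logged expressions amount to -- build_predicate and tenant_pred are illustrative names, not Vitrage's actual implementation:

def build_predicate(query):
    """Compile an AND-of-equalities query dict into an item -> bool callable."""
    def predicate(item):
        return all(item.get(key) == value for key, value in query.items())
    return predicate

# First predicate in the log: active (non-deleted) alarms only.
alarms_pred = build_predicate({'vitrage_category': 'ALARM',
                               'vitrage_is_deleted': False})

# Second predicate in the log: real, non-deleted vertices that either
# belong to the requesting tenant or carry no project_id at all
# (which is why the ALARM vertices pass it too).
def tenant_pred(item, project_id='9443a10af02945eca0d0c8e777b19dfb'):
    return (item.get('vitrage_is_placeholder') is False
            and item.get('vitrage_is_deleted') is False
            and item.get('project_id') in (project_id, None))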
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.267 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [... snipped: the same entity graph after the tenant predicate above is applied; the two deleted vertices (the resolved 'HighCpuUsage' alarm and the inactive zabbix alarm) are gone, and the remaining 26 vertex dicts are unchanged ...], edges [... snipped: the same 'contains'/'attached'/'on' edge tuples, minus the two deleted 'on' edges that pointed at those alarms ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
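The difference between the "real graph" and "find graph" dumps is exactly that filtering step. A rough sketch of the idea behind create_graph_from_matching_vertices, assuming a plain networkx graph whose node attributes are the vertex dicts above (the real networkx_algorithm.py handles more than this):

import networkx as nx

def graph_from_matching_vertices(real_graph, predicate):
    # Keep only the vertices whose property dict satisfies the predicate.
    matching = [n for n, data in real_graph.nodes(data=True) if predicate(data)]
    sub = real_graph.subgraph(matching).copy()
    # Drop edges flagged vitrage_is_deleted, like the 'on' edges of the
    # two deleted alarms in the dumps above.
    dead = [(u, v) for u, v, data in sub.edges(data=True)
            if data.get('vitrage_is_deleted')]
    sub.remove_edges_from(dead)
    return sub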
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.272 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [... snipped: a repeat of the 28 vertex dicts from the 14:33:23.680 real-graph dump above; the excerpt breaks off before the edge list ...]
edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:28 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 
'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.281 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.801 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.802 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.803 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.806 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': 
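For anyone trying to follow these logs: the create_predicate lines show Vitrage compiling a property query into a callable that is then evaluated against each vertex's attribute dictionary. A rough standalone equivalent over plain NetworkX -- my own sketch, with made-up names, not the actual Vitrage implementation in vitrage/graph/query.py -- would look like this:

import networkx as nx

def make_predicate(**expected):
    # Mirrors a logged predicate such as
    # ((item.get('vitrage_category') == 'ALARM') and
    #  (item.get('vitrage_is_deleted') == False))
    def predicate(item):
        return all(item.get(key) == value for key, value in expected.items())
    return predicate

g = nx.Graph()
g.add_node('alarm-1', vitrage_category='ALARM', vitrage_is_deleted=False)
g.add_node('host-1', vitrage_category='RESOURCE', vitrage_is_deleted=False)

is_active_alarm = make_predicate(vitrage_category='ALARM',
                                 vitrage_is_deleted=False)
print([n for n, data in g.nodes(data=True) if is_active_alarm(data)])
# -> ['alarm-1']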
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.806 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [RESOURCE vertices: openstack.cluster 'OpenStack Cluster', nova.zone 'nova', nova.host 'ubuntu', seven nova.instance vertices ('SignUp', 'Apigateway', 'GetList', 'Login', 'EditProduct', 'Search', 'Kube-Master'), neutron.network 'public' and seven neutron.port vertices; ALARM vertices: six prometheus 'InstanceDown' alarms (severity page) on 'Kube-Master', one prometheus 'InstanceDown' (warning) on 'Apigateway' and one prometheus 'HighCpuUsage' (warning) on 'SignUp' -- all with 'vitrage_is_deleted': False and 'vitrage_is_placeholder': False; full dictionaries snipped] edges ['contains'/'attached'/'on' relationship dictionaries between the vertices above, all with 'vitrage_is_deleted': False; snipped] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
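Note the pairing visible in these dumps: the 'find graph' dump (networkx_algorithm.py:153) contains only the vertices that satisfied the predicate, while the 'real graph' dump that follows (networkx_algorithm.py:156) also carries vertices flagged 'vitrage_is_deleted': True. Assuming plain NetworkX semantics, the matching step amounts to something like the following sketch (function name is mine, not Vitrage's exact code):

def graph_from_matching_vertices(graph, predicate):
    # Keep only vertices whose data satisfies the predicate; subgraph()
    # then retains exactly those edges whose both endpoints survived.
    matching = [n for n, data in graph.nodes(data=True) if predicate(data)]
    return graph.subgraph(matching).copy()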
'2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', 
u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 
'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:28 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:28 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 
'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:28 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
[... remainder of the 14:33:28 "match query, real graph" vertex/edge dump elided; the same ~25 entities recur in each of the dumps below: openstack.cluster, nova.zone 'nova', nova.host 'ubuntu', the nova.instance VMs SignUp / Apigateway / GetList / Login / Search / EditProduct / Kube-Master, neutron.network 'public' with its ports, and the prometheus 'InstanceDown' / 'HighCpuUsage' and zabbix 'signup: STATUS' alarms, linked by 'contains' / 'attached' / 'on' edges ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.822 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.519 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.520 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.521 18960 DEBUG vitrage.graph.query [-] create_predicate::(((item.get('vitrage_is_deleted')== False) and (item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_category')== 'RESOURCE') and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) or ((item.get('vitrage_is_deleted')== False) and (item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_category')== 'ALARM') and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None)))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.525 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: [vertex/edge dump elided; this is the tenant-scoped subset of the graph above, without the deleted/resolved alarms] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
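
For context on these traces: the create_predicate lines are the query
filters that Vitrage compiles from the API request and then evaluates
against the attribute dict of every vertex in its NetworkX entity graph.
A minimal sketch of the idea in plain NetworkX -- assumed names, not the
actual vitrage.graph.query code; the node attributes just mirror the
vitrage_* fields in the dumps:

    import networkx as nx

    def create_predicate(category, project_id=None):
        # Roughly what the logged expressions do: keep non-deleted
        # vertices of one category, optionally scoped to a tenant
        # (project_id equal to the tenant's, or unset).
        def predicate(item):
            return (item.get('vitrage_category') == category
                    and item.get('vitrage_is_deleted') is False
                    and (project_id is None
                         or item.get('project_id') in (project_id, None)))
        return predicate

    g = nx.Graph()
    g.add_node('894cb4d8', vitrage_category='ALARM',
               vitrage_is_deleted=False, name='InstanceDown')
    g.add_node('bc991ce9', vitrage_category='RESOURCE',
               vitrage_is_deleted=False, name='Apigateway',
               project_id='9443a10af02945eca0d0c8e777b19dfb')
    g.add_edge('894cb4d8', 'bc991ce9', relationship_type='on')

    is_alarm = create_predicate('ALARM')
    print([n for n, attrs in g.nodes(data=True) if is_alarm(attrs)])
    # -> ['894cb4d8']
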
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.544 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: [vertex/edge dump elided; essentially identical to the 14:33:28 "real graph" dump above, additionally listing the resolved prometheus 'HighCpuUsage' and deleted zabbix 'signup: STATUS' alarms with vitrage_is_deleted: True] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.558 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.562 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.562 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.563 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.565 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: [vertex/edge dump elided; the pasted log is truncated in the middle of this dump]
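
The only difference between the "real graph" and "find graph" dumps
above is the subgraph step: the full graph is filtered with the compiled
predicate and an induced subgraph over the matching vertices is handed
back to the topology API, which is why the deleted/resolved alarms show
up only in the "real graph" dumps. Something along these lines -- again
a sketch with assumed names, not the actual
create_graph_from_matching_vertices implementation:

    import networkx as nx

    def graph_from_matching_vertices(graph, predicate):
        # Induced subgraph over the vertices the predicate accepts;
        # copy() detaches it from networkx's read-only subgraph view.
        matching = [n for n, attrs in graph.nodes(data=True)
                    if predicate(attrs)]
        return graph.subgraph(matching).copy()

    # e.g. the tenant-scoped filter logged at 14:33:29.563:
    tenant = '9443a10af02945eca0d0c8e777b19dfb'

    def tenant_filter(item):
        return (item.get('vitrage_is_placeholder') is False
                and item.get('vitrage_is_deleted') is False
                and item.get('project_id') in (tenant, None))

    g = nx.Graph()
    g.add_node('791f71cb', vitrage_category='RESOURCE',
               vitrage_is_deleted=False, vitrage_is_placeholder=False,
               project_id=tenant, name='Kube-Master')
    g.add_node('6467e1b5', vitrage_category='ALARM',
               vitrage_is_deleted=True, vitrage_is_placeholder=False,
               name='HighCpuUsage')
    g.add_edge('791f71cb', '6467e1b5', relationship_type='on')

    sub = graph_from_matching_vertices(g, tenant_filter)
    print(list(sub.nodes()))
    # -> ['791f71cb']; the deleted alarm is filtered out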
'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder Oct 10 14:33:29 ubuntu vitrage-graph[18905]: ': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'warning', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_i Oct 10 14:33:29 ubuntu vitrage-graph[18905]: s_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 
'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f395 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 7', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c Oct 10 14:33:29 ubuntu vitrage-graph[18905]: -4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 
{'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relati Oct 10 14:33:29 ubuntu vitrage-graph[18905]: onship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.567 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 
05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), 
(u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 
'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:29 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:29 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 
'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:29 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:29 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 
'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:29 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 
'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.571 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.636 18960 DEBUG vitrage.api_handler.apis.alarm [-] AlarmApis get_alarm_counts - all_tenants=False get_alarm_counts /opt/stack/vitrage/vitrage/api_handler/apis/alarm.py:78 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.792 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.792 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.793 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 
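
(To make the two create_predicate lines above easier to read: they are the vertex filters Vitrage compiles before running a match query. The first selects live ALARM vertices for get_alarm_counts; the second selects the tenant's live, non-placeholder vertices for get_topology. Below is only an illustrative sketch of the same filtering against a plain NetworkX graph, not Vitrage code; the toy graph and its two vertices are made up to mirror the dump.)

    import networkx as nx

    # Toy graph with vertices shaped like the dump above (attributes trimmed).
    g = nx.Graph()
    g.add_node('3d3c903e-fe09-4a6f-941f-1a2adb09feca',
               vitrage_category='ALARM', vitrage_is_deleted=False,
               name='InstanceDown',
               vitrage_resource_id='791f71cb-7071-4031-b794-37c94b66c96f')
    g.add_node('791f71cb-7071-4031-b794-37c94b66c96f',
               vitrage_category='RESOURCE', vitrage_type='nova.instance',
               vitrage_is_deleted=False, name='Kube-Master')

    # Same shape as the logged predicate:
    # create_predicate::((item.get('vitrage_category')== 'ALARM')
    #                    and (item.get('vitrage_is_deleted')== False))
    def alarm_predicate(item):
        return (item.get('vitrage_category') == 'ALARM'
                and item.get('vitrage_is_deleted') == False)  # noqa: E712, mirrors the log

    matching = [v for v, attrs in g.nodes(data=True) if alarm_predicate(attrs)]
    print(matching)  # -> ['3d3c903e-fe09-4a6f-941f-1a2adb09feca']

The "match query, find graph" dump is then, roughly speaking, the subgraph induced by the matching vertices (g.subgraph(matching) in plain NetworkX terms), which is why the alarm-count query matches only ALARM vertices while the topology query below matches the whole tenant graph.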
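(One more practical note: journald wraps these very long oslo.log records and re-inserts its own "Oct 10 14:33:29 ubuntu vitrage-graph[18905]:" prefix in the middle of values, which is why some dict entries above appear split mid-word. A rough sketch of one way to stitch the records back together before reading them; the prefix pattern is an assumption based only on the lines shown here.)

    import re

    # Assumed journald prefix, e.g. "Oct 10 14:33:29 ubuntu vitrage-graph[18905]: "
    PREFIX = re.compile(r'[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2} \S+ vitrage-graph\[\d+\]: ')

    def stitch(raw_log):
        """Drop the repeated journald prefixes, then split on the oslo.log
        record headers ('2018-10-10 14:33:29.571 <pid> DEBUG ...')."""
        body = PREFIX.sub('', raw_log)
        parts = re.split(r'(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \d+ DEBUG )', body)
        return [p.strip() for p in parts if p.strip()]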
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.796 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [... the same vertex dump as the earlier "find graph" entry, repeated here for the get_topology call ...], edges [... the same edge dump ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.798 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False,
'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 
'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:29 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 
'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:29 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': 
u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:29 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:29 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 
'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), 
('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:29 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.805 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
-------------- next part --------------
A non-text attachment was scrubbed...
Name: vitrage_log_on_compute1.zip
Type: application/x-zip-compressed
Size: 8349 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: vitrage_log_on_ubuntu.zip
Type: application/x-zip-compressed
Size: 6625 bytes
Desc: not available
URL: 

From wjstk16 at gmail.com  Wed Oct 31 09:00:33 2018
From: wjstk16 at gmail.com (Won)
Date: Wed, 31 Oct 2018 18:00:33 +0900
Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage.
In-Reply-To: 
References: 
Message-ID: 

Hi,

> This is strange. I would expect your original definition to work as
> well, since the alarm key in Vitrage is defined by a combination of the
> alert name and the instance.
> We will check it again.
> BTW, we solved a different bug related to Prometheus alarms not being
> cleared [1]. Could it be related?
>

Using the original definition, alarms are recognized as the same alarm in
Vitrage no matter how different their instances are, as long as they share
an alert name.
I also tried installing the Rocky version and the master version on a new
server and retesting, but the problem was not solved. The latest bugfix
seems irrelevant.
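One way to avoid the collision is to split the rule per instance, so that
each instance gets its own alert name and Vitrage treats the alerts as
separate alarms. The following is only a sketch with made-up targets, not
necessarily the exact change I made:

------- alert.rules.yml (per-instance sketch, hypothetical targets)
groups:
- name: alert.rules
  rules:
  # One rule per monitored target; the distinct alert names keep Vitrage
  # from collapsing the alerts into a single alarm.
  - alert: InstanceDown_compute1
    expr: up{instance="compute1:8080"} == 0
    for: 60s
    labels:
      severity: warning
    annotations:
      summary: Instance compute1:8080 down
  - alert: InstanceDown_ubuntu
    expr: up{instance="ubuntu:8080"} == 0
    for: 60s
    labels:
      severity: warning
    annotations:
      summary: Instance ubuntu:8080 down
------
The downside is that the rule list has to be kept in sync with the
monitored instances by hand.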
> Does the wrong timestamp appear if you run the 'vitrage alarm list' CLI
> command? Please try running 'vitrage alarm list --debug' and send me the
> output.
>

I have attached 'vitrage-alarm-list.txt'.

> Please send me vitrage-collector.log and vitrage-graph.log from the time
> that the problematic vm was created and deleted. Please also create and
> delete a vm on your 'ubuntu' server, so I can check the differences in the
> log.
>

I have attached the 'vitrage_log_on_compute1.zip' and
'vitrage_log_on_ubuntu.zip' files.
When I created a vm on compute1, a vitrage-collector log appeared, but no
log appeared when the vm was deleted.

Br,
Won

>> On Tue, Oct 30, 2018 at 1:28 AM Ifat Afek wrote:
>>
>>> Hi,
>>>
>>> On Fri, Oct 26, 2018 at 10:34 AM Won wrote:
>>>
>>>> I solved the problem of the Prometheus alarms not being updated.
>>>> Alarms with the same Prometheus alarm name are recognized as the same
>>>> alarm in Vitrage.
>>>>
>>>> ------- alert.rules.yml
>>>> groups:
>>>> - name: alert.rules
>>>>   rules:
>>>>   - alert: InstanceDown
>>>>     expr: up == 0
>>>>     for: 60s
>>>>     labels:
>>>>       severity: warning
>>>>     annotations:
>>>>       description: '{{ $labels.instance }} of job {{ $labels.job }} has
>>>>         been down for more than 30 seconds.'
>>>>       summary: Instance {{ $labels.instance }} down
>>>> ------
>>>> These are the contents of the alert.rules.yml file before I modified
>>>> it. This yml file generates an alarm when cAdvisor stops (instance
>>>> down). An alarm is triggered depending on which instance is down, but
>>>> all of the alarms have the same name, 'InstanceDown', and Vitrage
>>>> recognizes all of these alarms as the same alarm.
>>>> Thus, until all 'InstanceDown' alarms were cleared, the alarm was
>>>> recognized as unresolved and was not removed.
>>>>
>>> This is strange. I would expect your original definition to work as
>>> well, since the alarm key in Vitrage is defined by a combination of the
>>> alert name and the instance. We will check it again.
>>> BTW, we solved a different bug related to Prometheus alarms not being
>>> cleared [1]. Could it be related?
>>>
>>>>> Can you please show me where you saw the 2001 timestamp? I didn't find
>>>>> it in the log.
>>>>>
>>>> [image: image.png]
>>>> The timestamp is recorded correctly in the logs (vitrage-graph,
>>>> vitrage-collector, etc.), but in vitrage-dashboard it is shown as
>>>> 2001-01-01.
>>>> However, the timestamp seems to be recognized correctly internally,
>>>> because the alarm can be resolved and is recorded correctly in the log.
>>>>
>>> Does the wrong timestamp appear if you run the 'vitrage alarm list' CLI
>>> command? Please try running 'vitrage alarm list --debug' and send me the
>>> output.
>>>
>>>> [image: image.png]
>>>> Host name ubuntu is my main server. I installed OpenStack all-in-one on
>>>> this server, and I installed a compute node with host name compute1.
>>>> When I create a new vm in nova (compute1), it immediately appears in
>>>> the entity graph. But it does not disappear from the entity graph when
>>>> I delete the vm. No matter how long I wait, it doesn't disappear.
>>>> After I execute the 'vitrage-purge-data' command and reboot OpenStack
>>>> (run the reboot command on the OpenStack server, host name ubuntu), it
>>>> disappears. Executing only 'vitrage-purge-data' does not work; it needs
>>>> a reboot to disappear.
>>>> When I create a new vm in nova (ubuntu), there is no problem.
>>>>
>>> Please send me vitrage-collector.log and vitrage-graph.log from the time
>>> that the problematic vm was created and deleted. Please also create and
>>> delete a vm on your 'ubuntu' server, so I can check the differences in
>>> the log.
>>>
>>>> I implemented a web service with a microservice architecture and
>>>> applied RCA to it. The attached picture shows the structure of the web
>>>> service I have implemented. I wonder what data I would receive, and
>>>> what I could do, when I link Vitrage with Kubernetes.
>>>> As far as I know, the Vitrage graph does not present information about
>>>> containers or pods inside the vm. If that is correct, I would like to
>>>> make pod-level information appear on the entity graph.
>>>>
>>>> I followed the steps in
>>>> https://docs.openstack.org/vitrage/latest/contributor/k8s_datasource.html.
>>>> I attached the vitrage.conf file and the kubeconfig file. The contents
>>>> of the kubeconfig file were copied from the admin.conf file on the
>>>> master node.
>>>> I want to check that my settings are right and connected, but I don't
>>>> know how. It would be very much appreciated if you let me know how.
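>>>> For reference, the kubeconfig copied from admin.conf has roughly the
>>>> following shape. This is only a generic skeleton with placeholder
>>>> values, not my actual file:
>>>>
>>>> ------- kubeconfig (generic skeleton, placeholder values)
>>>> apiVersion: v1
>>>> kind: Config
>>>> clusters:
>>>> - name: kubernetes
>>>>   cluster:
>>>>     # CA bundle and API server endpoint of the cluster
>>>>     certificate-authority-data: <base64-encoded CA certificate>
>>>>     server: https://<master-node-address>:6443
>>>> contexts:
>>>> - name: kubernetes-admin@kubernetes
>>>>   context:
>>>>     cluster: kubernetes
>>>>     user: kubernetes-admin
>>>> current-context: kubernetes-admin@kubernetes
>>>> users:
>>>> - name: kubernetes-admin
>>>>   user:
>>>>     # client credentials copied from admin.conf
>>>>     client-certificate-data: <base64-encoded client certificate>
>>>>     client-key-data: <base64-encoded client key>
>>>> ------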
>>> >>> Br, >>> Ifat >>> >>> [1] https://review.openstack.org/#/c/611258/ >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42202 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11247 bytes Desc: not available URL: -------------- next part -------------- Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:18 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:18 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' 
Oct 10 14:33:18 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:18 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 
'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:18 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 
'2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:18 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices 
/opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:18 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:18.570 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:22 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:22.253 18905 DEBUG futurist.periodics [-] Submitting periodic callback 'vitrage.entity_graph.scheduler.get_changes_periodic' _process_scheduled /usr/local/lib/python2.7/dist-packages/futurist/periodics.py:639 Oct 10 14:33:22 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:22.253 18905 INFO vitrage.entity_graph.datasource_rpc [-] get_changes starting static Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.127 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.127 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.128 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.129 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'name': 'openstack.cluster', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': 'OpenStack Cluster', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.129 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [... full attribute dump elided for readability; the matching vertices are the openstack.cluster, the nova.zone 'nova', the nova.host 'ubuntu', the neutron.network 'public', seven nova.instance vertices (SignUp, Apigateway, GetList, Login, EditProduct, Search, Kube-Master), seven neutron.port vertices, seven firing Prometheus InstanceDown alarms (six on the Kube-Master instance, one on Apigateway) and one HighCpuUsage alarm on SignUp, all with 'vitrage_is_deleted': False ...], edges [... the corresponding 'contains', 'attached' and 'on' relationships, all with 'vitrage_is_deleted': False ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
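The "find graph" dumped above is the subgraph induced by the vertices that
satisfied the predicate. Roughly, and again only as a sketch of the idea
rather than the real create_graph_from_matching_vertices:

    import networkx as nx

    def graph_from_matching_vertices(graph, predicate):
        # Keep the vertices whose attribute dict satisfies the predicate,
        # plus every edge whose two endpoints both survived the filter.
        matching = [n for n, attrs in graph.nodes(data=True)
                    if predicate(attrs)]
        return graph.subgraph(matching).copy()

    g = nx.Graph()
    g.add_node('alarm-1', vitrage_is_deleted=False)
    g.add_node('alarm-2', vitrage_is_deleted=True)  # resolved -> filtered out
    g.add_edge('alarm-1', 'alarm-2')

    find_graph = graph_from_matching_vertices(
        g, lambda attrs: attrs.get('vitrage_is_deleted') is False)
    print(list(find_graph.nodes()))  # -> ['alarm-1']; the deleted vertex
                                     #    and its incident edge are gone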
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.129 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [... attribute dump elided; same vertices as the find graph above, plus two vertices the query filtered out because 'vitrage_is_deleted' is True: a resolved Prometheus HighCpuUsage alarm ('6467e1b5-8f70-423f-b23b-4dd133ce49a7') on the Apigateway instance and an inactive zabbix alarm 'signup: STATUS' ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9') on the SignUp instance ...], edges [... the corresponding relationships, including the two 'on' edges for those alarms that carry 'vitrage_is_deleted': True ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
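Comparing the two dumps is the quickest way to see the filter at work: the
"real graph" carries the two vertices with 'vitrage_is_deleted': True that
the "find graph" omits. A throwaway snippet for diffing such dumps once they
are loaded into NetworkX graphs -- filtered_out is a hypothetical helper,
not part of Vitrage:

    import networkx as nx

    real = nx.Graph()
    real.add_nodes_from(['inst-1', 'alarm-live', 'alarm-resolved'])
    find = real.subgraph(['inst-1', 'alarm-live'])

    def filtered_out(real_graph, find_graph):
        # Vertices present in the raw entity graph but dropped by the
        # query filter -- the deleted/placeholder vertices.
        return set(real_graph.nodes()) - set(find_graph.nodes())

    print(filtered_out(real, find))  # -> {'alarm-resolved'}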
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.133 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.675 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.676 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.677 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.679 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [... a repeat of the find-graph node dump above, emitted for the next get_topology call ...], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ... ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted':
False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c Oct 10 14:33:23 ubuntu vitrage-graph[18905]: -4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), 
('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relati Oct 10 14:33:23 ubuntu vitrage-graph[18905]: onship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.680 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 
'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': 
u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 
'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:23 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:23 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 
'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:23 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:23 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 
'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:23 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:23 ubuntu 
vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:23 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), 
('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:23 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:23.685 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.263 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.263 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.265 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.267 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'name': 'openstack.cluster', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': 'OpenStack Cluster', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 
'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:28 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'name': u'ubuntu', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'ubuntu', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'name': u'Apigateway', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'update_timestamp': '2018-10-10 05:32:49.639158+ Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', 'vitrage_is_deleted': False, 'name': u'GetList', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': 
u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'update_timestamp': u'0001-01-01T00:00:00Z', u'vitrage_category': u'ALARM', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_is_deleted': False, u'severity': u'page', u'name': u'InstanceDown', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:28 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'name': u'Kube-Master', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df7dde', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': Oct 10 14:33:28 ubuntu vitrage-graph[18905]: True}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 
'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'vitrage_is_deleted': False, 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'nova', 'is_real_vitrage_id': True, 'name': Oct 10 14:33:28 ubuntu vitrage-graph[18905]: u'nova'}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'name': u'HighCpuUsage', 'severity': u'warning', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'is_real_vitrage_id': True, 'name': u'public'}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 
12:09:19.050149+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'name': u'Login', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'name': u'EditProduct', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder Oct 10 14:33:28 ubuntu vitrage-graph[18905]: ': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': 
u'prometheus', ... [snip: remainder of the "match query, find graph" vertex/edge dump, trimmed for readability and with the interleaved journal wrap prefixes removed. The filtered "find graph" holds only the non-deleted, non-placeholder vertices of project 9443a10af02945eca0d0c8e777b19dfb: the OpenStack Cluster root, nova.zone "nova", nova.host "ubuntu", the "public" neutron.network, seven neutron.port vertices, seven nova.instance vertices (SignUp, Apigateway, GetList, Login, Search, EditProduct, Kube-Master), and the active Prometheus ALARM vertices, namely six distinct InstanceDown alarms (severity "page") all related "on" the same instance 791f71cb-7071-4031-b794-37c94b66c96f (Kube-Master), one InstanceDown alarm (severity "warning") on instance bc991ce9-0fe1-4d3b-8272-1f656fa85d98 (Apigateway), and one HighCpuUsage alarm (severity "warning") on instance b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf (SignUp), plus the "contains"/"attached"/"on" edges between them. create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153]
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.272 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [ ... [snip: the same vertex/edge dump for the unfiltered "real graph", which additionally carries the already-deleted vertices: a resolved Prometheus HighCpuUsage alarm (6467e1b5-8f70-423f-b23b-4dd133ce49a7, vitrage_is_deleted: True) on the Apigateway instance and an inactive Zabbix alarm "signup: STATUS" (b7524d37-e09c-46e6-8c53-4cd24e67d0e9, vitrage_is_deleted: True) on the SignUp instance, together with their "on" edges, likewise marked vitrage_is_deleted: True] ... ] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.281 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.801 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.802 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.803 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.806 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [ ... [snip: verbatim repeat of the "find graph" dump above, emitted for this second get_topology call] ... ] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.811 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [ ... [snip: verbatim repeat of the "real graph" dump above; the paste resumes partway through the node list] ... ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp':
u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}),
('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f',
{'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:00'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:28 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:28.822 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.519 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.520 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.521 18960 DEBUG vitrage.graph.query [-] create_predicate::(((item.get('vitrage_is_deleted')== False) and (item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_category')== 'RESOURCE') and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) or ((item.get('vitrage_is_deleted')== False) and (item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_category')== 'ALARM') and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None)))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.525 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'name': u'SignUp', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'name': 'openstack.cluster', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_type':
'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': 'OpenStack Cluster', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage_id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'name': u'ubuntu', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'ubuntu', 'is_real_vitrage_id': True, 'vitrage_is_deleted': False}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'name': u'Apigateway', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', 'vitrage_is_deleted': False, 'name': u'GetList', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07', 'vitrage_is_placeholder':
False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'update_timestamp': u'0001-01-01T00:00:00Z', u'vitrage_category': u'ALARM', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_is_deleted': False, u'severity': u'page', u'name': u'InstanceDown', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'name': u'Kube-Master', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE',
'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df7dde', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'vitrage_is_deleted': False, 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'vitrage_is_placeholder': False, 'id': u'nova', 'is_real_vitrage_id': True, 'name': u'nova'}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'name': u'HighCpuUsage', 'severity': u'warning', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE',
'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'is_real_vitrage_id': True, 'name': u'public'}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'name': u'Login', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'name': u'EditProduct', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id':
'19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'warning', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}),
('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on',
'vitrage_is_deleted': False}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.544 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage_id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder':
False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name':
u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id':
'791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00',
'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ...], edges [..., ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:00'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.558 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.562 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.562 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.563 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
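
For anyone trying to follow these traces: the create_predicate lines above show the API query being compiled into a filter over vertex attribute dicts. A minimal sketch of the idea, assuming a flat {field: value} query; the real create_predicate in vitrage/graph/query.py also handles nested and/or/not operators, so this is an illustration, not the actual code:

def create_predicate(query):
    # Turn a flat {field: expected_value} query into a callable over an
    # item's attribute dict, like the logged expression
    # ((item.get('vitrage_category')== 'ALARM') and
    #  (item.get('vitrage_is_deleted')== False)).
    def predicate(item):
        return all(item.get(field) == value
                   for field, value in query.items())
    return predicate

is_active_alarm = create_predicate({'vitrage_category': 'ALARM',
                                    'vitrage_is_deleted': False})
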
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.565 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'name': u'SignUp', 'state': u'ACTIVE', 'vitrage_operational_state': u'OK', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True, ...}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'id': 'OpenStack Cluster', ...}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.host', 'name': u'ubuntu', ...}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.zone', 'name': u'nova', ...}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'name': u'public', ...}), ... snipped: 6 more nova.instance vertices (Apigateway, GetList, Kube-Master, Login, EditProduct, Search), 7 neutron.port vertices and 8 active prometheus ALARM vertices (InstanceDown x7, HighCpuUsage) ...], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ...] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153
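
These nodes [...] / edges [...] dumps are plain networkx data: (vitrage_id, attribute-dict) tuples for vertices and (source, target, attribute-dict) tuples for edges. A self-contained toy, assuming nothing beyond networkx itself, of how such a predicate selects the matching subgraph (attributes heavily trimmed relative to the dump above):

import networkx as nx

g = nx.Graph()
g.add_node('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf',
           vitrage_category='RESOURCE', vitrage_type='nova.instance',
           vitrage_is_deleted=False)
g.add_node('399238f2-eb07-4b6d-9818-68de1825a5e4',
           vitrage_category='ALARM', vitrage_type='prometheus',
           vitrage_is_deleted=False)
g.add_edge('399238f2-eb07-4b6d-9818-68de1825a5e4',
           'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf',
           relationship_type='on', vitrage_is_deleted=False)

is_active_alarm = lambda attrs: (attrs.get('vitrage_category') == 'ALARM'
                                 and not attrs.get('vitrage_is_deleted'))
matching = [n for n, attrs in g.nodes(data=True) if is_active_alarm(attrs)]
subgraph = g.subgraph(matching)  # roughly the "find graph" for the alarm query
print(list(subgraph.nodes()))    # ['399238f2-eb07-4b6d-9818-68de1825a5e4']
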
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.567 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [... snipped: the same vertices as in the 14:33:29.565 find graph above, plus two already-deleted alarms: ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'name': u'HighCpuUsage', 'status': u'resolved', 'severity': u'warning', 'vitrage_is_deleted': True, ...}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'vitrage_category': 'ALARM', 'vitrage_type': 'zabbix', 'name': u'signup: STATUS', 'state': 'Inactive', 'severity': 'DISASTER', 'vitrage_is_deleted': True, ...})], edges [... snipped: the same edges as above, plus ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:00'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.571 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.636 18960 DEBUG vitrage.api_handler.apis.alarm [-] AlarmApis get_alarm_counts - all_tenants=False get_alarm_counts /opt/stack/vitrage/vitrage/api_handler/apis/alarm.py:78
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.792 18960 DEBUG vitrage.api_handler.apis.topology [-] TopologyApis get_topology - root: None, all_tenants=False get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:43
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.792 18960 DEBUG vitrage.api_handler.apis.topology [-] project_id = 9443a10af02945eca0d0c8e777b19dfb, is_admin_project True get_topology /opt/stack/vitrage/vitrage/api_handler/apis/topology.py:50
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.793 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_is_placeholder')== False) and (item.get('vitrage_is_deleted')== False) and ((item.get('project_id')== '9443a10af02945eca0d0c8e777b19dfb') or (item.get('project_id')== None))) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69
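
The get_alarm_counts call at 14:33:29.636 presumably just reduces the ALARM vertices of the same graph; a rough, hypothetical equivalent (not the actual AlarmApis code; field names are taken from the vertex dumps above):

from collections import Counter

def get_alarm_counts(graph):
    # Group the non-deleted ALARM vertices by operational severity.
    return Counter(
        attrs.get('vitrage_operational_severity')
        for _, attrs in graph.nodes(data=True)
        if attrs.get('vitrage_category') == 'ALARM'
        and not attrs.get('vitrage_is_deleted'))
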
Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.796 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, find graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'name': u'SignUp', ...}), ... snipped: the same vertex dump as at 14:33:29.565 ..., ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'name': u'HighCpuUsage', 'severity': u'warning', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}),
('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'is_real_vitrage_id': True, 'name': u'public'}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'name': u'Login', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'name': u'EditProduct', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'name': u'Search', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26', 'vitrage_is_placeholder Oct 10 14:33:29 ubuntu vitrage-graph[18905]: ': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 
'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'warning', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_i Oct 10 14:33:29 ubuntu vitrage-graph[18905]: s_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'severity': u'page', 'name': u'InstanceDown', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'name': u'', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f', 
'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f395 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 7', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', '791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c Oct 10 14:33:29 ubuntu vitrage-graph[18905]: -4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), 
('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relati Oct 10 14:33:29 ubuntu vitrage-graph[18905]: onship_type': 'attached', 'vitrage_is_deleted': False})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:153 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.798 18960 DEBUG vitrage.graph.algo_driver.networkx_algorithm [-] match query, real graph: nodes [('b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'update_timestamp': '2018-10-10 05:32:49.639187+00:00', 'id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639187+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'SignUp', 'vitrage_cached_id': '7ea7f86a20579de2a5436445e9dc515d'}), ('3138c0ae-f8ff-42a0-b434-1deec95d1327', {'vitrage_id': '3138c0ae-f8ff-42a0-b434-1deec95d1327', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'openstack.cluster', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': 'OpenStack Cluster', 'name': 'openstack.cluster', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '3c7f9d22d9dd1615a00404f86cb3e289', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', {'update_timestamp': u'2018-10-04T11:53:37Z', 'ip_addresses': (u'192.168.12.152',), 'id': u'd21817e4-a39c-435b-9707-44a2e418291a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'e1afc13b-01da-4a53-81bc-a1971e59d8e7', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372538+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '826091053119655c3dfa18fbf1a515ad'}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'vitrage Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'update_timestamp': '2018-10-10 05:32:49.176252+00:00', 
'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.host', 'vitrage_type': 'nova.host', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'ubuntu', 'name': u'ubuntu', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '0681113f6ea79361f31c82efa720efdf', 'vitrage_is_placeholder': False, 'is_real_vitrage_id': True}), ('bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'update_timestamp': '2018-10-10 05:32:49.639196+00:00', 'id': u'9fc3f462-d7bb-4014-910d-86df049e8001', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639196+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Apigateway', 'vitrage_cached_id': '5edd4f997925cc62373306e78455664d'}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'update_timestamp': u'2018-10-04T11:53:03Z', 'ip_addresses': (u'192.168.12.166',), 'id': u'73df4aaa-e4cd-4a82-9d93-f5355f4da629', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372516+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '1a5f4a1e73ce9cf1ce92a86df6e41359'}), ('74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'update_timestamp': '2018-10-10 05:32:49.639158+00:00', 'id': u'd09ebfd3-3d28-44df-b067-139f75d9eaed', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639158+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'GetList', 'vitrage_cached_id': '85c6a79f08bb8507006548e931418c07'}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'update_timestamp': u'2018-10-04T12:01:23Z', 'ip_addresses': (u'192.168.12.165',), 'id': u'8612d74b-5358-4352-bcf2-eef4c1499ae7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372523+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': 'ae5385a569599c567b66b4fd586e662d'}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', {u'vitrage_id': u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'status': u'firing', u'vitrage_is_deleted': False, u'update_timestamp': u'0001-01-01T00:00:00Z', u'severity': u'page', u'vitrage_category': u'ALARM', u'state': u'Active', u'vitrage_cached_id': u'd3ee9ea8aa4a72f73f23fa6246af07a8', u'vitrage_type': 
u'prometheus', u'vitrage_sample_timestamp': u'2018-10-05 07:49:47.182009+00:00', u'vitrage_operational_severity': u'N/A', u'vitrage_is_placeholder': False, u'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', u'vitrage_aggregated_severity': u'PAGE', u'vitrage_resource_id': u'791f71cb-7071-4031-b794-37c94b66c96f', u'vitrage_resource_type': u'nova.instance', u'is_real_vitrage_id': True, u'name': u'InstanceDown'}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', {'vitrage_id': '8abd2a2 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: f-c830-453c-a9d0-55db2bf72d46', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T21:02:16.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '959c2301fdcdec19edb3b25407ac649f', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:16:48.178739+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'update_timestamp': u'2018-10-04T11:52:50Z', 'ip_addresses': (u'192.168.12.164',), 'id': u'404c8cb3-33b7-48fc-8003-3f4f32731d2c', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372507+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '75704085257c9224b147bb4fa455281a'}), ('791f71cb-7071-4031-b794-37c94b66c96f', {'update_timestamp': '2018-10-10 05:32:49.639207+00:00', 'id': u'c0511b75-6a93-4396-83a4-e686deb83ef4', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639207+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Kube-Master', 'vitrage_cached_id': 'adc0671dd778336ffe6af73ec6df Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 7dde'}), ('62b131e9-f20b-4530-af07-797e2724d8f1', {'update_timestamp': u'2018-10-04T11:52:56Z', 'ip_addresses': (u'192.168.12.154',), 'id': u'20bd8194-0604-43d9-b11c-6b853fd49a38', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '62b131e9-f20b-4530-af07-797e2724d8f1', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372477+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '2dba214c495c39f45260dfd7f53f3b1d'}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', {'status': u'firing', 'vitrage_id': 'c6a94386-3879-499e-9da0-2a5b9d3294b8', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'vitrage_category': 
'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '8697f63708f1edf633def58a278ed87d', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050153+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', {'vitrage_id': '2f8048ef-53cf-4256-8131-d9a63acbc43c', 'update_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_category': 'RESOURCE', 'vitrage_datasource_name': u'nova.zone', 'vitrage_type': 'nova.zone', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.153910+00:00', 'vitrage_aggregated_state': 'AVAILABLE', 'id': u'nova', 'vitrage_is_deleted': False, 'name': u'nova', 'vitrage_operational_state': u'OK', 'state': 'available', 'vitrage_cached_id': '125f1d8c4451a6385cc2cfa2b0ba45be', 'vitrage_is_placeholder': False, 'is_real_vitrage_id Oct 10 14:33:29 ubuntu vitrage-graph[18905]: ': True}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', {'status': u'firing', 'vitrage_id': '399238f2-eb07-4b6d-9818-68de1825a5e4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-05T22:10:12.913720458+09:00', 'vitrage_category': 'ALARM', 'name': u'HighCpuUsage', 'state': 'Active', 'vitrage_cached_id': '76b11696f162464d015f42efdbd47957', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 13:25:20.603578+00:00', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'warning'}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', {'vitrage_id': '91fd1236-0a07-415b-aee8-c43abd4a6306', 'update_timestamp': u'2018-10-02T02:44:11Z', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.network', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.280024+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'id': u'88998a1e-1a6a-4f28-8539-84ae5dc67d41', 'vitrage_is_deleted': False, 'name': u'public', 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_cached_id': 'c30358950af0b7508d2c149a532d96c2', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True}), ('8c6e7906-9e66-404f-967f-40037a6afc83', {'status': u'firing', 'vitrage_id': '8c6e7906-9e66-404f-967f-40037a6afc83', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '971787e9dbaa90ec10a64d2d674a097a', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050149+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource Oct 10 14:33:29 ubuntu vitrage-graph[18905]: _id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('2d0942d6-86c9-4495-853c-e76f291e3aac', {'update_timestamp': '2018-10-10 05:32:49.639134+00:00', 'id': 
u'adc5b224-8573-401b-b855-e82a62e47be7', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '2d0942d6-86c9-4495-853c-e76f291e3aac', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639134+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'Login', 'vitrage_cached_id': '4c1549684dc368810827d04a08918e84'}), ('42d8b73e-0629-47f9-af43-fda9d03329e4', {'update_timestamp': '2018-10-10 05:32:49.639178+00:00', 'id': u'f46bfb8d-729d-49da-bee0-770ab5c7342a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '42d8b73e-0629-47f9-af43-fda9d03329e4', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639178+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'EditProduct', 'vitrage_cached_id': 'd4dcf835f5e9a50d6fa7ce660ea8bea4'}), ('1beb5e67-7d13-4801-812b-756f0cd5449a', {'update_timestamp': '2018-10-10 05:32:49.639169+00:00', 'id': u'7dac9710-e44f-4765-88b7-babda1626661', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '1beb5e67-7d13-4801-812b-756f0cd5449a', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'nova.instance', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.639169+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id' Oct 10 14:33:29 ubuntu vitrage-graph[18905]: : u'ubuntu', 'name': u'Search', 'vitrage_cached_id': '394e7f1d1452f41200410d0daa6e2f26'}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', {'status': u'resolved', 'update_timestamp': u'2018-10-10T14:23:49.778862633+09:00', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': True, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': '2018-10-10 05:23:52.860913+00:00', 'severity': u'warning', 'name': u'HighCpuUsage', 'vitrage_cached_id': 'e46fd9f96dfceb154bb802dc9a632bbe'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', {'vitrage_id': 'e2c5eae9-dba9-4f64-960b-b964f1c01dfe', 'status': u'firing', 'name': u'InstanceDown', 'update_timestamp': u'2018-10-05T17:13:01.733973211+09:00', 'vitrage_category': 'ALARM', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_cached_id': '19f9bf836e702e29c827a1afcaaa3479', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-05 08:28:02.564854+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'severity': u'page'}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', {'status': u'firing', 
'update_timestamp': u'0001-01-01T00:00:00Z', 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_is_deleted': False, 'state': 'Active', 'vitrage_operational_severity': u'WARNING', 'vitrage_is_placeholder': False, 'vitrage_aggregated_severity': u'WARNING', 'vitrage_resource_id': 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', 'vi Oct 10 14:33:29 ubuntu vitrage-graph[18905]: trage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'vitrage_id': '894cb4d8-37b9-47d0-b6db-cce0255affee', 'vitrage_category': 'ALARM', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-10 05:33:02.272261+00:00', 'severity': u'warning', 'name': u'InstanceDown', 'vitrage_cached_id': '0603a97cf264e49b5b206cd747b15d92'}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'update_timestamp': u'2018-10-04T11:53:27Z', 'ip_addresses': (u'192.168.12.176',), 'id': u'35315a06-9dc6-41be-a0d4-2833575ec8aa', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'state': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372498+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '10e987c9d76ed8100e7a91138b7aa751'}), ('e291662b-115d-42b5-8863-da8243dd06b4', {'status': u'firing', 'vitrage_id': 'e291662b-115d-42b5-8863-da8243dd06b4', 'vitrage_is_deleted': False, 'update_timestamp': u'2018-10-04T20:54:46.733973211+09:00', 'severity': u'page', 'vitrage_category': 'ALARM', 'state': 'Active', 'vitrage_cached_id': '6c3d4621c8fec1b30616f2dc77ab1ea4', 'vitrage_type': u'prometheus', 'vitrage_sample_timestamp': u'2018-10-04 12:09:19.050145+00:00', 'vitrage_operational_severity': 'N/A', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'vitrage_aggregated_severity': u'PAGE', 'vitrage_resource_id': '791f71cb-7071-4031-b794-37c94b66c96f', 'vitrage_resource_type': 'nova.instance', 'is_real_vitrage_id': True, 'name': u'InstanceDown'}), ('79886fcd-faf4-438d-8886-67f421f9043d', {'update_timestamp': u'2018-10-04T11:53:49Z', 'ip_addresses': (u'192.168.12.170',), 'id': u'c3c6b7d6-eca1-47eb-a722-a4670f41fe1a', 'vitrage_is_deleted': False, 'vitrage_operational_state': u'OK', 'stat Oct 10 14:33:29 ubuntu vitrage-graph[18905]: e': u'ACTIVE', 'vitrage_is_placeholder': False, 'project_id': u'9443a10af02945eca0d0c8e777b19dfb', 'is_real_vitrage_id': True, 'vitrage_id': '79886fcd-faf4-438d-8886-67f421f9043d', 'vitrage_category': 'RESOURCE', 'vitrage_type': 'neutron.port', 'vitrage_sample_timestamp': '2018-10-10 05:32:49.372531+00:00', 'vitrage_aggregated_state': u'ACTIVE', 'host_id': u'ubuntu', 'name': u'', 'vitrage_cached_id': '04145bfb9ee1a2e575c6480b1aa3c24f'}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', {'rawtext': u'signup: STATUS', 'vitrage_id': 'b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08T22:13:37Z', 'resource_id': u'af81298e-183d-4dde-b9cd-1d7192453fc0', 'vitrage_category': 'ALARM', 'name': u'signup: STATUS', 'state': 'Inactive', 'vitrage_cached_id': 'a75cca518e82be8745813c3b37d4c862', 'vitrage_type': 'zabbix', 'vitrage_sample_timestamp': '2018-10-08 13:21:21.872361+00:00', 'vitrage_operational_severity': u'CRITICAL', 'vitrage_is_placeholder': False, 'vitrage_resource_project_id': u'9443a10af02945eca0d0c8e777b19dfb', 
'vitrage_aggregated_severity': 'DISASTER', 'vitrage_resource_id': 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', 'vitrage_resource_type': u'nova.instance', 'is_real_vitrage_id': True, 'severity': 'DISASTER'})], edges [('3138c0ae-f8ff-42a0-b434-1deec95d1327', '2f8048ef-53cf-4256-8131-d9a63acbc43c', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('e1afc13b-01da-4a53-81bc-a1971e59d8e7', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('587141fe-fe4c-4697-8e69-bf8bfc6f3957', '1beb5e67-7d13-4801-812b-756f0cd5449a', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('5bb8a01c-7f90-4202-83ef-178d7a36d0b2', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('3353f3f7-eaf4-4dc9-9e89-8e37c8403022', '2d0942d6-86c9-4495-853c-e76f291e3aac', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), (u'3d3c903e-fe09-4a6f-941f-1a2adb09feca', u'791f71cb-7071-4031-b794-37c94b66c96f', {u'relationship_type': u'on', u'vitrage_is_deleted': False}), ('8abd2a2f-c830-453c-a9d0-55db2bf72d46', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('049b6ed5-cf67-4867-a15d-c457e6c0ac4d', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('62b131e9-f20b-4530-af07-797e2724d8f1', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('c6a94386-3879-499e-9da0-2a5b9d3294b8', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('2f8048ef-53cf-4256-8131-d9a63acbc43c', '587141fe-fe4c-4697-8e69-bf8bfc6f3957', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('399238f2-eb07-4b6d-9818-68de1825a5e4', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '049b6ed5-cf67-4867-a15d-c457e6c0ac4d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 'e1afc13b-01da-4a53-81b Oct 10 14:33:29 ubuntu vitrage-graph[18905]: c-a1971e59d8e7', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '5bb8a01c-7f90-4202-83ef-178d7a36d0b2', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', 
'79886fcd-faf4-438d-8886-67f421f9043d', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '3353f3f7-eaf4-4dc9-9e89-8e37c8403022', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('91fd1236-0a07-415b-aee8-c43abd4a6306', '62b131e9-f20b-4530-af07-797e2724d8f1', {'relationship_type': 'contains', 'vitrage_is_deleted': False}), ('8c6e7906-9e66-404f-967f-40037a6afc83', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('6467e1b5-8f70-423f-b23b-4dd133ce49a7', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-10 05:23:52.851685+00:00'}), ('e2c5eae9-dba9-4f64-960b-b964f1c01dfe', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('894cb4d8-37b9-47d0-b6db-cce0255affee', 'bc991ce9-0fe1-4d3b-8272-1f656fa85d98', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('4ced8f6d-fc95-41b2-8d02-0b5e866e02a3', '42d8b73e-0629-47f9-af43-fda9d03329e4', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('e291662b-115d-42b5-8863-da8243dd06b4', '791f71cb-7071-4031-b794-37c94b66c96f', {'relationship_type': 'on', 'vitrage_is_deleted': False}), ('79886fcd-faf4-438d-8886-67f421f9043d', '74ad4034-9758-4c3c-b737-ffa9ee8afbd1', {'relationship_type': 'attached', 'vitrage_is_deleted': False}), ('b7524d37-e09c-46e6-8c53-4cd24e67d0e9', 'b8890f0f-2a0c-4f13-aab9-7f0a0aa338bf', {'relationship_type': 'on', 'vitrage_is_deleted': True, 'update_timestamp': '2018-10-08 13:21:21.851194+00:0 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 0'})] create_graph_from_matching_vertices /opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py:156 Oct 10 14:33:29 ubuntu vitrage-graph[18905]: 2018-10-10 14:33:29.805 18960 DEBUG vitrage.graph.query [-] create_predicate::((item.get('vitrage_category')== 'ALARM') and (item.get('vitrage_is_deleted')== False)) create_predicate /opt/stack/vitrage/vitrage/graph/query.py:69 -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage_log_on_compute1.zip Type: application/x-zip-compressed Size: 8349 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage_log_on_ubuntu.zip Type: application/x-zip-compressed Size: 6625 bytes Desc: not available URL: From lyarwood at redhat.com Wed Oct 31 09:19:10 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 31 Oct 2018 09:19:10 +0000 Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling In-Reply-To: References: Message-ID: <20181031091910.ojxptyif2ihte6ug@lyarwood.usersys.redhat.com> On 30-10-18 14:29:12, Emilien Macchi wrote: > On the TripleO side, it sounds like Lee Yarwood is taking the lead with a > first commit in puppet-placement: > https://review.openstack.org/#/c/604182/ > > Lee, can you confirm that you and your team are working on it for Stein > cycle? ACK, just getting back online after being out for three weeks but still planning on getting everything in place by the original M2 goal we agreed to at PTG. I'll try to post more details by the end of the week. Cheers, Lee > On Thu, Oct 25, 2018 at 1:34 PM Matt Riedemann wrote: > > > Hello OSA/TripleO people, > > > > A plan/checklist was put in place at the Stein PTG for extracting > > placement from nova [1]. 
The first item in that list is done in grenade
> > [2], which is the devstack-based upgrade project in the integrated gate.
> > That should serve as a template for the necessary upgrade steps in
> > deployment projects. The related devstack change for extracted placement
> > on the master branch (Stein) is [3]. Note that change has some
> > dependencies.
> >
> > The second point in the plan from the PTG was getting extracted
> > placement upgrade tooling support in a deployment project, notably
> > TripleO (and/or OpenStackAnsible).
> >
> > Given the grenade change is done and passing tests, TripleO/OSA should
> > be able to start coding up and testing an upgrade step when going from
> > Rocky to Stein. My question is who can we name as an owner in either
> > project to start this work? Because we really need to be starting this
> > as soon as possible to flush out any issues before they are too late to
> > correct in Stein.
> >
> > So if we have volunteers or better yet potential patches that I'm just
> > not aware of, please speak up here so we know who to contact about
> > status updates and if there are any questions with the upgrade.
> >
> > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html
> > [2] https://review.openstack.org/#/c/604454/
> > [3] https://review.openstack.org/#/c/600162/

--
Lee Yarwood                A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: not available
URL:

From zigo at debian.org  Wed Oct 31 09:39:04 2018
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 31 Oct 2018 10:39:04 +0100
Subject: [openstack-dev] Announcing OpenStack Cluster Installer (OCI)
Message-ID:

Hi,

After about 6 months of development, I'm proud to announce the first
fully working version of OCI. Here's the description:

OCI (OpenStack Cluster Installer) is software that provisions an
OpenStack cluster automatically. This package installs a provisioning
machine, which consists of a DHCP server, a PXE boot server, a web
server, and a puppet-master. Once computers in the cluster boot for the
first time, a Debian live system is served by OCI, to act as a
discovery image. This live system then reports the hardware features
back to OCI. The computers can then be installed with Debian from that
live system, configured with a puppet-agent that will connect to the
puppet-master of OCI. After Debian is installed, the server reboots
under it, and OpenStack services are then provisioned on these
machines, depending on their role in the cluster.

Currently, OCI can only install a highly available Swift cluster. In
the future, it will be able to deploy full compute clouds. Stay tuned,
or contribute. Now is the perfect time to influence OCI's design.

OCI has been deployed in production at Infomaniak and has been used for
deploying a cross-data-center, fully redundant Swift cluster. We're
currently working on adding the compute feature to it.

OCI, internally, uses the Puppet modules of puppet-openstack, and is
fully packaged and tested in Debian Sid, Buster and Stretch (including
the Puppet modules). It is available from your closest Debian mirror in
Sid and Buster, and it is also available through the unofficial
stretch-rocky.debian.net backport repository. A simple "apt-get install
openstack-cluster-installer" is enough to install all needed modules.
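As a hedged sketch of what that bootstrap looks like on a Stretch box
(only the package name and the repository host come from this
announcement; the exact backports suite name below is an assumption, so
check the repository's own docs before copying it):

  # On Stretch, enable the unofficial Rocky backports first; on Sid or
  # Buster the package comes straight from the regular mirrors.
  # NOTE: the suite name "stretch-rocky-backports" is an assumption.
  echo "deb http://stretch-rocky.debian.net/debian stretch-rocky-backports main" \
      > /etc/apt/sources.list.d/stretch-rocky.list
  apt-get update

  # Install OCI itself; this pulls in the DHCP, PXE, web and
  # puppet-master pieces that make up the provisioning machine.
  apt-get install openstack-cluster-installer

After that, cluster nodes are simply set to PXE boot: they pick up the
Debian live discovery image served by OCI and report their hardware
back, so nothing needs to be installed on them by hand.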
For further information, see:
https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer

To get in touch, contribute, or ask for support, please join the team's
IRC channel #debian-openstack on the OFTC network.

Cheers,

Thomas Goirand (zigo)

From dabarren at gmail.com  Wed Oct 31 09:44:50 2018
From: dabarren at gmail.com (Eduardo Gonzalez)
Date: Wed, 31 Oct 2018 10:44:50 +0100
Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling
In-Reply-To: <20181031091910.ojxptyif2ihte6ug@lyarwood.usersys.redhat.com>
References: <20181031091910.ojxptyif2ihte6ug@lyarwood.usersys.redhat.com>
Message-ID:

Hi, from the Kolla side I've started the work.

In kolla images [0], for now only placement is separated into an
independent image, and only source code is being installed; binary
images still use the nova-placement packages until a binary package
exists for the Debian and CentOS families.

In kolla-ansible [1], the placement service has been moved into a
separate role applied just before nova.

Things missing for now:
- Binary packages from distributions
- Run db syncs, as there is no command for that yet in the master branch
- Apply the upgrade process for db changes

[0] https://review.openstack.org/#/c/613589/
[1] https://review.openstack.org/#/c/613629/

Regards

On Wed, Oct 31, 2018 at 10:19, Lee Yarwood () wrote:

> On 30-10-18 14:29:12, Emilien Macchi wrote:
> > On the TripleO side, it sounds like Lee Yarwood is taking the lead with a
> > first commit in puppet-placement:
> > https://review.openstack.org/#/c/604182/
> >
> > Lee, can you confirm that you and your team are working on it for Stein
> > cycle?
>
> ACK, just getting back online after being out for three weeks but still
> planning on getting everything in place by the original M2 goal we
> agreed to at PTG. I'll try to post more details by the end of the week.
>
> Cheers,
>
> Lee
>
> > On Thu, Oct 25, 2018 at 1:34 PM Matt Riedemann
> wrote:
> >
> > > Hello OSA/TripleO people,
> > >
> > > A plan/checklist was put in place at the Stein PTG for extracting
> > > placement from nova [1]. The first item in that list is done in grenade
> > > [2], which is the devstack-based upgrade project in the integrated
> gate.
> > > That should serve as a template for the necessary upgrade steps in
> > > deployment projects. The related devstack change for extracted
> placement
> > > on the master branch (Stein) is [3]. Note that change has some
> > > dependencies.
> > >
> > > The second point in the plan from the PTG was getting extracted
> > > placement upgrade tooling support in a deployment project, notably
> > > TripleO (and/or OpenStackAnsible).
> > >
> > > Given the grenade change is done and passing tests, TripleO/OSA should
> > > be able to start coding up and testing an upgrade step when going from
> > > Rocky to Stein. My question is who can we name as an owner in either
> > > project to start this work? Because we really need to be starting this
> > > as soon as possible to flush out any issues before they are too late to
> > > correct in Stein.
> > >
> > > So if we have volunteers or better yet potential patches that I'm just
> > > not aware of, please speak up here so we know who to contact about
> > > status updates and if there are any questions with the upgrade.
> > >
> > > [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html
> > > [2] https://review.openstack.org/#/c/604454/
> > > [3] https://review.openstack.org/#/c/600162/
> >
> --
> Lee Yarwood                A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cdent+os at anticdent.org  Wed Oct 31 09:48:18 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 31 Oct 2018 09:48:18 +0000 (GMT)
Subject: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling
In-Reply-To:
References: <20181031091910.ojxptyif2ihte6ug@lyarwood.usersys.redhat.com>
Message-ID:

On Wed, 31 Oct 2018, Eduardo Gonzalez wrote:

> - Run db syncs, as there is no command for that yet in the master branch
> - Apply the upgrade process for db changes

The placement-side pieces for this are nearly ready, see the stack
beginning at https://review.openstack.org/#/c/611441/

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From nate.johnston at redhat.com  Wed Oct 31 12:40:27 2018
From: nate.johnston at redhat.com (Nate Johnston)
Date: Wed, 31 Oct 2018 08:40:27 -0400
Subject: [openstack-dev] Ironic integration CI jobs
In-Reply-To:
References:
Message-ID: <20181031124027.uh7cbn5fteom5fqi@bishop>

On Tue, Oct 30, 2018 at 05:36:09PM -0700, Julia Kreger wrote:
> ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job
> essentially just duplicates the functionality already covered in other
> jobs, including the grenade job.

FYI this job runs in the Neutron check queue (non-voting). Would you
like to substitute a different Ironic job to run on all neutron
changes, or should that requirement be revisited?

Thanks!

Nate

From dtantsur at redhat.com  Wed Oct 31 12:44:14 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 31 Oct 2018 13:44:14 +0100
Subject: [openstack-dev] Ironic integration CI jobs
In-Reply-To:
References:
Message-ID: <48177c01-6678-9294-8562-650645307d7b@redhat.com>

Hi,

On 10/31/18 1:36 AM, Julia Kreger wrote:
> With the discussion of CI jobs and the fact that I have been finding myself
> checking job status several times a day so early in the cycle, I think it is
> time for ironic to revisit many of our CI jobs.
>
> The bottom line is ironic is very resource-intensive to test. A lot of that is
> because of the underlying way we enroll/manage nodes and then execute the
> integration scenarios emulating bare metal. I think we can improve that with
> some ansible.
>
> In the meantime I created a quick chart[1] to try and make sense out of overall
> integration coverage and I think it makes sense to remove three of the jobs.
>
> ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This job is
> essentially the same as our grenade multinode job, the only difference being
> grenade.

Nope, not the same.
Grenade jobs run only smoke tests, this job runs
https://github.com/openstack/ironic-tempest-plugin/blob/master/ironic_tempest_plugin/tests/scenario/test_baremetal_multitenancy.py

> ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job
> essentially just duplicates the functionality already covered in other jobs,
> including the grenade job.

Ditto, grenade jobs do not cover our tests at all.

Also this is the very job we run on other projects (nova, neutron, maybe
more), so it will be a bit painful to remove it.

> ironic-tempest-dsvm-bfv - This presently non-voting job validates that the iPXE
> mode of the 'pxe' boot interface supports boot from volume. It was superseded by
> ironic-tempest-dsvm-ipxe-bfv which focuses on the use of the 'ipxe' boot
> interface. The underlying code is all the same deep down in all of the helper
> methods.

+1 to this.

Dmitry

>
> I'll go ahead and put this up as a topic for our weekly meeting next week so we
> can discuss.
>
> Thanks,
>
> -Julia
>
> [1]: https://ethercalc.openstack.org/ces0z3xjb1ir
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From zigo at debian.org  Wed Oct 31 13:29:58 2018
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 31 Oct 2018 14:29:58 +0100
Subject: [openstack-dev] [oslo] No complaints about rabbitmq SSL problems: could we have this in the logs?
Message-ID:

Hi,

It took me a long, long time to figure out that my SSL setup was wrong
when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
(or heat itself) never warned me that something was wrong; I just got
nothing working, and no log at all.

I'm sure I wouldn't be the only one happy to have this type of problem
yelled out loud in the logs. Right now, it does work if I turn off SSL,
though I'm still not sure what's wrong in my setup, and I'm given no
clue whether the issue is on the rabbitmq-server side or on the client
side (i.e. heat, in my current case).

Just a wishlist... :)

Cheers,

Thomas Goirand (zigo)

From gmann at ghanshyammann.com  Wed Oct 31 13:31:33 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 31 Oct 2018 22:31:33 +0900
Subject: [openstack-dev] Neutron stadium project Tempest plugins
In-Reply-To: <60857DF6-9E5D-4BCB-80DC-EEF49072C61A@redhat.com>
References: <60857DF6-9E5D-4BCB-80DC-EEF49072C61A@redhat.com>
Message-ID: <166ca52736f.f16929c5258677.8758503537679470411@ghanshyammann.com>

 ---- On Wed, 24 Oct 2018 05:08:11 +0900 Slawomir Kaplonski wrote ----
 > Hi,
 >
 > Thx Miguel for raising this.
 > List of tempest plugins is on https://docs.openstack.org/tempest/latest/plugin-registry.html - if URL for Your plugin is the same as Your main repo, You should move Your tempest plugin code.

Thanks mlavalle, slaweq for bringing up this discussion. Separating the
Tempest plugin from the service repo was a Queens goal, and that goal
clearly states the benefit of having a separate plugins repo [1]. For
Neutron, that goal was marked as Complete after creating the
neutron-tempest-plugin [2], and the work to separate the Neutron stadium
projects' tempest plugins was left out. I think many of the projects did
not do it at all. This came up while discussing the tempest plugins CI
setup [3]. If you need any help from the QA team, feel free to ping us
on the #openstack-qa channel.
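For stadium projects that still need to create a standalone plugin repo,
the wiring is small: tempest discovers plugins through a setuptools
entry point. A minimal sketch (the package and class names below are
hypothetical placeholders, not an existing repo) could look like:

  # setup.cfg of the new plugin repository
  [entry_points]
  tempest.test_plugins =
      my_stadium_tests = my_stadium_tempest_plugin.plugin:MyStadiumTempestPlugin

  # After installing the package next to tempest, confirm discovery:
  #   pip install -e .
  #   tempest list-plugins

The class named by the entry point has to subclass tempest's
TempestPlugin base class so tempest can load its tests and config
options.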
[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html [2] https://review.openstack.org/#/c/524605/ [3] https://etherpad.openstack.org/p/tempest-plugins-ci-release-tagging-clarification -gmann > > > > Wiadomość napisana przez Miguel Lavalle w dniu 23.10.2018, o godz. 16:59: > > > > Dear Neutron Stadium projects, > > > > In a QA session during the recent PTG in Denver, it was suggested that the Stadium projects should move their Tempest plugins to a repository of their own or added to the Neutron Tempest plugin repository (https://github.com/openstack/neutron-tempest-plugin). The purpose of this message is to start a conversation for the Stadium projects to indicate what is their preference. Please respond to this thread indicating how do you want to move forward. > > > > Best regards > > > > Miguel > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mnaser at vexxhost.com Wed Oct 31 13:40:04 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 31 Oct 2018 14:40:04 +0100 Subject: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs? In-Reply-To: References: Message-ID: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> For what it’s worth: I ran into the same issue. I think the problem lies a bit deeper because it’s a problem with kombu as when debugging I saw that Oslo messaging tried to connect and hung after. Sent from my iPhone > On Oct 31, 2018, at 2:29 PM, Thomas Goirand wrote: > > Hi, > > It took me a long long time to figure out that my SSL setup was wrong > when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo > (or heat itself) never warn me that something was wrong, I just got > nothing working, and no log at all. > > I'm sure I wouldn't be the only one happy about having this type of > problems being yelled out loud in the logs. Right now, it does work if I > turn off SSL, though I'm still not sure what's wrong in my setup, and > I'm given no clue if the issue is on rabbitmq-server or on the client > side (ie: heat, in my current case). > > Just a wishlist... 
:) > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sbauza at redhat.com Wed Oct 31 13:49:11 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 31 Oct 2018 14:49:11 +0100 Subject: [openstack-dev] [NOVA] nova GPU support and find GPU type In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BAD7B79@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BAD7B79@MXDB2.ad.garvan.unsw.edu.au> Message-ID: On Tue, Oct 30, 2018 at 12:21 AM Manuel Sopena Ballesteros < manuel.sb at garvan.org.au> wrote: > Dear Nova community, > > This is the first time I work with GPUs. > > I have a Dell C4140 with x4 Nvidia Tesla V100 SXM2 16GB I would like to > set up on OpenStack Rocky. > > I checked the documentation and I have 2 questions I would like to ask: > > 1. Docs (1) say *As of the Queens release, there is no upstream > continuous integration testing with a hardware environment that has virtual > GPUs and therefore this feature is considered experimental*. Does it > mean nova will stop supporting GPUs? Is GPU support being transferred to a > different project? > No. We said "experimental" because, without a CI verifying the feature, we could not be sure operators would not hit a lot of bugs. After 2 cycles, we don't have a lot of bugs and some operators use it, so we could remove the "experimental" status. 2. I installed > cuda-repo-rhel7-10-0-local-10.0.130-410.48-1.0-1.x86_64 on the physical > host but I can’t find the type of GPUs installed (2) (/sys/class/mdev_bus > doesn’t exist). What should I do then? What should I put in > devices.enabled_vgpu_types > > ? > > Make sure you use the correct GRID server driver from nvidia and then follow these steps: https://docs.nvidia.com/grid/6.0/grid-vgpu-user-guide/index.html#install-vgpu-package-generic-linux-kvm Once the driver is installed (make sure you remove the nouveau driver first, as said above) and the system has been rebooted, you should see the above sysfs directory. HTH, -Sylvain > > (1) - https://docs.openstack.org/nova/rocky/admin/virtual-gpu.html > > (2)- > https://docs.openstack.org/nova/rocky/admin/virtual-gpu.html#how-to-discover-a-gpu-type > > Thank you very much > > *Manuel Sopena Ballesteros *| Big data Engineer > *Garvan Institute of Medical Research * > The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 > *T:* + 61 (0)2 9355 5760 | *F:* +61 (0)2 9295 8507 | *E:* > manuel.sb at garvan.org.au > > NOTICE > Please consider the environment before printing this email. This message > and any attachments are intended for the addressee named and may contain > legally privileged/confidential/copyright information. If you are not the > intended recipient, you should not read, use, disclose, copy or distribute > this communication. If you have received this message in error please > notify us at once by return email and then delete both messages. We accept > no liability for the distribution of viruses or similar in electronic > communications. This notice should not be removed.
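To illustrate Sylvain's pointer above: once a vendor vGPU (mdev) driver such as NVIDIA GRID is loaded, the type names that nova's [devices]/enabled_vgpu_types option expects can be read straight from sysfs. A minimal sketch follows, assuming the generic vfio-mdev sysfs layout; the exact attribute files can vary by vendor:

```python
import os

MDEV_BUS = "/sys/class/mdev_bus"  # only appears once a vendor mdev driver is loaded

def list_vgpu_types():
    if not os.path.isdir(MDEV_BUS):
        raise SystemExit("%s does not exist: the vendor vGPU (mdev) driver "
                         "is not loaded yet" % MDEV_BUS)
    for pci_addr in sorted(os.listdir(MDEV_BUS)):
        types_dir = os.path.join(MDEV_BUS, pci_addr, "mdev_supported_types")
        for type_id in sorted(os.listdir(types_dir)):
            def read(attr):
                with open(os.path.join(types_dir, type_id, attr)) as f:
                    return f.read().strip()
            # type_id (e.g. nvidia-35) is the value that goes into
            # [devices]/enabled_vgpu_types in nova.conf
            print("%s %s name=%s available_instances=%s"
                  % (pci_addr, type_id, read("name"), read("available_instances")))

if __name__ == "__main__":
    list_vgpu_types()
```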
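And returning to the rabbitmq-over-SSL thread above: one way to surface the error that otherwise stays invisible is to drive the TLS handshake directly with kombu, the library underneath oslo.messaging that Mohammed points at. A hedged sketch, where the broker URL and certificate paths are placeholders for your own transport_url settings:

```python
import logging
import ssl

from kombu import Connection

# Surface connection/handshake details that otherwise stay hidden.
logging.basicConfig(level=logging.DEBUG)

conn = Connection(
    "amqp://openstack:secret@rabbit.example.com:5671//",  # placeholder URL
    ssl={
        "ca_certs": "/etc/ssl/certs/rabbit-ca.pem",  # placeholder CA bundle
        "cert_reqs": ssl.CERT_REQUIRED,
    },
    connect_timeout=10,
)
try:
    # kombu connects lazily, so force the handshake now.
    conn.connect()
    print("TLS connection to the broker succeeded")
except Exception as exc:
    # e.g. ssl.SSLError for a bad CA or hostname, socket.timeout for a hang
    print("broker connection failed: %r" % exc)
finally:
    conn.release()
```

If this hangs instead of raising, that matches the lazy-connect behaviour described above; forcing connect() with a timeout at least bounds the wait.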
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Oct 31 13:57:08 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Oct 2018 06:57:08 -0700 Subject: [openstack-dev] Ironic integration CI jobs In-Reply-To: <48177c01-6678-9294-8562-650645307d7b@redhat.com> References: <48177c01-6678-9294-8562-650645307d7b@redhat.com> Message-ID: On Wed, Oct 31, 2018 at 5:44 AM Dmitry Tantsur wrote: > Hi, > > On 10/31/18 1:36 AM, Julia Kreger wrote: > [trim] > > > > ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - > This job is > > essentially the same as our grenade multinode job, the only difference > being > > grenade. > > Nope, not the same. Grenade jobs run only smoke tests, this job runs > > https://github.com/openstack/ironic-tempest-plugin/blob/master/ironic_tempest_plugin/tests/scenario/test_baremetal_multitenancy.py > Ugh, looking closer, we still end up deploying when the smoke tests run. It feels like the only real difference in what is being exercised is that our one explicit test scenario of putting two instances on two separate networks and validating connectivity between the two is not present. I guess I'm failing to see why we need all of the setup and infrastructure when we're just testing pluggable network bits and their settings. Maybe it is a good candidate for looking at evolving how we handle scenario testing so we reduce our gate load and the resulting wait for test results. > ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job > > essentially just duplicates the functionality already covered in other > jobs, > > including the grenade job. > > Ditto, grenade jobs do not cover our tests at all. Also this is the very > job we > run on other projects (nova, neutron, maybe more), so it will be a bit > painful > to remove it. > We run the basic baremetal ops test, which tests deploy. If we're already covering the same code paths in other tests (which I feel we are), then the test feels redundant to me. I'm not worried about the effort to change the job in other gates. We really need to pull agent_ipmitool out of the name if we keep it anyway... which still means going through zuul configs. > [trim] -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 31 14:07:12 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 31 Oct 2018 23:07:12 +0900 Subject: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group In-Reply-To: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> References: <09a62e2d-6caf-da1a-71ac-989aeea494ec@gmail.com> Message-ID: <166ca73173a.e68647f5260223.2926726366150620868@ghanshyammann.com> That's a great job, Ian and team. It is really great when local user groups put so much effort into upstream contribution mentoring. From the FirstContact SIG point of view, feel free to let us know any help you need in terms of engaging new contributors with the project teams and work items that interest them. -gmann ---- On Tue, 30 Oct 2018 23:10:42 +0900 Ian Y.
Choi wrote ---- > Hello, > > I got involved organizing & mentoring Korean people for OpenStack > upstream contribution for about last two months, > and would like to share with community members. > > Total nine mentees had started to learn OpenStack, contributed, and > finally survived as volunteers for > 1) developing OpenStack mobile app for better mobile user interfaces > and experiences > (inspired from https://github.com/stackerz/app which worked on Juno > release), and > 2) translating OpenStack official project artifacts including documents, > and Container Whitepaper ( > https://www.openstack.org/containers/leveraging-containers-and-openstack/ ). > > Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, > Sungjin Kang, and Andrew Yongjoon Kong) > all helped to organize total 8 offline meetups + one mini-hackathon and > mentored to attendees. > > The followings are brief summary: > - "OpenStack Controller" Android app is available on Play Store > : > https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller > (GitHub: https://github.com/kosslab-kr/openstack-controller ) > > - Most high-priority projects (although it is not during string freeze > period) and documents are > 100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, > and Container Whitepaper. > > - Total 18,695 words were translated into Korean by four contributors > (confirmed through Zanata API: > https://translate.openstack.org/rest/stats/user/[Zanata > ID]/2018-08-16..2018-10-25 ): > > +------------+---------------+-----------------+ > | Zanata ID | Name | Number of words | > +------------+---------------+-----------------+ > | ardentpark | Soonyeul Park | 12517 | > +------------+---------------+-----------------+ > | bnitech | Dongbim Im | 693 | > +------------+---------------+-----------------+ > | csucom | Sungwook Choi | 4397 | > +------------+---------------+-----------------+ > | jaeho93 | Jaeho Cho | 1088 | > +------------+---------------+-----------------+ > > - The list of projects translated into Korean are described as: > > +-------------------------------------+-----------------+ > | Project | Number of words | > +-------------------------------------+-----------------+ > | api-site | 20 | > +-------------------------------------+-----------------+ > | cinder | 405 | > +-------------------------------------+-----------------+ > | designate-dashboard | 4 | > +-------------------------------------+-----------------+ > | horizon | 3226 | > +-------------------------------------+-----------------+ > | i18n | 434 | > +-------------------------------------+-----------------+ > | ironic | 4 | > +-------------------------------------+-----------------+ > | Leveraging Containers and OpenStack | 5480 | > +-------------------------------------+-----------------+ > | neutron-lbaas-dashboard | 5 | > +-------------------------------------+-----------------+ > | openstack-helm | 8835 | > +-------------------------------------+-----------------+ > | trove-dashboard | 89 | > +-------------------------------------+-----------------+ > | zun-ui | 193 | > +-------------------------------------+-----------------+ > > I would like to really appreciate all co-mentors and participants on > such a big event for promoting OpenStack contribution. > The venue and food were supported by Korea Open Source Software > Development Center ( https://kosslab.kr/ ). 
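A side note on the Zanata stats endpoint Ian references: the per-user word counts above could be re-derived with a short script along these lines. This is only a sketch; the JSON shape assumed for the response (a list of entries carrying a "wordsTranslated" field) is an assumption and may need adjusting to the real API payload:

```python
import json
import urllib.request

USERS = {
    "ardentpark": "Soonyeul Park",
    "bnitech": "Dongbim Im",
    "csucom": "Sungwook Choi",
    "jaeho93": "Jaeho Cho",
}
URL = "https://translate.openstack.org/rest/stats/user/{}/2018-08-16..2018-10-25"

def words_translated(zanata_id):
    req = urllib.request.Request(URL.format(zanata_id),
                                 headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Assumed response shape: a list of per-project/locale entries, each
    # with a translated-words counter under "wordsTranslated".
    return sum(entry.get("wordsTranslated", 0) for entry in data)

for zanata_id, name in sorted(USERS.items()):
    print("%-10s %-14s %6d words" % (zanata_id, name, words_translated(zanata_id)))
```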
> > > With many thanks, > > /Ian > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Wed Oct 31 14:38:00 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 31 Oct 2018 15:38:00 +0100 Subject: [openstack-dev] Ironic integration CI jobs In-Reply-To: References: <48177c01-6678-9294-8562-650645307d7b@redhat.com> Message-ID: <3bd075a8-0b14-ab35-e038-ad6b2ab376a7@redhat.com> On 10/31/18 2:57 PM, Julia Kreger wrote: > > > On Wed, Oct 31, 2018 at 5:44 AM Dmitry Tantsur > wrote: > > Hi, > > On 10/31/18 1:36 AM, Julia Kreger wrote: > [trim] > > > > ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This > job is > > essentially the same as our grenade mutlinode job, the only difference being > > grenade. > > Nope, not the same. Grenade jobs run only smoke tests, this job runs > https://github.com/openstack/ironic-tempest-plugin/blob/master/ironic_tempest_plugin/tests/scenario/test_baremetal_multitenancy.py > > > Ugh, Looking closer, we still end up deploying when the smoke tests run. It > feels like the only real difference between what is being exercised is that one > our explicit test scenario of putting two instances on two separate networks and > validating connectivity is not present between the two. I guess I'm failing to > see why we need all of the setup and infrastructure when we're just testing > pluggable network bits and settings their upon. Maybe it is a good cantidate for > looking at evolving how we handle scenario testing so we reduce our gate load > and resulting wait for test results. > > > ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job > > essentially just duplicates the functionality already covered in other jobs, > > including the grenade job. > > Ditto, grenade jobs do not cover our tests at all. Also this is the very job we > run on other projects (nova, neutron, maybe more), so it will be a bit painful > to remove it. > > > We run the basic baremetal ops test, which tests deploy. If we're already > covering the same code paths in other tests (which I feel we are), then the test > feels redundant to me. I'm not worried about the effort to change the job in > other gates. We really need to pull agent_ipmitool out of the name if we keep it > anyway... which still means going through zuul configs. Do not smoke tests cover rescue with bare metal? Because our jobs do. > > [trim] > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bdobreli at redhat.com Wed Oct 31 14:57:00 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 31 Oct 2018 15:57:00 +0100 Subject: [openstack-dev] [Edge-computing][tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: References: Message-ID: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> (cross-posting openstack-dev) Hello. [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data consistency requirements and challenges" a position paper [0] (papers submitting deadline is Nov 8). 
The problem scope is synchronizing control plane and/or deployments-specific data (not necessary limited to OpenStack) across remote Edges and central Edge and management site(s). Including the same aspects for overclouds and undercloud(s), in terms of TripleO; and other deployment tools of your choice. Another problem is to not go into different solutions for Edge deployments management and control planes of edges. And for tenants as well, if we think of tenants also doing Edge deployments based on Edge Data Replication as a Service, say for Kubernetes/OpenShift on top of OpenStack. So the paper should name the outstanding problems, define data consistency requirements and pose possible solutions for synchronization and conflicts resolving. Having maximum autonomy cases supported for isolated sites, with a capability to eventually catch up its distributed state. Like global database [1], or something different perhaps (see causal-real-time consistency model [2],[3]), or even using git. And probably more than that?.. (looking for ideas) See also the "check" list in-line, which I think also meets the data consistency topics well - it would be always nice to have some theoretical foundations at hand, when repairing some 1000-edges-spread-off and fully broken global database, by hand :) PS. I must admit I have yet any experience with those IEEE et al academic things and looking for someone who has it, to team and co-author that positioning paper by. That's as a start, then we can think of presenting it and expanding into work items for OpenStack Edge WG and future development plans. [0] http://conferences.computer.org/ICFC/2019/Paper_Submission.html [1] https://review.openstack.org/600555 [2] https://jepsen.io/consistency [3] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf On 10/22/18 3:44 PM, Flavia Delicato wrote: > ================================================================================= > IEEE International Conference on Fog Computing (ICFC 2019) > June 24-26, 2019 > Prague, Czech Republic > http://conferences.computer.org/ICFC/2019/ > Co-located with the IEEE International Conference on Cloud Engineering > (IC2E 2019) > ================================================================================== > > Important Dates > --------------- > Paper registration and abstract: Nov 1st, 2018 > Full paper submission due: Nov 8th, 2018 > Notification of paper acceptance: Jan. 20th, 2019 > Workshop and tutorial proposals due: Nov 11, 2018 > Notification of proposal acceptance: Nov 18, 2018 > > Call for Contributions > ---------------------- > Fog computing is the extension of cloud computing into its edge and > the physical world to meet the data volume and decision velocity > requirements in many emerging applications, such as augmented and > virtual realities (AR/VR), cyber-physical systems (CPS), intelligent > and autonomous systems, and mission-critical systems. The boundary > between centralized, powerful computing cloud and massively > distributed, Internet connected sensors, actuators, and things is > blurred in this new computing paradigm. > > The ICFC 2019 technical program will feature tutorials, workshops, and > research paper sessions. We solicit high-quality contributions in the > above categories. Details of submission is available on the conference > Web site. 
Topics of interest include but are not limited to: > > * System architecture for fog computing (check) > * Coordination between cloud, fog, and sensing/actuation endpoints > * Connectivity, storage, and computation in the edge > * Data processing and management for fog computing (check) > * Efficient and embedded AI in the fog > * System and network manageability > * Middleware and coordination platforms > * Power, energy, and resource management > * Device and hardware support for fog computing > * Programming models, abstractions, and software engineering for fog computing (check) > * Security, privacy, and ethics issues related to fog computing > * Theoretical foundations and formal methods for fog computing systems (check) > * Applications and experiences > > Organizing Committee > -------------------- > General Chairs: > Hui Lei, IBM > Albert Zomaya, The University of Sydney > > PC Co-chairs: > Erol Gelenbe, Imperial College London > Jie Liu, Microsoft Research > > Tutorials and Workshops Chair: > David Bermbach, TU Berlin > > Publicity Co-chairs: > Flavia Delicato,Federal University of Rio de Janeiro > Mathias Fischer, University Hamburg > > Publication Chair > Javid Taheri, Karlstad University > > Webmaster > Wei Li, The University of Sydney > > Steering Committee > ------------------ > Mung Chiang, Purdue University > Erol Gelenbe, Imperial College London > Christos Kozarakis, Stanford University > Hui Lei, IBM > Chenyang Lu, Washington University in St Louis > Beng Chin Ooi, National University of Singapore > Neeraj Suri, TU Darmstadt > Albert Zomaya, The University of Sydney > > Program Committee > ------------------ > > Tarek Abdelzaher, UIUC > Anne Benoit, ENS Lyon > David Bermbach, TU Berlin > Bharat Bhargava, Purdue University > Olivier Brun, LAAS/CNRS Laboratory > Jiannong Cao, Hong Kong Polytech > Flavia C. Delicato, UFRJ, Brazil > Xiaotie Deng, Peking University, China > Schahram Dustdar, TU Wien, Germany > Maria Gorlatova, Duke University > Dharanipragada Janakiram, IIT Madras > Wenjing Luo, Virginia Tech > Pedro José Marrón, Universität Duisburg-Essen > Geyong Min, University of Exeter > Suman Nath, Microsoft Research > Vincenzo Piuri, Universita Degli Studi Di Milano > Yong Meng Teo, National University of Singapore > Guoliang Xing, Chinese University of Hong Kong > Yuanyuan Yang, SUNY Stony Brook > Xiaoyun Zhu, Cloudera > -- Best regards, Bogdan Dobrelya, Irc #bogdando From juliaashleykreger at gmail.com Wed Oct 31 15:34:01 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 31 Oct 2018 08:34:01 -0700 Subject: [openstack-dev] Ironic integration CI jobs In-Reply-To: <3bd075a8-0b14-ab35-e038-ad6b2ab376a7@redhat.com> References: <48177c01-6678-9294-8562-650645307d7b@redhat.com> <3bd075a8-0b14-ab35-e038-ad6b2ab376a7@redhat.com> Message-ID: On Wed, Oct 31, 2018 at 7:38 AM Dmitry Tantsur wrote: > [trim] > > > Ditto, grenade jobs do not cover our tests at all. Also this is the > very job we > > run on other projects (nova, neutron, maybe more), so it will be a > bit painful > > to remove it. > > > > > > We run the basic baremetal ops test, which tests deploy. If we're > already > > covering the same code paths in other tests (which I feel we are), then > the test > > feels redundant to me. I'm not worried about the effort to change the > job in > > other gates. We really need to pull agent_ipmitool out of the name if we > keep it > > anyway... which still means going through zuul configs. > > Do not smoke tests cover rescue with bare metal? 
Because our jobs do. > > Smoke tests do not, as far as I can tell, but I believe we run rescue by default when our test scenarios execute on our other tempest-executing jobs as well, since it is a superset of the main scenario. Random example testr results: http://logs.openstack.org/72/614472/1/check/ironic-tempest-dsvm-ipa-partition-redfish-tinyipa/7537b02/testr_results.html.gz -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Oct 31 15:39:00 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 31 Oct 2018 09:39:00 -0600 Subject: [openstack-dev] [tripleo] gate issues please do not approve/recheck Message-ID: Hey folks, So we have identified an issue that has been causing a bunch of failures and proposed a revert of our podman testing[0]. We have cleared the gate and are asking that you not approve or recheck any patches at this time. We will let you know when it is safe to start approving things. Thanks, -Alex [0] https://review.openstack.org/#/c/614537/ From bdobreli at redhat.com Wed Oct 31 16:53:38 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 31 Oct 2018 17:53:38 +0100 Subject: [openstack-dev] [Edge-computing][tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> Message-ID: <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> I forgot to mention that the submission registration and abstract have to be submitted today. I've created it as #1570506394, and the paper itself can be uploaded until Nov 8 (or perhaps Nov 9, as the registration system shows me). I'm not sure that paper number is publicly searchable, so here are the paper name and abstract for your kind review: name: "Edge clouds control plane and management data consistency challenges" abstract: "Fog computing is an emerging Cloud of (Edge) Clouds technology. Synchronizing its control plane and deployment data is a major challenge. Autonomy requirements expect even the most distant edge sites to remain always manageable, available for monitoring and alerting, scaling up/down, upgrading and applying security fixes. Whenever temporarily disconnected sites are managed locally or centrally, some changes and data eventually need to be synchronized back to the central site(s), with their merge conflicts resolved for the central data hub(s). Some data also needs to be pushed from the central site(s) to the Edge, which might require resolving data collisions at the remote sites as well. In this paper, we position the outstanding data synchronization problems for the OpenStack cloud platform as it becomes a number one solution for fog computing. We outline the data consistency requirements and design approaches to meet the AA (Always Available) autonomy expectations. Finally, the paper presents a vision of unified tooling which solves the data synchronization problems the same way for infrastructure owners, IaaS cloud operators and tenants running workloads for a PaaS like OpenShift or Kubernetes deployed on top of OpenStack. The secondary goal of this work is to help cloud architects and developers federate stateful cloud components over reliable distributed data backends with known failure modes." Thank you for your time, if still reading this. On 10/31/18 3:57 PM, Bogdan Dobrelya wrote: > (cross-posting openstack-dev) > > Hello.
> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data > consistency requirements and challenges" a position paper [0] (papers > submitting deadline is Nov 8). > > The problem scope is synchronizing control plane and/or > deployments-specific data (not necessary limited to OpenStack) across > remote Edges and central Edge and management site(s). Including the same > aspects for overclouds and undercloud(s), in terms of TripleO; and other > deployment tools of your choice. > > Another problem is to not go into different solutions for Edge > deployments management and control planes of edges. And for tenants as > well, if we think of tenants also doing Edge deployments based on Edge > Data Replication as a Service, say for Kubernetes/OpenShift on top of > OpenStack. > > So the paper should name the outstanding problems, define data > consistency requirements and pose possible solutions for synchronization > and conflicts resolving. Having maximum autonomy cases supported for > isolated sites, with a capability to eventually catch up its distributed > state. Like global database [1], or something different perhaps (see > causal-real-time consistency model [2],[3]), or even using git. And > probably more than that?.. (looking for ideas) > > See also the "check" list in-line, which I think also meets the data > consistency topics well - it would be always nice to have some > theoretical foundations at hand, when repairing some > 1000-edges-spread-off and fully broken global database, by hand :) > > PS. I must admit I have yet any experience with those IEEE et al > academic things and looking for someone who has it, to team and > co-author that positioning paper by. That's as a start, then we can > think of presenting it and expanding into work items for OpenStack Edge > WG and future development plans. > > [0] http://conferences.computer.org/ICFC/2019/Paper_Submission.html > [1] https://review.openstack.org/600555 > [2] https://jepsen.io/consistency > [3] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf > > On 10/22/18 3:44 PM, Flavia Delicato wrote: >> ================================================================================= >> >> IEEE International Conference on Fog Computing (ICFC 2019) >> June 24-26, 2019 >> Prague, Czech Republic >> http://conferences.computer.org/ICFC/2019/ >> Co-located with the IEEE International Conference on Cloud Engineering >> (IC2E 2019) >> ================================================================================== >> >> >> Important Dates >> --------------- >> Paper registration and abstract: Nov 1st, 2018 >> Full paper submission due: Nov 8th, 2018 >> Notification of paper acceptance: Jan. 20th, 2019 >> Workshop and tutorial proposals due: Nov 11, 2018 >> Notification of proposal acceptance: Nov 18, 2018 >> >> Call for Contributions >> ---------------------- >> Fog computing is the extension of cloud computing into its edge and >> the physical world to meet the data volume and decision velocity >> requirements in many emerging applications, such as augmented and >> virtual realities (AR/VR), cyber-physical systems (CPS), intelligent >> and autonomous systems, and mission-critical systems. The boundary >> between centralized, powerful computing cloud and massively >> distributed, Internet connected sensors, actuators, and things is >> blurred in this new computing paradigm. >> >> The ICFC 2019 technical program will feature tutorials, workshops, and >> research paper sessions. 
We solicit high-quality contributions in the >> above categories. Details of submission is available on the conference >> Web site. Topics of interest include but are not limited to: >> >> * System architecture for fog computing > > (check) > >> * Coordination between cloud, fog, and sensing/actuation endpoints >> * Connectivity, storage, and computation in the edge >> * Data processing and management for fog computing > > (check) > >> * Efficient and embedded AI in the fog >> * System and network manageability >> * Middleware and coordination platforms >> * Power, energy, and resource management >> * Device and hardware support for fog computing >> * Programming models, abstractions, and software engineering for fog >> computing > > (check) > >> * Security, privacy, and ethics issues related to fog computing >> * Theoretical foundations and formal methods for fog computing systems > > (check) > >> * Applications and experiences >> >> Organizing Committee >> -------------------- >> General Chairs: >> Hui Lei, IBM >> Albert Zomaya, The University of Sydney >> >> PC Co-chairs: >> Erol Gelenbe, Imperial College London >> Jie Liu, Microsoft Research >> >> Tutorials and Workshops Chair: >> David Bermbach, TU Berlin >> >> Publicity Co-chairs: >> Flavia Delicato,Federal University of Rio de Janeiro >> Mathias Fischer, University Hamburg >> >> Publication Chair >> Javid Taheri, Karlstad University >> >> Webmaster >> Wei Li, The University of Sydney >> >> Steering Committee >> ------------------ >> Mung Chiang, Purdue University >> Erol Gelenbe, Imperial College London >> Christos Kozarakis, Stanford University >> Hui Lei, IBM >> Chenyang Lu, Washington University in St Louis >> Beng Chin Ooi, National University of Singapore >> Neeraj Suri, TU Darmstadt >> Albert Zomaya, The University of Sydney >> >> Program Committee >> ------------------ >> >> Tarek Abdelzaher, UIUC >> Anne Benoit, ENS Lyon >> David Bermbach, TU Berlin >> Bharat Bhargava, Purdue University >> Olivier Brun, LAAS/CNRS Laboratory >> Jiannong Cao, Hong Kong Polytech >> Flavia C. Delicato, UFRJ, Brazil >> Xiaotie Deng, Peking University, China >> Schahram Dustdar, TU Wien, Germany >> Maria Gorlatova, Duke University >> Dharanipragada Janakiram, IIT Madras >> Wenjing Luo, Virginia Tech >> Pedro José Marrón, Universität Duisburg-Essen >> Geyong Min, University of Exeter >> Suman Nath, Microsoft Research >> Vincenzo Piuri, Universita Degli Studi Di Milano >> Yong Meng Teo, National University of Singapore >> Guoliang Xing, Chinese University of Hong Kong >> Yuanyuan Yang, SUNY Stony Brook >> Xiaoyun Zhu, Cloudera >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From sathlang at redhat.com Wed Oct 31 17:09:54 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Wed, 31 Oct 2018 18:09:54 +0100 Subject: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade In-Reply-To: References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> Message-ID: <87zhuuhwkd.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Emilien Macchi writes: > A bit of an update here: > > - We merged the patch in openstack/paunch that stop the Docker container if > we try to start a Podman container. > - We switched the undercloud upgrade job to test upgrades from Docker to > Podman (for now containers are stopped in Docker and then started in > Podman). > - We are now looking how and where to remove the Docker containers once the > upgrade finished. 
For that work, I started with the Undercloud and patched > tripleoclient to run the post_upgrade_tasks which to me is a good candidate > to run docker rm. +1 > > Please look: > - tripleoclient / run post_upgrade_tasks when upgrading > standalone/undercloud: https://review.openstack.org/614349 > - THT: prototype on how we would remove the Docker containers: > https://review.openstack.org/611092 > reviewed. > Note: for now we assume that Docker is still available on the host after > the upgrade as we are testing things under centos7. I'm aware that this > assumption can change in the future but we'll probably re-iterate. > > What I need from the upgrade team is feedback on this workflow, and see if > we can re-use these bits originally tested on Undercloud / Standalone, for > the Overcloud as well. > So that workflow won't break in any case for the overcloud. For an inplace upgrade then we need that clean up anyway given how paunch detect the need for an upgrade. For other upgrade scenario this won't do anything bad. So +1 for me. > Thanks for the feedback, > > > On Fri, Oct 19, 2018 at 8:00 AM Emilien Macchi wrote: > >> On Fri, Oct 19, 2018 at 4:24 AM Giulio Fidente >> wrote: >> >>> 1) create the podman systemd unit >>> 2) delete the docker container >>> >> >> We finally went with "stop the docker container" >> >> 3) start the podman container >>> >> >> and 4) delete the docker container later in THT upgrade_tasks. >> >> And yes +1 to do the same in ceph-ansible if possible. >> -- >> Emilien Macchi >> > > > -- > Emilien Macchi -- Sofer Athlan-Guyot From hjensas at redhat.com Wed Oct 31 17:15:27 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 31 Oct 2018 18:15:27 +0100 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: On Tue, 2018-10-30 at 15:00 -0600, Alex Schultz wrote: > On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan > wrote: > > > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > > > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec < > > > openstack at nemebean.com> wrote: > > > > > > > > Tagging with tripleo since my suggestion below is specific to > > > > that project. > > > > > > > > On 10/30/18 11:03 AM, Clark Boylan wrote: > > > > > Hello everyone, > > > > > > > > > > A little while back I sent email explaining how the gate > > > > > queues work and how fixing bugs helps us test and merge more > > > > > code. All of this still is still true and we should keep > > > > > pushing to improve our testing to avoid gate resets. > > > > > > > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper > > > > > cluster. In the process of doing this we had to restart Zuul > > > > > which brought in a new logging feature that exposes node > > > > > resource usage by jobs. Using this data I've been able to > > > > > generate some report information on where our node demand is > > > > > going. This change [0] produces this report [1]. > > > > > > > > > > As with optimizing software we want to identify which changes > > > > > will have the biggest impact and to be able to measure > > > > > whether or not changes have had an impact once we have made > > > > > them. Hopefully this information is a start at doing that. 
> > > > > Currently we can only look back to the point Zuul was > > > > > restarted, but we have a thirty day log rotation for this > > > > > service and should be able to look at a month's worth of data > > > > > going forward. > > > > > > > > > > Looking at the data you might notice that Tripleo is using > > > > > many more node resources than our other projects. They are > > > > > aware of this and have a plan [2] to reduce their resource > > > > > consumption. We'll likely be using this report generator to > > > > > check progress of this plan over time. > > > > > > > > I know at one point we had discussed reducing the concurrency > > > > of the > > > > tripleo gate to help with this. Since tripleo is still using > > > > >50% of the > > > > resources it seems like maybe we should revisit that, at least > > > > for the > > > > short-term until the more major changes can be made? Looking > > > > through the > > > > merge history for tripleo projects I don't see a lot of cases > > > > (any, in > > > > fact) where more than a dozen patches made it through anyway*, > > > > so I > > > > suspect it wouldn't have a significant impact on gate > > > > throughput, but it > > > > would free up quite a few nodes for other uses. > > > > > > > > > > It's the failures in gate and resets. At this point I think it > > > would > > > be a good idea to turn down the concurrency of the tripleo queue > > > in > > > the gate if possible. As of late it's been timeouts but we've > > > been > > > unable to track down why it's timing out specifically. I > > > personally > > > have a feeling it's the container download times since we do not > > > have > > > a local registry available and are only able to leverage the > > > mirrors > > > for some levels of caching. Unfortunately we don't get the best > > > information about this out of docker (or the mirrors) and it's > > > really > > > hard to determine what exactly makes things run a bit slower. > > > > We actually tried this not too long ago > > https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b > > but decided to revert it because it didn't decrease the check > > queue backlog significantly. We were still running at several hours > > behind most of the time. > > > > If we want to set up better monitoring and measuring and try it > > again we can do that. But we probably want to measure queue sizes > > with and without the change like that to better understand if it > > helps. > > > > As for container image download times can we quantify that via > > docker logs? Basically sum up the amount of time spent by a job > > downloading images so that we can see what the impact is but also > > measure if changes improve that? As for other ideas improving > > things seems like many of the images that tripleo use are quite > > large. I recall seeing a > 600MB image just for rsyslog. Wouldn't > > it be advantageous for both the gate and tripleo in the real world > > to trim the size of those images (which should improve download > > times). In any case quantifying the size of the downloads and > > trimming those if possible is likely also worthwhile. > > > > So it's not that simple as we don't just download all the images in a > distinct task and there isn't any information provided around > size/speed AFAIK. Additionally we aren't doing anything special with > the images (it's mostly kolla built containers with a handful of > tweaks) so that's just the size of the containers. 
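Before the layer breakdown below, a hedged aside: per-layer sizes can be read directly from image metadata, for example with the docker-py SDK; the image name used here is only an illustrative assumption:

```python
import docker  # docker-py SDK; assumes a reachable docker daemon

client = docker.from_env()

def layer_report(image_name):
    image = client.images.get(image_name)
    total = 0
    # image.history() lists layers newest-first, each with the command
    # that created it and its size in bytes.
    for layer in image.history():
        size = layer.get("Size", 0)
        total += size
        if size:
            print("%8.1f MB  %s" % (size / 1e6, (layer.get("CreatedBy") or "")[:70]))
    print("   total: %.1f MB" % (total / 1e6))

# Example call (the image name is an assumption, substitute a real one):
# layer_report("docker.io/tripleomaster/centos-binary-rsyslog:current-tripleo")
```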
I am currently > working on reducing any tripleo specific dependencies (ie removal of > instack-undercloud, etc) in hopes that we'll shave off some of the > dependencies but it seems that there's a larger (bloat) issue around > containers in general. I have no idea why the rsyslog container > would > be 600M, but yea that does seem excessive. > We add this to all images: https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python socat sudo which openstack-tripleo-common-container-base rsync cronie crudini openstack-selinux ansible python-shade puppet-tripleo python2- kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB Is the additional 276 MB reasonable here? openstack-selinux <- This package run relabling, does that kind of touching the filesystem impact the size due to docker layers? Also: python2-kubernetes is a fairly large package (18007990) do we use that in every image? I don't see any tripleo related repos importing from that when searching on Hound? The original commit message[1] adding it states it is for future convenience. On my undercloud we have 101 images, if we are downloading every 18 MB per image thats almost 1.8 GB for a package we don't use? (I hope it's not like this? With docker layers, we only download that 276 MB transaction once? Or?) [1] https://review.openstack.org/527927 > > Clark > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Wed Oct 31 17:20:48 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 31 Oct 2018 11:20:48 -0600 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint Message-ID: Hey everyone, Based on previous emails around this[0][1], I have proposed a possible reducing in our usage by switching the scenario001--011 jobs to non-voting and removing them from the gate[2]. This will reduce the likelihood of causing gate resets and hopefully allow us to land corrective patches sooner. In terms of risks, there is a risk that we might introduce breaking changes in the scenarios because they are officially non-voting, and we will still be gating promotions on these scenarios. This means that if they are broken, they will need the same attention and care to fix them so we should be vigilant when the jobs are failing. The hope is that we can switch these scenarios out with voting standalone versions in the next few weeks, but until that I think we should proceed by removing them from the gate. I know this is less than ideal but as most failures with these jobs in the gate are either timeouts or unrelated to the changes (or gate queue), they are more of hindrance than a help at this point. 
Thanks, -Alex [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html [2] https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged) From aschultz at redhat.com Wed Oct 31 17:35:59 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 31 Oct 2018 11:35:59 -0600 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås wrote: > > On Tue, 2018-10-30 at 15:00 -0600, Alex Schultz wrote: > > On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan > > wrote: > > > > > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > > > > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec < > > > > openstack at nemebean.com> wrote: > > > > > > > > > > Tagging with tripleo since my suggestion below is specific to > > > > > that project. > > > > > > > > > > On 10/30/18 11:03 AM, Clark Boylan wrote: > > > > > > Hello everyone, > > > > > > > > > > > > A little while back I sent email explaining how the gate > > > > > > queues work and how fixing bugs helps us test and merge more > > > > > > code. All of this still is still true and we should keep > > > > > > pushing to improve our testing to avoid gate resets. > > > > > > > > > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper > > > > > > cluster. In the process of doing this we had to restart Zuul > > > > > > which brought in a new logging feature that exposes node > > > > > > resource usage by jobs. Using this data I've been able to > > > > > > generate some report information on where our node demand is > > > > > > going. This change [0] produces this report [1]. > > > > > > > > > > > > As with optimizing software we want to identify which changes > > > > > > will have the biggest impact and to be able to measure > > > > > > whether or not changes have had an impact once we have made > > > > > > them. Hopefully this information is a start at doing that. > > > > > > Currently we can only look back to the point Zuul was > > > > > > restarted, but we have a thirty day log rotation for this > > > > > > service and should be able to look at a month's worth of data > > > > > > going forward. > > > > > > > > > > > > Looking at the data you might notice that Tripleo is using > > > > > > many more node resources than our other projects. They are > > > > > > aware of this and have a plan [2] to reduce their resource > > > > > > consumption. We'll likely be using this report generator to > > > > > > check progress of this plan over time. > > > > > > > > > > I know at one point we had discussed reducing the concurrency > > > > > of the > > > > > tripleo gate to help with this. Since tripleo is still using > > > > > >50% of the > > > > > resources it seems like maybe we should revisit that, at least > > > > > for the > > > > > short-term until the more major changes can be made? Looking > > > > > through the > > > > > merge history for tripleo projects I don't see a lot of cases > > > > > (any, in > > > > > fact) where more than a dozen patches made it through anyway*, > > > > > so I > > > > > suspect it wouldn't have a significant impact on gate > > > > > throughput, but it > > > > > would free up quite a few nodes for other uses. 
> > > > > > > > > > > > > It's the failures in gate and resets. At this point I think it > > > > would > > > > be a good idea to turn down the concurrency of the tripleo queue > > > > in > > > > the gate if possible. As of late it's been timeouts but we've > > > > been > > > > unable to track down why it's timing out specifically. I > > > > personally > > > > have a feeling it's the container download times since we do not > > > > have > > > > a local registry available and are only able to leverage the > > > > mirrors > > > > for some levels of caching. Unfortunately we don't get the best > > > > information about this out of docker (or the mirrors) and it's > > > > really > > > > hard to determine what exactly makes things run a bit slower. > > > > > > We actually tried this not too long ago > > > https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b > > > but decided to revert it because it didn't decrease the check > > > queue backlog significantly. We were still running at several hours > > > behind most of the time. > > > > > > If we want to set up better monitoring and measuring and try it > > > again we can do that. But we probably want to measure queue sizes > > > with and without the change like that to better understand if it > > > helps. > > > > > > As for container image download times can we quantify that via > > > docker logs? Basically sum up the amount of time spent by a job > > > downloading images so that we can see what the impact is but also > > > measure if changes improve that? As for other ideas improving > > > things seems like many of the images that tripleo use are quite > > > large. I recall seeing a > 600MB image just for rsyslog. Wouldn't > > > it be advantageous for both the gate and tripleo in the real world > > > to trim the size of those images (which should improve download > > > times). In any case quantifying the size of the downloads and > > > trimming those if possible is likely also worthwhile. > > > > > > > So it's not that simple as we don't just download all the images in a > > distinct task and there isn't any information provided around > > size/speed AFAIK. Additionally we aren't doing anything special with > > the images (it's mostly kolla built containers with a handful of > > tweaks) so that's just the size of the containers. I am currently > > working on reducing any tripleo specific dependencies (ie removal of > > instack-undercloud, etc) in hopes that we'll shave off some of the > > dependencies but it seems that there's a larger (bloat) issue around > > containers in general. I have no idea why the rsyslog container > > would > > be 600M, but yea that does seem excessive. > > > > We add this to all images: > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python > socat sudo which openstack-tripleo-common-container-base rsync cronie > crudini openstack-selinux ansible python-shade puppet-tripleo python2- > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > Is the additional 276 MB reasonable here? > openstack-selinux <- This package run relabling, does that kind of > touching the filesystem impact the size due to docker layers? > > Also: python2-kubernetes is a fairly large package (18007990) do we use > that in every image? 
I don't see any tripleo related repos importing > from that when searching on Hound? The original commit message[1] > adding it states it is for future convenience. > > On my undercloud we have 101 images, if we are downloading every 18 MB > per image thats almost 1.8 GB for a package we don't use? (I hope it's > not like this? With docker layers, we only download that 276 MB > transaction once? Or?) > So this is a single layer that is updated once and shared by all the containers that inherit from it. I did notice the same thing and have proposed a change in the layering of these packages last night. https://review.openstack.org/#/c/614371/ In general this does raise a point about dependencies of services and what the actual impact of adding new ones to projects is. Especially in the container world where this might be duplicated N times depending on the number of services deployed. With the move to containers, much of the sharedness that being on a single host provided has been lost at a cost of increased bandwidth, memory, and storage usage. Thanks, -Alex > > [1] https://review.openstack.org/527927 > > > > > > Clark > > > > > > ___________________________________________________________________ > > > _______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > > bscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _____________________________________________________________________ > > _____ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > > cribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From whayutin at redhat.com Wed Oct 31 17:39:59 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Oct 2018 11:39:59 -0600 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint In-Reply-To: References: Message-ID: On Wed, Oct 31, 2018 at 11:21 AM Alex Schultz wrote: > Hey everyone, > > Based on previous emails around this[0][1], I have proposed a possible > reducing in our usage by switching the scenario001--011 jobs to > non-voting and removing them from the gate[2]. This will reduce the > likelihood of causing gate resets and hopefully allow us to land > corrective patches sooner. In terms of risks, there is a risk that we > might introduce breaking changes in the scenarios because they are > officially non-voting, and we will still be gating promotions on these > scenarios. This means that if they are broken, they will need the > same attention and care to fix them so we should be vigilant when the > jobs are failing. > > The hope is that we can switch these scenarios out with voting > standalone versions in the next few weeks, but until that I think we > should proceed by removing them from the gate. I know this is less > than ideal but as most failures with these jobs in the gate are either > timeouts or unrelated to the changes (or gate queue), they are more of > hindrance than a help at this point. > > Thanks, > -Alex > I think I also have to agree. 
Having to deploy with containers, update containers and run with two nodes is no longer a very viable option upstream. It's not impossible but it should be the exception and not the rule for all our jobs. Thanks Alex > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html > [2] > https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Oct 31 17:54:46 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 31 Oct 2018 18:54:46 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> Message-ID: Hi, Thank you for sharing your proposal. I think this is a very interesting topic with a list of possible solutions some of which this group is also discussing. It would also be great to learn more about the IEEE activities and have experience about the process in this group on the way forward. I personally do not have experience with IEEE conferences, but I’m happy to help with the paper if I can. Thanks, Ildikó > On 2018. Oct 31., at 17:53, Bogdan Dobrelya wrote: > > I forgot to mention the submission registration and abstract has to be submitted today. I've created it as #1570506394, and the paper itself can be uploaded until the Nov 8 (or Nov 9 perhaps as the registration system shows to me). I'm not sure that paper number is searchable publicly, so here is the paper name and abstract for your kind review please: > > name: "Edge clouds control plane and management data consistency challenges" > abstract: "Fog computing is emerging Cloud of (Edge) Clouds technology. Its control plane and deployments data synchronization is a major challenge. Autonomy requirements expect even the most distant edge sites always manageable, available for monitoring and alerting, scaling up/down, upgrading and applying security fixes. Whenever temporary disconnected sites are managed locally or centrally, some changes and data need to be eventually synchronized back to the central site(s) with having its merge-conflicts resolved for the central data hub(s). While some data needs to be pushed from the central site(s) to the Edge, which might require resolving data collisions at the remote sites as well. In this paper, we position the outstanding data synchronization problems for OpenStack cloud platform becoming a solution number one for fog computing. We outline the data consistency requirements and design approaches to meet the AA (Always Available) autonomy expectations. 
Finally, the paper brings the vision of unified tooling, which solves the data synchronization problems the same way for infrastructure owners, IaaS cloud operators and tenants running workloads for PaaS like OpenShift or Kubernetes deployed on top of OpenStack. The secondary goal of this work is to help cloud architects and developers to federate stateful cloud components over reliable distributed data backends and having its failure modes known." > Thank you for your time, if still reading this. > > On 10/31/18 3:57 PM, Bogdan Dobrelya wrote: >> (cross-posting openstack-dev) >> Hello. >> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data consistency requirements and challenges" a position paper [0] (papers submitting deadline is Nov 8). >> The problem scope is synchronizing control plane and/or deployments-specific data (not necessary limited to OpenStack) across remote Edges and central Edge and management site(s). Including the same aspects for overclouds and undercloud(s), in terms of TripleO; and other deployment tools of your choice. >> Another problem is to not go into different solutions for Edge deployments management and control planes of edges. And for tenants as well, if we think of tenants also doing Edge deployments based on Edge Data Replication as a Service, say for Kubernetes/OpenShift on top of OpenStack. >> So the paper should name the outstanding problems, define data consistency requirements and pose possible solutions for synchronization and conflicts resolving. Having maximum autonomy cases supported for isolated sites, with a capability to eventually catch up its distributed state. Like global database [1], or something different perhaps (see causal-real-time consistency model [2],[3]), or even using git. And probably more than that?.. (looking for ideas) >> See also the "check" list in-line, which I think also meets the data consistency topics well - it would be always nice to have some theoretical foundations at hand, when repairing some 1000-edges-spread-off and fully broken global database, by hand :) >> PS. I must admit I have yet any experience with those IEEE et al academic things and looking for someone who has it, to team and co-author that positioning paper by. That's as a start, then we can think of presenting it and expanding into work items for OpenStack Edge WG and future development plans. >> [0] http://conferences.computer.org/ICFC/2019/Paper_Submission.html >> [1] https://review.openstack.org/600555 >> [2] https://jepsen.io/consistency >> [3] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf >> On 10/22/18 3:44 PM, Flavia Delicato wrote: >>> ================================================================================= >>> IEEE International Conference on Fog Computing (ICFC 2019) >>> June 24-26, 2019 >>> Prague, Czech Republic >>> http://conferences.computer.org/ICFC/2019/ >>> Co-located with the IEEE International Conference on Cloud Engineering >>> (IC2E 2019) >>> ================================================================================== >>> >>> Important Dates >>> --------------- >>> Paper registration and abstract: Nov 1st, 2018 >>> Full paper submission due: Nov 8th, 2018 >>> Notification of paper acceptance: Jan. 
20th, 2019 >>> Workshop and tutorial proposals due: Nov 11, 2018 >>> Notification of proposal acceptance: Nov 18, 2018 >>> >>> Call for Contributions >>> ---------------------- >>> Fog computing is the extension of cloud computing into its edge and >>> the physical world to meet the data volume and decision velocity >>> requirements in many emerging applications, such as augmented and >>> virtual realities (AR/VR), cyber-physical systems (CPS), intelligent >>> and autonomous systems, and mission-critical systems. The boundary >>> between centralized, powerful computing cloud and massively >>> distributed, Internet connected sensors, actuators, and things is >>> blurred in this new computing paradigm. >>> >>> The ICFC 2019 technical program will feature tutorials, workshops, and >>> research paper sessions. We solicit high-quality contributions in the >>> above categories. Details of submission is available on the conference >>> Web site. Topics of interest include but are not limited to: >>> >>> * System architecture for fog computing >> (check) >>> * Coordination between cloud, fog, and sensing/actuation endpoints >>> * Connectivity, storage, and computation in the edge >>> * Data processing and management for fog computing >> (check) >>> * Efficient and embedded AI in the fog >>> * System and network manageability >>> * Middleware and coordination platforms >>> * Power, energy, and resource management >>> * Device and hardware support for fog computing >>> * Programming models, abstractions, and software engineering for fog computing >> (check) >>> * Security, privacy, and ethics issues related to fog computing >>> * Theoretical foundations and formal methods for fog computing systems >> (check) >>> * Applications and experiences >>> >>> Organizing Committee >>> -------------------- >>> General Chairs: >>> Hui Lei, IBM >>> Albert Zomaya, The University of Sydney >>> >>> PC Co-chairs: >>> Erol Gelenbe, Imperial College London >>> Jie Liu, Microsoft Research >>> >>> Tutorials and Workshops Chair: >>> David Bermbach, TU Berlin >>> >>> Publicity Co-chairs: >>> Flavia Delicato,Federal University of Rio de Janeiro >>> Mathias Fischer, University Hamburg >>> >>> Publication Chair >>> Javid Taheri, Karlstad University >>> >>> Webmaster >>> Wei Li, The University of Sydney >>> >>> Steering Committee >>> ------------------ >>> Mung Chiang, Purdue University >>> Erol Gelenbe, Imperial College London >>> Christos Kozarakis, Stanford University >>> Hui Lei, IBM >>> Chenyang Lu, Washington University in St Louis >>> Beng Chin Ooi, National University of Singapore >>> Neeraj Suri, TU Darmstadt >>> Albert Zomaya, The University of Sydney >>> >>> Program Committee >>> ------------------ >>> >>> Tarek Abdelzaher, UIUC >>> Anne Benoit, ENS Lyon >>> David Bermbach, TU Berlin >>> Bharat Bhargava, Purdue University >>> Olivier Brun, LAAS/CNRS Laboratory >>> Jiannong Cao, Hong Kong Polytech >>> Flavia C. 
Delicato, UFRJ, Brazil >>> Xiaotie Deng, Peking University, China >>> Schahram Dustdar, TU Wien, Germany >>> Maria Gorlatova, Duke University >>> Dharanipragada Janakiram, IIT Madras >>> Wenjing Luo, Virginia Tech >>> Pedro José Marrón, Universität Duisburg-Essen >>> Geyong Min, University of Exeter >>> Suman Nath, Microsoft Research >>> Vincenzo Piuri, Universita Degli Studi Di Milano >>> Yong Meng Teo, National University of Singapore >>> Guoliang Xing, Chinese University of Hong Kong >>> Yuanyuan Yang, SUNY Stony Brook >>> Xiaoyun Zhu, Cloudera >>> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From doug at doughellmann.com Wed Oct 31 18:15:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 31 Oct 2018 14:15:32 -0400 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint In-Reply-To: References: Message-ID: Alex Schultz writes: > Hey everyone, > > Based on previous emails around this[0][1], I have proposed a possible > reducing in our usage by switching the scenario001--011 jobs to > non-voting and removing them from the gate[2]. This will reduce the > likelihood of causing gate resets and hopefully allow us to land > corrective patches sooner. In terms of risks, there is a risk that we > might introduce breaking changes in the scenarios because they are > officially non-voting, and we will still be gating promotions on these > scenarios. This means that if they are broken, they will need the > same attention and care to fix them so we should be vigilant when the > jobs are failing. > > The hope is that we can switch these scenarios out with voting > standalone versions in the next few weeks, but until that I think we > should proceed by removing them from the gate. I know this is less > than ideal but as most failures with these jobs in the gate are either > timeouts or unrelated to the changes (or gate queue), they are more of > hindrance than a help at this point. > > Thanks, > -Alex > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html > [2] https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev This makes a lot of sense as a temporary measure. Thanks for continuing to drive these changes! 
Doug From hjensas at redhat.com Wed Oct 31 18:17:30 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 31 Oct 2018 19:17:30 +0100 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: On Wed, 2018-10-31 at 11:35 -0600, Alex Schultz wrote: > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > wrote: > > > > On Tue, 2018-10-30 at 15:00 -0600, Alex Schultz wrote: > > > On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan < > > > cboylan at sapwetik.org> > > > wrote: > > > > > > > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: > > > > > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec < > > > > > openstack at nemebean.com> wrote: > > > > > > > > > > > > Tagging with tripleo since my suggestion below is specific > > > > > > to > > > > > > that project. > > > > > > > > > > > > On 10/30/18 11:03 AM, Clark Boylan wrote: > > > > > > > Hello everyone, > > > > > > > > > > > > > > A little while back I sent email explaining how the gate > > > > > > > queues work and how fixing bugs helps us test and merge > > > > > > > more > > > > > > > code. All of this still is still true and we should keep > > > > > > > pushing to improve our testing to avoid gate resets. > > > > > > > > > > > > > > Last week we migrated Zuul and Nodepool to a new > > > > > > > Zookeeper > > > > > > > cluster. In the process of doing this we had to restart > > > > > > > Zuul > > > > > > > which brought in a new logging feature that exposes node > > > > > > > resource usage by jobs. Using this data I've been able to > > > > > > > generate some report information on where our node demand > > > > > > > is > > > > > > > going. This change [0] produces this report [1]. > > > > > > > > > > > > > > As with optimizing software we want to identify which > > > > > > > changes > > > > > > > will have the biggest impact and to be able to measure > > > > > > > whether or not changes have had an impact once we have > > > > > > > made > > > > > > > them. Hopefully this information is a start at doing > > > > > > > that. > > > > > > > Currently we can only look back to the point Zuul was > > > > > > > restarted, but we have a thirty day log rotation for this > > > > > > > service and should be able to look at a month's worth of > > > > > > > data > > > > > > > going forward. > > > > > > > > > > > > > > Looking at the data you might notice that Tripleo is > > > > > > > using > > > > > > > many more node resources than our other projects. They > > > > > > > are > > > > > > > aware of this and have a plan [2] to reduce their > > > > > > > resource > > > > > > > consumption. We'll likely be using this report generator > > > > > > > to > > > > > > > check progress of this plan over time. > > > > > > > > > > > > I know at one point we had discussed reducing the > > > > > > concurrency > > > > > > of the > > > > > > tripleo gate to help with this. Since tripleo is still > > > > > > using > > > > > > > 50% of the > > > > > > > > > > > > resources it seems like maybe we should revisit that, at > > > > > > least > > > > > > for the > > > > > > short-term until the more major changes can be made? 
> > > > > > Looking > > > > > > through the > > > > > > merge history for tripleo projects I don't see a lot of > > > > > > cases > > > > > > (any, in > > > > > > fact) where more than a dozen patches made it through > > > > > > anyway*, > > > > > > so I > > > > > > suspect it wouldn't have a significant impact on gate > > > > > > throughput, but it > > > > > > would free up quite a few nodes for other uses. > > > > > > > > > > > > > > > > It's the failures in gate and resets. At this point I think > > > > > it > > > > > would > > > > > be a good idea to turn down the concurrency of the tripleo > > > > > queue > > > > > in > > > > > the gate if possible. As of late it's been timeouts but we've > > > > > been > > > > > unable to track down why it's timing out specifically. I > > > > > personally > > > > > have a feeling it's the container download times since we do > > > > > not > > > > > have > > > > > a local registry available and are only able to leverage the > > > > > mirrors > > > > > for some levels of caching. Unfortunately we don't get the > > > > > best > > > > > information about this out of docker (or the mirrors) and > > > > > it's > > > > > really > > > > > hard to determine what exactly makes things run a bit slower. > > > > > > > > We actually tried this not too long ago > > > > https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b > > > > but decided to revert it because it didn't decrease the check > > > > queue backlog significantly. We were still running at several > > > > hours > > > > behind most of the time. > > > > > > > > If we want to set up better monitoring and measuring and try it > > > > again we can do that. But we probably want to measure queue > > > > sizes > > > > with and without the change like that to better understand if > > > > it > > > > helps. > > > > > > > > As for container image download times can we quantify that via > > > > docker logs? Basically sum up the amount of time spent by a job > > > > downloading images so that we can see what the impact is but > > > > also > > > > measure if changes improve that? As for other ideas improving > > > > things seems like many of the images that tripleo use are quite > > > > large. I recall seeing a > 600MB image just for rsyslog. > > > > Wouldn't > > > > it be advantageous for both the gate and tripleo in the real > > > > world > > > > to trim the size of those images (which should improve download > > > > times). In any case quantifying the size of the downloads and > > > > trimming those if possible is likely also worthwhile. > > > > > > > > > > So it's not that simple as we don't just download all the images > > > in a > > > distinct task and there isn't any information provided around > > > size/speed AFAIK. Additionally we aren't doing anything special > > > with > > > the images (it's mostly kolla built containers with a handful of > > > tweaks) so that's just the size of the containers. I am > > > currently > > > working on reducing any tripleo specific dependencies (ie removal > > > of > > > instack-undercloud, etc) in hopes that we'll shave off some of > > > the > > > dependencies but it seems that there's a larger (bloat) issue > > > around > > > containers in general. I have no idea why the rsyslog container > > > would > > > be 600M, but yea that does seem excessive. 
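As an aside, it is cheap to quantify where the bytes actually go per image with the docker CLI. A rough sketch (the image name below is illustrative, not one referenced in this thread):

    # Sum per-layer sizes for one image, biggest layer first,
    # using `docker history` with raw byte output.
    import subprocess

    def layer_sizes(image):
        out = subprocess.check_output(
            ['docker', 'history', '--human=false',
             '--format', '{{.Size}}', image])
        return sorted((int(s) for s in out.decode().split()), reverse=True)

    sizes = layer_sizes('tripleomaster/centos-binary-rsyslog:latest')
    print('total %d bytes; largest layer %d bytes' % (sum(sizes), sizes[0]))

Run across all 101 images, that would show how much of the total is really just the one shared base layer.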
> > We add this to all images: > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python > socat sudo which openstack-tripleo-common-container-base rsync cronie > crudini openstack-selinux ansible python-shade puppet-tripleo python2- > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > Is the additional 276 MB reasonable here? > openstack-selinux <- This package run relabling, does that kind of > touching the filesystem impact the size due to docker layers? > > Also: python2-kubernetes is a fairly large package (18007990) do we use > that in every image? I don't see any tripleo related repos importing > from that when searching on Hound? The original commit message[1] > adding it states it is for future convenience. > > On my undercloud we have 101 images, if we are downloading every 18 MB > per image thats almost 1.8 GB for a package we don't use? (I hope it's > not like this? With docker layers, we only download that 276 MB > transaction once? Or?) > > So this is a single layer that is updated once and shared by all the > containers that inherit from it. I did notice the same thing and have > proposed a change in the layering of these packages last night. > Thanks, that's a relief then! > https://review.openstack.org/#/c/614371/ > cool, +1 > In general this does raise a point about dependencies of services and > what the actual impact of adding new ones to projects is. Especially > in the container world where this might be duplicated N times > depending on the number of services deployed. With the move to > containers, much of the sharedness that being on a single host > provided has been lost at a cost of increased bandwidth, memory, and > storage usage. > > Thanks, > -Alex > > > > [1] https://review.openstack.org/527927 > > > > > > Clark > > > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ildiko.vancsa at gmail.com Wed Oct 31 18:20:38 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 31 Oct 2018 19:20:38 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> Message-ID: > On 2018.
Oct 31., at 19:11, Mike Bayer wrote: > > On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya wrote: >> >> (cross-posting openstack-dev) >> >> Hello. >> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data >> consistency requirements and challenges" a position paper [0] (papers >> submitting deadline is Nov 8). >> >> The problem scope is synchronizing control plane and/or >> deployments-specific data (not necessary limited to OpenStack) across >> remote Edges and central Edge and management site(s). Including the same >> aspects for overclouds and undercloud(s), in terms of TripleO; and other >> deployment tools of your choice. >> >> Another problem is to not go into different solutions for Edge >> deployments management and control planes of edges. And for tenants as >> well, if we think of tenants also doing Edge deployments based on Edge >> Data Replication as a Service, say for Kubernetes/OpenShift on top of >> OpenStack. >> >> So the paper should name the outstanding problems, define data >> consistency requirements and pose possible solutions for synchronization >> and conflicts resolving. Having maximum autonomy cases supported for >> isolated sites, with a capability to eventually catch up its distributed >> state. Like global database [1], or something different perhaps (see >> causal-real-time consistency model [2],[3]), or even using git. And >> probably more than that?.. (looking for ideas) > > > I can offer detail on whatever aspects of the "shared / global > database" idea. The way we're doing it with Galera for now is all > about something simple and modestly effective for the moment, but it > doesn't have any of the hallmarks of a long-term, canonical solution, > because Galera is not well suited towards being present on many > (dozens) of endpoints. The concept that the StarlingX folks were > talking about, that of independent databases that are synchronized > using some kind of middleware is potentially more scalable, however I > think the best approach would be API-level replication, that is, you > have a bunch of Keystone services and there is a process that is > regularly accessing the APIs of these keystone services and > cross-publishing state amongst all of them. Clearly the big > challenge with that is how to resolve conflicts, I think the answer > would lie in the fact that the data being replicated would be of > limited scope and potentially consist of mostly or fully > non-overlapping records. > > That is, I think "global database" is a cheap way to get what would be > more effective as asynchronous state synchronization between identity > services. Recently we’ve been also exploring federation with an IdP (Identity Provider) master: https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users One of the pros is that it removes the need for synchronization and potentially increases scalability. Thanks, Ildikó > >> >> See also the "check" list in-line, which I think also meets the data >> consistency topics well - it would be always nice to have some >> theoretical foundations at hand, when repairing some >> 1000-edges-spread-off and fully broken global database, by hand :) >> >> PS. I must admit I have yet any experience with those IEEE et al >> academic things and looking for someone who has it, to team and >> co-author that positioning paper by. That's as a start, then we can >> think of presenting it and expanding into work items for OpenStack Edge >> WG and future development plans. 
>> >> [0] http://conferences.computer.org/ICFC/2019/Paper_Submission.html >> [1] https://review.openstack.org/600555 >> [2] https://jepsen.io/consistency >> [3] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf >> >> On 10/22/18 3:44 PM, Flavia Delicato wrote: >>> ================================================================================= >>> IEEE International Conference on Fog Computing (ICFC 2019) >>> June 24-26, 2019 >>> Prague, Czech Republic >>> http://conferences.computer.org/ICFC/2019/ >>> Co-located with the IEEE International Conference on Cloud Engineering >>> (IC2E 2019) >>> ================================================================================== >>> >>> Important Dates >>> --------------- >>> Paper registration and abstract: Nov 1st, 2018 >>> Full paper submission due: Nov 8th, 2018 >>> Notification of paper acceptance: Jan. 20th, 2019 >>> Workshop and tutorial proposals due: Nov 11, 2018 >>> Notification of proposal acceptance: Nov 18, 2018 >>> >>> Call for Contributions >>> ---------------------- >>> Fog computing is the extension of cloud computing into its edge and >>> the physical world to meet the data volume and decision velocity >>> requirements in many emerging applications, such as augmented and >>> virtual realities (AR/VR), cyber-physical systems (CPS), intelligent >>> and autonomous systems, and mission-critical systems. The boundary >>> between centralized, powerful computing cloud and massively >>> distributed, Internet connected sensors, actuators, and things is >>> blurred in this new computing paradigm. >>> >>> The ICFC 2019 technical program will feature tutorials, workshops, and >>> research paper sessions. We solicit high-quality contributions in the >>> above categories. Details of submission is available on the conference >>> Web site. 
Topics of interest include but are not limited to: >>> >>> * System architecture for fog computing >> >> (check) >> >>> * Coordination between cloud, fog, and sensing/actuation endpoints >>> * Connectivity, storage, and computation in the edge >>> * Data processing and management for fog computing >> >> (check) >> >>> * Efficient and embedded AI in the fog >>> * System and network manageability >>> * Middleware and coordination platforms >>> * Power, energy, and resource management >>> * Device and hardware support for fog computing >>> * Programming models, abstractions, and software engineering for fog computing >> >> (check) >> >>> * Security, privacy, and ethics issues related to fog computing >>> * Theoretical foundations and formal methods for fog computing systems >> >> (check) >> >>> * Applications and experiences >>> >>> Organizing Committee >>> -------------------- >>> General Chairs: >>> Hui Lei, IBM >>> Albert Zomaya, The University of Sydney >>> >>> PC Co-chairs: >>> Erol Gelenbe, Imperial College London >>> Jie Liu, Microsoft Research >>> >>> Tutorials and Workshops Chair: >>> David Bermbach, TU Berlin >>> >>> Publicity Co-chairs: >>> Flavia Delicato,Federal University of Rio de Janeiro >>> Mathias Fischer, University Hamburg >>> >>> Publication Chair >>> Javid Taheri, Karlstad University >>> >>> Webmaster >>> Wei Li, The University of Sydney >>> >>> Steering Committee >>> ------------------ >>> Mung Chiang, Purdue University >>> Erol Gelenbe, Imperial College London >>> Christos Kozarakis, Stanford University >>> Hui Lei, IBM >>> Chenyang Lu, Washington University in St Louis >>> Beng Chin Ooi, National University of Singapore >>> Neeraj Suri, TU Darmstadt >>> Albert Zomaya, The University of Sydney >>> >>> Program Committee >>> ------------------ >>> >>> Tarek Abdelzaher, UIUC >>> Anne Benoit, ENS Lyon >>> David Bermbach, TU Berlin >>> Bharat Bhargava, Purdue University >>> Olivier Brun, LAAS/CNRS Laboratory >>> Jiannong Cao, Hong Kong Polytech >>> Flavia C. Delicato, UFRJ, Brazil >>> Xiaotie Deng, Peking University, China >>> Schahram Dustdar, TU Wien, Germany >>> Maria Gorlatova, Duke University >>> Dharanipragada Janakiram, IIT Madras >>> Wenjing Luo, Virginia Tech >>> Pedro José Marrón, Universität Duisburg-Essen >>> Geyong Min, University of Exeter >>> Suman Nath, Microsoft Research >>> Vincenzo Piuri, Universita Degli Studi Di Milano >>> Yong Meng Teo, National University of Singapore >>> Guoliang Xing, Chinese University of Hong Kong >>> Yuanyuan Yang, SUNY Stony Brook >>> Xiaoyun Zhu, Cloudera >>> >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From akekane at redhat.com Wed Oct 31 18:30:41 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 1 Nov 2018 00:00:41 +0530 Subject: [openstack-dev] [all] Zuul job backlog In-Reply-To: References: <1537384298.1009431.1513843728.3812FDA4@webmail.messagingengine.com> <1538165560.3935414.1524300072.5C31EEA9@webmail.messagingengine.com> <1538662401.2558963.1530696776.04282A5E@webmail.messagingengine.com> Message-ID: Hi All, I have fixed the glance functional test issue, patch [1] is merged in master. I hope the mentioned issue is now resolved. Kindly let me know. 
[1] https://review.openstack.org/#/c/608856/ Thank you, Abhishek On Mon, 8 Oct 2018 at 11:37 PM, Doug Hellmann wrote: > Abhishek Kekane writes: > > > Hi Doug, > > > > Should I use something like SimpleHttpServer to upload a file and download > > the same, or there are other efficient ways to handle it, > > Kindly let me know if you are having any suggestions for the same. > > Sure, that would work, especially if your tests are running in the unit > test jobs. If you're running a functional test, it seems like it would > also be OK to just copy a file into the directory Apache is serving from > and then download it from there. > > Doug > > > > > > Thanks & Best Regards, > > > > Abhishek Kekane > > > > > > On Fri, Oct 5, 2018 at 4:57 PM Doug Hellmann > wrote: > > > >> Abhishek Kekane writes: > >> > >> > Hi Matt, > >> > > >> > Thanks for the input, I guess I should use ' > >> > http://git.openstack.org/static/openstack.png' which will definitely > >> work. > >> > Clark, Matt, Kindly let me know your opinion about the same. > >> > >> That URL would not be on the local node running the test, and would > >> eventually exhibit the same problems. In fact we have seen issues > >> cloning git repositories as part of the tests in the past. > >> > >> You need to use a localhost URL to ensure that the download doesn't have > >> to go off of the node. That may mean placing something into the directory > >> where Apache is serving files as part of the test setup. > >> > >> Doug > >> > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Oct 31 18:43:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 31 Oct 2018 14:43:56 -0400 Subject: [openstack-dev] [goal][python3] week 12 update Message-ID: This is week 12 of the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). Observant readers will notice that the last update email was during week 9. I've been out for a couple of weeks, but you've all been busy in that time!

== What we learned last week ==

I'm still working on an upgrade script for heat to allow us to rename it and publish releases.

* https://review.openstack.org/#/c/606160/

== Ongoing and Completed Work ==

We are very, very close to finishing the phase of work that updates the tox settings and documentation build jobs. Those documentation updates should be relatively quick to review because they're very minimal patches. Please take a few minutes to look for them and let's try to get them merged before the first milestone. The tox patches may require a bit more work to update pylint and the goal champions could use your help there (see below).

+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+
| Team                | tox defaults | Docs    | 3.6 unit | Failing | Unreviewed | Total | Champion           |
+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+
| adjutant            | 1/ 1         | -       | +        | 0       | 1          | 2     | Doug Hellmann      |
| barbican            | +            | 1/ 3    | +        | 1       | 1          | 7     | Doug Hellmann      |
| blazar              | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| Chef OpenStack      | +            | -       | -        | 0       | 0          | 2     | Doug Hellmann      |
| cinder              | +            | +       | +        | 0       | 0          | 11    | Doug Hellmann      |
| cloudkitty          | +            | +       | +        | 0       | 0          | 9     | Doug Hellmann      |
| congress            | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| cyborg              | +            | +       | +        | 0       | 0          | 7     | Nguyen Hai         |
| designate           | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| Documentation       | +            | +       | +        | 0       | 0          | 10    | Doug Hellmann      |
| dragonflow          | -            | +       | +        | 0       | 0          | 2     | Nguyen Hai         |
| ec2-api             | 2/ 2         | +       | +        | 2       | 2          | 7     |                    |
| freezer             | +            | +       | +        | 0       | 0          | 11    |                    |
| glance              | +            | +       | +        | 0       | 0          | 10    | Nguyen Hai         |
| heat                | 3/ 8         | +       | 1/ 7     | 2       | 0          | 21    | Doug Hellmann      |
| horizon             | +            | +       | +        | 0       | 0          | 34    | Nguyen Hai         |
| I18n                | +            | -       | -        | 0       | 0          | 1     | Doug Hellmann      |
| InteropWG           | 3/ 4         | +       | 1/ 3     | 2       | 2          | 10    | Doug Hellmann      |
| ironic              | 1/ 10        | +       | +        | 0       | 0          | 35    | Doug Hellmann      |
| karbor              | +            | +       | +        | 0       | 0          | 7     | Nguyen Hai         |
| keystone            | +            | +       | +        | 0       | 0          | 18    | Doug Hellmann      |
| kolla               | +            | +       | +        | 0       | 0          | 5     |                    |
| kuryr               | +            | +       | +        | 0       | 0          | 9     | Doug Hellmann      |
| magnum              | 2/ 5         | +       | +        | 0       | 1          | 10    |                    |
| manila              | +            | +       | +        | 0       | 0          | 13    | Goutham Pacha Ravi |
| masakari            | 2/ 5         | +       | -        | 0       | 2          | 6     | Nguyen Hai         |
| mistral             | +            | +       | +        | 0       | 0          | 13    | Nguyen Hai         |
| monasca             | 1/ 17        | +       | +        | 1       | 1          | 34    | Doug Hellmann      |
| murano              | +            | +       | +        | 0       | 0          | 14    |                    |
| neutron             | 7/ 19        | 1/ 14   | 1/ 13    | 6       | 3          | 46    | Doug Hellmann      |
| nova                | +            | +       | +        | 0       | 0          | 14    |                    |
| octavia             | +            | +       | +        | 0       | 0          | 12    | Nguyen Hai         |
| OpenStack Charms    | 36/ 73       | -       | -        | 36      | 15         | 73    | Doug Hellmann      |
| OpenStack-Helm      | +            | +       | -        | 0       | 0          | 4     |                    |
| OpenStackAnsible    | +            | +       | -        | 0       | 0          | 154   |                    |
| OpenStackClient     | 1/ 4         | +       | +        | 0       | 1          | 11    |                    |
| OpenStackSDK        | +            | +       | +        | 0       | 0          | 10    |                    |
| oslo                | +            | +       | +        | 0       | 0          | 63    | Doug Hellmann      |
| Packaging-rpm       | 2/ 3         | +       | +        | 0       | 1          | 7     | Doug Hellmann      |
| PowerVMStackers     | -            | -       | +        | 0       | 0          | 3     | Doug Hellmann      |
| Puppet OpenStack    | +            | +       | -        | 0       | 0          | 44    | Doug Hellmann      |
| qinling             | +            | +       | +        | 0       | 0          | 6     |                    |
| Quality Assurance   | 3/ 11        | +       | +        | 0       | 3          | 32    | Doug Hellmann      |
| rally               | 1/ 3         | +       | -        | 1       | 1          | 5     | Nguyen Hai         |
| Release Management  | -            | -       | +        | 0       | 0          | 1     | Doug Hellmann      |
| requirements        | -            | +       | +        | 0       | 0          | 2     | Doug Hellmann      |
| sahara              | 1/ 6         | +       | +        | 0       | 0          | 13    | Doug Hellmann      |
| searchlight         | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| senlin              | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| SIGs                | 1/ 8         | +       | +        | 0       | 1          | 11    | Doug Hellmann      |
| solum               | +            | +       | +        | 0       | 0          | 7     | Nguyen Hai         |
| storlets            | +            | +       | +        | 0       | 0          | 4     |                    |
| swift               | 2/ 3         | +       | +        | 1       | 1          | 6     | Nguyen Hai         |
| tacker              | 1/ 3         | +       | +        | 0       | 1          | 8     | Nguyen Hai         |
| Technical Committee | 1/ 2         | -       | +        | 0       | 0          | 4     | Doug Hellmann      |
| Telemetry           | 1/ 7         | +       | +        | 0       | 1          | 19    | Doug Hellmann      |
| tricircle           | +            | +       | +        | 0       | 0          | 5     | Nguyen Hai         |
| tripleo             | 6/ 55        | +       | +        | 3       | 1          | 93    | Doug Hellmann      |
| trove               | 1/ 5         | +       | +        | 0       | 1          | 11    | Doug Hellmann      |
| User Committee      | 3/ 3         | +       | -        | 0       | 2          | 5     | Doug Hellmann      |
| vitrage             | +            | +       | +        | 0       | 0          | 9     | Nguyen Hai         |
| watcher             | +            | +       | +        | 0       | 0          | 10    | Nguyen Hai         |
| winstackers         | +            | +       | +        | 0       | 0          | 6     |                    |
| zaqar               | 1/ 3         | +       | +        | 0       | 0          | 8     |                    |
| zun                 | +            | +       | +        | 0       | 0          | 8     | Nguyen Hai         |
|                     | 37/ 61       | 56/ 58  | 53/ 56   | 55      | 42         | 1075  |                    |
+---------------------+--------------+---------+----------+---------+------------+-------+--------------------+

== Next Steps ==

Quite a few of the recent tox updates also exposed issues with using pylint under python 3, mostly due to having an older version of the tool pinned. This is a known issue, which was discussed in an earlier update email. The fixes are usually pretty straightforward, and good opportunities to contribute while you're waiting for tests to run or if you're just starting to get into the community. The series of patches preceding https://review.openstack.org/#/c/606676/ in the openstack/neutron repository are examples of some of the sorts of changes needed. If you're interested in helping to fix these sorts of issues, please leave a comment on the patch that changes the tox configuration so that we don't have multiple folks working on the same failures.

We need to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3. PTLs, please document your team's status in the wiki as well: https://wiki.openstack.org/wiki/Python3

== How can you help? ==

1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3.

== How can you ask for help? ==

If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.

== Reference Material ==

Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open
Storyboard: https://storyboard.openstack.org/#!/board/104
Zuul migration notes: https://etherpad.openstack.org/p/python3-first
Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586
Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3

From tpb at dyncloud.net Wed Oct 31 20:01:38 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 31 Oct 2018 16:01:38 -0400 Subject: [openstack-dev] [manila][tc] Seeking feedback on the OpenStack cloud vision In-Reply-To: <4df1c910-ffec-dff8-ff13-c8184bbc4173@redhat.com> References: <4df1c910-ffec-dff8-ff13-c8184bbc4173@redhat.com> Message-ID: <20181031200138.4pxfcwqgrmu55gfx@barron.net> On 24/10/18 11:14 -0400, Zane Bitter wrote: >Greetings, Manila team! >As you may be aware, I've been working with other folks in the >community on documenting a vision for OpenStack clouds (formerly known >as the 'Technical Vision') - essentially to interpret the mission >statement in long-form, in a way that we can use to actually help >guide decisions.
You can read the latest draft here: >https://review.openstack.org/592205 > >We're trying to get feedback from as many people as possible - in many >ways the value is in the process of coming together to figure out what >we're trying to achieve as a community with OpenStack and how we can >work together to build it. The document is there to help us remember >what we decided so we don't have to do it all again over and over. > >The vision is structured with two sections that apply broadly to every >project in OpenStack - describing the principles that we believe are >essential to every cloud, and the ones that make OpenStack different >from some other clouds. The third section is a list of design goals >that we want OpenStack as a whole to be able to meet - ideally each >project would be contributing toward one or more of these design >goals. > >I think that, like Cinder, Manila would qualify as contributing to the >'Basic Physical Data Center Management' goal, since it also allows >users to access external storage providers through a standardised API. > >If you would like me or another TC member to join one of your team IRC >meetings to discuss further what the vision means for your team, >please reply to this thread to set it up. You are also welcome to >bring up any questions in the TC IRC channel, #openstack-tc - there's >more of us around during Office Hours >(https://governance.openstack.org/tc/#office-hours), but you can talk >to us at any time. > >Feedback can also happen either in this thread or on the review >https://review.openstack.org/592205 > >If the team is generally happy with the vision as it is and doesn't >have any specific feedback, that's cool but I'd like to request that >at least the PTL leave a vote on the review. It's important to know >whether we are actually developing a consensus in the community or >just talking to ourselves :) > >many thanks, >Zane. Zane and I chatted on IRC and he is going to attend the manila community meeting tomorrow, November 1, at 1500 UTC in #openstack-meeting-alt to follow up and solicit feedback. If you are unable to attend the meeting and have points to make or questions please follow up in this thread or in the review mentioned above. Cheers, -- Tom From hjensas at redhat.com Wed Oct 31 21:59:03 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Wed, 31 Oct 2018 22:59:03 +0100 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint In-Reply-To: References: Message-ID: <54a795fdc643cb2e8151b1ada611760d8fa20fab.camel@redhat.com> On Wed, 2018-10-31 at 11:39 -0600, Wesley Hayutin wrote: > > > On Wed, Oct 31, 2018 at 11:21 AM Alex Schultz > wrote: > > Hey everyone, > > > > Based on previous emails around this[0][1], I have proposed a > > possible > > reducing in our usage by switching the scenario001--011 jobs to > > non-voting and removing them from the gate[2]. This will reduce the > > likelihood of causing gate resets and hopefully allow us to land > > corrective patches sooner. In terms of risks, there is a risk that > > we > > might introduce breaking changes in the scenarios because they are > > officially non-voting, and we will still be gating promotions on > > these > > scenarios. This means that if they are broken, they will need the > > same attention and care to fix them so we should be vigilant when > > the > > jobs are failing. 
> > The hope is that we can switch these scenarios out with voting > > standalone versions in the next few weeks, but until that I think > > we > > should proceed by removing them from the gate. I know this is less > > than ideal but as most failures with these jobs in the gate are > > either > > timeouts or unrelated to the changes (or gate queue), they are more > > of > > hindrance than a help at this point. > > > > Thanks, > > -Alex > > I think I also have to agree. > Having to deploy with containers, update containers and run with two > nodes is no longer a very viable option upstream. It's not > impossible but it should be the exception and not the rule for all > our jobs. >

afaict in my local environment, the container prep stuff takes ages when adding the playbooks to update them with yum. We will still have to do this for every standalone job, right?

Also, I enabled profiling for ansible tasks on the undercloud and noticed that the UndercloudPostDeploy was high on the list, actually the longest running task when re-running the undercloud install ...

Moving from shell script using openstack cli to python reduced the time for this task dramatically in my environment, see: https://review.openstack.org/614540. Six and a half minutes reduced to 40 seconds ...

How much time would we save in the gates if we converted some of the shell scripting to python, or if we want to stay in shell script we can use the interactive shell or use the client-as-a-service[2]?

Interactive shell:

time openstack <<-EOC
server list
workflow list
workflow execution list
EOC

real    0m2.852s

time (openstack server list; \
openstack workflow list; \
openstack workflow execution list)

real    0m7.119s

The difference is significant.

We could cache a token[1], and specify the end-point on each command, but doing so is still far from as effective as using the interactive shell.
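For comparison, a minimal python sketch of the same kind of calls (illustrative only, not the actual code from review 614540; the cloud name assumes a matching clouds.yaml entry on the undercloud). The win is that imports and authentication happen once per script instead of once per command:

    # one session, many calls; the token is fetched a single time
    import openstack

    conn = openstack.connect(cloud='undercloud')  # assumption: clouds.yaml entry

    net_ids = [n.id for n in conn.network.networks()]
    seg_ids = [s.id for s in conn.network.segments()]
    sub_ids = [s.id for s in conn.network.subnets()]
    print(len(net_ids), len(seg_ids), len(sub_ids))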
[1] https://wiki.openstack.org/wiki/OpenStackClient/Authentication [2] http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > Thanks Alex > > > > [0] > > http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html > > [2] > > https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged > > ) > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > WES HAYUTIN > ASSOCIATE MANAGER > Red Hat > > whayutin at redhat.com T: +19194232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From whayutin at redhat.com Wed Oct 31 22:14:03 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 31 Oct 2018 16:14:03 -0600 Subject: [openstack-dev] [tripleo] shutting down 3rd party TripleO CI for measurements Message-ID: Greetings, The TripleO-CI team would like to consider shutting down all the third party check jobs running against TripleO projects in order to measure results with and without load on the cloud for some amount of time. I suspect we would want to shut things down for roughly 24-48 hours. If there are any strong objects please let us know. Thank you -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Oct 31 22:19:22 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 31 Oct 2018 17:19:22 -0500 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint In-Reply-To: <54a795fdc643cb2e8151b1ada611760d8fa20fab.camel@redhat.com> References: <54a795fdc643cb2e8151b1ada611760d8fa20fab.camel@redhat.com> Message-ID: <43eaaa50-01b7-4888-3fc8-edff9aa5aaea@nemebean.com> On 10/31/18 4:59 PM, Harald Jensås wrote: > On Wed, 2018-10-31 at 11:39 -0600, Wesley Hayutin wrote: >> >> >> On Wed, Oct 31, 2018 at 11:21 AM Alex Schultz >> wrote: >>> Hey everyone, >>> >>> Based on previous emails around this[0][1], I have proposed a >>> possible >>> reducing in our usage by switching the scenario001--011 jobs to >>> non-voting and removing them from the gate[2]. This will reduce the >>> likelihood of causing gate resets and hopefully allow us to land >>> corrective patches sooner. In terms of risks, there is a risk that >>> we >>> might introduce breaking changes in the scenarios because they are >>> officially non-voting, and we will still be gating promotions on >>> these >>> scenarios. This means that if they are broken, they will need the >>> same attention and care to fix them so we should be vigilant when >>> the >>> jobs are failing. >>> >>> The hope is that we can switch these scenarios out with voting >>> standalone versions in the next few weeks, but until that I think >>> we >>> should proceed by removing them from the gate. 
I know this is less >>> than ideal but as most failures with these jobs in the gate are >>> either >>> timeouts or unrelated to the changes (or gate queue), they are more >>> of >>> hindrance than a help at this point. >>> >>> Thanks, >>> -Alex >> >> I think I also have to agree. >> Having to deploy with containers, update containers and run with two >> nodes is no longer a very viable option upstream. It's not >> impossible but it should be the exception and not the rule for all >> our jobs. >> > afaict in my local environment, the container prep stuff takes ages > when adding the playbooks to update them with yum. We will still have > to do this for every standalone job right? > > > > Also, I enabled profiling for ansible tasks on the undercloud and > noticed that the UndercloudPostDeploy was high on the list, actually > the longest running task when re-running the undercloud install ... > > Moving from shell script using openstack cli to python reduced the time > for this task dramatically in my environment, see: > https://review.openstack.org/614540. 6 and half minutes reduced to 40 > seconds ... Everything old is new again: https://github.com/openstack/instack-undercloud/commit/0eb1b59926c7dc46e321c56db29af95b3d755f34#diff-5602f1b710e86ca1eb7334cb0632f9ee :-) > > > How much time would we save in the gates if we converted some of the > shell scripting to python, or if we want to stay in shell script we can > use the interactive shell or use the client-as-a-service[2]? > > Interactive shell: > time openstack <<-EOC > server list > workflow list > workflow execution list > EOC > > real 0m2.852s > time (openstack server list; \ > openstack workflow list; \ > openstack workflow execution list) > > real 0m7.119s > > The difference is significant. > > We could cache a token[1], and specify the end-point on each command, > but doing so is still far from as effective as using the interactive. > > > There is an old thread[2] on the mailing list, which contain a > server/client solution. If we run this service in CI nodes and drop in > the replacement openstack command in /usr/local/bin/openstack we would > use ~1/5 of the time for each command. > > (undercloud) [stack at leafs ~]$ time (/usr/bin/openstack network list -f > value -c ID; /usr/bin/openstack network segment list -f value -c ID; > /usr/bin/openstack subnet list -f value -c ID) > > > real 0m6.443s > user 0m2.171s > sys 0m0.366s > > (undercloud) [stack at leafs ~]$ time (/usr/local/bin/openstack network > list -f value -c ID; /usr/local/bin/openstack network segment list -f > value -c ID; /usr/local/bin/openstack subnet list -f value -c ID) > > real 0m1.698s > user 0m0.042s > sys 0m0.018s > > > > I relize this is a kind of hacky approch, but it does seem to work and > it should be fairly quick to get in there. (With the Undercloud post > script I see 6 minutes returned, what can we get in CI, 10-15 minutes? > Then we could look at moving these scripts to python or use ansible > openstack modules which hopefully does'nt share the same issues with > loading as the python clients? I'm personally a fan of using Python as then it is unit-testable, but I'm not sure how that works with the tht-based code so maybe it's not a factor. 
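For reference, the server/client trick from that old thread boils down to something like the rough sketch below. This is not the actual code posted there; the socket path and buffer size are made up, and output handling is simplified (the real version also has to proxy stdout/stderr back to the caller):

    # Pay python-openstackclient's import/startup cost once in a daemon,
    # then run each command line a thin wrapper writes to a unix socket.
    import os
    import shlex
    import socket

    from openstackclient import shell

    SOCK_PATH = '/tmp/openstack-daemon.sock'  # illustrative path

    def serve():
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK_PATH)
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            argv = shlex.split(conn.recv(65536).decode())
            rc = shell.main(argv)  # the expensive imports were paid once
            conn.sendall(str(rc).encode())  # return code back to the wrapper
            conn.close()

    if __name__ == '__main__':
        serve()

The drop-in /usr/local/bin/openstack replacement then only has to connect, send its argv, and exit with the returned status.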
> > > [1] https://wiki.openstack.org/wiki/OpenStackClient/Authentication > [2] > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > >> Thanks Alex >> >> >>> [0] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html >>> [1] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html >>> [2] >>> https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged >>> ) >>> >>> ___________________________________________________________________ >>> _______ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu >>> bscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> WES HAYUTIN >> ASSOCIATE MANAGER >> Red Hat >> >> whayutin at redhat.com T: +19194232509 IRC: weshay >> >> >> View my calendar and check my availability for meetings HERE >> _____________________________________________________________________ >> _____ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs >> cribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaink at hotmail.com Wed Oct 31 22:39:01 2018 From: jaink at hotmail.com (Krishna Jain) Date: Wed, 31 Oct 2018 22:39:01 +0000 Subject: [openstack-dev] Storyboard python script Message-ID: Hi, I'm an animator with some coding experience picking up Python. I came across your python-storyboardclient library, which would be very helpful for automating our pipeline in Toon Boom Storyboard Pro. I'd like to have Python call some of the Javascript scripts I've written to extend SBPro. Or at least make it possible to rewrite the scripts in Python if need be. Unfortunately, when I try to install it, I get: Command ""c:\program files\python37\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\kjain\\AppData\\Local\\Temp\\pip-install-gli0gz3z\\netifaces\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\kjain\AppData\Local\Temp\pip-record-1qhmhrv5\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\kjain\AppData\Local\Temp\pip-install-gli0gz3z\netifaces\ Do you know what might be going wrong here? Thanks! -Krishna Jain __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Oct 31 22:46:57 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 31 Oct 2018 23:46:57 +0100 Subject: [openstack-dev] Storyboard python script In-Reply-To: References: Message-ID: <0FE77BAD-4339-42EB-A509-5C335831431E@vexxhost.com> I believe this project is a client for Storyboard, an OpenStack project and not the commercial product you're mentioning Sent from my iPhone > On Oct 31, 2018, at 11:39 PM, Krishna Jain wrote: > > Hi, > I'm an animator with some coding experience picking up Python. I came across your python-storyboardclient library, which would be very helpful for automating our pipeline in Toon Boom Storyboard Pro.
I’d like to have Python call some of the Javascript scripts I’ve written to extend SBPro. Or at least make it possible to rewrite the scripts in Python if need be. Unfortunately, when I try to install it, I get: > > Command ""c:\program files\python37\python.exe" -u -c "import setuptools, tokeni > ze;__file__='C:\\Users\\kjain\\AppData\\Local\\Temp\\pip-install-gli0gz3z\\netif > aces\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replac > e('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --recor > d C:\Users\kjain\AppData\Local\Temp\pip-record-1qhmhrv5\install-record.txt --sin > gle-version-externally-managed --compile" failed with error code 1 in C:\Users\k > jain\AppData\Local\Temp\pip-install-gli0gz3z\netifaces\ > > Do you know what might be going wrong here? > Thanks! > -Krishna Jain > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Oct 31 23:11:00 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 31 Oct 2018 23:11:00 +0000 Subject: [openstack-dev] [ironic] Team gathering at the Forum? In-Reply-To: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> References: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> Message-ID: <7D82DA8E-0159-46F1-9987-3E97450BBB8E@telfer.org> Hello Ironicers - We’ve booked the same venue for the Scientific SIG for Wednesday evening, and hopefully we’ll see you there. There’s plenty of cross-over between our groups, particularly at an operator level. Cheers, Stig > On 29 Oct 2018, at 14:58, Dmitry Tantsur wrote: > > Hi folks! > > This is your friendly reminder to vote on the day. Even if you're fine with all days, please leave a vote, so that we know how many people are coming. We will need to make a reservation, and we may not be able to accommodate more people than voted! > > Dmitry > > On 10/22/18 6:06 PM, Dmitry Tantsur wrote: >> Hi ironicers! :) >> We are trying to plan an informal Ironic team gathering in Berlin. If you care about Ironic and would like to participate, please fill in https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, also depending on how many people sign up. >> Dmitry > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev