From fungi at yuggoth.org Thu Nov 1 01:14:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Nov 2018 01:14:56 +0000 Subject: [openstack-dev] Storyboard python script In-Reply-To: References: Message-ID: <20181101011455.2kzewydkraar7ea5@yuggoth.org> On 2018-10-31 22:39:01 +0000 (+0000), Krishna Jain wrote: > I’m an animator with some coding experience picking up Python. I > came across your python-storyboardclient library, which would be > very helpful for automating our pipeline in Toon Boom Storyboard > Pro. [...] The python-storyboardclient library is for use with the free/libre open source StoryBoard task and defect tracker project described by the documentation located at https://docs.openstack.org/infra/storyboard/ and not the commercial "Toon Boom Storyboard Pro" animation software to which you seem to be referring. Sorry for the confusion! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Thu Nov 1 07:52:50 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 1 Nov 2018 08:52:50 +0100 Subject: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs? In-Reply-To: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> References: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> Message-ID: <1d5364ed-076b-ac86-6c54-9d9a16bc87a2@debian.org> On 10/31/18 2:40 PM, Mohammed Naser wrote: > For what it’s worth: I ran into the same issue. I think the problem lies a bit deeper because it’s a problem with kombu as when debugging I saw that Oslo messaging tried to connect and hung after. > > Sent from my iPhone > >> On Oct 31, 2018, at 2:29 PM, Thomas Goirand wrote: >> >> Hi, >> >> It took me a long long time to figure out that my SSL setup was wrong >> when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo >> (or heat itself) never warn me that something was wrong, I just got >> nothing working, and no log at all. >> >> I'm sure I wouldn't be the only one happy about having this type of >> problems being yelled out loud in the logs. Right now, it does work if I >> turn off SSL, though I'm still not sure what's wrong in my setup, and >> I'm given no clue if the issue is on rabbitmq-server or on the client >> side (ie: heat, in my current case). >> >> Just a wishlist... :) >> Cheers, >> >> Thomas Goirand (zigo) I've opened a bug here: https://bugs.launchpad.net/oslo.messaging/+bug/1801011 Cheers, Thomas Goirand (zigo) From dtantsur at redhat.com Thu Nov 1 10:53:24 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 1 Nov 2018 11:53:24 +0100 Subject: [openstack-dev] [ironic] Team gathering at the Forum? In-Reply-To: <7D82DA8E-0159-46F1-9987-3E97450BBB8E@telfer.org> References: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> <7D82DA8E-0159-46F1-9987-3E97450BBB8E@telfer.org> Message-ID: <34fbcaf6-cdcd-7bc1-56ce-651e19d14d22@redhat.com> Hi, You mean Lindenbrau, right? I was thinking of changing the venue, but if you're there, I think we should go there too! I just hope they can still fit 15 ppl :) Will try reserving tomorrow. Dmitry On 11/1/18 12:11 AM, Stig Telfer wrote: > Hello Ironicers - > > We’ve booked the same venue for the Scientific SIG for Wednesday evening, and hopefully we’ll see you there. There’s plenty of cross-over between our groups, particularly at an operator level. 
> > Cheers, > Stig > > >> On 29 Oct 2018, at 14:58, Dmitry Tantsur wrote: >> >> Hi folks! >> >> This is your friendly reminder to vote on the day. Even if you're fine with all days, please leave a vote, so that we know how many people are coming. We will need to make a reservation, and we may not be able to accommodate more people than voted! >> >> Dmitry >> >> On 10/22/18 6:06 PM, Dmitry Tantsur wrote: >>> Hi ironicers! :) >>> We are trying to plan an informal Ironic team gathering in Berlin. If you care about Ironic and would like to participate, please fill in https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, also depending on how many people sign up. >>> Dmitry >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mrhillsman at gmail.com Thu Nov 1 11:25:31 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 1 Nov 2018 06:25:31 -0500 Subject: [openstack-dev] [openlab] October Report Message-ID: Hi everyone, Here are some highlights from OpenLab for the month of October: CI additions - cluster-api-provider-openstack - AdoptOpenJDK - very important open source project - many Java developers - strategic for open source ecosystem Website redesign completed - fielding resource and support requests via GitHub - ML sign up via website - Community page - CI Infrastructure and High level request pipeline still manual but driven by Google Sheets - closer to being fully automated; easier to manage via spreadsheet instead of website backend Promotion - OSN Day Dallas, November 6th, 2018 https://events.linuxfoundation.org/events/osn_days_2018/north-america/ dallas/ - Twitter account is live - @askopenlab Mailing List - https://lists.openlabtesting.org - running latest mailman - postorious frontend - net new members - 7 OpenLab Tests (October) - total number of tests run - 3504 - SUCCESS - 2421 - FAILURE - 871 - POST_FAILURE- 72 - RETRY_LIMIT - 131 - TIMED_OUT - 9 - NODE_FAILURE - 0 - SKIPPED - 0 - 69.0925% : 30.9075% (success to fail/other job ratio) (September) - total number of tests run - 4350 - SUCCESS - 2611 - FAILURE - 1326 - POST_FAILURE- 336 - RETRY_LIMIT - 66 - TIMED_OUT - 11 - NODE_FAILURE - 0 - SKIPPED - 0 - 60.0230% : 39.9770% (success to fail/other job ratio) Delta - 9.0695% increase in success to fail/other job ratio - testament to great support by Chenrui and Liusheng "keeping the lights on". 
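The ratio figures above are simply the SUCCESS count divided by the total number of runs for the month (the second number is the remainder). A quick sketch of the arithmetic, with the figures copied from the October and September sections above; this is just a check of the numbers, not an official OpenLab tool:

    def success_ratio(success, total):
        # Percentage of runs that ended in SUCCESS.
        return 100.0 * success / total

    october = success_ratio(2421, 3504)    # ~69.0925%
    september = success_ratio(2611, 4350)  # ~60.0230%
    print(round(october - september, 4))   # ~9.0695 point increase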
Additional Infrastructure - Packet - 80 vCPUs, 80G RAM, 1000G Disk - ARM - ARM-based OpenStack Cloud - Managed by codethink.co.uk - single compute node - 96 vCPUs, 128G RAM, 800G Disk - Massachusetts Open Cloud - in progress - small project for now - academia partner Build Status Legend: SUCCESS job executed correctly and exited without failure FAILURE job executed correctly, but exited with a failure RETRY_LIMIT pre-build tasks/plays failed more than the maximum number of retry attempts POST_FAILURE post-build tasks/plays failed SKIPPED one of the build dependencies failed and this job was not executed NODE_FAILURE no device available to run the build TIMED_OUT build got stuck at some point and hit the timeout limit Thank you to everyone who has read through this month’s update. If you have any question/concerns please feel free to start a thread on the mailing list or if it is something not to be shared publicly right now you can email info at openlabtesting.org Kind regards, OpenLab Governance Team -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekh at redhat.com Thu Nov 1 11:46:40 2018 From: derekh at redhat.com (Derek Higgins) Date: Thu, 1 Nov 2018 11:46:40 +0000 Subject: [openstack-dev] [tripleo] reducing our upstream CI footprint In-Reply-To: References: Message-ID: On Wed, 31 Oct 2018 at 17:22, Alex Schultz wrote: > > Hey everyone, > > Based on previous emails around this[0][1], I have proposed a possible > reducing in our usage by switching the scenario001--011 jobs to > non-voting and removing them from the gate[2]. This will reduce the > likelihood of causing gate resets and hopefully allow us to land > corrective patches sooner. In terms of risks, there is a risk that we > might introduce breaking changes in the scenarios because they are > officially non-voting, and we will still be gating promotions on these > scenarios. This means that if they are broken, they will need the > same attention and care to fix them so we should be vigilant when the > jobs are failing. > > The hope is that we can switch these scenarios out with voting > standalone versions in the next few weeks, but until that I think we > should proceed by removing them from the gate. I know this is less > than ideal but as most failures with these jobs in the gate are either > timeouts or unrelated to the changes (or gate queue), they are more of > hindrance than a help at this point. > > Thanks, > -Alex While on the topic of reducing the CI footprint something worth considering when pushing up a string of patches would be to remove a bunch of the check jobs at the start of the patch set. e.g. If I'm working on t-h-t and have a series of 10 patches, while looking for feedback I could remove most of the jobs from zuul.d/layout.yaml in patch 1 so all 10 patches don't run the entire suite of CI jobs. Once it becomes clear that the patchset is nearly ready to merge, I change patch 1 leave zuul.d/layout.yaml as is. I'm not suggesting everybody does this but anybody who tends to push up multiple patch sets together could consider it to not tie up resources for hours. 
> > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html > [2] https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gokhan.isik at tubitak.gov.tr Thu Nov 1 11:52:56 2018 From: gokhan.isik at tubitak.gov.tr (=?utf-8?Q?G=C3=B6khan_I=C5=9EIK_=28B=C4=B0LGEM_BTE=29?=) Date: Thu, 1 Nov 2018 14:52:56 +0300 (EET) Subject: [openstack-dev] [Oslo][nova][openstack-ansible] About Rabbitmq warning problems on nova-compute side Message-ID: <1101609091.2040456.1541073176882.JavaMail.zimbra@tubitak.gov.tr> Hi folks, I have problems about rabbitmq on nova-compute side. I see lots of warnings in log file like that “client unexpectedly closed TCP connection”.[1] I have a HA OpenStack environment on ubuntu 16.04.5 which is installed with Openstack Ansible Project. My OpenStack environment version is Pike. My environment consists of 3 controller nodes ,23 compute nodes and 1 log node. Cinder volume service is installed on compute nodes and I am using NetApp Storage. I tried lots of configs on nova about oslo messaging and rabbitmq side, but I didn’t resolve this problem. My latest configs are below: rabbitmq.config is : http://paste.openstack.org/show/733767/ nova.conf is: [ http://paste.openstack.org/show/733768/ | http://paste.openstack.org/show/733768/ ] Services versions are : http://paste.openstack.org/show/733769/ Can you share your experiences on rabbitmq side and How can I solve these warnings on nova-compute side ? What will you advice ? [1] http://paste.openstack.org/show/733766/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Nov 1 12:52:05 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 08:52:05 -0400 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. > > - publish-openstack-releasenotes-python3 http://logs.openstack.org/86/86022835fb39cb318e28994ed0290caddfb451be/tag/publish-openstack-releasenotes-python3/7347218/ : POST_FAILURE in 13m 53s > - publish-openstack-releasenotes http://logs.openstack.org/86/86022835fb39cb318e28994ed0290caddfb451be/tag/publish-openstack-releasenotes/a4c2e21/ : SUCCESS in 13m 54s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures Keystone is configured in stable/queens with the release-notes-jobs project template and in master with the release-notes-jobs-python3 template. This is, AFAICT, exactly what we said should be done. However, both of the templates include jobs in the "tag" pipeline, and tags are not branch-aware. That means we end up running both versions of the release notes publishing jobs when we tag a release, and the results may be unpredictable. This problem will affect other projects as well. I think we have a few options. 1. 
Move the job settings out of the source repository into project-config, like we do for the release jobs, so that we always run the same version in all cases. This has two downsides. First, we would have to support the python3 variant even on very old stable branches. That shouldn't be a very big concern, though, because that job does not install the source for the project or its dependencies. Second, we would have to modify all of the zuul configurations for all of our old branches, again. As much as I'm enjoying the jokes about how I'm padding my contribution stats this cycle, I don't really want to do that. 2. Make the job itself smart enough to figure out whether to run with python 2 or 3. We could update both jobs to look at some setting in the repo to decide which version to use. This feels excessively complicated, might still result in having to modify some settings in the stable branches, and would mean we would have to customize the shared versions of the jobs defined in the zuul-jobs repo. 3. Modify the python 2 version of the project template ("release-notes-jobs") to remove the "tag" pipeline settings. This would let us continue to use the python 2 variant for tests and when patches merge, and only use the python 3 job for tags. As with option 1, we would have to be sure that the release notes job works under python 3 for all repos, but as described above that isn't a big concern. I think we should take option 3, and will be preparing a patch to do that shortly. Did I miss any options or issues with this approach? Doug From doug at doughellmann.com Thu Nov 1 12:52:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 08:52:54 -0400 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: References: Message-ID: Doug Hellmann writes: > zuul at openstack.org writes: > >> Build failed. >> >> - publish-openstack-releasenotes-python3 http://logs.openstack.org/86/86022835fb39cb318e28994ed0290caddfb451be/tag/publish-openstack-releasenotes-python3/7347218/ : POST_FAILURE in 13m 53s >> - publish-openstack-releasenotes http://logs.openstack.org/86/86022835fb39cb318e28994ed0290caddfb451be/tag/publish-openstack-releasenotes/a4c2e21/ : SUCCESS in 13m 54s >> >> _______________________________________________ >> Release-job-failures mailing list >> Release-job-failures at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > Keystone is configured in stable/queens with the release-notes-jobs > project template and in master with the release-notes-jobs-python3 > template. This is, AFAICT, exactly what we said should be done. However, > both of the templates include jobs in the "tag" pipeline, and tags are > not branch-aware. That means we end up running both versions of the > release notes publishing jobs when we tag a release, and the results may > be unpredictable. This problem will affect other projects as well. > > I think we have a few options. > > 1. Move the job settings out of the source repository into > project-config, like we do for the release jobs, so that we always > run the same version in all cases. > > This has two downsides. > > First, we would have to support the python3 variant even on very old > stable branches. That shouldn't be a very big concern, though, > because that job does not install the source for the project or its > dependencies. > > Second, we would have to modify all of the zuul configurations for > all of our old branches, again. 
As much as I'm enjoying the jokes > about how I'm padding my contribution stats this cycle, I don't > really want to do that. > > 2. Make the job itself smart enough to figure out whether to run with > python 2 or 3. > > We could update both jobs to look at some setting in the repo to > decide which version to use. This feels excessively complicated, > might still result in having to modify some settings in the stable > branches, and would mean we would have to customize the shared > versions of the jobs defined in the zuul-jobs repo. > > 3. Modify the python 2 version of the project template > ("release-notes-jobs") to remove the "tag" pipeline settings. > > This would let us continue to use the python 2 variant for tests and > when patches merge, and only use the python 3 job for tags. As with > option 1, we would have to be sure that the release notes job works > under python 3 for all repos, but as described above that isn't a big > concern. > > I think we should take option 3, and will be preparing a patch to do > that shortly. > > Did I miss any options or issues with this approach? > > Doug See https://review.openstack.org/614758 Doug From sean.mcginnis at gmx.com Thu Nov 1 13:07:09 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Nov 2018 08:07:09 -0500 Subject: [openstack-dev] [release] Release countdown for week R-22, November 5-9 Message-ID: <20181101130709.GA17076@sm-workstation> Development Focus ----------------- Teams should now be focused on feature development and completion of release goals [0]. [0] https://governance.openstack.org/tc/goals/stein/index.html General Information ------------------- We are now past the Stein-1 milestone. Following the changes described in [0] we have release most of the cycle-with-intermediary libraries. It is a good time to think about any additional client (or deliverables marked as type:"other") releases. All projects with these deliverables should try to have a release done before Stein-2. All cycle-with-milestone deliverables have been switched over to be cycle-with-rc [1]. Just a reminder that we can still do milestone beta releases if there is a need for them, but we want to avoid doing so just because that is what we've historically done. Any questions about these changes, or any release planning or process questions in general, please reach out to us in the #openstack-release channel or on the mailing list. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135689.html [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135088.html Upcoming Deadlines & Dates -------------------------- Forum at OpenStack Summit in Berlin: November 13-15 Start using openstack-discuss ML: November 19 Stein-2 Milestone: January 10 -- Sean McGinnis (smcginnis) From doug at doughellmann.com Thu Nov 1 13:15:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 09:15:11 -0400 Subject: [openstack-dev] [tc] publishing openstack_governance library to PyPI Message-ID: TC members, I have a series of patches up for review starting at [1] to create a small library for reading and manipulating the project reference data in the YAML files in the openstack/governance repository. After copying similar code into a 3rd repository that wants to consume it last cycle, I thought I ought to just go ahead and do this properly. The patch in [2] is an example of how the releases repository can take advantage of having a library like this. Please add the series to your review queue. 
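For readers who have not looked at that data before: the governance repository keeps the team/deliverable/repository mapping in reference/projects.yaml. The snippet below is only a rough illustration of the kind of lookup such a library would wrap; it reads the YAML directly with PyYAML and assumes the usual team -> deliverables -> repos layout, and it is not the proposed library's actual API:

    import yaml

    def team_repos(projects_yaml, team):
        # List the repositories owned by one team from a governance checkout.
        with open(projects_yaml) as f:
            data = yaml.safe_load(f)
        repos = []
        for deliverable in data.get(team, {}).get('deliverables', {}).values():
            repos.extend(deliverable.get('repos', []))
        return repos

    print(team_repos('reference/projects.yaml', 'nova'))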
Doug [1] https://review.openstack.org/#/c/614597 [2] https://review.openstack.org/614606 From fungi at yuggoth.org Thu Nov 1 13:18:20 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Nov 2018 13:18:20 +0000 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: References: Message-ID: <20181101131819.rbrezm4pi37x6vpx@yuggoth.org> On 2018-11-01 08:52:05 -0400 (-0400), Doug Hellmann wrote: [...] > Did I miss any options or issues with this approach? https://review.openstack.org/578557 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Nov 1 13:27:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 09:27:03 -0400 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: <20181101131819.rbrezm4pi37x6vpx@yuggoth.org> References: <20181101131819.rbrezm4pi37x6vpx@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2018-11-01 08:52:05 -0400 (-0400), Doug Hellmann wrote: > [...] >> Did I miss any options or issues with this approach? > > https://review.openstack.org/578557 How high of a priority is rolling that feature out? It does seem to eliminate this particular issue (even the edge cases described in the commit message shouldn't affect us based on our typical practices), but until we have one of the two changes in place we're going to have this issue with every release we tag. Doug > > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Nov 1 14:57:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 10:57:34 -0400 Subject: [openstack-dev] [tc] agenda for TC meeting 1 Nov 1400 UTC In-Reply-To: References: Message-ID: Doug Hellmann writes: > TC members, > > The TC will be meeting on 1 Nov at 1400 UTC in #openstack-tc to discuss > some of our ongoing initiatives. Here is the agenda for this week. 
> > * meeting procedures > > * discussion of topics for joint leadership meeting at Summit in > Berlin > > * completing TC liaison assignments > ** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams > > * documenting chair responsibilities > ** https://etherpad.openstack.org/p/tc-chair-responsibilities > > * reviewing the health-check check list > ** https://wiki.openstack.org/wiki/OpenStack_health_tracker#Health_check_list > > * deciding next steps on technical vision statement > ** https://review.openstack.org/592205 > > * deciding next steps on python 3 and distro versions for PTI > ** https://review.openstack.org/610708 Add optional python3.7 unit test enablement to python3-first > ** https://review.openstack.org/611010 Make Python 3 testing requirement less specific > ** https://review.openstack.org/611080 Explicitly declare Stein supported runtimes > ** https://review.openstack.org/613145 Resolution on keeping up with Python 3 releases > > * reviews needing attention > ** https://review.openstack.org/613268 Indicate relmgt style for each deliverable > ** https://review.openstack.org/613856 Remove Dragonflow from the official projects list > > If you have suggestions for topics for the next meeting (6 Dec), please > add them to the wiki at > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev The logs for this meeting are available now at Minutes: http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-11-01-14.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-11-01-14.01.txt Log: http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-11-01-14.01.log.html Doug From mriedemos at gmail.com Thu Nov 1 14:57:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 1 Nov 2018 09:57:56 -0500 Subject: [openstack-dev] [nova] Is anyone running their own script to purge old instance_faults table entries? Message-ID: <856536e5-631c-c975-7a6f-91a2167e9baf@gmail.com> I came across this bug [1] in triage today and I thought this was fixed already [2] but either something regressed or there is more to do here. I'm mostly just wondering, are operators already running any kind of script which purges old instance_faults table records before an instance is deleted and archived/purged? Because if so, that might be something we want to add as a nova-manage command. [1] https://bugs.launchpad.net/nova/+bug/1800755 [2] https://review.openstack.org/#/c/409943/ -- Thanks, Matt From fungi at yuggoth.org Thu Nov 1 15:08:09 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 1 Nov 2018 15:08:09 +0000 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: References: <20181101131819.rbrezm4pi37x6vpx@yuggoth.org> Message-ID: <20181101150809.7xjutvh4kh5p2533@yuggoth.org> On 2018-11-01 09:27:03 -0400 (-0400), Doug Hellmann wrote: > Jeremy Stanley writes: > > > On 2018-11-01 08:52:05 -0400 (-0400), Doug Hellmann wrote: > > [...] > >> Did I miss any options or issues with this approach? > > > > https://review.openstack.org/578557 > > How high of a priority is rolling that feature out? 
It does seem to > eliminate this particular issue (even the edge cases described in the > commit message shouldn't affect us based on our typical practices), but > until we have one of the two changes in place we're going to have this > issue with every release we tag. It was written as a potential solution to this problem back when we first discussed it in June, but at the time we thought it might be solvable via job configuration with minimal inconvenience so that feature was put on hold as a fallback option in case we ended up needing it. I expect since it's already seen some review and is passing tests it could probably be picked back up fairly quickly now that alternative solutions have proven more complex than originally envisioned. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Nov 1 15:38:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Nov 2018 10:38:14 -0500 Subject: [openstack-dev] [Release-job-failures][release][infra] Tag of openstack/keystone failed In-Reply-To: <20181101150809.7xjutvh4kh5p2533@yuggoth.org> References: <20181101131819.rbrezm4pi37x6vpx@yuggoth.org> <20181101150809.7xjutvh4kh5p2533@yuggoth.org> Message-ID: <20181101153813.GA18688@sm-workstation> On Thu, Nov 01, 2018 at 03:08:09PM +0000, Jeremy Stanley wrote: > On 2018-11-01 09:27:03 -0400 (-0400), Doug Hellmann wrote: > > Jeremy Stanley writes: > > > > > On 2018-11-01 08:52:05 -0400 (-0400), Doug Hellmann wrote: > > > [...] > > >> Did I miss any options or issues with this approach? > > > > > > https://review.openstack.org/578557 > > > > How high of a priority is rolling that feature out? It does seem to > > eliminate this particular issue (even the edge cases described in the > > commit message shouldn't affect us based on our typical practices), but > > until we have one of the two changes in place we're going to have this > > issue with every release we tag. > > It was written as a potential solution to this problem back when we > first discussed it in June, but at the time we thought it might be > solvable via job configuration with minimal inconvenience so that > feature was put on hold as a fallback option in case we ended up > needing it. I expect since it's already seen some review and is > passing tests it could probably be picked back up fairly quickly now > that alternative solutions have proven more complex than originally > envisioned. > -- > Jeremy Stanley Doug's option 3 made sense to me as a way to address this for now. I could see doing that for the time being, but if this is coming in the near future, we can wait and go with it as option 4. Sean From lijie at unitedstack.com Thu Nov 1 15:43:25 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Thu, 1 Nov 2018 23:43:25 +0800 Subject: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot Message-ID: Hi,all Recently, I use the nfs driver as the cinder-backup backend, when I use it to backup the volume snapshot, the result is return the NotImplementedError[1].And the nfs.py doesn't has the create_volume_from_snapshot function. Does the community plan to achieve it which is as nfs as the cinder-backup backend?Can you tell me about this?Thank you very much! 
[1]https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142 Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Nov 1 16:26:36 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 1 Nov 2018 10:26:36 -0600 Subject: [openstack-dev] [tripleo] gate issues please do not approve/recheck In-Reply-To: References: Message-ID: Ok since the podman revert patche has been successfully merged and we've landed most of the non-voting scenario patches, it should be OK to restore/recheck. It would be a good idea to prioritized things to land and if it's not critical, let's hold off on approving until we're sure the gate is much better. Thanks, -Alex On Wed, Oct 31, 2018 at 9:39 AM Alex Schultz wrote: > > Hey folks, > > So we have identified an issue that has been causing a bunch of > failures and proposed a revert of our podman testing[0]. We have > cleared the gate and are asking that you not approve or recheck any > patches at this time. We will let you know when it is safe to start > approving things. > > Thanks, > -Alex > > [0] https://review.openstack.org/#/c/614537/ From ifatafekn at gmail.com Thu Nov 1 17:26:00 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 1 Nov 2018 19:26:00 +0200 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, On Wed, Oct 31, 2018 at 11:00 AM Won wrote: > Hi, > >> >> This is strange. I would expect your original definition to work as well, >> since the alarm key in Vitrage is defined by a combination of the alert >> name and the instance. We will check it again. >> BTW, we solved a different bug related to Prometheus alarms not being >> cleared [1]. Could it be related? >> > > Using the original definition, no matter how different the instances are, > the alarm names are recognized as the same alarm in vitrage. > And I tried to install the rocky version and the master version on the new > server and retest but the problem was not solved. The latest bugfix seems > irrelevant. > Ok. We will check this issue. For now your workaround is ok, right? > Does the wrong timestamp appear if you run 'vitrage alarm list' cli >> command? please try running 'vitrage alarm list --debug' and send me the >> output. >> > > I have attached 'vitrage-alarm-list.txt.' > I believe that you attached the wrong file. It seems like another log of vitrage-graph. > > >> Please send me vitrage-collector.log and vitrage-graph.log from the time >> that the problematic vm was created and deleted. Please also create and >> delete a vm on your 'ubuntu' server, so I can check the differences in the >> log. >> > > I have attached 'vitrage_log_on_compute1.zip' and > 'vitrage_log_on_ubuntu.zip' files. > When creating a vm on computer1, a vitrage-collect log occurred, but no > log occurred when it was removed. > Looking at the logs, I see two issues: 1. On ubuntu server, you get a notification about the vm deletion, while on compute1 you don't get it. Please make sure that Nova sends notifications to 'vitrage_notifications' - it should be configured in /etc/nova/nova.conf. 2. Once in 10 minutes (by default) nova.instance datasource queries all instances. The deleted vm is supposed to be deleted in Vitrage at this stage, even if the notification was lost. Please check in your collector log for the a message of "novaclient.v2.client [-] RESP BODY" before and after the deletion, and send me its content. 
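For the first point, a quick way to confirm whether Nova's notifications are actually reaching the 'vitrage_notifications' topic is to attach a throw-away listener to it. A rough sketch with oslo.messaging follows; the transport URL is a placeholder for your RabbitMQ credentials, and the separate pool name keeps the script from stealing messages from the real Vitrage consumer:

    import time

    from oslo_config import cfg
    import oslo_messaging

    cfg.CONF(args=[])  # no CLI args or config files; the URL below carries everything
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://user:password@controller:5672/')  # placeholder URL

    class DumpEndpoint(object):
        # Called for notifications published at the 'info' priority.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, publisher_id)

    listener = oslo_messaging.get_notification_listener(
        transport,
        [oslo_messaging.Target(topic='vitrage_notifications')],
        [DumpEndpoint()],
        executor='threading',
        pool='debug-dump')  # separate pool so Vitrage still receives its own copy
    listener.start()
    time.sleep(60)          # create/delete a vm now and watch for events
    listener.stop()
    listener.wait()

If compute.instance.create.end / compute.instance.delete.end events show up here but not in Vitrage, the problem is on the Vitrage side; if they do not show up at all, the Nova notification configuration is the place to look.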
Br, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Nov 1 17:24:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 1 Nov 2018 12:24:47 -0500 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: <1540934167.1342260.1560169400.784F8BDF@webmail.messagingengine.com> References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> <2aafb949-e76b-f1a9-19bf-890fc7d99179@nemebean.com> <1540934167.1342260.1560169400.784F8BDF@webmail.messagingengine.com> Message-ID: On 10/30/18 4:16 PM, Clark Boylan wrote: > On Tue, Oct 30, 2018, at 1:01 PM, Ben Nemec wrote: >> >> >> On 10/30/18 1:25 PM, Clark Boylan wrote: >>> On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote: >>>> On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec wrote: >>>>> >>>>> Tagging with tripleo since my suggestion below is specific to that project. >>>>> >>>>> On 10/30/18 11:03 AM, Clark Boylan wrote: >>>>>> Hello everyone, >>>>>> >>>>>> A little while back I sent email explaining how the gate queues work and how fixing bugs helps us test and merge more code. All of this still is still true and we should keep pushing to improve our testing to avoid gate resets. >>>>>> >>>>>> Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the process of doing this we had to restart Zuul which brought in a new logging feature that exposes node resource usage by jobs. Using this data I've been able to generate some report information on where our node demand is going. This change [0] produces this report [1]. >>>>>> >>>>>> As with optimizing software we want to identify which changes will have the biggest impact and to be able to measure whether or not changes have had an impact once we have made them. Hopefully this information is a start at doing that. Currently we can only look back to the point Zuul was restarted, but we have a thirty day log rotation for this service and should be able to look at a month's worth of data going forward. >>>>>> >>>>>> Looking at the data you might notice that Tripleo is using many more node resources than our other projects. They are aware of this and have a plan [2] to reduce their resource consumption. We'll likely be using this report generator to check progress of this plan over time. >>>>> >>>>> I know at one point we had discussed reducing the concurrency of the >>>>> tripleo gate to help with this. Since tripleo is still using >50% of the >>>>> resources it seems like maybe we should revisit that, at least for the >>>>> short-term until the more major changes can be made? Looking through the >>>>> merge history for tripleo projects I don't see a lot of cases (any, in >>>>> fact) where more than a dozen patches made it through anyway*, so I >>>>> suspect it wouldn't have a significant impact on gate throughput, but it >>>>> would free up quite a few nodes for other uses. >>>>> >>>> >>>> It's the failures in gate and resets. At this point I think it would >>>> be a good idea to turn down the concurrency of the tripleo queue in >>>> the gate if possible. As of late it's been timeouts but we've been >>>> unable to track down why it's timing out specifically. I personally >>>> have a feeling it's the container download times since we do not have >>>> a local registry available and are only able to leverage the mirrors >>>> for some levels of caching. 
Unfortunately we don't get the best >>>> information about this out of docker (or the mirrors) and it's really >>>> hard to determine what exactly makes things run a bit slower. >>> >>> We actually tried this not too long ago https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b but decided to revert it because it didn't decrease the check queue backlog significantly. We were still running at several hours behind most of the time. >> >> I'm surprised to hear that. Counting the tripleo jobs in the gate at >> positions 11-20 right now, I see around 84 nodes tied up in long-running >> jobs and another 32 for shorter unit test jobs. The latter probably >> don't have much impact, but the former is a non-trivial amount. It may >> not erase the entire 2300+ job queue that we have right now, but it >> seems like it should help. >> >>> >>> If we want to set up better monitoring and measuring and try it again we can do that. But we probably want to measure queue sizes with and without the change like that to better understand if it helps. >> >> This seems like good information to start capturing, otherwise we are >> kind of just guessing. Is there something in infra already that we could >> use or would it need to be new tooling? > > Digging around in graphite we currently track mean in pipelines. This is probably a reasonable metric to use for this specific case. > > Looking at the check queue [3] shows the mean time enqueued in check during the rough period window floor was 10 and [4] shows it since then. The 26th and 27th are bigger peaks than previously seen (possibly due to losing inap temporarily) but otherwise a queue backlog of ~200 minutes was "normal" in both time periods. > > [3] http://graphite.openstack.org/render/?from=20181015&until=20181019&target=scale(stats.timers.zuul.tenant.openstack.pipeline.check.resident_time.mean,%200.00001666666) > [4] http://graphite.openstack.org/render/?from=20181019&until=20181030&target=scale(stats.timers.zuul.tenant.openstack.pipeline.check.resident_time.mean,%200.00001666666) > > You should be able to change check to eg gate or other queue names and poke around more if you like. Note the scale factor scales from milliseconds to minutes. > > Clark > Cool, thanks. Seems like things have been better for the past couple of days, but I'll keep this in my back pocket for future reference. From doug at doughellmann.com Thu Nov 1 17:48:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 13:48:39 -0400 Subject: [openstack-dev] [tc][all][goals] selecting community-wide goals for T development cycle Message-ID: We will have a session at the forum to discuss the community-wide goals [0] we want to select for the T cycle. That's a long way off, but given the fact that we don't have a PTG between Berlin and Denver, and that we've recently had feedback that we need to have tools finished better before starting, I think we do need to begin the conversation now. The Forum session is on Thursday at 17:10, at the end of the Forum [1]. There's an etherpad up [2] for folks to start offering suggestions. And the etherpad with the archives of past ideas is also still online [3]. Let's use this thread to talk about the Forum session. If you want to propose a specific goal, please start a *new* thread here on openstack-dev so that we can keep all of the discussion clearly separated. Please keep in mind that these goals should apply to most, if not all, projects. 
Work to integrate 2 services can be managed through the normal feature prioritization processes of the teams involved. I will be moderating the discussion at the Forum, but I am not signing up as a goal champion or to drive the goal selection. If you are interested in helping with either of those things, please let me know. Doug [0] https://governance.openstack.org/tc/goals/index.html [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22814/t-series-community-goal-discussion [2] https://etherpad.openstack.org/p/BER-t-series-goals [3] https://etherpad.openstack.org/p/community-goals From doug at doughellmann.com Thu Nov 1 18:01:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 01 Nov 2018 14:01:06 -0400 Subject: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs In-Reply-To: References: Message-ID: Doug Hellmann writes: > Doug Hellmann writes: > > I asked the owners of the name "heat" to allow us to use it, and they > rejected the request. So, I proposed a change to heat to update the > sdist name to "openstack-heat". > > * https://review.openstack.org/606160 This patch is now passing tests. Please prioritize reviewing it, because it is going to block releases of heat. Doug From whayutin at redhat.com Thu Nov 1 19:13:51 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 1 Nov 2018 13:13:51 -0600 Subject: [openstack-dev] [tripleo] gate issues please do not approve/recheck In-Reply-To: References: Message-ID: Thanks Alex! On Thu, Nov 1, 2018 at 10:27 AM Alex Schultz wrote: > Ok since the podman revert patche has been successfully merged and > we've landed most of the non-voting scenario patches, it should be OK > to restore/recheck. It would be a good idea to prioritized things to > land and if it's not critical, let's hold off on approving until we're > sure the gate is much better. > > Thanks, > -Alex > > On Wed, Oct 31, 2018 at 9:39 AM Alex Schultz wrote: > > > > Hey folks, > > > > So we have identified an issue that has been causing a bunch of > > failures and proposed a revert of our podman testing[0]. We have > > cleared the gate and are asking that you not approve or recheck any > > patches at this time. We will let you know when it is safe to start > > approving things. > > > > Thanks, > > -Alex > > > > [0] https://review.openstack.org/#/c/614537/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Thu Nov 1 19:25:58 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 1 Nov 2018 12:25:58 -0700 Subject: [openstack-dev] [nova] Stein summit forum sessions and presentations of interest Message-ID: <120e35c9-c2b0-640f-3d9e-ca6023144daf@gmail.com> Howdy all, We've made a list of cross-project forum sessions and nova-related sessions/presentations that you might be interested in attending at the summit and added them to our forum etherpad: https://etherpad.openstack.org/p/nova-forum-stein The "Cross-project Forum sessions that should include nova participation" section contains a list of community sessions where it would be nice to have a nova representative in attendance. Please feel free to add your name to sessions you think you could attend and bring back any interesting info to the team. Let know if I've missed any cross-project sessions or nova-related sessions/presentations and I can add them. Looking forward to seeing you all at the summit! Cheers, -melanie From emilien at redhat.com Thu Nov 1 20:24:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 1 Nov 2018 16:24:31 -0400 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: <836ebd56-2150-5fda-2f18-ca79038ba44f@redhat.com> References: <836ebd56-2150-5fda-2f18-ca79038ba44f@redhat.com> Message-ID: done, you're now core in TripleO; Thanks Bob for your hard work! On Mon, Oct 22, 2018 at 4:55 PM Jason E. Rist wrote: > On 10/19/2018 06:23 AM, Juan Antonio Osorio Robles wrote: > > Hello! > > > > > > I would like to propose Bob Fournier (bfournie) as a core reviewer in > > TripleO. His patches and reviews have spanned quite a wide range in our > > project, his reviews show great insight and quality and I think he would > > be a addition to the core team. > > > > What do you folks think? > > > > > > Best Regards > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > yup. > > -- > Jason E. Rist > Senior Software Engineer > OpenStack User Interfaces > Red Hat, Inc. > Freenode: jrist > github/twitter: knowncitizen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfournie at redhat.com Thu Nov 1 20:29:52 2018 From: bfournie at redhat.com (Bob Fournier) Date: Thu, 1 Nov 2018 16:29:52 -0400 Subject: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer In-Reply-To: References: <836ebd56-2150-5fda-2f18-ca79038ba44f@redhat.com> Message-ID: On Thu, Nov 1, 2018 at 4:26 PM Emilien Macchi wrote: > done, you're now core in TripleO; Thanks Bob for your hard work! > Thank you Emilien, Juan, and everyone else! > > On Mon, Oct 22, 2018 at 4:55 PM Jason E. Rist wrote: > >> On 10/19/2018 06:23 AM, Juan Antonio Osorio Robles wrote: >> > Hello! >> > >> > >> > I would like to propose Bob Fournier (bfournie) as a core reviewer in >> > TripleO. 
His patches and reviews have spanned quite a wide range in our >> > project, his reviews show great insight and quality and I think he would >> > be a addition to the core team. >> > >> > What do you folks think? >> > >> > >> > Best Regards >> > >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> yup. >> >> -- >> Jason E. Rist >> Senior Software Engineer >> OpenStack User Interfaces >> Red Hat, Inc. >> Freenode: jrist >> github/twitter: knowncitizen >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Thu Nov 1 20:44:33 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Thu, 1 Nov 2018 15:44:33 -0500 Subject: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot In-Reply-To: References: Message-ID: On Thu, Nov 1, 2018, 10:44 AM Rambo wrote: > Hi,all > > Recently, I use the nfs driver as the cinder-backup backend, when I > use it to backup the volume snapshot, the result is return the > NotImplementedError[1].And the nfs.py doesn't has the > create_volume_from_snapshot function. Does the community plan to achieve > it which is as nfs as the cinder-backup backend?Can you tell me about > this?Thank you very much! > > Rambo, The NFS driver doesn't have full snapshot support. I am not sure if that function missing was an oversight or not. I would reach out to Eric Harney as he implemented that code. Jay > > > [1] > https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142 > > > > > > > > > Best Regards > Rambo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Thu Nov 1 22:59:45 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 1 Nov 2018 15:59:45 -0700 Subject: [openstack-dev] [senlin] Meeting cancelled for Nov 2 Message-ID: Everyone, There isn't much to talk about this week so I'm cancelling the weekly meeting. The only thing that needs attention is the upgrade checker patch set: https://review.openstack.org/#/c/613788/ If anybody has time, please take a look and see if you can figure out why it is not working. 
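For comparison, the general shape the Stein goal documentation prescribes for these commands is roughly the following; this is a from-memory sketch of the oslo.upgradecheck skeleton with illustrative names, so double-check the exact signatures against the library rather than treating it as authoritative:

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck

    class Checks(upgradecheck.UpgradeCommands):

        def _check_placeholder(self):
            # Replace with a real pre-upgrade check for senlin.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        _upgrade_checks = (
            ('Placeholder', _check_placeholder),
        )

    def main():
        return upgradecheck.main(
            cfg.CONF, project='senlin', upgrade_command=Checks())

    if __name__ == '__main__':
        import sys
        sys.exit(main())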
Thanks, Duc From kennelson11 at gmail.com Fri Nov 2 00:22:49 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 1 Nov 2018 17:22:49 -0700 Subject: [openstack-dev] StoryBoard Forum Session: Remaining Blockers Message-ID: Hello Everyone! We've made a lot of progress in StoryBoard-land over the last couple of releases cleaning up bugs, fixing UI annoyances, and adding features that people have requested. All along we've also continued to migrate projects as they've become unblocked. While there are still a few blockers on our to-do list, we want to make sure our list is complete[1]. We have a session at the upcoming forum to collect any remaining blockers that you may have encountered while messing around with the dev StoryBoard[2] site or using the real StoryBoard to interact with projects that have already migrated. If you encountered any issues that are blocking your project from migrating, please come share them with us[3]. Hope to see you there! -Kendall (diablo_rojo) & the StoryBoard team [1] https://storyboard.openstack.org/#!/worklist/493 [2] https://storyboard-dev.openstack.org/ [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22839/storyboard-migration-the-remaining-blockers -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Fri Nov 2 02:41:23 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 2 Nov 2018 11:41:23 +0900 Subject: [openstack-dev] The first Vietnam OSUG upstream contribution mentoring webinar Message-ID: Hello, The Vietnam OpenStack User Group has been planning an upstream contribution mentoring program for a while, and we have finally decided that the first webinar will be next Monday, 5 November. Attendees will be anyone who is interested in the development process of OpenStack (mostly Vietnamese: students, developers, etc.).
Mentors will be some experienced OpenStack upstream developers, core members of some projects, etc. You can check out the details on the Meetup link below [1]. We also want to say thank to the work of Ian Y. Choi and the Korea user group [2]. You inspired us. [1] https://www.meetup.com/VietOpenStack/events/hpcglqyxpbhb/ [2] http://lists.openstack.org/pipermail/community/2018-October/001909.html?fbclid=IwAR0nTZFMzc56PYT0jMDf02Wci2qbaGxhay3EU53Wfd_RntgFqZ0ybwzpFY8 Bests, -- Trinh Nguyen www.edlab.xyz -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Fri Nov 2 08:47:42 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Fri, 2 Nov 2018 09:47:42 +0100 Subject: [openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova Message-ID: Dear Nova Community, I want to announce the new focal point for Nova s390x libvirt/kvm. Please welcome "Cathy Zhang” to the Nova team. She and her team will be responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and any s390x specific code in nova and os-brick. I personally took a new opportunity already a few month ago but kept maintaining the CI as good as possible. With new manpower we can hopefully contribute more to the community again. You can reach her via * email: bjzhjing at linux.vnet.ibm.com * IRC: Cathyz Cathy, I wish you and your team all the best for this exciting role! I also want to say thank you for the last years. It was a great time, I learned a lot from you all, will miss it! Cheers, Andreas (irc: scheuran) [1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI From cdent+os at anticdent.org Fri Nov 2 11:48:37 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 2 Nov 2018 11:48:37 +0000 (GMT) Subject: [openstack-dev] [placement] update 18-44 Message-ID: HTML: https://anticdent.org/placement-update-18-44.html Good morning, it's placement update time. # Most Important Lately attention has been primarily on specs, database migration tooling, and progress on documentation. These remain the important areas. # What's Changed * [Placement docs](https://docs.openstack.org/placement/latest/) * Upgrade-to-placement in deployment tooling [thread](http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html) # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16. +1. * [In progress placement bugs](https://goo.gl/vzGGDQ) 11. # Specs Progress continues on reviewing specs. * Account for host agg allocation ratio in placement (Still in rocky/) * Add subtree filter for GET /resource_providers * Resource provider - request group mapping in allocation candidate * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * Modelling passthrough devices for report to placement * Spec: allocation candidates in tree * Nova Cyborg interaction specification. * supporting virtual NVDIMM devices * Spec: Support filtering by forbidden aggregate * Proposes NUMA topology with RPs * Count quota based on resource class * WIP: High Precision Event Timer (HPET) on x86 guests * Add support for emulated virtual TPM * Adds spec for instance live resize * Provider config YAML file # Main Themes ## Making Nested Useful The nested allocations support has merged. That was the stuff that was on this topic: * There are some reshaper patches in progress. 
* I suspect we need some real world fiddling with nested workloads to have any real confidence with this stuff. ## Extraction There continue to be three main tasks in regard to placement extraction: 1. upgrade and integration testing 2. database schema migration and management 3. documentation publishing Most of this work is now being tracked on a [new etherpad](https://etherpad.openstack.org/p/placement-extract-stein-4). If you're looking for something to do (either code or review), there is a good place to look to find something. The db-related work is getting very close, which will allow grenade and devstack changes to merge. # Other Various placement changes out in the world. * Improve handling of default allocation ratios * Neutron minimum bandwidth implementation * Add OWNERSHIP $SERVICE traits * Puppet: Initial cookiecutter and import from nova::placement * zun: Use placement for unified resource management * Update allocation ratio when config changes * Deal with root_id None in resource provider * Use long rpc timeout in select_destinations * Bandwith Resource Providers! * Harden placement init under wsgi * Using gabbi-tempest for integration tests. * Make tox -ereleasenotes work * placement: Add a doc describing a quick live environment * Adding alembic environment * Blazar using the placement-api * Placement role for ansible project config * hyperv bump placement version # End Apologies if this is messier than normal, I'm rushing to get it out before I travel. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From kgiusti at gmail.com Fri Nov 2 13:14:21 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Fri, 2 Nov 2018 09:14:21 -0400 Subject: [openstack-dev] [Oslo][nova][openstack-ansible] About Rabbitmq warning problems on nova-compute side In-Reply-To: <1101609091.2040456.1541073176882.JavaMail.zimbra@tubitak.gov.tr> References: <1101609091.2040456.1541073176882.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi Gokhan, There's been a flurry of folks reporting issues recently related to pike and SSL. See: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 and https://bugs.launchpad.net/oslo.messaging/+bug/1801011 I'm currently working on this - no status yet. As a test would it be possible to try disabling SSL in your configuration to see if the problem persists? On Thu, Nov 1, 2018 at 7:53 AM Gökhan IŞIK (BİLGEM BTE) < gokhan.isik at tubitak.gov.tr> wrote: > Hi folks, > > I have problems about rabbitmq on nova-compute side. I see lots of > warnings in log file like that “client unexpectedly closed TCP > connection”.[1] > > I have a HA OpenStack environment on ubuntu 16.04.5 which is installed > with Openstack Ansible Project. My OpenStack environment version is Pike. > My environment consists of 3 controller nodes ,23 compute nodes and 1 log > node. Cinder volume service is installed on compute nodes and I am using > NetApp Storage. > > I tried lots of configs on nova about oslo messaging and rabbitmq side, > but I didn’t resolve this problem. My latest configs are below: > > rabbitmq.config is : http://paste.openstack.org/show/733767/ > > nova.conf is: http://paste.openstack.org/show/733768/ > > Services versions are : http://paste.openstack.org/show/733769/ > > > Can you share your experiences on rabbitmq side and How can I solve these > warnings on nova-compute side ? What will you advice ? 
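One low-impact check that can be done alongside (or before) disabling SSL is to confirm, outside of oslo.messaging, that a TLS handshake with the broker completes at all. A small sketch with the Python standard library; the host, port and CA path are placeholders for your environment:

    import socket
    import ssl

    HOST, PORT, CA = 'controller', 5671, '/etc/ssl/certs/rabbit-ca.pem'  # placeholders

    context = ssl.create_default_context(cafile=CA)
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # If this prints, the TLS layer itself is fine and the problem is
            # more likely in the AMQP/heartbeat handling above it.
            print(tls.version(), tls.cipher())

If the handshake fails here, the traceback usually names the certificate or protocol mismatch that oslo.messaging is currently swallowing.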
> > > [1] http://paste.openstack.org/show/733766/ > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Fri Nov 2 13:24:07 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Fri, 2 Nov 2018 09:24:07 -0400 Subject: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs? In-Reply-To: <1d5364ed-076b-ac86-6c54-9d9a16bc87a2@debian.org> References: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> <1d5364ed-076b-ac86-6c54-9d9a16bc87a2@debian.org> Message-ID: Hi, There does seem to be something currently wonky with SSL & oslo.messaging. I'm looking into it now. And there's this recently reported issue: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 In the above bug something seems to have broken SSL between ocata and pike. The current suspected change is a patch that fixed a threading issue. Stay tuned... On Thu, Nov 1, 2018 at 3:53 AM Thomas Goirand wrote: > On 10/31/18 2:40 PM, Mohammed Naser wrote: > > For what it’s worth: I ran into the same issue. I think the problem > lies a bit deeper because it’s a problem with kombu as when debugging I saw > that Oslo messaging tried to connect and hung after. > > > > Sent from my iPhone > > > >> On Oct 31, 2018, at 2:29 PM, Thomas Goirand wrote: > >> > >> Hi, > >> > >> It took me a long long time to figure out that my SSL setup was wrong > >> when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo > >> (or heat itself) never warn me that something was wrong, I just got > >> nothing working, and no log at all. > >> > >> I'm sure I wouldn't be the only one happy about having this type of > >> problems being yelled out loud in the logs. Right now, it does work if I > >> turn off SSL, though I'm still not sure what's wrong in my setup, and > >> I'm given no clue if the issue is on rabbitmq-server or on the client > >> side (ie: heat, in my current case). > >> > >> Just a wishlist... :) > >> Cheers, > >> > >> Thomas Goirand (zigo) > > I've opened a bug here: > > https://bugs.launchpad.net/oslo.messaging/+bug/1801011 > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Fri Nov 2 13:39:07 2018 From: dprince at redhat.com (Dan Prince) Date: Fri, 2 Nov 2018 09:39:07 -0400 Subject: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks Message-ID: I pushed a patch[1] to update our containerized deployment architecture docs yesterday. There are 2 new fairly useful sections we can leverage with TripleO's stepwise deployment. They appear to be used somewhat sparingly so I wanted to get the word out. The first is 'deploy_steps_tasks' which gives you a means to run Ansible snippets on each node/role in a stepwise fashion during deployment. 
Previously it was only possible to execute puppet or docker commands where as now that we have deploy_steps_tasks we can execute ad-hoc ansible in the same manner. The second is 'external_deploy_tasks' which allows you to use run Ansible snippets on the Undercloud during stepwise deployment. This is probably most useful for driving an external installer but might also help with some complex tasks that need to originate from a single Ansible client. The only downside I see to these approaches is that both appear to be implemented with Ansible's default linear strategy. I saw shardy's comment here [2] that the :free strategy does not yet apparently work with the any_errors_fatal option. Perhaps we can reach out to someone in the Ansible community in this regard to improve running these things in parallel like TripleO used to work with Heat agents. This is also how host_prep_tasks is implemented which BTW we should now get rid of as a duplicate architectural step since we have deploy_steps_tasks anyway. [1] https://review.openstack.org/#/c/614822/ [2] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554 From jaosorior at redhat.com Fri Nov 2 13:43:32 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 2 Nov 2018 15:43:32 +0200 Subject: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks In-Reply-To: References: Message-ID: Thanks! We have been slow to update our docs. I did put up a blog post about these sections of the templates [1], in case folks find that useful. [1] http://jaormx.github.io/2018/dissecting-tripleo-service-templates-p2/ On 11/2/18 3:39 PM, Dan Prince wrote: > I pushed a patch[1] to update our containerized deployment > architecture docs yesterday. There are 2 new fairly useful sections we > can leverage with TripleO's stepwise deployment. They appear to be > used somewhat sparingly so I wanted to get the word out. > > The first is 'deploy_steps_tasks' which gives you a means to run > Ansible snippets on each node/role in a stepwise fashion during > deployment. Previously it was only possible to execute puppet or > docker commands where as now that we have deploy_steps_tasks we can > execute ad-hoc ansible in the same manner. > > The second is 'external_deploy_tasks' which allows you to use run > Ansible snippets on the Undercloud during stepwise deployment. This is > probably most useful for driving an external installer but might also > help with some complex tasks that need to originate from a single > Ansible client. > > The only downside I see to these approaches is that both appear to be > implemented with Ansible's default linear strategy. I saw shardy's > comment here [2] that the :free strategy does not yet apparently work > with the any_errors_fatal option. Perhaps we can reach out to someone > in the Ansible community in this regard to improve running these > things in parallel like TripleO used to work with Heat agents. > > This is also how host_prep_tasks is implemented which BTW we should > now get rid of as a duplicate architectural step since we have > deploy_steps_tasks anyway. 
> > [1] https://review.openstack.org/#/c/614822/ > [2] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Nov 2 13:45:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Nov 2018 08:45:05 -0500 Subject: [openstack-dev] StoryBoard Forum Session: Remaining Blockers In-Reply-To: References: Message-ID: On 11/1/2018 7:22 PM, Kendall Nelson wrote: > We've made a lot of progress in StoryBoard-land over the last couple of > releases cleaning up bugs, fixing UI annoyances, and adding features > that people have requested. All along we've also continued to migrate > projects as they've become unblocked. While there are still a few > blockers on our to-do list, we want to make sure our list is complete[1]. > > We have a session at the upcoming forum to collect any remaining > blockers that you may have encountered while messing around with the dev > storyboard[2] site or using the real storyboard interacting with > projects that have already migrated. If you encountered any issues that > are blocking your project from migrating, please come share them with > with us[3]. Hope to see you there! > > -Kendall (diablo_rojo) & the StoryBoard team > > [1] https://storyboard.openstack.org/#!/worklist/493 > I'm not sure why/how but you seem to have an encoded URL for this [1] link, which when I was using it redirected me to my own dashboard in storyboard. The real link, https://storyboard.openstack.org/#!/worklist/493, does work though. Just FYI for anyone else having the same problem. > [2] https://storyboard-dev.openstack.org/ > [2] > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22839/storyboard-migration-the-remaining-blockers > -- Thanks, Matt From eharney at redhat.com Fri Nov 2 14:00:13 2018 From: eharney at redhat.com (Eric Harney) Date: Fri, 2 Nov 2018 10:00:13 -0400 Subject: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot In-Reply-To: References: Message-ID: <501142a5-ff4d-9671-d3b9-34f5fd2da01a@redhat.com> On 11/1/18 4:44 PM, Jay Bryant wrote: > On Thu, Nov 1, 2018, 10:44 AM Rambo wrote: > >> Hi,all >> >> Recently, I use the nfs driver as the cinder-backup backend, when I >> use it to backup the volume snapshot, the result is return the >> NotImplementedError[1].And the nfs.py doesn't has the >> create_volume_from_snapshot function. Does the community plan to achieve >> it which is as nfs as the cinder-backup backend?Can you tell me about >> this?Thank you very much! >> >> Rambo, > > The NFS driver doesn't have full snapshot support. I am not sure if that > function missing was an oversight or not. I would reach out to Eric Harney > as he implemented that code. > > Jay > create_volume_from_snapshot is implemented in the NFS driver. It is in the remotefs code that the NFS driver inherits from. But, I'm not sure I understand what's being asked here -- how is this related to using NFS as the backup backend? 
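If the underlying goal is simply to get a backup of the data captured in a snapshot, one pattern that works with any backup backend is to create a new volume from the snapshot first and then back up that volume. A rough sketch with python-cinderclient and a keystoneauth session; the auth options, IDs and names are placeholders:

from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

# Placeholder credentials/endpoint -- substitute your own.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
cinder = cinder_client.Client('3', session=session.Session(auth=auth))

snap = cinder.volume_snapshots.get('SNAPSHOT_ID')
# The driver's create_volume_from_snapshot (inherited from remotefs in
# the NFS case) is what services this call.
vol = cinder.volumes.create(size=snap.size, snapshot_id=snap.id,
                            name='from-snap-%s' % snap.id)
# Wait for the new volume to reach 'available' before starting the backup.
backup = cinder.backups.create(vol.id, name='backup-of-%s' % snap.id)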
>> >> >> [1] >> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142 >> >> >> >> >> >> >> >> >> Best Regards >> Rambo From amy at demarco.com Fri Nov 2 15:14:24 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 2 Nov 2018 10:14:24 -0500 Subject: [openstack-dev] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is still looking for your assistance in reaching and including data from as many members of our community as possible. We revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and reached out in August with our new survey. We are looking to update our view of the OpenStack community and it's diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Please feel free to reach out to me via email, in Berlin, or to myself or any WG member in #openstack-diversity! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Fri Nov 2 15:27:43 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 2 Nov 2018 11:27:43 -0400 Subject: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks In-Reply-To: References: Message-ID: On Fri, Nov 2, 2018 at 9:39 AM Dan Prince wrote: > > I pushed a patch[1] to update our containerized deployment > architecture docs yesterday. There are 2 new fairly useful sections we > can leverage with TripleO's stepwise deployment. They appear to be > used somewhat sparingly so I wanted to get the word out. > > The first is 'deploy_steps_tasks' which gives you a means to run > Ansible snippets on each node/role in a stepwise fashion during > deployment. Previously it was only possible to execute puppet or > docker commands where as now that we have deploy_steps_tasks we can > execute ad-hoc ansible in the same manner. > > The second is 'external_deploy_tasks' which allows you to use run > Ansible snippets on the Undercloud during stepwise deployment. This is > probably most useful for driving an external installer but might also > help with some complex tasks that need to originate from a single > Ansible client. +1 > The only downside I see to these approaches is that both appear to be > implemented with Ansible's default linear strategy. I saw shardy's > comment here [2] that the :free strategy does not yet apparently work > with the any_errors_fatal option. Perhaps we can reach out to someone > in the Ansible community in this regard to improve running these > things in parallel like TripleO used to work with Heat agents. It's effectively parallel across one role at a time at the moment, up to the number of configured forks (default: 25). The reason it won't parallelize across roles, is because it's a different task file used with import_tasks for each role. Ansible won't run that in parallel since the task list is different. 
I was able to make this parallel across roles for the pre and post deployments by making the task file the same for each role, and controlling the difference with group and host vars: https://review.openstack.org/#/c/574474/ >From Ansible's perspective, the task list is now the same for each host, although different things will be done depending on the value of vars for each host. It's possible a similar approach could be done with the other interfaces you point out here. In addition to the any_errors_fatal issue when using strategy:free, is that you'd also lose the grouping of the task output per role after each task finishes. This is mostly cosmetic, but using free does create a lot more noisier output IMO. -- -- James Slagle -- From colleen at gazlene.net Fri Nov 2 15:33:36 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 02 Nov 2018 16:33:36 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 29 October 2018 Message-ID: <1541172816.512691.1563565520.2DCA999B@webmail.messagingengine.com> # Keystone Team Update - Week of 29 October 2018 ## News ### Berlin Summit Somewhat quiet week as we've been getting into summit prep mode. We'll have two forum sessions, one for operator feedback[1] and one to discuss Keystone as an IdP Proxy[2]. We'll have our traditional project update[3] and project onboarding[4] along with many keystone-related talks from the team[5][6][7][8][9][10]. [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22792/keystone-operator-feedback [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22791/keystone-as-an-identity-provider-proxy [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22728/keystone-project-updates [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22727/keystone-project-onboarding [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22557/enforcing-quota-consistently-with-unified-limits [6] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22320/towards-an-open-cloud-exchange [7] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22044/pushing-keystone-over-the-edge [8] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22607/a-seamlessly-federated-multi-cloud [9] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21977/openstack-policy-101 [10] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21976/dynamic-policy-for-openstack-with-open-policy-agent ## Open Specs Stein specs: https://bit.ly/2Pi6dGj Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 31 changes this week. ## Changes that need Attention Search query: https://bit.ly/2PUk84S There are 45 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 5 new bugs and closed 9. 
Bugs opened (5) Bug #1801111 (keystone:Low) opened by Egor Panfilov https://bugs.launchpad.net/keystone/+bug/1801111 Bug #1800651 (keystone:Undecided) opened by Ramon Heidrich https://bugs.launchpad.net/keystone/+bug/1800651 Bug #1801095 (keystone:Undecided) opened by artem.v.vasilyev https://bugs.launchpad.net/keystone/+bug/1801095 Bug #1801309 (keystone:Undecided) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1801309 Bug #1801101 (keystoneauth:Undecided) opened by Egor Panfilov https://bugs.launchpad.net/keystoneauth/+bug/1801101 Bugs closed (5) Bug #1553224 (keystone:Wishlist) https://bugs.launchpad.net/keystone/+bug/1553224 Bug #1642988 (keystone:Wishlist) https://bugs.launchpad.net/keystone/+bug/1642988 Bug #1710329 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1710329 Bug #1713574 (keystoneauth:Undecided) https://bugs.launchpad.net/keystoneauth/+bug/1713574 Bug #1801101 (keystoneauth:Undecided) https://bugs.launchpad.net/keystoneauth/+bug/1801101 Bugs fixed (4) Bug #1788415 (keystone:High) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1788415 Bug #1797876 (keystone:High) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1797876 Bug #1798716 (keystone:Low) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1798716 Bug #1800017 (keystonemiddleware:Medium) fixed by Guang Yee https://bugs.launchpad.net/keystonemiddleware/+bug/1800017 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html Our spec proposal freeze for Stein was two weeks ago, so barring extraordinary circumstances we'll be working on refining our remaining three Stein specs for the spec freeze after the new year. ## Shout-outs Thanks Nathan Kinder for all the ldappool fixes! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From bdobreli at redhat.com Fri Nov 2 16:32:43 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 2 Nov 2018 17:32:43 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> Message-ID: Hello folks. Here is an update for today. I crated a draft [0], and spend some time with building LaTeX with live-updating for the compiled PDF... The latter is only informational, if someone wants to contribute, please follow the instructions listed by the link (hint: you need no to have any LaTeX experience, only basic markdown knowledge should be enough!) [0] https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors On 10/31/18 6:54 PM, Ildiko Vancsa wrote: > Hi, > > Thank you for sharing your proposal. > > I think this is a very interesting topic with a list of possible solutions some of which this group is also discussing. It would also be great to learn more about the IEEE activities and have experience about the process in this group on the way forward. > > I personally do not have experience with IEEE conferences, but I’m happy to help with the paper if I can. > > Thanks, > Ildikó > > (added from the parallel thread) >> On 2018. 
Oct 31., at 19:11, Mike Bayer wrote: >> >> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya wrote: >>> >>> (cross-posting openstack-dev) >>> >>> Hello. >>> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data >>> consistency requirements and challenges" a position paper [0] (papers >>> submitting deadline is Nov 8). >>> >>> The problem scope is synchronizing control plane and/or >>> deployments-specific data (not necessary limited to OpenStack) across >>> remote Edges and central Edge and management site(s). Including the same >>> aspects for overclouds and undercloud(s), in terms of TripleO; and other >>> deployment tools of your choice. >>> >>> Another problem is to not go into different solutions for Edge >>> deployments management and control planes of edges. And for tenants as >>> well, if we think of tenants also doing Edge deployments based on Edge >>> Data Replication as a Service, say for Kubernetes/OpenShift on top of >>> OpenStack. >>> >>> So the paper should name the outstanding problems, define data >>> consistency requirements and pose possible solutions for synchronization >>> and conflicts resolving. Having maximum autonomy cases supported for >>> isolated sites, with a capability to eventually catch up its distributed >>> state. Like global database [1], or something different perhaps (see >>> causal-real-time consistency model [2],[3]), or even using git. And >>> probably more than that?.. (looking for ideas) >> >> >> I can offer detail on whatever aspects of the "shared / global >> database" idea. The way we're doing it with Galera for now is all >> about something simple and modestly effective for the moment, but it >> doesn't have any of the hallmarks of a long-term, canonical solution, >> because Galera is not well suited towards being present on many >> (dozens) of endpoints. The concept that the StarlingX folks were >> talking about, that of independent databases that are synchronized >> using some kind of middleware is potentially more scalable, however I >> think the best approach would be API-level replication, that is, you >> have a bunch of Keystone services and there is a process that is >> regularly accessing the APIs of these keystone services and >> cross-publishing state amongst all of them. Clearly the big >> challenge with that is how to resolve conflicts, I think the answer >> would lie in the fact that the data being replicated would be of >> limited scope and potentially consist of mostly or fully >> non-overlapping records. >> >> That is, I think "global database" is a cheap way to get what would be >> more effective as asynchronous state synchronization between identity >> services. > > Recently we’ve been also exploring federation with an IdP (Identity Provider) master: https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users > > One of the pros is that it removes the need for synchronization and potentially increases scalability. 
> > Thanks, > Ildikó -- Best regards, Bogdan Dobrelya, Irc #bogdando From openstack at fried.cc Fri Nov 2 19:22:53 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 2 Nov 2018 14:22:53 -0500 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker Message-ID: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> All- Based on a (long) discussion yesterday [1] I have put up a patch [2] whereby you can set [compute]resource_provider_association_refresh to zero and the resource tracker will never* refresh the report client's provider cache. Philosophically, we're removing the "healing" aspect of the resource tracker's periodic and trusting that placement won't diverge from whatever's in our cache. (If it does, it's because the op hit the CLI, in which case they should SIGHUP - see below.) *except: - When we initially create the compute node record and bootstrap its resource provider. - When the virt driver's update_provider_tree makes a change, update_from_provider_tree reflects them in the cache as well as pushing them back to placement. - If update_from_provider_tree fails, the cache is cleared and gets rebuilt on the next periodic. - If you send SIGHUP to the compute process, the cache is cleared. This should dramatically reduce the number of calls to placement from the compute service. Like, to nearly zero, unless something is actually changing. Can I get some initial feedback as to whether this is worth polishing up into something real? (It will probably need a bp/spec if so.) [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 [2] https://review.openstack.org/#/c/614886/ ========== Background ========== In the Queens release, our friends at CERN noticed a serious spike in the number of requests to placement from compute nodes, even in a stable-state cloud. Given that we were in the process of adding a ton of infrastructure to support sharing and nested providers, this was not unexpected. Roughly, what was previously: @periodic_task: GET /resource_providers/$compute_uuid GET /resource_providers/$compute_uuid/inventories became more like: @periodic_task: # In Queens/Rocky, this would still just return the compute RP GET /resource_providers?in_tree=$compute_uuid # In Queens/Rocky, this would return nothing GET /resource_providers?member_of=...&required=MISC_SHARES... for each provider returned above: # i.e. just one in Q/R GET /resource_providers/$compute_uuid/inventories GET /resource_providers/$compute_uuid/traits GET /resource_providers/$compute_uuid/aggregates In a cloud the size of CERN's, the load wasn't acceptable. But at the time, CERN worked around the problem by disabling refreshing entirely. (The fact that this seems to have worked for them is an encouraging sign for the proposed code change.) We're not actually making use of most of that information, but it sets the stage for things that we're working on in Stein and beyond, like multiple VGPU types, bandwidth resource providers, accelerators, NUMA, etc., so removing/reducing the amount of information we look at isn't really an option strategically. From torin.woltjer at granddial.com Fri Nov 2 19:27:30 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Fri, 02 Nov 2018 19:27:30 GMT Subject: [openstack-dev] [Openstack] DHCP not accessible on new compute node. Message-ID: I've completely wiped the node and reinstalled it, and the problem still persists. 
I can't ping instances on other compute nodes, or ping the DHCP ports. Instances don't get addresses or metadata when started on this node. From: Marcio Prado Sent: 11/1/18 9:51 AM To: torin.woltjer at granddial.com Cc: openstack at lists.openstack.org Subject: Re: [Openstack] DHCP not accessible on new compute node. I believe you have not forgotten anything. This should probably be bug ... As my cloud is not production, but rather masters research. I migrate the VM live to a node that is working, restart it, after that I migrate back to the original node that was not working and it keeps running ... Em 30-10-2018 17:50, Torin Woltjer escreveu: > Interestingly, I created a brand new selfservice network and DHCP > doesn't work on that either. I've followed the instructions in the > minimal setup (excluding the controllers as they're already set up) > but the new node has no access to the DHCP agent in neutron it seems. > Is there a likely component that I've overlooked? > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: "Torin Woltjer" > SENT: 10/30/18 10:48 AM > TO: , "openstack at lists.openstack.org" > > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. > > I deleted both DHCP ports and they recreated as you said. However, > instances are still unable to get network addresses automatically. > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _ [1] [1]WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: Marcio Prado > SENT: 10/29/18 6:23 PM > TO: torin.woltjer at granddial.com > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. > The door is recreated automatically. The problem like I said is not in > DHCP, but for some reason, erasing and waiting for OpenStack to > re-create the port often solves the problem. > > Please, if you can find out the problem in fact, let me know. I'm very > interested to know. > > You can delete the door without fear. OpenStack will recreate in a > short > time. > > Links: > ------ > [1] http://www.granddial.com -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Nov 2 20:31:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Nov 2018 15:31:48 -0500 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: On 11/2/2018 2:22 PM, Eric Fried wrote: > Based on a (long) discussion yesterday [1] I have put up a patch [2] > whereby you can set [compute]resource_provider_association_refresh to > zero and the resource tracker will never* refresh the report client's > provider cache. Philosophically, we're removing the "healing" aspect of > the resource tracker's periodic and trusting that placement won't > diverge from whatever's in our cache. (If it does, it's because the op > hit the CLI, in which case they should SIGHUP - see below.) > > *except: > - When we initially create the compute node record and bootstrap its > resource provider. > - When the virt driver's update_provider_tree makes a change, > update_from_provider_tree reflects them in the cache as well as pushing > them back to placement. 
> - If update_from_provider_tree fails, the cache is cleared and gets > rebuilt on the next periodic. > - If you send SIGHUP to the compute process, the cache is cleared. > > This should dramatically reduce the number of calls to placement from > the compute service. Like, to nearly zero, unless something is actually > changing. > > Can I get some initial feedback as to whether this is worth polishing up > into something real? (It will probably need a bp/spec if so.) > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > [2]https://review.openstack.org/#/c/614886/ > > ========== > Background > ========== > In the Queens release, our friends at CERN noticed a serious spike in > the number of requests to placement from compute nodes, even in a > stable-state cloud. Given that we were in the process of adding a ton of > infrastructure to support sharing and nested providers, this was not > unexpected. Roughly, what was previously: > > @periodic_task: > GET/resource_providers/$compute_uuid > GET/resource_providers/$compute_uuid/inventories > > became more like: > > @periodic_task: > # In Queens/Rocky, this would still just return the compute RP > GET /resource_providers?in_tree=$compute_uuid > # In Queens/Rocky, this would return nothing > GET /resource_providers?member_of=...&required=MISC_SHARES... > for each provider returned above: # i.e. just one in Q/R > GET/resource_providers/$compute_uuid/inventories > GET/resource_providers/$compute_uuid/traits > GET/resource_providers/$compute_uuid/aggregates > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > time, CERN worked around the problem by disabling refreshing entirely. > (The fact that this seems to have worked for them is an encouraging sign > for the proposed code change.) > > We're not actually making use of most of that information, but it sets > the stage for things that we're working on in Stein and beyond, like > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > etc., so removing/reducing the amount of information we look at isn't > really an option strategically. A few random points from the long discussion that should probably re-posed here for wider thought: * There was probably a lot of discussion about why we needed to do this caching and stuff in the compute in the first place. What has changed that we no longer need to aggressively refresh the cache on every periodic? I thought initially it came up because people really wanted the compute to be fully self-healing to any external changes, including hot plugging resources like disk on the host to automatically reflect those changes in inventory. Similarly, external user/service interactions with the placement API which would then be automatically picked up by the next periodic run - is that no longer a desire, and/or how was the decision made previously that simply requiring a SIGHUP in that case wasn't sufficient/desirable. * I believe I made the point yesterday that we should probably not refresh by default, and let operators opt-in to that behavior if they really need it, i.e. they are frequently making changes to the environment, potentially by some external service (I could think of vCenter doing this to reflect changes from vCenter back into nova/placement), but I don't think that should be the assumed behavior by most and our defaults should reflect the "normal" use case. 
* I think I've noted a few times now that we don't actually use the provider aggregates information (yet) in the compute service. Nova host aggregate membership is mirror to placement since Rocky [1] but that happens in the API, not the the compute. The only thing I can think of that relied on resource provider aggregate information in the compute is the shared storage providers concept, but that's not supported (yet) [2]. So do we need to keep retrieving aggregate information when nothing in compute uses it yet? * Similarly, why do we need to get traits on each periodic? The only in-tree virt driver I'm aware of that *reports* traits is the libvirt driver for CPU features [3]. Otherwise I think the idea behind getting the latest traits is so the virt driver doesn't overwrite any traits set externally on the compute node root resource provider. I think that still stands and is probably OK, even though we have generations now which should keep us from overwriting if we don't have the latest traits, but I wanted to bring it up since it's related to the "why do we need provider aggregates in the compute?" question. * Regardless of what we do, I think we should probably *at least* make that refresh associations config allow 0 to disable it so CERN (and others) can avoid the need to continually forward-porting code to disable it. [1] https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html [2] https://bugs.launchpad.net/nova/+bug/1784020 [3] https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/report-cpu-features-as-traits.html -- Thanks, Matt From aj at suse.com Fri Nov 2 20:35:42 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 2 Nov 2018 21:35:42 +0100 Subject: [openstack-dev] [all][release][infra] changing release template publish-to-pypi Message-ID: <351ec585-b16f-9c08-a18e-3358c1f7fb16@suse.com> Doug recently introduced the template publish-to-pypi-python3 and used it for all official projects. It only has a minimal python3 usage (calling setup.py sdist bdist_wheel) that should work with any repo. Thus, I pushed a a series of changes up - see https://review.openstack.org/#/q/topic:publish-pypi - to rename publish-to-pypi-python3 back to publish-to-pypi. Thus, everybody uses again the same template and testing. Note that the template publish-to-pypi-python3 contained a test to see whether the release jobs works, this new job is now run as well. It is only run if one of the packaging files is updated - and ensures that uploading to pypi will work fine. If this new job now fails, please fix the fallout so that you can safely release next time. The first changes in this series are merging now - if this causes problems and there is a strong need for an explicit python2 version, please tell me, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From dtantsur at redhat.com Sat Nov 3 12:26:47 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sat, 3 Nov 2018 13:26:47 +0100 Subject: [openstack-dev] [ironic] Team gathering at the Forum In-Reply-To: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> References: <627c8ab5-1918-8ca4-1dae-29cb78859d57@redhat.com> Message-ID: <3c422c26-46c9-37b1-9e8d-c720b80a42f6@redhat.com> Hi Ironicers! Good news: I have made the reservation for the gathering! 
:) It will happen on Wednesday, November 14, 2018 at 7 p.m. in restaurant Lindenbräu am Potsdamer Platz (https://goo.gl/maps/DYb5ikGGmdw). I will depart from the venue at around 6:15. Follow the hangouts chat (to be set up) for any last-minute changes. If you go along, you will need to get to Potsdamer Platz, there are S-Bahn, U-Bahn, train and bus stations there. Google suggests we take S3/S9 (direction Erkner/Airport) from Messe Süd first, then switch to S1/S2/S25/S26 on Friedrichstraße. Those going from Crowne Plaza Hotel can take bus 200 directly to Postdamer Platz. You'll need an A-B zone ticket for the travel. The restaurant is located in a court behind the tall DB building. If you want to come but did not sign up in the doodle, please let me know off-list. See you in Berlin, Dmitry On 10/29/18 3:58 PM, Dmitry Tantsur wrote: > Hi folks! > > This is your friendly reminder to vote on the day. Even if you're fine with all > days, please leave a vote, so that we know how many people are coming. We will > need to make a reservation, and we may not be able to accommodate more people > than voted! > > Dmitry > > On 10/22/18 6:06 PM, Dmitry Tantsur wrote: >> Hi ironicers! :) >> >> We are trying to plan an informal Ironic team gathering in Berlin. If you care >> about Ironic and would like to participate, please fill in >> https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, >> also depending on how many people sign up. >> >> Dmitry > From mnaser at vexxhost.com Sun Nov 4 10:11:59 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 4 Nov 2018 11:11:59 +0100 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann wrote: > > On 11/2/2018 2:22 PM, Eric Fried wrote: > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > > whereby you can set [compute]resource_provider_association_refresh to > > zero and the resource tracker will never* refresh the report client's > > provider cache. Philosophically, we're removing the "healing" aspect of > > the resource tracker's periodic and trusting that placement won't > > diverge from whatever's in our cache. (If it does, it's because the op > > hit the CLI, in which case they should SIGHUP - see below.) > > > > *except: > > - When we initially create the compute node record and bootstrap its > > resource provider. > > - When the virt driver's update_provider_tree makes a change, > > update_from_provider_tree reflects them in the cache as well as pushing > > them back to placement. > > - If update_from_provider_tree fails, the cache is cleared and gets > > rebuilt on the next periodic. > > - If you send SIGHUP to the compute process, the cache is cleared. > > > > This should dramatically reduce the number of calls to placement from > > the compute service. Like, to nearly zero, unless something is actually > > changing. > > > > Can I get some initial feedback as to whether this is worth polishing up > > into something real? (It will probably need a bp/spec if so.) 
> > > > [1] > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > > [2]https://review.openstack.org/#/c/614886/ > > > > ========== > > Background > > ========== > > In the Queens release, our friends at CERN noticed a serious spike in > > the number of requests to placement from compute nodes, even in a > > stable-state cloud. Given that we were in the process of adding a ton of > > infrastructure to support sharing and nested providers, this was not > > unexpected. Roughly, what was previously: > > > > @periodic_task: > > GET/resource_providers/$compute_uuid > > GET/resource_providers/$compute_uuid/inventories > > > > became more like: > > > > @periodic_task: > > # In Queens/Rocky, this would still just return the compute RP > > GET /resource_providers?in_tree=$compute_uuid > > # In Queens/Rocky, this would return nothing > > GET /resource_providers?member_of=...&required=MISC_SHARES... > > for each provider returned above: # i.e. just one in Q/R > > GET/resource_providers/$compute_uuid/inventories > > GET/resource_providers/$compute_uuid/traits > > GET/resource_providers/$compute_uuid/aggregates > > > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > > time, CERN worked around the problem by disabling refreshing entirely. > > (The fact that this seems to have worked for them is an encouraging sign > > for the proposed code change.) > > > > We're not actually making use of most of that information, but it sets > > the stage for things that we're working on in Stein and beyond, like > > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > > etc., so removing/reducing the amount of information we look at isn't > > really an option strategically. > > A few random points from the long discussion that should probably > re-posed here for wider thought: > > * There was probably a lot of discussion about why we needed to do this > caching and stuff in the compute in the first place. What has changed > that we no longer need to aggressively refresh the cache on every > periodic? I thought initially it came up because people really wanted > the compute to be fully self-healing to any external changes, including > hot plugging resources like disk on the host to automatically reflect > those changes in inventory. Similarly, external user/service > interactions with the placement API which would then be automatically > picked up by the next periodic run - is that no longer a desire, and/or > how was the decision made previously that simply requiring a SIGHUP in > that case wasn't sufficient/desirable. > > * I believe I made the point yesterday that we should probably not > refresh by default, and let operators opt-in to that behavior if they > really need it, i.e. they are frequently making changes to the > environment, potentially by some external service (I could think of > vCenter doing this to reflect changes from vCenter back into > nova/placement), but I don't think that should be the assumed behavior > by most and our defaults should reflect the "normal" use case. > > * I think I've noted a few times now that we don't actually use the > provider aggregates information (yet) in the compute service. Nova host > aggregate membership is mirror to placement since Rocky [1] but that > happens in the API, not the the compute. 
The only thing I can think of > that relied on resource provider aggregate information in the compute is > the shared storage providers concept, but that's not supported (yet) > [2]. So do we need to keep retrieving aggregate information when nothing > in compute uses it yet? > > * Similarly, why do we need to get traits on each periodic? The only > in-tree virt driver I'm aware of that *reports* traits is the libvirt > driver for CPU features [3]. Otherwise I think the idea behind getting > the latest traits is so the virt driver doesn't overwrite any traits set > externally on the compute node root resource provider. I think that > still stands and is probably OK, even though we have generations now > which should keep us from overwriting if we don't have the latest > traits, but I wanted to bring it up since it's related to the "why do we > need provider aggregates in the compute?" question. > > * Regardless of what we do, I think we should probably *at least* make > that refresh associations config allow 0 to disable it so CERN (and > others) can avoid the need to continually forward-porting code to > disable it. > > [1] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html > [2] https://bugs.launchpad.net/nova/+bug/1784020 > [3] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/report-cpu-features-as-traits.html > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Sun Nov 4 10:22:46 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 4 Nov 2018 11:22:46 +0100 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: Ugh, hit send accidentally. Please take my comments lightly as I have not been as involved with the developments but just chiming in as an operator with some ideas. On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann wrote: > > On 11/2/2018 2:22 PM, Eric Fried wrote: > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > > whereby you can set [compute]resource_provider_association_refresh to > > zero and the resource tracker will never* refresh the report client's > > provider cache. Philosophically, we're removing the "healing" aspect of > > the resource tracker's periodic and trusting that placement won't > > diverge from whatever's in our cache. (If it does, it's because the op > > hit the CLI, in which case they should SIGHUP - see below.) > > > > *except: > > - When we initially create the compute node record and bootstrap its > > resource provider. > > - When the virt driver's update_provider_tree makes a change, > > update_from_provider_tree reflects them in the cache as well as pushing > > them back to placement. > > - If update_from_provider_tree fails, the cache is cleared and gets > > rebuilt on the next periodic. > > - If you send SIGHUP to the compute process, the cache is cleared. 
> > > > This should dramatically reduce the number of calls to placement from > > the compute service. Like, to nearly zero, unless something is actually > > changing. > > > > Can I get some initial feedback as to whether this is worth polishing up > > into something real? (It will probably need a bp/spec if so.) > > > > [1] > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > > [2]https://review.openstack.org/#/c/614886/ > > > > ========== > > Background > > ========== > > In the Queens release, our friends at CERN noticed a serious spike in > > the number of requests to placement from compute nodes, even in a > > stable-state cloud. Given that we were in the process of adding a ton of > > infrastructure to support sharing and nested providers, this was not > > unexpected. Roughly, what was previously: > > > > @periodic_task: > > GET/resource_providers/$compute_uuid > > GET/resource_providers/$compute_uuid/inventories > > > > became more like: > > > > @periodic_task: > > # In Queens/Rocky, this would still just return the compute RP > > GET /resource_providers?in_tree=$compute_uuid > > # In Queens/Rocky, this would return nothing > > GET /resource_providers?member_of=...&required=MISC_SHARES... > > for each provider returned above: # i.e. just one in Q/R > > GET/resource_providers/$compute_uuid/inventories > > GET/resource_providers/$compute_uuid/traits > > GET/resource_providers/$compute_uuid/aggregates > > > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > > time, CERN worked around the problem by disabling refreshing entirely. > > (The fact that this seems to have worked for them is an encouraging sign > > for the proposed code change.) > > > > We're not actually making use of most of that information, but it sets > > the stage for things that we're working on in Stein and beyond, like > > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > > etc., so removing/reducing the amount of information we look at isn't > > really an option strategically. > > A few random points from the long discussion that should probably > re-posed here for wider thought: > > * There was probably a lot of discussion about why we needed to do this > caching and stuff in the compute in the first place. What has changed > that we no longer need to aggressively refresh the cache on every > periodic? I thought initially it came up because people really wanted > the compute to be fully self-healing to any external changes, including > hot plugging resources like disk on the host to automatically reflect > those changes in inventory. Similarly, external user/service > interactions with the placement API which would then be automatically > picked up by the next periodic run - is that no longer a desire, and/or > how was the decision made previously that simply requiring a SIGHUP in > that case wasn't sufficient/desirable. I think that would be nice to have however at the current moment, based from operators perspective, it looks like the placement service can really get out of sync pretty easily.. 
so I think it'd be good to commit to either really making it self-heal (delete stale allocations, create ones that should be there) or remove all self-healing stuff Also, for the self healing work, if we take that route and implement it fully, it might make placement split much easier, because we just switch over and wait for the computes to automagically populate everything, but that's the type of operation that happens once in the lifetime of a cloud. Just for information sake, a clean state cloud which had no reported issues over maybe a period of 2-3 months already has 4 allocations which are incorrect and 12 allocations pointing to the wrong resource provider, so I think this comes does to committing to either "self-healing" to fix those issues or not. > * I believe I made the point yesterday that we should probably not > refresh by default, and let operators opt-in to that behavior if they > really need it, i.e. they are frequently making changes to the > environment, potentially by some external service (I could think of > vCenter doing this to reflect changes from vCenter back into > nova/placement), but I don't think that should be the assumed behavior > by most and our defaults should reflect the "normal" use case. I agree. For 99% of the deployments out there, placement service will likely not be touched by anyone except the services and at this stage, probably just Nova talking to placement directly. I really do agree on the statement that the "normal" use case is of a user playing around with placement out-of-band is not common at all. > * I think I've noted a few times now that we don't actually use the > provider aggregates information (yet) in the compute service. Nova host > aggregate membership is mirror to placement since Rocky [1] but that > happens in the API, not the the compute. The only thing I can think of > that relied on resource provider aggregate information in the compute is > the shared storage providers concept, but that's not supported (yet) > [2]. So do we need to keep retrieving aggregate information when nothing > in compute uses it yet? Is there anything stopping us here from polling that information during the time when the VM is spawning? It doesn't seem like something that the compute node always needs to check.. > * Similarly, why do we need to get traits on each periodic? The only > in-tree virt driver I'm aware of that *reports* traits is the libvirt > driver for CPU features [3]. Otherwise I think the idea behind getting > the latest traits is so the virt driver doesn't overwrite any traits set > externally on the compute node root resource provider. I think that > still stands and is probably OK, even though we have generations now > which should keep us from overwriting if we don't have the latest > traits, but I wanted to bring it up since it's related to the "why do we > need provider aggregates in the compute?" question. Forgive my ignorance on this subject, but would traits really be only set when the service is first started (so that check can happens only once on startup) and then the compute nodes never really ever consume that information (but the scheduler does?). Also, AFAIK I doubt virt drivers actually report much change in traits (CPU flags changing in runtime?) > * Regardless of what we do, I think we should probably *at least* make > that refresh associations config allow 0 to disable it so CERN (and > others) can avoid the need to continually forward-porting code to > disable it. 
+1 > [1] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-mirror-host-aggregates.html > [2] https://bugs.launchpad.net/nova/+bug/1784020 > [3] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/report-cpu-features-as-traits.html > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Sun Nov 4 11:53:22 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 4 Nov 2018 12:53:22 +0100 Subject: [openstack-dev] [nova][cinder] Using externally stored keys for encryption Message-ID: Hi everyone: I've been digging around the documentation of Nova, Cinder and the encrypted disks feature and I've been a bit stumped on something which I think is a very relevant use case that might not be possible (or it is and I have totally missed it!) It seems that both Cinder and Nova assume that secrets are always stored within the Barbican deployment in the same cloud. This makes a lot of sense however in scenarios where the consumer of an OpenStack cloud wants to operate it without trusting the cloud, they won't be able to have encrypted volumes that make sense, an example: - Create encrypted volume, keys are stored in Barbican - Boot VM using said encrypted volume, Nova pulls keys from Barbican, starts VM.. However, this means that the deployer can at anytime pull down the keys and decrypt things locally to do $bad_things. However, if we had something like any of the following two ideas: - Allow for "run-time" providing secret on boot (maybe something added to the start/boot VM API?) - Allow for pointing towards an external instance of Barbican By using those 2, we allow OpenStack users to operate their VMs securely and allowing them to have control over their keys. If they want to revoke all access, they can shutdown all the VMs and cut access to their key storage management and not worry about someone just pulling them down from the internal Barbican. Hopefully I did a good job explaining this use case and I'm just wondering if this is a thing that's possible at the moment or if we perhaps need to look into it. Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From jaypipes at gmail.com Sun Nov 4 12:01:25 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Sun, 4 Nov 2018 07:01:25 -0500 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: <007518d4-daed-4065-7044-40443564f6cb@gmail.com> On 11/02/2018 03:22 PM, Eric Fried wrote: > All- > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > whereby you can set [compute]resource_provider_association_refresh to > zero and the resource tracker will never* refresh the report client's > provider cache. 
Philosophically, we're removing the "healing" aspect of > the resource tracker's periodic and trusting that placement won't > diverge from whatever's in our cache. (If it does, it's because the op > hit the CLI, in which case they should SIGHUP - see below.) > > *except: > - When we initially create the compute node record and bootstrap its > resource provider. > - When the virt driver's update_provider_tree makes a change, > update_from_provider_tree reflects them in the cache as well as pushing > them back to placement. > - If update_from_provider_tree fails, the cache is cleared and gets > rebuilt on the next periodic. > - If you send SIGHUP to the compute process, the cache is cleared. > > This should dramatically reduce the number of calls to placement from > the compute service. Like, to nearly zero, unless something is actually > changing. > > Can I get some initial feedback as to whether this is worth polishing up > into something real? (It will probably need a bp/spec if so.) > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > [2] https://review.openstack.org/#/c/614886/ > > ========== > Background > ========== > In the Queens release, our friends at CERN noticed a serious spike in > the number of requests to placement from compute nodes, even in a > stable-state cloud. Given that we were in the process of adding a ton of > infrastructure to support sharing and nested providers, this was not > unexpected. Roughly, what was previously: > > @periodic_task: > GET /resource_providers/$compute_uuid > GET /resource_providers/$compute_uuid/inventories > > became more like: > > @periodic_task: > # In Queens/Rocky, this would still just return the compute RP > GET /resource_providers?in_tree=$compute_uuid > # In Queens/Rocky, this would return nothing > GET /resource_providers?member_of=...&required=MISC_SHARES... > for each provider returned above: # i.e. just one in Q/R > GET /resource_providers/$compute_uuid/inventories > GET /resource_providers/$compute_uuid/traits > GET /resource_providers/$compute_uuid/aggregates > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > time, CERN worked around the problem by disabling refreshing entirely. > (The fact that this seems to have worked for them is an encouraging sign > for the proposed code change.) > > We're not actually making use of most of that information, but it sets > the stage for things that we're working on in Stein and beyond, like > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > etc., so removing/reducing the amount of information we look at isn't > really an option strategically. I support your idea of getting rid of the periodic refresh of the cache in the scheduler report client. Much of that was added in order to emulate the original way the resource tracker worked. Most of the behaviour in the original resource tracker (and some of the code still in there for dealing with (surprise!) PCI passthrough devices and NUMA topology) was due to doing allocations on the compute node (the whole claims stuff). We needed to always be syncing the state of the compute_nodes and pci_devices table in the cell database with whatever usage information was being created/modified on the compute nodes [0]. 
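Since a write made against a stale provider generation is rejected by placement with a 409, a cache can be corrected on demand rather than re-read on every periodic. The following is an illustration of that idea only -- it is not the actual SchedulerReportClient interface, and `client` is assumed to be a requests-style HTTP client pointed at the placement endpoint that returns the response object even for error statuses:

class CachedProvider(object):
    """Illustrative cache-on-conflict wrapper, not nova's real code."""

    def __init__(self, client, uuid):
        self.client = client
        self.uuid = uuid
        self._refresh()

    def _refresh(self):
        # One GET, issued only when we know our view is stale.
        self.data = self.client.get(
            '/resource_providers/%s' % self.uuid).json()

    def set_traits(self, traits):
        body = {
            'resource_provider_generation': self.data['generation'],
            'traits': sorted(traits),
        }
        resp = self.client.put(
            '/resource_providers/%s/traits' % self.uuid, json=body)
        if resp.status_code == 409:
            # Someone else (CLI, another service) changed the provider;
            # refresh once and retry, instead of refreshing every
            # periodic whether or not anything changed.
            self._refresh()
            body['resource_provider_generation'] = self.data['generation']
            resp = self.client.put(
                '/resource_providers/%s/traits' % self.uuid, json=body)
        return resp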
All of the "healing" code that's in the resource tracker was basically to deal with "soft delete", migrations that didn't complete or work properly, and, again, to handle allocations becoming out-of-sync because the compute nodes were responsible for allocating (as opposed to the current situation we have where the placement service -- via the scheduler's call to claim_resources() -- is responsible for allocating resources [1]). Now that we have generation markers protecting both providers and consumers, we can rely on those generations to signal to the scheduler report client that it needs to pull fresh information about a provider or consumer. So, there's really no need to automatically and blindly refresh any more. Best, -jay [0] We always need to be syncing those tables because those tables, unlike the placement database's data modeling, couple both inventory AND usage in the same table structure... [1] again, except for PCI devices and NUMA topology, because of the tight coupling in place with the different resource trackers those types of resources use... From mordred at inaugust.com Sun Nov 4 15:12:13 2018 From: mordred at inaugust.com (Monty Taylor) Date: Sun, 4 Nov 2018 09:12:13 -0600 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir Message-ID: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Heya, I've floated a half-baked version of this idea to a few people, but lemme try again with some new words. What if we added support for serving vendor data files from the root of a primary URL as-per RFC 5785. Specifically, support deployers adding a json file to .well-known/openstack/client that would contain what we currently store in the openstacksdk repo and were just discussing splitting out. Then, an end-user could put a url into the 'cloud' parameter. Using vexxhost as an example, if Mohammed served: { "name": "vexxhost", "profile": { "auth_type": "v3password", "auth": { "auth_url": "https://auth.vexxhost.net/v3" }, "regions": [ "ca-ymq-1", "sjc1" ], "identity_api_version": "3", "image_format": "raw", "requires_floating_ip": false } } from https://vexxhost.com/.well-known/openstack/client And then in my local config I did: import openstack conn = openstack.connect( cloud='https://vexxhost.com', username='my-awesome-user', ...) The client could know to go fetch https://vexxhost.com/.well-known/openstack/client to use as the profile information needed for that cloud. If I wanted to configure a clouds.yaml entry, it would look like: clouds: mordred-vexxhost: profile: https://vexxhost.com auth: username: my-awesome-user And I could just conn = openstack.connect(cloud='mordred-vexxhost') The most important part being that we define the well-known structure and interaction. Then we don't need the info in a git repo managed by the publiccloud-wg - each public cloud can manage it itself. But also - non-public clouds can take advantage of being able to define such information for their users too - and can hand a user a simple global entrypoint for discover. As they add regions - or if they decide to switch from global keystone to per-region keystone, they can just update their profile file and all will be good with the world. Of course, it's a convenience, so nothing forces anyone to deploy one. 
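As a rough sketch of what that client-side discovery step could look like (a
hypothetical stdlib-only helper, just to illustrate the interaction - not
actual openstacksdk code):

import json
import urllib.request

def fetch_wellknown_profile(cloud_url):
    # RFC 5785 well-known location proposed above
    url = cloud_url.rstrip('/') + '/.well-known/openstack/client'
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode('utf-8'))

# e.g. fetch_wellknown_profile('https://vexxhost.com') would return the JSON
# document shown earlier, whose "profile" section the SDK could then merge
# into the cloud config it builds from clouds.yaml or connect() arguments.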
For backwards compat, as public clouds we have vendor profiles for start deploying a well-known profile, we can update the baked-in named profile in openstacksdk to simply reference the remote url and over time hopefully there will cease to be any information that's useful in the openstacksdk repo. What do people think? Monty From mnaser at vexxhost.com Sun Nov 4 17:32:23 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 4 Nov 2018 18:32:23 +0100 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: On Sun, Nov 4, 2018 at 4:12 PM Monty Taylor wrote: > > Heya, > > I've floated a half-baked version of this idea to a few people, but > lemme try again with some new words. > > What if we added support for serving vendor data files from the root of > a primary URL as-per RFC 5785. Specifically, support deployers adding a > json file to .well-known/openstack/client that would contain what we > currently store in the openstacksdk repo and were just discussing > splitting out. > > Then, an end-user could put a url into the 'cloud' parameter. > > Using vexxhost as an example, if Mohammed served: > > { > "name": "vexxhost", > "profile": { > "auth_type": "v3password", > "auth": { > "auth_url": "https://auth.vexxhost.net/v3" > }, > "regions": [ > "ca-ymq-1", > "sjc1" > ], > "identity_api_version": "3", > "image_format": "raw", > "requires_floating_ip": false > } > } > > from https://vexxhost.com/.well-known/openstack/client > > And then in my local config I did: > > import openstack > conn = openstack.connect( > cloud='https://vexxhost.com', > username='my-awesome-user', > ...) > > The client could know to go fetch > https://vexxhost.com/.well-known/openstack/client to use as the profile > information needed for that cloud. Mohammed likes this idea and would like to present this for you to hack on: https://vexxhost.com/.well-known/openstack/client > If I wanted to configure a clouds.yaml entry, it would look like: > > clouds: > mordred-vexxhost: > profile: https://vexxhost.com > auth: > username: my-awesome-user > > And I could just > > conn = openstack.connect(cloud='mordred-vexxhost') > > The most important part being that we define the well-known structure > and interaction. Then we don't need the info in a git repo managed by > the publiccloud-wg - each public cloud can manage it itself. But also - > non-public clouds can take advantage of being able to define such > information for their users too - and can hand a user a simple global > entrypoint for discover. As they add regions - or if they decide to > switch from global keystone to per-region keystone, they can just update > their profile file and all will be good with the world. > > Of course, it's a convenience, so nothing forces anyone to deploy one. > > For backwards compat, as public clouds we have vendor profiles for start > deploying a well-known profile, we can update the baked-in named profile > in openstacksdk to simply reference the remote url and over time > hopefully there will cease to be any information that's useful in the > openstacksdk repo. > > What do people think? I really like this idea, the only concern is fallbacks. 
I can imagine that in some arbitrary world things might get restructured on the web and that URL magically disappears, but shifting the responsibility onto the provider rather than the maintainers of this project is a much cleaner alternative, IMHO.

> Monty
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From lijie at unitedstack.com  Mon Nov 5 02:26:40 2018
From: lijie at unitedstack.com (Rambo)
Date: Mon, 5 Nov 2018 10:26:40 +0800
Subject: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot
In-Reply-To: <501142a5-ff4d-9671-d3b9-34f5fd2da01a@redhat.com>
References: <501142a5-ff4d-9671-d3b9-34f5fd2da01a@redhat.com>
Message-ID:

Sorry, I mean using the NFS driver as the cinder-backup driver. I see the remotefs code implements create_volume_from_snapshot [1]; in this function the snapshot.status must be "available". But before that, in the API part, the snapshot.status was changed to the "backing_up" status [2]. Is there something wrong? Can you tell me more about this? Thank you very much.

[1]https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/remotefs.py#L1259
[2]https://github.com/openstack/cinder/blob/master/cinder/backup/api.py#L292

------------------ Original ------------------
From: "Eric Harney";
Date: Fri, Nov 2, 2018 10:00 PM
To: "jsbryant"; "OpenStack Developmen";
Subject: Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

On 11/1/18 4:44 PM, Jay Bryant wrote:
> On Thu, Nov 1, 2018, 10:44 AM Rambo wrote:
>
>> Hi,all
>>
>> Recently, I use the nfs driver as the cinder-backup backend, when I
>> use it to backup the volume snapshot, the result is return the
>> NotImplementedError[1].And the nfs.py doesn't has the
>> create_volume_from_snapshot function. Does the community plan to achieve
>> it which is as nfs as the cinder-backup backend?Can you tell me about
>> this?Thank you very much!
>>
>> Rambo,
>
> The NFS driver doesn't have full snapshot support. I am not sure if that
> function missing was an oversight or not. I would reach out to Eric Harney
> as he implemented that code.
>
> Jay
>

create_volume_from_snapshot is implemented in the NFS driver. It is in the remotefs code that the NFS driver inherits from.

But, I'm not sure I understand what's being asked here -- how is this related to using NFS as the backup backend?

>>
>> [1]
>> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>>
>> Best Regards
>> Rambo

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lijie at unitedstack.com  Mon Nov 5 02:27:51 2018
From: lijie at unitedstack.com (Rambo)
Date: Mon, 5 Nov 2018 10:27:51 +0800
Subject: [openstack-dev] [nova] about live-resize the instance
Message-ID:

Hi, all

I find it important to be able to live-resize an instance in a production environment. We have talked about this for many years and we agreed on it at the Rocky PTG; the author then moved the spec [1] to Stein, but there has been no further information about it since. Is there anyone who can push the spec forward and implement it? Can you tell me more about this? Thank you very much.

[1]https://review.openstack.org/#/c/141219/

Best Regards
Rambo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jichenjc at cn.ibm.com  Mon Nov 5 04:17:53 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Mon, 5 Nov 2018 04:17:53 +0000
Subject: [openstack-dev] [nova] about live-resize the instance
In-Reply-To: References: Message-ID:

An HTML attachment was scrubbed...
URL:

From 270162781 at qq.com  Mon Nov 5 08:01:58 2018
From: 270162781 at qq.com (270162781)
Date: Mon, 5 Nov 2018 16:01:58 +0800
Subject: [openstack-dev] [neutron] Bug deputy report
Message-ID:

Hi all,

I'm zhaobo, I was the bug deputy for the last week and I'm afraid I cannot attend the coming upstream meeting, so I'm sending out this report:

Last week there were some high priority bugs for neutron. What a quiet week. ;-)

Also, some bugs need attention; I list them here:

[High]
Race conditions in neutron_tempest_plugin/scenario/test_security_groups.py
https://bugs.launchpad.net/neutron/+bug/1801306
Tempest CI failure; it is caused by some tempest tests operating on the default SG, which affects other test cases.

Migration causes downtime while doing bulk_pull
https://bugs.launchpad.net/neutron/+bug/1801104
It seems the local cache is refreshed very frequently; also, if the records we want from neutron-server are very large, could we improve the query filter to improve performance of the agent RPC?

[Need Attention]
Network: concurrent issue for create network operation
https://bugs.launchpad.net/neutron/+bug/1800417
This looks like an existing issue on master. I think this must be an issue, but the bug lacks the logs needed to dig deeper, so I hope the reporter can provide more details about it first.

[RFE] Neutron API Server: unexpected behavior with multiple long live clients
https://bugs.launchpad.net/neutron/+bug/1800599
I have changed this bug to a new RFE, as it introduces a new mechanism to make sure each request can be processed to the maximum extent possible, without long waits on the client side.

Thanks,

Best Regards,
ZhaoBo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cjeanner at redhat.com  Mon Nov 5 09:05:03 2018
From: cjeanner at redhat.com (Cédric Jeanneret)
Date: Mon, 5 Nov 2018 10:05:03 +0100
Subject: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks
In-Reply-To: References: Message-ID: <1632c103-0eff-ad35-6dc9-ac0c6f120d69@redhat.com>

On 11/2/18 2:39 PM, Dan Prince wrote:
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word out.

Good thing, it's important to highlight this feature and explain how it works, big thumbs up Dan!
> > The first is 'deploy_steps_tasks' which gives you a means to run > Ansible snippets on each node/role in a stepwise fashion during > deployment. Previously it was only possible to execute puppet or > docker commands where as now that we have deploy_steps_tasks we can > execute ad-hoc ansible in the same manner. I'm wondering if such a thing could be used for the "inflight validations" - i.e. a step to validate a service/container is working as expected once it's deployed, in order to get early failure. For instance, we deploy a rabbitmq container, and right after it's deployed, we'd like to ensure it's actually running and works as expected before going forward in the deploy. Care to have a look at that spec[1] and see if, instead of adding a new "validation_tasks" entry, we could "just" use the "deploy_steps_tasks" with the right step number? That would be really, really cool, and will probably avoid a lot of code in the end :). Thank you! C. [1] https://review.openstack.org/#/c/602007/ > > The second is 'external_deploy_tasks' which allows you to use run > Ansible snippets on the Undercloud during stepwise deployment. This is > probably most useful for driving an external installer but might also > help with some complex tasks that need to originate from a single > Ansible client. > > The only downside I see to these approaches is that both appear to be > implemented with Ansible's default linear strategy. I saw shardy's > comment here [2] that the :free strategy does not yet apparently work > with the any_errors_fatal option. Perhaps we can reach out to someone > in the Ansible community in this regard to improve running these > things in parallel like TripleO used to work with Heat agents. > > This is also how host_prep_tasks is implemented which BTW we should > now get rid of as a duplicate architectural step since we have > deploy_steps_tasks anyway. > > [1] https://review.openstack.org/#/c/614822/ > [2] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From moreira.belmiro.email.lists at gmail.com Mon Nov 5 09:16:26 2018 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Mon, 5 Nov 2018 10:16:26 +0100 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: <007518d4-daed-4065-7044-40443564f6cb@gmail.com> References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> <007518d4-daed-4065-7044-40443564f6cb@gmail.com> Message-ID: Thanks Eric for the patch. This will help keeping placement calls under control. Belmiro On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes wrote: > On 11/02/2018 03:22 PM, Eric Fried wrote: > > All- > > > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > > whereby you can set [compute]resource_provider_association_refresh to > > zero and the resource tracker will never* refresh the report client's > > provider cache. 
Philosophically, we're removing the "healing" aspect of > > the resource tracker's periodic and trusting that placement won't > > diverge from whatever's in our cache. (If it does, it's because the op > > hit the CLI, in which case they should SIGHUP - see below.) > > > > *except: > > - When we initially create the compute node record and bootstrap its > > resource provider. > > - When the virt driver's update_provider_tree makes a change, > > update_from_provider_tree reflects them in the cache as well as pushing > > them back to placement. > > - If update_from_provider_tree fails, the cache is cleared and gets > > rebuilt on the next periodic. > > - If you send SIGHUP to the compute process, the cache is cleared. > > > > This should dramatically reduce the number of calls to placement from > > the compute service. Like, to nearly zero, unless something is actually > > changing. > > > > Can I get some initial feedback as to whether this is worth polishing up > > into something real? (It will probably need a bp/spec if so.) > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > > [2] https://review.openstack.org/#/c/614886/ > > > > ========== > > Background > > ========== > > In the Queens release, our friends at CERN noticed a serious spike in > > the number of requests to placement from compute nodes, even in a > > stable-state cloud. Given that we were in the process of adding a ton of > > infrastructure to support sharing and nested providers, this was not > > unexpected. Roughly, what was previously: > > > > @periodic_task: > > GET /resource_providers/$compute_uuid > > GET /resource_providers/$compute_uuid/inventories > > > > became more like: > > > > @periodic_task: > > # In Queens/Rocky, this would still just return the compute RP > > GET /resource_providers?in_tree=$compute_uuid > > # In Queens/Rocky, this would return nothing > > GET /resource_providers?member_of=...&required=MISC_SHARES... > > for each provider returned above: # i.e. just one in Q/R > > GET /resource_providers/$compute_uuid/inventories > > GET /resource_providers/$compute_uuid/traits > > GET /resource_providers/$compute_uuid/aggregates > > > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > > time, CERN worked around the problem by disabling refreshing entirely. > > (The fact that this seems to have worked for them is an encouraging sign > > for the proposed code change.) > > > > We're not actually making use of most of that information, but it sets > > the stage for things that we're working on in Stein and beyond, like > > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > > etc., so removing/reducing the amount of information we look at isn't > > really an option strategically. > > I support your idea of getting rid of the periodic refresh of the cache > in the scheduler report client. Much of that was added in order to > emulate the original way the resource tracker worked. > > Most of the behaviour in the original resource tracker (and some of the > code still in there for dealing with (surprise!) PCI passthrough devices > and NUMA topology) was due to doing allocations on the compute node (the > whole claims stuff). We needed to always be syncing the state of the > compute_nodes and pci_devices table in the cell database with whatever > usage information was being created/modified on the compute nodes [0]. 
> > All of the "healing" code that's in the resource tracker was basically > to deal with "soft delete", migrations that didn't complete or work > properly, and, again, to handle allocations becoming out-of-sync because > the compute nodes were responsible for allocating (as opposed to the > current situation we have where the placement service -- via the > scheduler's call to claim_resources() -- is responsible for allocating > resources [1]). > > Now that we have generation markers protecting both providers and > consumers, we can rely on those generations to signal to the scheduler > report client that it needs to pull fresh information about a provider > or consumer. So, there's really no need to automatically and blindly > refresh any more. > > Best, > -jay > > [0] We always need to be syncing those tables because those tables, > unlike the placement database's data modeling, couple both inventory AND > usage in the same table structure... > > [1] again, except for PCI devices and NUMA topology, because of the > tight coupling in place with the different resource trackers those types > of resources use... > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Nov 5 09:19:28 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Nov 2018 10:19:28 +0100 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: Monty Taylor wrote: > [...] > What if we added support for serving vendor data files from the root of > a primary URL as-per RFC 5785. Specifically, support deployers adding a > json file to .well-known/openstack/client that would contain what we > currently store in the openstacksdk repo and were just discussing > splitting out. > [...] > What do people think? I love the idea of public clouds serving that file directly, and the user experience you get from it. The only two drawbacks on top of my head would be: - it's harder to discover available compliant openstack clouds from the client. - there is no vetting process, so there may be failures with weird clouds serving half-baked files that people may blame the client tooling for. I still think it's a good idea, as in theory it aligns the incentive of maintaining the file with the most interested stakeholder. It just might need some extra communication to work seamlessly. -- Thierry Carrez (ttx) From mnaser at vexxhost.com Mon Nov 5 09:21:10 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Nov 2018 10:21:10 +0100 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: Sent from my iPhone > On Nov 5, 2018, at 10:19 AM, Thierry Carrez wrote: > > Monty Taylor wrote: >> [...] >> What if we added support for serving vendor data files from the root of a primary URL as-per RFC 5785. Specifically, support deployers adding a json file to .well-known/openstack/client that would contain what we currently store in the openstacksdk repo and were just discussing splitting out. 
>> [...] >> What do people think? > > I love the idea of public clouds serving that file directly, and the user experience you get from it. The only two drawbacks on top of my head would be: > > - it's harder to discover available compliant openstack clouds from the client. > > - there is no vetting process, so there may be failures with weird clouds serving half-baked files that people may blame the client tooling for. > > I still think it's a good idea, as in theory it aligns the incentive of maintaining the file with the most interested stakeholder. It just might need some extra communication to work seamlessly. I’m thinking out loud here but perhaps a simple linter that a cloud provider can run will help them make sure that everything is functional. > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From markus.hentsch at secustack.com Mon Nov 5 09:22:24 2018 From: markus.hentsch at secustack.com (Markus Hentsch) Date: Mon, 5 Nov 2018 10:22:24 +0100 Subject: [openstack-dev] [nova][cinder] Using externally stored keys for encryption In-Reply-To: References: Message-ID: Dear Mohammed, with SecuStack we've been integrating end-to-end (E2E) transfer of secrets into the OpenStack code. From your problem description, it sounds like our implementation would address some of your points. For below explanation, I will refer to those secrets as "keys". Our solution works as follows: - when the user creates an encrypted resource, they may specify to use E2E key transfer instead of Barbican - the resource will be allocated and enter a state where it is waiting for the transmission of the key - the user establishes an E2E relationship with the compute/volume host where the resource has been scheduled - the key is encrypted (asymmetrically) on the user side specifically for this host (using its public key) and transferred through the API to this host - the key reaches the compute/volume host, is decrypted by the host's private key and is then used temporarily for the duration of the resource creation and discarded afterwards Whenever such resource is to be used (instance booted or volume attached), a similar workflow is triggered on-demand that requires the key to be transferred via the E2E channel again. Our solution is complemented by an extension of the Barbican workflow which also allows users to specify secret IDs and manage them manually for encrypted resources instead of having OpenStack handle all of that automatically. This represents a solution that is kind of in-between the current OpenStack and our E2E approach. We have not looked into external Barbican integration yet, though. We do plan to contribute our E2E key transfer and user-centric key control to OpenStack, if we can obtain support for this idea. However, we are currently in the middle of trying to contribute image encryption to OpenStack, which is already proving to be a lengthy process as it involves a lot of different teams. The E2E stuff would be an even bigger change across the components. Unfortunately, we currently don't have the resources to tackle two huge contributions at the same time as it requires a lot of effort getting multiple teams to agree on a single solution. 
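Purely as an illustration of the per-host key wrapping step described above (hypothetical helper and names, using the python 'cryptography' library and assuming an RSA host key; this is not our actual implementation):

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def wrap_key_for_host(host_public_key_pem, disk_key):
    # Encrypt the disk encryption key so that only the scheduled
    # compute/volume host (which holds the matching private key)
    # can unwrap it for the duration of the operation.
    host_key = serialization.load_pem_public_key(
        host_public_key_pem, backend=default_backend())
    return host_key.encrypt(
        disk_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(),
                     label=None))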
Best regards, Markus Hentsch Mohammed Naser wrote: > Hi everyone: > > I've been digging around the documentation of Nova, Cinder and the > encrypted disks feature and I've been a bit stumped on something which > I think is a very relevant use case that might not be possible (or it > is and I have totally missed it!) > > It seems that both Cinder and Nova assume that secrets are always > stored within the Barbican deployment in the same cloud. This makes a > lot of sense however in scenarios where the consumer of an OpenStack > cloud wants to operate it without trusting the cloud, they won't be > able to have encrypted volumes that make sense, an example: > > - Create encrypted volume, keys are stored in Barbican > - Boot VM using said encrypted volume, Nova pulls keys from Barbican, > starts VM.. > > However, this means that the deployer can at anytime pull down the > keys and decrypt things locally to do $bad_things. However, if we had > something like any of the following two ideas: > > - Allow for "run-time" providing secret on boot (maybe something added > to the start/boot VM API?) > - Allow for pointing towards an external instance of Barbican > > By using those 2, we allow OpenStack users to operate their VMs > securely and allowing them to have control over their keys. If they > want to revoke all access, they can shutdown all the VMs and cut > access to their key storage management and not worry about someone > just pulling them down from the internal Barbican. > > Hopefully I did a good job explaining this use case and I'm just > wondering if this is a thing that's possible at the moment or if we > perhaps need to look into it. > > Thanks, > Mohammed > From xiang.edison at gmail.com Mon Nov 5 09:43:48 2018 From: xiang.edison at gmail.com (Edison Xiang) Date: Mon, 5 Nov 2018 17:43:48 +0800 Subject: [openstack-dev] [api] Open API 3.0 for OpenStack API In-Reply-To: <5c1a3f98-2aad-bfe1-45e3-c4ddd8c52615@redhat.com> References: <413d67d8-e4de-51fe-e7cf-8fb6520aed34@redhat.com> <20181009125850.4sj52i7c3mi6m6ay@yuggoth.org> <16397789-98b5-a011-0367-dd5023260870@redhat.com> <20181010131849.ey5yf3zxjtevtnae@yuggoth.org> <5c1a3f98-2aad-bfe1-45e3-c4ddd8c52615@redhat.com> Message-ID: Hi team, I submit a forum [1] named "Cross-project Open API 3.0 support". We can make more discussions about that in this forum in berlin. Feel free to add your ideas here [2]. Welcome to join us. Thanks very much. [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/global-search?t=open+api [2] https://etherpad.openstack.org/p/api-berlin-forum-brainstorming Best Regards, Edison Xiang On Thu, Oct 11, 2018 at 7:48 PM Gilles Dubreuil wrote: > > > On 11/10/18 00:18, Jeremy Stanley wrote: > > On 2018-10-10 13:24:28 +1100 (+1100), Gilles Dubreuil wrote: > > On 09/10/18 23:58, Jeremy Stanley wrote: > > On 2018-10-09 08:52:52 -0400 (-0400), Jim Rollenhagen wrote: > [...] > > It seems to me that a major goal of openstacksdk is to hide > differences between clouds from the user. If the user is meant > to use a GraphQL library themselves, we lose this and the user > needs to figure it out themselves. Did I understand that > correctly? > > This is especially useful where the SDK implements business > logic for common operations like "if the user requested A and > the cloud supports features B+C+D then use those to fulfil the > request, otherwise fall back to using features E+F". > > > The features offered to the user don't have to change, it's just a > different architecture. 
> > The user doesn't have to deal with a GraphQL library, only the > client applications (consuming OpenStack APIs). And there are also > UI tools such as GraphiQL which allow to interact directly with > GraphQL servers. > > > My point was simply that SDKs provide more than a simple translation > of network API calls and feature discovery. There can also be rather > a lot of "business logic" orchestrating multiple primitive API calls > to reach some more complex outcome. The services don't want to embed > this orchestrated business logic themselves, and it makes little > sense to replicate the same algorithms in every single application > which wants to make use of such composite functionality. There are > common actions an application might wish to take which involve > speaking to multiple APIs for different services to make specific > calls in a particular order, perhaps feeding the results of one into > the next. > > Can you explain how GraphQL eliminates the above reasons for an SDK? > > > What I meant is the communication part of any SDK interfacing between > clients and API services can be handled by GraphQL client librairies. > So instead of having to rely on modules (imported or native) to carry the > REST communications, we're dealing with data provided by GraphQL libraries > (which are also modules but standardized as GraphQL is a specification). > So as you mentioned there is still need to provide the data wrap in > objects or any adequate struct to present to the consumers. > > Having a Schema helps both API and clients developers because the data is > clearly typed and graphed. Backend devs can focus on resolving the data for > each node/leaf while the clients can focus on what they need and not how to > get it. > > To relate to $subject, by building the data model (graph) we obtain a > schema and introspection. That's a big saver in term of resources. > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Mon Nov 5 10:47:04 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 5 Nov 2018 11:47:04 +0100 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: Let's also think of removing puppet-tripleo from the base container. It really brings the world-in (and yum updates in CI!) each job and each container! So if we did so, we should then either install puppet-tripleo and co on the host and bind-mount it for the docker-puppet deployment task steps (bad idea IMO), OR use the magical --volumes-from option to mount volumes from some "puppet-config" sidecar container inside each of the containers being launched by docker-puppet tooling. 
On 10/31/18 6:35 PM, Alex Schultz wrote: > > So this is a single layer that is updated once and shared by all the > containers that inherit from it. I did notice the same thing and have > proposed a change in the layering of these packages last night. > > https://review.openstack.org/#/c/614371/ > > In general this does raise a point about dependencies of services and > what the actual impact of adding new ones to projects is. Especially > in the container world where this might be duplicated N times > depending on the number of services deployed. With the move to > containers, much of the sharedness that being on a single host > provided has been lost at a cost of increased bandwidth, memory, and > storage usage. > > Thanks, > -Alex > -- Best regards, Bogdan Dobrelya, Irc #bogdando From cjeanner at redhat.com Mon Nov 5 10:56:51 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 5 Nov 2018 11:56:51 +0100 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: <41b48fc5-3378-10ba-0c46-e7a2a24d2166@redhat.com> On 11/5/18 11:47 AM, Bogdan Dobrelya wrote: > Let's also think of removing puppet-tripleo from the base container. > It really brings the world-in (and yum updates in CI!) each job and each > container! > So if we did so, we should then either install puppet-tripleo and co on > the host and bind-mount it for the docker-puppet deployment task steps > (bad idea IMO), OR use the magical --volumes-from > option to mount volumes from some "puppet-config" sidecar container > inside each of the containers being launched by docker-puppet tooling. And, in addition, I'd rather see the "podman" thingy as a bind-mount, especially since we MUST get the same version in all the calls. > > On 10/31/18 6:35 PM, Alex Schultz wrote: >> >> So this is a single layer that is updated once and shared by all the >> containers that inherit from it. I did notice the same thing and have >> proposed a change in the layering of these packages last night. >> >> https://review.openstack.org/#/c/614371/ >> >> In general this does raise a point about dependencies of services and >> what the actual impact of adding new ones to projects is. Especially >> in the container world where this might be duplicated N times >> depending on the number of services deployed.  With the move to >> containers, much of the sharedness that being on a single host >> provided has been lost at a cost of increased bandwidth, memory, and >> storage usage. >> >> Thanks, >> -Alex >> > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Mon Nov 5 11:52:58 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 5 Nov 2018 11:52:58 +0000 (GMT) Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: <007518d4-daed-4065-7044-40443564f6cb@gmail.com> References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> <007518d4-daed-4065-7044-40443564f6cb@gmail.com> Message-ID: On Sun, 4 Nov 2018, Jay Pipes wrote: > Now that we have generation markers protecting both providers and consumers, > we can rely on those generations to signal to the scheduler report client > that it needs to pull fresh information about a provider or consumer. So, > there's really no need to automatically and blindly refresh any more. I agree with this ^. I've been trying to tease out the issues in this thread and on the associated review [1] and I've decided that much of my confusion comes from the fact that we refer to a thing which is a "cache" in the resource tracker and either trusting it more or not having it at all, and I think that's misleading. To me a "cache" has multiple clients and there's some need for reconciliation and invalidation amongst them. The thing that's in the resource tracker is in one process, changes to it are synchronized; it's merely a data structure. Some words follow where I try to tease things out a bit more (mostly for my own sake, but if it helps other people, great). At the very end there's a bit of list of suggested todos for us to consider. What we have is a data structure which represents the resource tracker and virtdirver's current view on what providers and associates it is aware of. We maintain a boundary between the RT and the virtdriver that means there's "updating" going on that sometimes is a bit fussy to resolve (cf. recent adjustments to allocation ratio handling). In the old way, every now and again we get a bunch of info from placement to confirm that our view is right and try to reconcile things. What we're considering moving towards is only doing that "get a bunch of info from placement" when we fail to write to placement because of a generation conflict. Thus we should only read from placement: * at compute node startup * when a write fails And we should only write to placement: * at compute node startup * when the virt driver tells us something has changed Is that right? If it is not right, can we do that? If not, why not? Because generations change, often, they guard against us making changes in ignorance and allow us to write blindly and only GET when we fail. We've got this everywhere now, let's use it. So, for example, even if something else besides the compute is adding traits, it's cool. We'll fail when we (the compute) try to clobber. Elsewhere in the thread several other topics were raised. A lot of that boil back to "what are we actually trying to do in the periodics?". As is often the case (and appropriately so) what we're trying to do has evolved and accreted in an organic fashion and it is probably time for us to re-evaluate and make sure we're doing the right stuff. The first step is writing that down. That aspect has always been pretty obscure or tribal to me, I presume so for others. So doing a legit audit of that code and the goals is something we should do. Mohammed's comments about allocations getting out of sync are important. 
I agree with him that it would be excellent if we could go back to self-healing those, especially because of the "wait for the computes to automagically populate everything" part he mentions. However, that aspect, while related to this, is not quite the same thing. The management of allocations and the management of inventories (and "associates") is happening from different angles. And finally, even if we turn off these refreshes to lighten the load, placement still needs to be capable of dealing with frequent requests, so we have something to fix there. We need to do the analysis to find out where the cost is and implement some solutions. At the moment we don't know where it is. It could be: * In the database server * In the python code that marshals the data around those calls to the database * In the python code that handles the WSGI interactions * In the web server that is talking to the python code belmoreira's document [2] suggests some avenues of investigation (most CPU time is in user space and not waiting) but we'd need a bit more information to plan any concrete next steps: * what's the web server and which wsgi configuration? * where's the database, if it's different what's the load there? I suspect there's a lot we can do to make our code more correct and efficient. And beyond that there is a great deal of standard run-of- the mill server-side caching and etag handling that we could implement if necessary. That is: treat placement like a web app that needs to be optimized in the usual ways. As Eric suggested at the start of the thread, this kind of investigation is expected and normal. We've not done something wrong. Make it, make it correct, make it fast is the process. We're oscillating somewhere between 2 and 3. So in terms of actions: * I'm pretty well situated to do some deeper profiling and benchmarking of placement to find the elbows in that. * It seems like Eric and Jay are probably best situated to define and refine what should really be going on with the resource tracker and other actions on the compute-node. * We need to have further discussion and investigation on allocations getting out of sync. Volunteers? What else? [1] https://review.openstack.org/#/c/614886/ [2] https://docs.google.com/document/d/1d5k1hA3DbGmMyJbXdVcekR12gyrFTaj_tJdFwdQy-8E/edit -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Mon Nov 5 12:26:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 5 Nov 2018 12:26:05 +0000 (GMT) Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: On Sun, 4 Nov 2018, Monty Taylor wrote: > I've floated a half-baked version of this idea to a few people, but lemme try > again with some new words. > > What if we added support for serving vendor data files from the root of a > primary URL as-per RFC 5785. Specifically, support deployers adding a json > file to .well-known/openstack/client that would contain what we currently > store in the openstacksdk repo and were just discussing splitting out. Sounds like a good plan. I'm still a vexed that we need to know a cloud's primary host, then this URL, then get a url for auth and from there start gathering up information about the services and then their endpoints. All of that seems of one piece to me and there should be one way to do it. 
But in the absence of that, this is a good plan. > What do people think? I think cats are nice and so is this plan. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From bdobreli at redhat.com Mon Nov 5 14:06:34 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 5 Nov 2018 15:06:34 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> Message-ID: <39eba4f7-7267-72db-548b-f88dbdcf18e7@redhat.com> Thank you for a reply, Flavia: > Hi Bogdan > sorry for the late reply - yesterday was a Holiday here in Brazil! > I am afraid I will not be able to engage in this collaboration with > such a short time...we had to have started this initiative a little > earlier... That's understandable. I hoped though a position paper is something we (all who reads that, not just you and me) could achieve in a couple of days, without a lot of research associated. That's a postion paper, which is not expected to contain formal prove or implementation details. The vision for tooling is the hardest part though, and indeed requires some time. So let me please [tl;dr] the outcome of that position paper: * position: given Always Available autonomy support as a starting point, define invariants for both operational and data storage consistency requirements of control/management plane (I've already drafted some in [0]) * vision: show that in the end that data synchronization and conflict resolving solution just boils down to having a causally consistent KVS (either causal+ or causal-RT, or lazy replication based, or anything like that), and cannot be achieved with *only* transactional distributed database, like Galera cluster. The way how to show that is an open question, we could refer to the existing papers (COPS, causal-RT, lazy replication et al) and claim they fit the defined invariants nicely, while transactional DB cannot fit it by design (it's consensus protocols require majority/quorums to operate and being always available for data put/write operations). We probably may omit proving that obvious thing formally? At least for the postion paper... * opportunity: that is basically designing and implementing of such a causally-consistent KVS solution (see COPS library as example) for OpenStack, and ideally, unifying it for PaaS operators (OpenShift/Kubernetes) and tenants willing to host their containerized workloads on PaaS distributed over a Fog Cloud of Edge clouds and leverage its data synchronization and conflict resolving solution as-a-service. Like Amazon dynamo DB, for example, except that fitting the edge cases of another cloud stack :) [0] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md > As for working collaboratively with latex, I would recommend using > overleaf - it is not that difficult and has lots of edition resources > as markdown and track changes, for instance. > Thanks and good luck! > Flavia On 11/2/18 5:32 PM, Bogdan Dobrelya wrote: > Hello folks. > Here is an update for today. I crated a draft [0], and spend some time > with building LaTeX with live-updating for the compiled PDF... The > latter is only informational, if someone wants to contribute, please > follow the instructions listed by the link (hint: you need no to have > any LaTeX experience, only basic markdown knowledge should be enough!) 
> > [0] > https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors > > > On 10/31/18 6:54 PM, Ildiko Vancsa wrote: >> Hi, >> >> Thank you for sharing your proposal. >> >> I think this is a very interesting topic with a list of possible >> solutions some of which this group is also discussing. It would also >> be great to learn more about the IEEE activities and have experience >> about the process in this group on the way forward. >> >> I personally do not have experience with IEEE conferences, but I’m >> happy to help with the paper if I can. >> >> Thanks, >> Ildikó >> >> > > (added from the parallel thread) >>> On 2018. Oct 31., at 19:11, Mike Bayer >>> wrote: >>> >>> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya >> redhat.com> wrote: >>>> >>>> (cross-posting openstack-dev) >>>> >>>> Hello. >>>> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data >>>> consistency requirements and challenges" a position paper [0] (papers >>>> submitting deadline is Nov 8). >>>> >>>> The problem scope is synchronizing control plane and/or >>>> deployments-specific data (not necessary limited to OpenStack) across >>>> remote Edges and central Edge and management site(s). Including the >>>> same >>>> aspects for overclouds and undercloud(s), in terms of TripleO; and >>>> other >>>> deployment tools of your choice. >>>> >>>> Another problem is to not go into different solutions for Edge >>>> deployments management and control planes of edges. And for tenants as >>>> well, if we think of tenants also doing Edge deployments based on Edge >>>> Data Replication as a Service, say for Kubernetes/OpenShift on top of >>>> OpenStack. >>>> >>>> So the paper should name the outstanding problems, define data >>>> consistency requirements and pose possible solutions for >>>> synchronization >>>> and conflicts resolving. Having maximum autonomy cases supported for >>>> isolated sites, with a capability to eventually catch up its >>>> distributed >>>> state. Like global database [1], or something different perhaps (see >>>> causal-real-time consistency model [2],[3]), or even using git. And >>>> probably more than that?.. (looking for ideas) >>> >>> >>> I can offer detail on whatever aspects of the "shared  / global >>> database" idea.  The way we're doing it with Galera for now is all >>> about something simple and modestly effective for the moment, but it >>> doesn't have any of the hallmarks of a long-term, canonical solution, >>> because Galera is not well suited towards being present on many >>> (dozens) of endpoints.     The concept that the StarlingX folks were >>> talking about, that of independent databases that are synchronized >>> using some kind of middleware is potentially more scalable, however I >>> think the best approach would be API-level replication, that is, you >>> have a bunch of Keystone services and there is a process that is >>> regularly accessing the APIs of these keystone services and >>> cross-publishing state amongst all of them.   Clearly the big >>> challenge with that is how to resolve conflicts, I think the answer >>> would lie in the fact that the data being replicated would be of >>> limited scope and potentially consist of mostly or fully >>> non-overlapping records. >>> >>> That is, I think "global database" is a cheap way to get what would be >>> more effective as asynchronous state synchronization between identity >>> services. 
>> >> Recently we’ve been also exploring federation with an IdP (Identity >> Provider) master: >> https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users >> >> >> One of the pros is that it removes the need for synchronization and >> potentially increases scalability. >> >> Thanks, >> Ildikó > > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tnakamura.openstack at gmail.com Mon Nov 5 14:55:07 2018 From: tnakamura.openstack at gmail.com (Tetsuro Nakamura) Date: Mon, 5 Nov 2018 23:55:07 +0900 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> <007518d4-daed-4065-7044-40443564f6cb@gmail.com> Message-ID: Thus we should only read from placement: > * at compute node startup > * when a write fails > And we should only write to placement: > * at compute node startup > * when the virt driver tells us something has changed I agree with this. We could also prepare an interface for operators/other-projects to force nova to pull fresh information from placement and put it into its cache in order to avoid predictable conflicts. Is that right? If it is not right, can we do that? If not, why not? The same question from me. Refreshing periodically strategy might be now an optional optimization for smaller clouds? 2018年11月5日(月) 20:53 Chris Dent : > On Sun, 4 Nov 2018, Jay Pipes wrote: > > > Now that we have generation markers protecting both providers and > consumers, > > we can rely on those generations to signal to the scheduler report > client > > that it needs to pull fresh information about a provider or consumer. > So, > > there's really no need to automatically and blindly refresh any more. > > I agree with this ^. > > I've been trying to tease out the issues in this thread and on the > associated review [1] and I've decided that much of my confusion > comes from the fact that we refer to a thing which is a "cache" in > the resource tracker and either trusting it more or not having it at > all, and I think that's misleading. To me a "cache" has multiple > clients and there's some need for reconciliation and invalidation > amongst them. The thing that's in the resource tracker is in one > process, changes to it are synchronized; it's merely a data structure. > > Some words follow where I try to tease things out a bit more (mostly > for my own sake, but if it helps other people, great). At the very > end there's a bit of list of suggested todos for us to consider. > > What we have is a data structure which represents the resource > tracker and virtdirver's current view on what providers and > associates it is aware of. We maintain a boundary between the RT and > the virtdriver that means there's "updating" going on that sometimes > is a bit fussy to resolve (cf. recent adjustments to allocation > ratio handling). > > In the old way, every now and again we get a bunch of info from > placement to confirm that our view is right and try to reconcile > things. > > What we're considering moving towards is only doing that "get a > bunch of info from placement" when we fail to write to placement > because of a generation conflict. > > Thus we should only read from placement: > > * at compute node startup > * when a write fails > > And we should only write to placement: > > * at compute node startup > * when the virt driver tells us something has changed > > Is that right? If it is not right, can we do that? If not, why not? 
> > Because generations change, often, they guard against us making > changes in ignorance and allow us to write blindly and only GET when > we fail. We've got this everywhere now, let's use it. So, for > example, even if something else besides the compute is adding > traits, it's cool. We'll fail when we (the compute) try to clobber. > > Elsewhere in the thread several other topics were raised. A lot of > that boil back to "what are we actually trying to do in the > periodics?". As is often the case (and appropriately so) what we're > trying to do has evolved and accreted in an organic fashion and it > is probably time for us to re-evaluate and make sure we're doing the > right stuff. The first step is writing that down. That aspect has > always been pretty obscure or tribal to me, I presume so for others. > So doing a legit audit of that code and the goals is something we > should do. > > Mohammed's comments about allocations getting out of sync are > important. I agree with him that it would be excellent if we could > go back to self-healing those, especially because of the "wait for > the computes to automagically populate everything" part he mentions. > However, that aspect, while related to this, is not quite the same > thing. The management of allocations and the management of > inventories (and "associates") is happening from different angles. > > And finally, even if we turn off these refreshes to lighten the > load, placement still needs to be capable of dealing with frequent > requests, so we have something to fix there. We need to do the > analysis to find out where the cost is and implement some solutions. > At the moment we don't know where it is. It could be: > > * In the database server > * In the python code that marshals the data around those calls to > the database > * In the python code that handles the WSGI interactions > * In the web server that is talking to the python code > > belmoreira's document [2] suggests some avenues of investigation > (most CPU time is in user space and not waiting) but we'd need a bit > more information to plan any concrete next steps: > > * what's the web server and which wsgi configuration? > * where's the database, if it's different what's the load there? > > I suspect there's a lot we can do to make our code more correct and > efficient. And beyond that there is a great deal of standard run-of- > the mill server-side caching and etag handling that we could > implement if necessary. That is: treat placement like a web app that > needs to be optimized in the usual ways. > > As Eric suggested at the start of the thread, this kind of > investigation is expected and normal. We've not done something > wrong. Make it, make it correct, make it fast is the process. > We're oscillating somewhere between 2 and 3. > > So in terms of actions: > > * I'm pretty well situated to do some deeper profiling and > benchmarking of placement to find the elbows in that. > > * It seems like Eric and Jay are probably best situated to define > and refine what should really be going on with the resource > tracker and other actions on the compute-node. > > * We need to have further discussion and investigation on > allocations getting out of sync. Volunteers? > > What else? 
> > [1] https://review.openstack.org/#/c/614886/ > [2] > https://docs.google.com/document/d/1d5k1hA3DbGmMyJbXdVcekR12gyrFTaj_tJdFwdQy-8E/edit > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: > @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Nov 5 15:04:04 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 5 Nov 2018 08:04:04 -0700 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage In-Reply-To: References: <1540915417.449870.1559798696.1CFB3E75@webmail.messagingengine.com> <1077343a-b708-4fa2-0f44-2575363419e9@nemebean.com> <1540923927.492007.1559968608.2F968E3E@webmail.messagingengine.com> Message-ID: On Mon, Nov 5, 2018 at 3:47 AM Bogdan Dobrelya wrote: > > Let's also think of removing puppet-tripleo from the base container. > It really brings the world-in (and yum updates in CI!) each job and each > container! > So if we did so, we should then either install puppet-tripleo and co on > the host and bind-mount it for the docker-puppet deployment task steps > (bad idea IMO), OR use the magical --volumes-from > option to mount volumes from some "puppet-config" sidecar container > inside each of the containers being launched by docker-puppet tooling. > This does bring an interesting point as we also include this in overcloud-full. I know Dan had a patch to stop using the puppet-tripleo from the host[0] which is the opposite of this. While these yum updates happen a bunch in CI, they aren't super large updates. But yes I think we need to figure out the correct way forward with these packages. Thanks, -Alex [0] https://review.openstack.org/#/c/550848/ > On 10/31/18 6:35 PM, Alex Schultz wrote: > > > > So this is a single layer that is updated once and shared by all the > > containers that inherit from it. I did notice the same thing and have > > proposed a change in the layering of these packages last night. > > > > https://review.openstack.org/#/c/614371/ > > > > In general this does raise a point about dependencies of services and > > what the actual impact of adding new ones to projects is. Especially > > in the container world where this might be duplicated N times > > depending on the number of services deployed. With the move to > > containers, much of the sharedness that being on a single host > > provided has been lost at a cost of increased bandwidth, memory, and > > storage usage. > > > > Thanks, > > -Alex > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando From mriedemos at gmail.com Mon Nov 5 15:17:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 09:17:05 -0600 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: On 11/4/2018 4:22 AM, Mohammed Naser wrote: > Just for information sake, a clean state cloud which had no reported issues > over maybe a period of 2-3 months already has 4 allocations which are > incorrect and 12 allocations pointing to the wrong resource provider, so I > think this comes does to committing to either "self-healing" to fix those > issues or not. 
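For reference, the cross-check behind numbers like those can be sketched roughly as below. It is only a sketch: provider_uuid_for_host() and allocations_for() are assumed wrappers around GET /resource_providers?name=<hostname> and GET /allocations/{consumer_uuid}, not real library calls, and instances that are legitimately mid-migration will show up as false positives.

    # Sketch: flag instances whose allocations do not include the resource
    # provider of the compute node they are currently running on.
    def audit_allocations(nova, provider_uuid_for_host, allocations_for):
        suspect = []
        for server in nova.servers.list(search_opts={'all_tenants': True}):
            host = getattr(server, 'OS-EXT-SRV-ATTR:host', None)
            if not host:
                continue
            expected_rp = provider_uuid_for_host(host)
            allocs = allocations_for(server.id)  # {rp_uuid: {'resources': {...}}}
            if not allocs:
                suspect.append((server.id, 'no allocations at all'))
            elif expected_rp not in allocs:
                suspect.append(
                    (server.id, 'allocated against %s' % sorted(allocs)))
        return suspect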
Is this running Rocky or an older release? Have you dug into any of the operations around these instances to determine what might have gone wrong? For example, was a live migration performed recently on these instances and if so, did it fail? How about evacuations (rebuild from a down host). By "4 allocations which are incorrect" I assume that means they are pointing at the correct compute node resource provider but the values for allocated VCPU, MEMORY_MB and DISK_GB is wrong? If so, how do the allocations align with old/new flavors used to resize the instance? Did the resize fail? Are there mixed compute versions at all, i.e. are you moving instances around during a rolling upgrade? -- Thanks, Matt From mriedemos at gmail.com Mon Nov 5 15:18:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 09:18:07 -0600 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> <007518d4-daed-4065-7044-40443564f6cb@gmail.com> Message-ID: On 11/5/2018 5:52 AM, Chris Dent wrote: > * We need to have further discussion and investigation on >   allocations getting out of sync. Volunteers? This is something I've already spent a lot of time on with the heal_allocations CLI, and have already started asking mnaser questions about this elsewhere in the thread. -- Thanks, Matt From mriedemos at gmail.com Mon Nov 5 16:05:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 10:05:53 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-23 Update Message-ID: <1ac2c057-7760-51a3-8143-58ec1780d51e@gmail.com> There is not much news this week. There are several open changes which add the base command framework to projects [1]. Those need reviews from the related core teams. gmann and I have been trying to go through them first to make sure they are ready for core review. There is one neutron change to note [2] which adds an extension point for neutron stadium projects (and ML2 plugins?) to hook in their own upgrade checks. Given the neutron architecture, this makes sense. My only worry is about making sure the interface is clearly defined, but I suspect this isn't the first time the neutron team has had to deal with something like this. [1] https://review.openstack.org/#/q/topic:upgrade-checkers+status:open [2] https://review.openstack.org/#/c/615196/ -- Thanks, Matt From dtantsur at redhat.com Mon Nov 5 16:18:40 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 5 Nov 2018 17:18:40 +0100 Subject: [openstack-dev] [all] 2019 summit during May holidays? Message-ID: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Hi all, Not sure how official the information about the next summit is, but it's on the web site [1], so I guess worth asking.. Are we planning for the summit to overlap with the May holidays? The 1st of May is a holiday in big part of the world. We ask people to skip it in addition to 3+ weekend days they'll have to spend working and traveling. To make it worse, 1-3 May are holidays in Russia this time. To make it even worse than worse, the week of 29th is the Golden Week in Japan [2]. Was it considered? Is it possible to move the days to less conflicting time (mid-May maybe)? 
Dmitry [1] https://www.openstack.org/summit/denver-2019/ [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan) From fm577c at att.com Mon Nov 5 16:37:27 2018 From: fm577c at att.com (MONTEIRO, FELIPE C) Date: Mon, 5 Nov 2018 16:37:27 +0000 Subject: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core In-Reply-To: References: <7D5E803080EF7047850D309B333CB94E22EBA021@GAALPA1MSGUSRBI.ITServices.sbc.com> <1669e0a173d.c3a7d12643083.6400498436645570362@ghanshyammann.com> <7D5E803080EF7047850D309B333CB94E22EC70F3@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: <7D5E803080EF7047850D309B333CB94E22ECBAD3@GAALPA1MSGUSRBI.ITServices.sbc.com> Since there have only been approvals for Sergey and Mykola for Patrole core, welcome to the team! Felipe > -----Original Message----- > From: BARTRA, RICK > Sent: Monday, October 29, 2018 2:56 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and > Mykola Yakovliev for Patrole core > > *** Security Advisory: This Message Originated Outside of AT&T ***. > Reference http://cso.att.com/EmailSecurity/IDSP.html for more > information. > > +1 for both of them as well. > > > > On 10/29/18, 2:54 PM, "MONTEIRO, FELIPE C" wrote: > > > > > > > > -----Original Message----- > > From: Ghanshyam Mann [mailto:gmann at ghanshyammann.com] > > Sent: Monday, October 22, 2018 7:09 PM > > To: OpenStack Development Mailing List \ dev at lists.openstack.org> > > Subject: Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and > Mykola Yakovliev for Patrole core > > > > +1 for both of them. They have been doing great work in Patrole and will > be good addition in team. > > > > -gmann > > > > > > ---- On Tue, 23 Oct 2018 03:34:51 +0900 MONTEIRO, FELIPE C > wrote ---- > > > > > > Hi, > > > > > > I would like to nominate Sergey Vilgelm and Mykola Yakovliev for > Patrole core as they have both done excellent work the past cycle in > improving the Patrole framework as well as increasing Neutron Patrole test > coverage, which includes various Neutron plugins/extensions as well like > fwaas. I believe they will both make an excellent addition to the Patrole core > team. > > > > > > Please vote with a +1/-1 for the nomination, which will stay open for > one week. 
> > > > > > Felipe > > > > ______________________________________________________________ > ____________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > > > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=Hr9uSwCAFUivOWpV_I3WfWWX2j2FaDOJgydFtf1xADs&s=tM- > 1KJHy12lSqbDfRmZNuc_dgRsAjBqLMZchmEVHGEo&e= > > > > > > > > > > > > ______________________________________________________________ > ____________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=Hr9uSwCAFUivOWpV_I3WfWWX2j2FaDOJgydFtf1xADs&s=tM- > 1KJHy12lSqbDfRmZNuc_dgRsAjBqLMZchmEVHGEo&e= > > > > > > ______________________________________________________________ > ____________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=VUGWFp3_1Kffqqftl0BNEQGU0tY7YI6cAPQCOO6l4OA&s=- > vCce7mua2bf0wEyUxPbuGJLmhhaV8Geu3ImAgP4MHA&e= From kgiusti at gmail.com Mon Nov 5 16:42:08 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 5 Nov 2018 11:42:08 -0500 Subject: [openstack-dev] [oslo] No complains about rabbitmq SSL problems: could we have this in the logs? In-Reply-To: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> References: <08C09D03-132D-406E-AFDE-7845B2D55B5F@vexxhost.com> Message-ID: Hi Mohammed, What release of openstack are you using? (ocata, pike, etc) Also just to confirm my understanding: you do see the SSL connections come up, but after some time they 'hang' - what do you mean by 'hang'? Do the connections drop? Or do the connections remain up but you start seeing messages (RPC calls) time out? thanks, On Wed, Oct 31, 2018 at 9:40 AM Mohammed Naser wrote: > For what it’s worth: I ran into the same issue. I think the problem lies > a bit deeper because it’s a problem with kombu as when debugging I saw that > Oslo messaging tried to connect and hung after. > > Sent from my iPhone > > > On Oct 31, 2018, at 2:29 PM, Thomas Goirand wrote: > > > > Hi, > > > > It took me a long long time to figure out that my SSL setup was wrong > > when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo > > (or heat itself) never warn me that something was wrong, I just got > > nothing working, and no log at all. > > > > I'm sure I wouldn't be the only one happy about having this type of > > problems being yelled out loud in the logs. Right now, it does work if I > > turn off SSL, though I'm still not sure what's wrong in my setup, and > > I'm given no clue if the issue is on rabbitmq-server or on the client > > side (ie: heat, in my current case). > > > > Just a wishlist... 
:) > > Cheers, > > > > Thomas Goirand (zigo) > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Nov 5 16:43:25 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 10:43:25 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] FYI on "TypeError: Message objects do not support addition." errors Message-ID: <231d6a67-e114-a097-7524-57e264d8884e@gmail.com> If you are seeing this error when implementing and running the upgrade check command in your project: Traceback (most recent call last): File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", line 184, in main return conf.command.action_fn() File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", line 134, in check print(t) File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 237, in __str__ return self.__unicode__() File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 243, in __unicode__ return self.get_string() File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 995, in get_string lines.append(self._stringify_header(options)) File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 1066, in _stringify_header bits.append(" " * lpad + self._justify(fieldname, width, self._align[field]) + " " * rpad) File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", line 187, in _justify return text + excess * " " File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_i18n/_message.py", line 230, in __add__ raise TypeError(msg) TypeError: Message objects do not support addition. It is due to calling oslo_i18n.enable_lazy() somewhere in the command import path. That should be removed from the project since lazy translation is not supported in openstack and as an effort was abandoned several years ago. It is probably still called in a lot of "big tent/stackforge" projects because of initially copying it from the more core projects. Anyway, just remove it. I'm talking with the oslo team about deprecating that interface so projects don't mistakenly use it and expect great things to happen. -- Thanks, Matt From mriedemos at gmail.com Mon Nov 5 17:28:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 11:28:02 -0600 Subject: [openstack-dev] [nova] about live-resize the instance In-Reply-To: References: Message-ID: <267b0eff-15ea-23bd-7bf8-f319cda79790@gmail.com> On 11/4/2018 10:17 PM, Chen CH Ji wrote: > Yes, this has been discussed for long time and If I remember this > correctly seems S PTG also had some discussion on it (maybe public Cloud > WG ? 
), Claudiu has been pushing this for several cycles and he actually > had some code at [1] but no additional progress there... > [1] > https://review.openstack.org/#/q/status:abandoned+topic:bp/instance-live-resize It's a question of priorities. It's a complicated change and low priority, in my opinion. We've said several times before that we'd do it, but there are a lot of other higher priority efforts taking the attention of the core team. Getting agreement on the spec is the first step and then the runways process should be used to deal with actual code reviews, but I think the spec review has stalled (I know I am guilty of not looking at the latest updates to the spec). -- Thanks, Matt From mriedemos at gmail.com Mon Nov 5 17:30:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 11:30:38 -0600 Subject: [openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova In-Reply-To: References: Message-ID: <16a44253-620d-9f22-e1b1-b36683f0f15d@gmail.com> On 11/2/2018 3:47 AM, Andreas Scheuring wrote: > Dear Nova Community, > I want to announce the new focal point for Nova s390x libvirt/kvm. > > Please welcome "Cathy Zhang” to the Nova team. She and her team will be responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and any s390x specific code in nova and os-brick. > I personally took a new opportunity already a few month ago but kept maintaining the CI as good as possible. With new manpower we can hopefully contribute more to the community again. > > You can reach her via > * email:bjzhjing at linux.vnet.ibm.com > * IRC: Cathyz > > Cathy, I wish you and your team all the best for this exciting role! I also want to say thank you for the last years. It was a great time, I learned a lot from you all, will miss it! > > Cheers, > > Andreas (irc: scheuran) > > > [1]https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI Welcome Cathy. Andreas - thanks for the update and good luck on the new position. -- Thanks, Matt From bdobreli at redhat.com Mon Nov 5 17:50:32 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 5 Nov 2018 18:50:32 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: <39eba4f7-7267-72db-548b-f88dbdcf18e7@redhat.com> References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> <39eba4f7-7267-72db-548b-f88dbdcf18e7@redhat.com> Message-ID: <7e97e280-78e4-fe72-7068-49c981d84048@redhat.com> Update: I have yet found co-authors, I'll keep drafting that position paper [0],[1]. Just did some baby steps so far. I'm open for feedback and contributions! PS. Deadline is Nov 9 03:00 UTC, but *may be* it will be extended, if the event chairs decide to do so. Fingers crossed. [0] https://github.com/bogdando/papers-ieee#in-the-current-development-looking-for-co-authors [1] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf On 11/5/18 3:06 PM, Bogdan Dobrelya wrote: > Thank you for a reply, Flavia: > >> Hi Bogdan >> sorry for the late reply - yesterday was a Holiday here in Brazil! >> I am afraid I will not be able to engage in this collaboration with >> such a short time...we had to have started this initiative a little >> earlier... > > That's understandable. > > I hoped though a position paper is something we (all who reads that, not > just you and me) could achieve in a couple of days, without a lot of > research associated. 
That's a postion paper, which is not expected to > contain formal prove or implementation details. The vision for tooling > is the hardest part though, and indeed requires some time. > > So let me please [tl;dr] the outcome of that position paper: > > * position: given Always Available autonomy support as a starting point, >   define invariants for both operational and data storage consistency >   requirements of control/management plane (I've already drafted some in >   [0]) > > * vision: show that in the end that data synchronization and conflict >   resolving solution just boils down to having a causally >   consistent KVS (either causal+ or causal-RT, or lazy replication >   based, or anything like that), and cannot be achieved with *only* >   transactional distributed database, like Galera cluster. The way how >   to show that is an open question, we could refer to the existing >   papers (COPS, causal-RT, lazy replication et al) and claim they fit >   the defined invariants nicely, while transactional DB cannot fit it >   by design (it's consensus protocols require majority/quorums to >   operate and being always available for data put/write operations). >   We probably may omit proving that obvious thing formally? At least for >   the postion paper... > > * opportunity: that is basically designing and implementing of such a >   causally-consistent KVS solution (see COPS library as example) for >   OpenStack, and ideally, unifying it for PaaS operators >   (OpenShift/Kubernetes) and tenants willing to host their containerized >   workloads on PaaS distributed over a Fog Cloud of Edge clouds and >   leverage its data synchronization and conflict resolving solution >   as-a-service. Like Amazon dynamo DB, for example, except that fitting >   the edge cases of another cloud stack :) > > [0] > https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md > >> As for working collaboratively with latex, I would recommend using >> overleaf - it is not that difficult and has lots of edition resources >> as markdown and track changes, for instance. >> Thanks and good luck! >> Flavia > > > > On 11/2/18 5:32 PM, Bogdan Dobrelya wrote: >> Hello folks. >> Here is an update for today. I crated a draft [0], and spend some time >> with building LaTeX with live-updating for the compiled PDF... The >> latter is only informational, if someone wants to contribute, please >> follow the instructions listed by the link (hint: you need no to have >> any LaTeX experience, only basic markdown knowledge should be enough!) >> >> [0] >> https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors >> >> >> On 10/31/18 6:54 PM, Ildiko Vancsa wrote: >>> Hi, >>> >>> Thank you for sharing your proposal. >>> >>> I think this is a very interesting topic with a list of possible >>> solutions some of which this group is also discussing. It would also >>> be great to learn more about the IEEE activities and have experience >>> about the process in this group on the way forward. >>> >>> I personally do not have experience with IEEE conferences, but I’m >>> happy to help with the paper if I can. >>> >>> Thanks, >>> Ildikó >>> >>> >> >> (added from the parallel thread) >>>> On 2018. Oct 31., at 19:11, Mike Bayer >>>> wrote: >>>> >>>> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya >>> redhat.com> wrote: >>>>> >>>>> (cross-posting openstack-dev) >>>>> >>>>> Hello. 
>>>>> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds data >>>>> consistency requirements and challenges" a position paper [0] (papers >>>>> submitting deadline is Nov 8). >>>>> >>>>> The problem scope is synchronizing control plane and/or >>>>> deployments-specific data (not necessary limited to OpenStack) across >>>>> remote Edges and central Edge and management site(s). Including the >>>>> same >>>>> aspects for overclouds and undercloud(s), in terms of TripleO; and >>>>> other >>>>> deployment tools of your choice. >>>>> >>>>> Another problem is to not go into different solutions for Edge >>>>> deployments management and control planes of edges. And for tenants as >>>>> well, if we think of tenants also doing Edge deployments based on Edge >>>>> Data Replication as a Service, say for Kubernetes/OpenShift on top of >>>>> OpenStack. >>>>> >>>>> So the paper should name the outstanding problems, define data >>>>> consistency requirements and pose possible solutions for >>>>> synchronization >>>>> and conflicts resolving. Having maximum autonomy cases supported for >>>>> isolated sites, with a capability to eventually catch up its >>>>> distributed >>>>> state. Like global database [1], or something different perhaps (see >>>>> causal-real-time consistency model [2],[3]), or even using git. And >>>>> probably more than that?.. (looking for ideas) >>>> >>>> >>>> I can offer detail on whatever aspects of the "shared  / global >>>> database" idea.  The way we're doing it with Galera for now is all >>>> about something simple and modestly effective for the moment, but it >>>> doesn't have any of the hallmarks of a long-term, canonical solution, >>>> because Galera is not well suited towards being present on many >>>> (dozens) of endpoints.     The concept that the StarlingX folks were >>>> talking about, that of independent databases that are synchronized >>>> using some kind of middleware is potentially more scalable, however I >>>> think the best approach would be API-level replication, that is, you >>>> have a bunch of Keystone services and there is a process that is >>>> regularly accessing the APIs of these keystone services and >>>> cross-publishing state amongst all of them.   Clearly the big >>>> challenge with that is how to resolve conflicts, I think the answer >>>> would lie in the fact that the data being replicated would be of >>>> limited scope and potentially consist of mostly or fully >>>> non-overlapping records. >>>> >>>> That is, I think "global database" is a cheap way to get what would be >>>> more effective as asynchronous state synchronization between identity >>>> services. >>> >>> Recently we’ve been also exploring federation with an IdP (Identity >>> Provider) master: >>> https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users >>> >>> >>> One of the pros is that it removes the need for synchronization and >>> potentially increases scalability. >>> >>> Thanks, >>> Ildikó >> >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From mriedemos at gmail.com Mon Nov 5 17:51:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 11:51:57 -0600 Subject: [openstack-dev] Dropping lazy translation support Message-ID: This is a follow up to a dev ML email [1] where I noticed that some implementations of the upgrade-checkers goal were failing because some projects still use the oslo_i18n.enable_lazy() hook for lazy log message translation (and maybe API responses?). 
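The failure mode is easy to reproduce in isolation. A minimal sketch (the translation domain below is made up) of what happens once lazy translation is enabled and the resulting Message objects hit ordinary string handling, such as the prettytable rendering in the upgrade check command:

    import oslo_i18n

    oslo_i18n.enable_lazy()
    _ = oslo_i18n.TranslatorFactory(domain='myproject').primary

    title = _('Upgrade check results')
    # With lazy translation on, _() returns an oslo_i18n Message instead of
    # a plain string, and concatenation raises:
    # TypeError: Message objects do not support addition.
    print(title + '!')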
The very old blueprints related to this can be found here [2][3][4]. If memory serves me correctly from my time working at IBM on this, this was needed to: 1. Generate logs translated in other languages. 2. Return REST API responses if the "Accept-Language" header was used and a suitable translation existed for that language. #1 is a dead horse since I think at least the Ocata summit when we agreed to no longer translate logs since no one used them. #2 is probably something no one knows about. I can't find end-user documentation about it anywhere. It's not tested and therefore I have no idea if it actually works anymore. I would like to (1) deprecate the oslo_i18n.enable_lazy() function so new projects don't use it and (2) start removing the enable_lazy() usage from existing projects like keystone, glance and cinder. Are there any users, deployments or vendor distributions that still rely on this feature? If so, please speak up now. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html [2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages [3] https://blueprints.launchpad.net/nova/+spec/i18n-messages [4] https://blueprints.launchpad.net/nova/+spec/user-locale-api -- Thanks, Matt From dprince at redhat.com Mon Nov 5 18:26:00 2018 From: dprince at redhat.com (Dan Prince) Date: Mon, 5 Nov 2018 13:26:00 -0500 Subject: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks In-Reply-To: <1632c103-0eff-ad35-6dc9-ac0c6f120d69@redhat.com> References: <1632c103-0eff-ad35-6dc9-ac0c6f120d69@redhat.com> Message-ID: On Mon, Nov 5, 2018 at 4:06 AM Cédric Jeanneret wrote: > > On 11/2/18 2:39 PM, Dan Prince wrote: > > I pushed a patch[1] to update our containerized deployment > > architecture docs yesterday. There are 2 new fairly useful sections we > > can leverage with TripleO's stepwise deployment. They appear to be > > used somewhat sparingly so I wanted to get the word out. > > Good thing, it's important to highlight this feature and explain how it > works, big thumb up Dan! > > > > > The first is 'deploy_steps_tasks' which gives you a means to run > > Ansible snippets on each node/role in a stepwise fashion during > > deployment. Previously it was only possible to execute puppet or > > docker commands where as now that we have deploy_steps_tasks we can > > execute ad-hoc ansible in the same manner. > > I'm wondering if such a thing could be used for the "inflight > validations" - i.e. a step to validate a service/container is working as > expected once it's deployed, in order to get early failure. > For instance, we deploy a rabbitmq container, and right after it's > deployed, we'd like to ensure it's actually running and works as > expected before going forward in the deploy. > > Care to have a look at that spec[1] and see if, instead of adding a new > "validation_tasks" entry, we could "just" use the "deploy_steps_tasks" > with the right step number? That would be really, really cool, and will > probably avoid a lot of code in the end :). It could work fine I think. As deploy-steps-tasks runs before the "common container/baremetal" actions special care would need to be taken so that validations for a containers startup occur at the beginning of the next step. So a container started at step 2 would be validated early in step 3. This may also require us to have a "post" deploy_steps_tasks" iteration so that we can validate late starting containers. 
If if we use the more generic deploy_steps_tasks section we'd probably rely on conventions to always add Ansible tags onto the validation tasks. These could be useful for those wanting to selectively execute them externally (not sure if that was part of your spec but I could see someone wanting this). Dan > > Thank you! > > C. > > [1] https://review.openstack.org/#/c/602007/ > > > > > The second is 'external_deploy_tasks' which allows you to use run > > Ansible snippets on the Undercloud during stepwise deployment. This is > > probably most useful for driving an external installer but might also > > help with some complex tasks that need to originate from a single > > Ansible client. > > > > The only downside I see to these approaches is that both appear to be > > implemented with Ansible's default linear strategy. I saw shardy's > > comment here [2] that the :free strategy does not yet apparently work > > with the any_errors_fatal option. Perhaps we can reach out to someone > > in the Ansible community in this regard to improve running these > > things in parallel like TripleO used to work with Heat agents. > > > > This is also how host_prep_tasks is implemented which BTW we should > > now get rid of as a duplicate architectural step since we have > > deploy_steps_tasks anyway. > > > > [1] https://review.openstack.org/#/c/614822/ > > [2] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mnaser at vexxhost.com Mon Nov 5 18:28:01 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Nov 2018 19:28:01 +0100 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: On Mon, Nov 5, 2018 at 4:17 PM Matt Riedemann wrote: > > On 11/4/2018 4:22 AM, Mohammed Naser wrote: > > Just for information sake, a clean state cloud which had no reported issues > > over maybe a period of 2-3 months already has 4 allocations which are > > incorrect and 12 allocations pointing to the wrong resource provider, so I > > think this comes does to committing to either "self-healing" to fix those > > issues or not. > > Is this running Rocky or an older release? In this case, this is inside a Queens cloud, I can run the same script on a Rocky cloud too. > Have you dug into any of the operations around these instances to > determine what might have gone wrong? For example, was a live migration > performed recently on these instances and if so, did it fail? How about > evacuations (rebuild from a down host). 
To be honest, I have not, however, I suspect a lot of those happen from the fact that it is possible that the service which makes the claim is not the same one that deletes it I'm not sure if this is something that's possible but say the compute2 makes a claim for migrating to compute1 but something fails there, the revert happens in compute1 but compute1 is already borked so it doesn't work This isn't necessarily the exact case that's happening but it's a summary of what I believe happens. > By "4 allocations which are incorrect" I assume that means they are > pointing at the correct compute node resource provider but the values > for allocated VCPU, MEMORY_MB and DISK_GB is wrong? If so, how do the > allocations align with old/new flavors used to resize the instance? Did > the resize fail? The allocated flavours usually are not wrong, they are simply associated to the wrong resource provider (so it feels like failed migration or resize). > Are there mixed compute versions at all, i.e. are you moving instances > around during a rolling upgrade? Nope > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From cboylan at sapwetik.org Mon Nov 5 18:36:27 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 05 Nov 2018 10:36:27 -0800 Subject: [openstack-dev] Community Infrastructure Berlin Summit Onboarding Session Message-ID: <1541442987.3761150.1566423440.2A7B1A3E@webmail.messagingengine.com> Hello everyone, My apologies for cross posting but wanted to make sure the various developer groups saw this. Rather than use the Infrastructure Onboarding session in Berlin [0] for infrastructure sysadmin/developer onboarding, I thought we could use the time for user onboarding. We've got quite a few new groups interacting with us recently, and it would probably be useful to have a session on what we do, how people can take advantage of this, and so on. I've been brainstorming ideas on this etherpad [1]. If you think you'll attend the session and find any of these subjects to be useful please +1 them. Also feel free to add additional topics. I expect this will be an informal session that directly targets the interests of those attending. Please do drop by if you have any interest in using this infrastructure at all. This is your chance to better understand Zuul job configuration, the test environments themselves, the metrics and data we collect, and basically anything else related to the community developer infrastructure. 
[0] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22950/infrastructure-project-onboarding [1] https://etherpad.openstack.org/p/openstack-infra-berlin-onboarding Hope to see you there, Clark From melwittt at gmail.com Mon Nov 5 18:52:49 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 5 Nov 2018 10:52:49 -0800 Subject: [openstack-dev] [nova] Announcing new Focal Point for s390x libvirt/kvm Nova In-Reply-To: References: Message-ID: <258401ee-5367-1053-2556-edb51320dbae@gmail.com> On Fri, 2 Nov 2018 09:47:42 +0100, Andreas Scheuring wrote: > Dear Nova Community, > I want to announce the new focal point for Nova s390x libvirt/kvm. > > Please welcome "Cathy Zhang” to the Nova team. She and her team will be responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and any s390x specific code in nova and os-brick. > I personally took a new opportunity already a few month ago but kept maintaining the CI as good as possible. With new manpower we can hopefully contribute more to the community again. > > You can reach her via > * email: bjzhjing at linux.vnet.ibm.com > * IRC: Cathyz > > Cathy, I wish you and your team all the best for this exciting role! I also want to say thank you for the last years. It was a great time, I learned a lot from you all, will miss it! > > Cheers, > > Andreas (irc: scheuran) > > > [1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI Thanks Andreas, for sending this note. It has been a pleasure working with you over these years. We wish you the best of luck in your new opportunity! Welcome to the Nova community, Cathy! We look forward to working with you. Please feel free to reach out to us on IRC in the #openstack-nova channel and on this mailing list with the [nova] tag to ask questions and share info. Best, -melanie From juliaashleykreger at gmail.com Mon Nov 5 19:06:14 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 5 Nov 2018 11:06:14 -0800 Subject: [openstack-dev] [all] 2019 summit during May holidays? In-Reply-To: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: *removes all of the hats* *removes years of dust from unrelated event planning hat, and puts it on for a moment* In my experience, events of any nature where convention venue space is involved, are essentially set in stone before being publicly advertised as contracts are put in place for hotel room booking blocks as well as the convention venue space. These spaces are also typically in a relatively high demand limiting the access and available times to schedule. Often venues also give preference (and sometimes even better group discounts) to repeat events as they are typically a known entity and will have somewhat known needs so the venue and hotel(s) can staff appropriately. tl;dr, I personally wouldn't expect any changes to be possible at this point. *removes event planning hat of past life, puts personal scheduling hat on* I imagine that as a community, it is near impossible to schedule something avoiding holidays for everyone in the community. I personally have lost count of the number of holidays and special days that I've spent on business trips over the past four years. While I may be an out-lier in my feelings on this subject, I'm not upset, annoyed, or even bitter about lost times. This community is part of my family. 
-Julia On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur wrote: > Hi all, > > Not sure how official the information about the next summit is, but it's > on the > web site [1], so I guess worth asking.. > > Are we planning for the summit to overlap with the May holidays? The 1st > of May > is a holiday in big part of the world. We ask people to skip it in > addition to > 3+ weekend days they'll have to spend working and traveling. > > To make it worse, 1-3 May are holidays in Russia this time. To make it > even > worse than worse, the week of 29th is the Golden Week in Japan [2]. Was it > considered? Is it possible to move the days to less conflicting time > (mid-May > maybe)? > > Dmitry > > [1] https://www.openstack.org/summit/denver-2019/ > [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Nov 5 19:17:13 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 13:17:13 -0600 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: On 11/5/2018 12:28 PM, Mohammed Naser wrote: >> Have you dug into any of the operations around these instances to >> determine what might have gone wrong? For example, was a live migration >> performed recently on these instances and if so, did it fail? How about >> evacuations (rebuild from a down host). > To be honest, I have not, however, I suspect a lot of those happen from the > fact that it is possible that the service which makes the claim is not the > same one that deletes it > > I'm not sure if this is something that's possible but say the compute2 makes > a claim for migrating to compute1 but something fails there, the revert happens > in compute1 but compute1 is already borked so it doesn't work > > This isn't necessarily the exact case that's happening but it's a summary > of what I believe happens. > The computes don't create the resource allocations in placement though, the scheduler does, unless this deployment still has at least one compute that is References: Message-ID: Matt Riedemann writes: > This is a follow up to a dev ML email [1] where I noticed that some > implementations of the upgrade-checkers goal were failing because some > projects still use the oslo_i18n.enable_lazy() hook for lazy log message > translation (and maybe API responses?). > > The very old blueprints related to this can be found here [2][3][4]. > > If memory serves me correctly from my time working at IBM on this, this > was needed to: > > 1. Generate logs translated in other languages. > > 2. Return REST API responses if the "Accept-Language" header was used > and a suitable translation existed for that language. > > #1 is a dead horse since I think at least the Ocata summit when we > agreed to no longer translate logs since no one used them. > > #2 is probably something no one knows about. I can't find end-user > documentation about it anywhere. It's not tested and therefore I have no > idea if it actually works anymore. 
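For context, the way #2 was meant to be wired can be sketched roughly as follows. This is a sketch from memory, not taken from any particular project; best_match_language() is an assumed helper that matches the Accept-Language header against oslo_i18n.get_available_languages().

    import oslo_i18n

    def fault_body(request, exc):
        # With lazy translation enabled the exception message is a Message
        # object, so choosing a locale can be deferred until the response
        # is serialized, based on the Accept-Language header.
        locale = request.best_match_language()
        return {'error': {'message': oslo_i18n.translate(exc.args[0], locale)}}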
> > I would like to (1) deprecate the oslo_i18n.enable_lazy() function so > new projects don't use it and (2) start removing the enable_lazy() usage > from existing projects like keystone, glance and cinder. > > Are there any users, deployments or vendor distributions that still rely > on this feature? If so, please speak up now. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html > [2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages > [3] https://blueprints.launchpad.net/nova/+spec/i18n-messages > [4] https://blueprints.launchpad.net/nova/+spec/user-locale-api > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs I think the lazy stuff was all about the API responses. The log translations worked a completely different way. Doug From fungi at yuggoth.org Mon Nov 5 19:59:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Nov 2018 19:59:17 +0000 Subject: [openstack-dev] [all] 2019 summit during May holidays? In-Reply-To: References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: <20181105195916.xhy7jfmztbzn7ccv@yuggoth.org> On 2018-11-05 11:06:14 -0800 (-0800), Julia Kreger wrote: [...] > I imagine that as a community, it is near impossible to schedule > something avoiding holidays for everyone in the community. [...] Scheduling events that time of year is particularly challenging anyway because of the proximity of Ramadan, Passover and Easter/Lent. (We've already conflicted with Passover at least once in the past, if memory serves.) So yes, any random week you pick is already likely to hit a major public or religious holiday for some part of the World, and then you also have to factor in availability of venues and other logistics. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jungleboyj at gmail.com Mon Nov 5 19:59:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 5 Nov 2018 13:59:22 -0600 Subject: [openstack-dev] [all] 2019 summit during May holidays? In-Reply-To: References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: <3c0d91b0-2340-fd37-9466-01f54fbfa43f@gmail.com> On 11/5/2018 1:06 PM, Julia Kreger wrote: > *removes all of the hats* > > *removes years of dust from unrelated event planning hat, and puts it > on for a moment* > > In my experience, events of any nature where convention venue space is > involved, are essentially set in stone before being publicly > advertised as contracts are put in place for hotel room booking blocks > as well as the convention venue space. These spaces are also typically > in a relatively high demand limiting the access and available times to > schedule. Often venues also give preference (and sometimes even better > group discounts) to repeat events as they are typically a known entity > and will have somewhat known needs so the venue and hotel(s) can staff > appropriately. > > tl;dr, I personally wouldn't expect any changes to be possible at this > point. > > *removes event planning hat of past life, puts personal scheduling hat on* > > I imagine that as a community, it is near impossible to schedule > something avoiding holidays for everyone in the community. 
> > I personally have lost count of the number of holidays and special > days that I've spent on business trips over the past four years. While > I may be an out-lier in my feelings on this subject, I'm not upset, > annoyed, or even bitter about lost times. This community is part of my > family. > Agreed. > -Julia > > On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur > wrote: > > Hi all, > > Not sure how official the information about the next summit is, > but it's on the > web site [1], so I guess worth asking.. > > Are we planning for the summit to overlap with the May holidays? > The 1st of May > is a holiday in big part of the world. We ask people to skip it in > addition to > 3+ weekend days they'll have to spend working and traveling. > > To make it worse, 1-3 May are holidays in Russia this time. To > make it even > worse than worse, the week of 29th is the Golden Week in Japan > [2]. Was it > considered? Is it possible to move the days to less conflicting > time (mid-May > maybe)? > Someone else had raised the fact that this also appears to overlap with Pycon and wondered if the date could be changed.  I told them the same thing.  Once these things are announced they are, more or less, an immovable object. > > Dmitry > > [1] https://www.openstack.org/summit/denver-2019/ > [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Nov 5 20:40:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 05 Nov 2018 15:40:23 -0500 Subject: [openstack-dev] [tc] liaison assignments Message-ID: TC members, I have updated the liaison assignments to fill in all of the gaps. Please take a moment to review the list [1] so you know your assignments. Next week will be a good opportunity to touch bases with your teams. Doug [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams From dtantsur at redhat.com Mon Nov 5 20:50:03 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 5 Nov 2018 21:50:03 +0100 Subject: [openstack-dev] [all] 2019 summit during May holidays? In-Reply-To: References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: On Mon, Nov 5, 2018, 20:07 Julia Kreger *removes all of the hats* > > *removes years of dust from unrelated event planning hat, and puts it on > for a moment* > > In my experience, events of any nature where convention venue space is > involved, are essentially set in stone before being publicly advertised as > contracts are put in place for hotel room booking blocks as well as the > convention venue space. These spaces are also typically in a relatively > high demand limiting the access and available times to schedule. 
Often > venues also give preference (and sometimes even better group discounts) to > repeat events as they are typically a known entity and will have somewhat > known needs so the venue and hotel(s) can staff appropriately. > > tl;dr, I personally wouldn't expect any changes to be possible at this > point. > > *removes event planning hat of past life, puts personal scheduling hat on* > > I imagine that as a community, it is near impossible to schedule something > avoiding holidays for everyone in the community. > I'm not taking about everyone. And I'm mostly fine with my holiday, but the conflicts with Russia and Japan seem huge. This certainly does not help our effort to engage people outside of NA/EU. Quick googling suggests that the week of May 13th would have much fewer conflicts. > I personally have lost count of the number of holidays and special days > that I've spent on business trips over the past four years. While I may be > an out-lier in my feelings on this subject, I'm not upset, annoyed, or even > bitter about lost times. This community is part of my family. > Sure :) But outside of our small nice circle there is a huge world of people who may not share our feeling and the level of commitment to openstack. These occasional contributors we talked about when discussing the cycle length. I don't think asking them to abandon 3-5 days of holidays is a productive way to engage them. And again, as much as I love meeting you all, I think we're outgrowing the format of these meetings.. Dmitry > -Julia > > On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur wrote: > >> Hi all, >> >> Not sure how official the information about the next summit is, but it's >> on the >> web site [1], so I guess worth asking.. >> >> Are we planning for the summit to overlap with the May holidays? The 1st >> of May >> is a holiday in big part of the world. We ask people to skip it in >> addition to >> 3+ weekend days they'll have to spend working and traveling. >> >> To make it worse, 1-3 May are holidays in Russia this time. To make it >> even >> worse than worse, the week of 29th is the Golden Week in Japan [2]. Was >> it >> considered? Is it possible to move the days to less conflicting time >> (mid-May >> maybe)? >> >> Dmitry >> >> [1] https://www.openstack.org/summit/denver-2019/ >> [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Mon Nov 5 21:02:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 15:02:45 -0600 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> Message-ID: <52e35017-4f94-0e68-80b9-5588138790b5@gmail.com> On 11/5/2018 1:17 PM, Matt Riedemann wrote: > I'm thinking of a case like, resize and instance but rather than > confirm/revert it, the user deletes the instance. That would cleanup the > allocations from the target node but potentially not from the source node. Well this case is at least not an issue: https://review.openstack.org/#/c/615644/ It took me a bit to sort out how that worked but it does and I've added a test to confirm it. -- Thanks, Matt From doug at doughellmann.com Mon Nov 5 21:11:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 05 Nov 2018 16:11:59 -0500 Subject: [openstack-dev] [tc] Technical Committee status update for 5 November Message-ID: This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == It has been three weeks since the last update email, in part due to my absence. We have lots of updates this time around. Project updates: * Add os_manila to openstack-ansible: https://review.openstack.org/#/c/608403/ * Add cells charms and interfaces: https://review.openstack.org/#/c/608866/ * Add octavia charm: https://review.openstack.org/#/c/608283/ * Add puppet-crane: https://review.openstack.org/#/c/610015/ * Add openstack-helm images repository: https://review.openstack.org/#/c/611895/ * Add blazar-specs repository: https://review.openstack.org/#/c/612431/ * Add openstack-helm docs repository: https://review.openstack.org/#/c/611896/ * Retire anchor: https://review.openstack.org/#/c/611187/ * Remove Dragonflow from governance: https://review.openstack.org/#/c/613856/ Other updates: * Reword "open source" definition in 4 Opens document to remove language that does not come through clearly when translated: https://review.openstack.org/#/c/613894/ * Support "Train" as a candidate name for the T series: https://review.openstack.org/#/c/611511/ * Update the charter section on TC meetings: https://review.openstack.org/#/c/608751/ == TC Meetings == In order to fulfill our obligations under the OpenStack Foundation bylaws, the TC needs to hold meetings at least once each quarter. We agreed to meet monthly, and to emphasize agenda items that help us move initiatives forward while leaving most of the discussion of those topics to the mailing list. Our first meeting was held on 1 Nov. The agendas for all of our meetings will be sent to the openstack-dev mailing list in advance, and links to the logs and summary will be sent as a follow up after the meeting. * http://lists.openstack.org/pipermail/openstack-dev/2018-November/136220.html The next meeting will be 6 December 1400 UTC in #openstack-tc == Team Liaisons == The TC liaisons to each project team for the Stein series are now assigned. Please contact your liaison if you have any issues the TC can help with, and watch for email from them to check in with your team before the end of this development cycle. 
* https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams == Sessions at the Forum == Many of us will be meeting in Berlin next week for the OpenStack Summit and Forum. There are several sessions related to project governance or community that may be of interest. * Getting OpenStack Users Involved in the Project: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22813/getting-openstack-users-involved-in-the-project * Community outreach when culture, time zones, and language differ: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22820/community-outreach-when-culture-time-zones-and-language-differ * Wednesday Keynote segment, Community Contributor Recognition & How to Get Started: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22959/community-contributor-recognition-and-how-to-get-started * Expose SIGs and WGs: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22750/expose-sigs-and-wgs * Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul): https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22815/cross-technical-leadership-session-openstack-kata-starlingx-airship-zuul * "Vision for OpenStack clouds" discussion: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22818/vision-for-openstack-clouds-discussion * Technical Committee Vision Retrospective: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22825/technical-committee-vision-retrospective * T series community goal discussion: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22814/t-series-community-goal-discussion == Ongoing Discussions == We have several governance changes up for review related to deciding how we will manage future Python 3 upgrades (including adding 3.7 and possibly dropping 3.5 during Stein). * Make python 3 testing requirement less specific: https://review.openstack.org/#/c/611010/ * Explicitly declare stein supported runtimes: https://review.openstack.org/#/c/611080/ * Resolution on keeping up with Python 3 releases: https://review.openstack.org/#/c/613145/ == TC member actions/focus/discussions for the coming week(s) == The TC, UC, and leadership of other foundation projects will join the foundation Board for a joint leadership meeting on 12 November. See the wiki for details. * https://wiki.openstack.org/wiki/Governance/Foundation/12Nov2018BoardMeeting == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. 
You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From mriedemos at gmail.com Mon Nov 5 21:13:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 15:13:56 -0600 Subject: [openstack-dev] [Openstack-sigs] Dropping lazy translation support In-Reply-To: References: Message-ID: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> On 11/5/2018 1:36 PM, Doug Hellmann wrote: > I think the lazy stuff was all about the API responses. The log > translations worked a completely different way. Yeah maybe. And if so, I came across this in one of the blueprints: https://etherpad.openstack.org/p/disable-lazy-translation Which says that because of a critical bug, the lazy translation was disabled in Havana to be fixed in Icehouse but I don't think that ever happened before IBM developers dropped it upstream, which is further justification for nuking this code from the various projects. -- Thanks, Matt From doug at doughellmann.com Mon Nov 5 21:36:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 05 Nov 2018 16:36:40 -0500 Subject: [openstack-dev] [Openstack-sigs] Dropping lazy translation support In-Reply-To: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> Message-ID: Matt Riedemann writes: > On 11/5/2018 1:36 PM, Doug Hellmann wrote: >> I think the lazy stuff was all about the API responses. The log >> translations worked a completely different way. > > Yeah maybe. And if so, I came across this in one of the blueprints: > > https://etherpad.openstack.org/p/disable-lazy-translation > > Which says that because of a critical bug, the lazy translation was > disabled in Havana to be fixed in Icehouse but I don't think that ever > happened before IBM developers dropped it upstream, which is further > justification for nuking this code from the various projects. I agree. Doug From openstack at nemebean.com Mon Nov 5 21:39:58 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 5 Nov 2018 15:39:58 -0600 Subject: [openstack-dev] [Openstack-operators] [Openstack-sigs] Dropping lazy translation support In-Reply-To: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> Message-ID: <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com> On 11/5/18 3:13 PM, Matt Riedemann wrote: > On 11/5/2018 1:36 PM, Doug Hellmann wrote: >> I think the lazy stuff was all about the API responses. The log >> translations worked a completely different way. > > Yeah maybe. And if so, I came across this in one of the blueprints: > > https://etherpad.openstack.org/p/disable-lazy-translation > > Which says that because of a critical bug, the lazy translation was > disabled in Havana to be fixed in Icehouse but I don't think that ever > happened before IBM developers dropped it upstream, which is further > justification for nuking this code from the various projects. > It was disabled last-minute, but I'm pretty sure it was turned back on (hence why we're hitting issues today). I still see coercion code in oslo.log that was added to fix the problem[1] (I think). 
I could be wrong about that since this code has undergone significant changes over the years, but it looks to me like we're still forcing things to be unicode.[2] 1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py 2: https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38d72106ea0b8b8f/oslo_log/formatters.py#L414 From whayutin at redhat.com Mon Nov 5 22:23:44 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 5 Nov 2018 15:23:44 -0700 Subject: [openstack-dev] [tripleo] Ansible getting bumped up from 2.4 -> 2.6.6 Message-ID: Greetings, Please be aware of the following patch [1]. This updates ansible in queens, rocky, and stein. This was just pointed out to me, and I didn't see it coming so I thought I'd email the group. That is all, thanks [1] https://review.rdoproject.org/r/#/c/14960 -- Wes Hayutin Associate MANAGER Red Hat whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Mon Nov 5 23:01:32 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Mon, 5 Nov 2018 23:01:32 +0000 Subject: [openstack-dev] [NOVA] pci alias device_type and numa_policy and device_type meanings Message-ID: <9D8A2486E35F0941A60430473E29F15B017BADDF3D@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community. I am setting up pci passthrough for GPUs using aliases. I was wondering the meaning of the fields device_type and numa_policy and how should I use them as I could not find much details in the official documentation. https://docs.openstack.org/nova/rocky/admin/pci-passthrough.html#configure-nova-api-controller https://docs.openstack.org/nova/rocky/configuration/config.html#pci thank you very much Manuel NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ATT00001.txt URL: From sean.mcginnis at gmx.com Tue Nov 6 01:54:52 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 5 Nov 2018 19:54:52 -0600 Subject: [openstack-dev] FIPS Compliance Message-ID: <20181106015452.GA14203@sm-workstation> I'm interested in some feedback from the community, particularly those running OpenStack deployments, as to whether FIPS compliance [0][1] is something folks are looking for. I've been seeing small changes starting to be proposed here and there for things like MD5 usage related to its incompatibility to FIPS mode. But looking across a wider stripe of our repos, it appears like it would be a wider effort to be able to get all OpenStack services compatible with FIPS mode. This should be a fairly easy thing to test, but before we put in much effort into updating code and figuring out testing, I'd like to see some input on whether something like this is needed. Thanks for any input on this. 
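To make the failure mode concrete, here is a minimal sketch of the kind of change these patches tend to make (the 'algorithm' parameter is purely illustrative and not an existing option in any project):

    import hashlib

    def checksum(data, algorithm='md5'):
        # 'algorithm' is an illustrative knob only; MD5 stays the
        # default purely for backward compatibility with checksums
        # that are already stored.
        try:
            hasher = hashlib.new(algorithm)
        except ValueError:
            # On a FIPS-enforcing host the OpenSSL-backed hashlib
            # typically refuses to construct an MD5 object, so fall
            # back to an approved digest.
            hasher = hashlib.new('sha256')
        hasher.update(data)
        return hasher.hexdigest()

    print(checksum(b'example image payload'))

The hashing call itself is the easy part; the compatibility questions come from everything that already stores or compares the old MD5 values (database columns, API responses, inter-service contracts).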
Sean [0] https://en.wikipedia.org/wiki/FIPS_140-2 [1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf From tony at bakeyournoodle.com Tue Nov 6 02:19:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 6 Nov 2018 13:19:20 +1100 Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <20181030054024.GC2343@thor.bakeyournoodle.com> References: <20181030054024.GC2343@thor.bakeyournoodle.com> Message-ID: <20181106021919.GB20576@thor.bakeyournoodle.com> Hi all, Time is running out for you to have your say in the T release name poll. We have just under 3 days left. If you haven't voted please do! On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote: > Hi folks, > > It is time again to cast your vote for the naming of the T Release. > As with last time we'll use a public polling option over per user private URLs > for voting. This means, everybody should proceed to use the following URL to > cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e > > We've selected a public poll to ensure that the whole community, not just gerrit > change owners get a vote. Also the size of our community has grown such that we > can overwhelm CIVS if using private urls. A public can mean that users > behind NAT, proxy servers or firewalls may receive an message saying > that your vote has already been lodged, if this happens please try > another IP. > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be > posted shortly after. > > [1] https://governance.openstack.org/tc/reference/release-naming.html > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the T release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > --------------------- > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. 
> The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > ----------------------- > > The Geographic Region from where names for the T release will come is Colorado > > Proposed Names > -------------- > > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas : the Tank Engine > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria (accepted by the TC) > ----------------------------------------------------------------- > > * Train🚂 : Many attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jcornutt at gmail.com Tue Nov 6 03:02:35 2018 From: jcornutt at gmail.com (Joshua Cornutt) Date: Mon, 5 Nov 2018 22:02:35 -0500 Subject: [openstack-dev] FIPS Compliance Message-ID: Sean, I, too, am very interested in this particular discussion and working towards getting OpenStack working out-of-the-box on FIPS systems. I've submitted a few patches (https://review.openstack.org/#/q/owner:%22Joshua+Cornutt%22) recently and plan on going down my laundry list of patches I've made while deploying Red Hat OpenStack 10 (Newton), 13 (Queens), and community master on "FIPS mode" RHEL 7 servers. I've seen a lot of debate in other communities on how to approach the subject, ranging from full MD5-to-SHAx transitions, to putting in FIPS-aware logic to decide hashes based on the system, to just deciding that the hashes aren't used for real security and thus are "mostly OK" by FIPS 140-2 standards (resulting in awkward distro-specific versions of popular crypto libraries with built-in FIPS awareness). Personally, I've been more in favor of a sweeping MD5-to-SHAx transition due to popular crypto libraries (OpenSSL, hashlib, NSS) indiscriminately disabling MD5 hash methods on FIPS mode systems. With SHA-1 collisions already happening, I imagine it will meet the FIPS banhammer in the not-so-distant future, which is why I have generally been recommending SHA-256 as an MD5 replacement, despite the larger output size (mostly an issue for fixed-size database columns). There is definite pressure being put on some entities (commercial as well as government / DoD) to move core systems to FIPS mode, and auditors are looking more and more closely at this particular subject and requiring strong justification for not meeting FIPS compliance on systems both at the hardware and software levels. From gmann at ghanshyammann.com Tue Nov 6 04:43:28 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Nov 2018 13:43:28 +0900 Subject: [openstack-dev] [all] 2019 summit during May holidays? 
In-Reply-To: References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: <166e7552243.bc71185f40518.7827515374845880692@ghanshyammann.com> ---- On Tue, 06 Nov 2018 05:50:03 +0900 Dmitry Tantsur wrote ---- > > > On Mon, Nov 5, 2018, 20:07 Julia Kreger *removes all of the hats* > *removes years of dust from unrelated event planning hat, and puts it on for a moment* > > In my experience, events of any nature where convention venue space is involved, are essentially set in stone before being publicly advertised as contracts are put in place for hotel room booking blocks as well as the convention venue space. These spaces are also typically in a relatively high demand limiting the access and available times to schedule. Often venues also give preference (and sometimes even better group discounts) to repeat events as they are typically a known entity and will have somewhat known needs so the venue and hotel(s) can staff appropriately. > > tl;dr, I personally wouldn't expect any changes to be possible at this point. > > *removes event planning hat of past life, puts personal scheduling hat on* > I imagine that as a community, it is near impossible to schedule something avoiding holidays for everyone in the community. > > I'm not taking about everyone. And I'm mostly fine with my holiday, but the conflicts with Russia and Japan seem huge. This certainly does not help our effort to engage people outside of NA/EU. > Quick googling suggests that the week of May 13th would have much fewer conflicts. > > I personally have lost count of the number of holidays and special days that I've spent on business trips over the past four years. While I may be an out-lier in my feelings on this subject, I'm not upset, annoyed, or even bitter about lost times. This community is part of my family. > > Sure :) > But outside of our small nice circle there is a huge world of people who may not share our feeling and the level of commitment to openstack. These occasional contributors we talked about when discussing the cycle length. I don't think asking them to abandon 3-5 days of holidays is a productive way to engage them. > And again, as much as I love meeting you all, I think we're outgrowing the format of these meetings.. > Dmitry Yeah, in case of Japan it is full week holiday starting from April 29th. I remember most of the May summits did not conflict with Golden week but this is. I am not sure if any solution to this now but we should consider such things in future. -gmann > > -Julia > > On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur wrote: > Hi all, > > Not sure how official the information about the next summit is, but it's on the > web site [1], so I guess worth asking.. > > Are we planning for the summit to overlap with the May holidays? The 1st of May > is a holiday in big part of the world. We ask people to skip it in addition to > 3+ weekend days they'll have to spend working and traveling. > > To make it worse, 1-3 May are holidays in Russia this time. To make it even > worse than worse, the week of 29th is the Golden Week in Japan [2]. Was it > considered? Is it possible to move the days to less conflicting time (mid-May > maybe)? 
> > Dmitry > > [1] https://www.openstack.org/summit/denver-2019/ > [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dangtrinhnt at gmail.com Tue Nov 6 05:06:08 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 6 Nov 2018 14:06:08 +0900 Subject: [openstack-dev] [Searchlight] Weekly report for Stein R-23 Message-ID: Hi team, *TL;DR,* we now focus on developing the use cases for Searchlight to attract more users as well as contributors. Here is the report for last week, Stein R-23 [1]. Let me know if you have any questions. [1] https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-23.html Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Nov 6 09:04:01 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Nov 2018 18:04:01 +0900 Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <166e843aceb.b98ff10848537.6521669998561147381@ghanshyammann.com> Hi All, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. -gmann From eyalb1 at gmail.com Tue Nov 6 09:25:13 2018 From: eyalb1 at gmail.com (Eyal B) Date: Tue, 6 Nov 2018 11:25:13 +0200 Subject: [openstack-dev] [ceilometer][oslo.cache] ceilometer fails to publish to gnocchi Message-ID: Hi, Because of this fix https://review.openstack.org/#/c/611369/ ceilometer which uses oslo.cache for redis fails to publish to gnocchi see this log: http://logs.openstack.org/15/615415/1/check/vitrage-dsvm-datasources-py27/8d9e39e/logs/screen-ceilometer-anotification.txt.gz#_Nov_04_13_12_28_656863 BR Eyal -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Tue Nov 6 09:27:17 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 6 Nov 2018 10:27:17 +0100 Subject: [openstack-dev] [tripleo] Ansible getting bumped up from 2.4 -> 2.6.6 In-Reply-To: References: Message-ID: <5f35e47c-e2ec-14ec-b0dd-a192a6370180@redhat.com> On 11/5/18 11:23 PM, Wesley Hayutin wrote: > Greetings, > > Please be aware of the following patch [1].  This updates ansible in > queens, rocky, and stein. >  This was just pointed out to me, and I didn't see it coming so I > thought I'd email the group. 
> > That is all, thanks > > [1] https://review.rdoproject.org/r/#/c/14960 thanks Wes for bringing this up note that we're trying to update ansible to 2.6 because 2.4 is unsupported and 2.5 is only receiving security fixes already with the upcoming updates for ceph-ansible in ceph luminous, support for older ansible releases will be dropped -- Giulio Fidente GPG KEY: 08D733BA From j.harbott at x-ion.de Tue Nov 6 12:12:55 2018 From: j.harbott at x-ion.de (Dr. Jens Harbott (frickler)) Date: Tue, 6 Nov 2018 13:12:55 +0100 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) Message-ID: Dear OpenStackers, earlier this year Ubuntu released their current LTS version 18.04 codenamed "Bionic Beaver" and we are now facing the task to migrate our devstack-based jobs to run on Bionic instead of the previous LTS version 16.04 "Xenial Xerus". The last time this has happened two years ago (migration from 14.04 to 16.04) and at that time it seems the migration was mostly driven by the Infra team (see [1]), mostly because all of the job configuration was still centrally hosted in a single repository (openstack-infra/project-config). In the meantime, however, our CI setup has been updated to use Zuul v3 and one of the new features that come with this development is the introduction of per-project job definitions. So this new flexibility requires us to make a choice between the two possible options we have for migrating jobs now: 1) Change the "devstack" base job to run on Bionic instances instead of Xenial instances 2) Create new "devstack-bionic" and "tempest-full-bionic" base jobs and migrate projects piecewise Choosing option 1) would cause all projects that base their own jobs on this job (possibly indirectly by e.g. being based on the "tempest-full" job) switch automatically. So there would be the possibility that some jobs would break and require to be fixed before patches could be merged again in the affected project(s). To accomodate those risks, QA team can give some time to projects to test their jobs on Bionic with WIP patches (QA can provide Bionic base job as WIP patch). This option does not require any pre/post migration changes on project's jobs. Choosing option 2) would avoid this by letting projects switch at their own pace, but create the risk that some projects would never migrate. It would also make further migrations, like the one expected to happen when 20.04 is released, either having to follow the same scheme or re-introduce the unversioned base job. Other point to note down with this option is, - project job definitions need to change their parent job from "devstack" -> "devstack-bionic" or "tempest-full" -> "tempest-full-bionic" - QA needs to maintain existing jobs ("devstack", "tempest-full") and bionic version jobs ("devstack-bionic", "tempest-full-bionic") In order to prepare the decision, we have created a set of patches that test the Bionic jobs, you can find them under the common topic "devstack-bionic" [2]. There is also an etherpad to give a summarized view of the results of these tests [3]. Please respond to this mail if you want to promote either of the above options or maybe want to propose an even better solution. You can also find us for discussion in the #openstack-qa IRC channel on freenode. Infra team has tried both approaches during precise->trusty & trusty->xenial migration[4]. 
Note that this mailing-list itself will soon be migrated, too, so if you haven't subscribed to the new list yet, this is a good time to act and avoid missing the best parts [5]. Yours, Jens (frickler at IRC) [1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html [2] https://review.openstack.org/#/q/topic:devstack-bionic [3] https://etherpad.openstack.org/p/devstack-bionic [4] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2018-11-01.log.html#t2018-11-01T12:40:22 [5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html From smooney at redhat.com Tue Nov 6 12:52:58 2018 From: smooney at redhat.com (Sean Mooney) Date: Tue, 6 Nov 2018 12:52:58 +0000 Subject: [openstack-dev] [NOVA] pci alias device_type and numa_policy and device_type meanings In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BADDF3D@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BADDF3D@MXDB2.ad.garvan.unsw.edu.au> Message-ID: On Mon, Nov 5, 2018 at 11:02 PM Manuel Sopena Ballesteros wrote: > > Dear Openstack community. > > > > I am setting up pci passthrough for GPUs using aliases. > > > > I was wondering the meaning of the fields device_type and numa_policy and how should I use them as I could not find much details in the official documentation. numa policy is used to configure the affinity policy for the passthrough device. it was introduced by https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html#proposed-change not that the flavor extra specs and image metadata also descibed by that spec are stil outstanding and may be added in the future. the device type is a legacy hack that was added when nova's pci passthough support was extended to enable neutron sriov. when using flavor based pci passthough using aliases the type should always be set to Type-PCI Type-PF and Type-VF in general should only be used for nics that will be used with neutron sriov port types. you technically can us the other device types in the alias but nova will only record a device as Type-PF or Type-VF if physical_network is set in the pci whitelist which is used to indicate the neutron physnet the device is attached too. in the case of GPUs you can ignore Type-PF and Type-VF. you should also know that Type-PCI can refer to either the full pci device or a virtual function allocated form a device. an example alias is available in the code here https://github.com/openstack/nova/blob/master/nova/pci/request.py#L18 | [pci] | alias = '{ | "name": "QuickAssist", | "product_id": "0443", | "vendor_id": "8086", | "device_type": "type-PCI", | "numa_policy": "legacy" | }' in this case the alias consumes an entire QuickAssist (QAT) device with the legacy numa policy. if the QAT device was subdivided using sriov VFs you could request the VF instead of the PF by simply changing the product_id to that corresponding to the VF. > > > > https://docs.openstack.org/nova/rocky/admin/pci-passthrough.html#configure-nova-api-controller > > https://docs.openstack.org/nova/rocky/configuration/config.html#pci > > > > thank you very much > > > > Manuel > > > > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. 
If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mdulko at redhat.com Tue Nov 6 13:06:47 2018 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Tue, 06 Nov 2018 14:06:47 +0100 Subject: [openstack-dev] [kuryr] Kuryr-Kubernetes gates broken Message-ID: Hi, Kuryr-Kubernetes LBaaSv2 gates are currently broken due to bug [1] in python-neutronclient. Until commit [2] is merged and a new version of the client is released I'm proposing to make those gates non-voting [3]. [1] https://bugs.launchpad.net/python-neutronclient/+bug/1801360 [2] https://review.openstack.org/#/c/615184 [3] https://review.openstack.org/#/c/615861 From doug at doughellmann.com Tue Nov 6 13:06:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 06 Nov 2018 08:06:58 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: <20181106015452.GA14203@sm-workstation> References: <20181106015452.GA14203@sm-workstation> Message-ID: Sean McGinnis writes: > I'm interested in some feedback from the community, particularly those running > OpenStack deployments, as to whether FIPS compliance [0][1] is something folks > are looking for. > > I've been seeing small changes starting to be proposed here and there for > things like MD5 usage related to its incompatibility to FIPS mode. But looking > across a wider stripe of our repos, it appears like it would be a wider effort > to be able to get all OpenStack services compatible with FIPS mode. > > This should be a fairly easy thing to test, but before we put in much effort > into updating code and figuring out testing, I'd like to see some input on > whether something like this is needed. > > Thanks for any input on this. > > Sean > > [0] https://en.wikipedia.org/wiki/FIPS_140-2 > [1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators I know we've had some interest in it at different times. I think some of the changes will end up being backwards-incompatible, so we may need a "FIPS-mode" configuration flag for those, but in other places we could just switch hashing algorithms and be fine. I'm not sure if anyone has put together the details of what would be needed to update each project, but this feels like it could be a candidate for a goal for a future cycle once we have that information and can assess the level of effort. Doug From dangtrinhnt at gmail.com Tue Nov 6 13:35:51 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 6 Nov 2018 22:35:51 +0900 Subject: [openstack-dev] Asking for suggestion of video conference tool for team and webinar Message-ID: Hi, I tried several free tools for team meetings, vPTG, and webinars but there are always drawbacks ( because it's free, of course). For example: - Google Hangout: only allow a maximum of 10 people to do the video calls - Zoom: limits about 45m per meeting. 
So for a webinar or conference call that takes more than 45m we have to split it into 2 sessions or so. If anyone in the community can suggest some better video conferencing tools, that would be great. So we can organize better communication for our team and the community's webinars. Many thanks, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From sshnaidm at redhat.com Tue Nov 6 13:46:46 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 6 Nov 2018 15:46:46 +0200 Subject: [openstack-dev] [tripleo] shutting down 3rd party TripleO CI for measurements In-Reply-To: References: Message-ID: We measured results and would like to shut down check jobs in RDO cloud CI today. Please let us know if you have objections. Thanks On Thu, Nov 1, 2018 at 12:14 AM Wesley Hayutin wrote: > Greetings, > > The TripleO-CI team would like to consider shutting down all the third > party check jobs running against TripleO projects in order to measure > results with and without load on the cloud for some amount of time. I > suspect we would want to shut things down for roughly 24-48 hours. > > If there are any strong objections please let us know. > Thank you > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > whayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Nov 6 14:03:05 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Nov 2018 06:03:05 -0800 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: <20181106015452.GA14203@sm-workstation> Message-ID: On Tue, Nov 6, 2018 at 5:07 AM Doug Hellmann wrote: > Sean McGinnis writes: > > > I'm interested in some feedback from the community, particularly those > running > > OpenStack deployments, as to whether FIPS compliance [0][1] is something > folks > > are looking for. > [trim] > > I know we've had some interest in it at different times. I think some of > the changes will end up being backwards-incompatible, so we may need a > "FIPS-mode" configuration flag for those, but in other places we could > just switch hashing algorithms and be fine. > > I'm not sure if anyone has put together the details of what would be > needed to update each project, but this feels like it could be a > candidate for a goal for a future cycle once we have that information > and can assess the level of effort. > > Doug > > +1 to a FIPS-mode. I think it would be fair to ask projects to, over the course of the next month or three, evaluate their current standing and report what they perceive the effort to be. I think only then we can really determine if it is the right direction to take for a cycle goal. -Julia -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Nov 6 14:07:18 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Nov 2018 06:07:18 -0800 Subject: [openstack-dev] Asking for suggestion of video conference tool for team and webinar In-Reply-To: References: Message-ID: I don't know how the cost structure works for Bluejeans, but I've found it works very well for calls with many people on video. I typically have calls with 14+ people where nearly everyone has their video enabled without a problem. 
The limiting factor is really if one participant has limited or somewhat lossy connectivity, but it appears they have been working to make that experience better. -Julia On Tue, Nov 6, 2018 at 5:36 AM Trinh Nguyen wrote: > Hi, > > I tried several free tools for team meetings, vPTG, and webinars but there > are always drawbacks ( because it's free, of course). For example: > > - Google Hangout: only allow a maximum of 10 people to do the video > calls > - Zoom: limits about 45m per meeting. So for a webinar or conference > call takes more than 45m we have to splits into 2 sessions or so. > > If anyone in the community can suggest some better video conferencing > tools, that would be great. So we can organize better communication for our > team and the community's webinars. > > Many thanks, > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue Nov 6 14:12:29 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 6 Nov 2018 14:12:29 +0000 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: <20181106015452.GA14203@sm-workstation> Message-ID: On Tue, Nov 6, 2018 at 2:04 PM Julia Kreger wrote: > > > On Tue, Nov 6, 2018 at 5:07 AM Doug Hellmann > wrote: > >> Sean McGinnis writes: >> >> > I'm interested in some feedback from the community, particularly those >> running >> > OpenStack deployments, as to whether FIPS compliance [0][1] is >> something folks >> > are looking for. >> [trim] >> >> I know we've had some interest in it at different times. I think some of >> the changes will end up being backwards-incompatible, so we may need a >> "FIPS-mode" configuration flag for those, but in other places we could >> just switch hashing algorithms and be fine. >> >> I'm not sure if anyone has put together the details of what would be >> needed to update each project, but this feels like it could be a >> candidate for a goal for a future cycle once we have that information >> and can assess the level of effort. >> >> Doug >> >> > +1 to a FIPS-mode. I think it would be fair to ask projects, to over the > course of the next month or three, to evaluate their current standing and > report what they perceive the effort to be. > > I think only then we can really determine if it is the right direction to > take for a cycle goal. > > -Julia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Understand it's early to be discussing design, but would like to get it on record that if we can, we should try to use 'Algorithm Agility' rather then all moving to the next one up and setting to SHA. That way we can deal with what might seem unfathomable now, happening later (strong cryptos getting cracked). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Tue Nov 6 14:19:55 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Nov 2018 06:19:55 -0800 Subject: [openstack-dev] [all] 2019 summit during May holidays? In-Reply-To: References: <4f557d46-6598-90b6-ecc5-541f4d2c4d73@redhat.com> Message-ID: On Mon, Nov 5, 2018 at 12:50 PM Dmitry Tantsur wrote: > > > On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote: > >> *removes all of the hats* >> >> *removes years of dust from unrelated event planning hat, and puts it on >> for a moment* >> >> In my experience, events of any nature where convention venue space is >> involved, are essentially set in stone before being publicly advertised as >> contracts are put in place for hotel room booking blocks as well as the >> convention venue space. These spaces are also typically in a relatively >> high demand limiting the access and available times to schedule. Often >> venues also give preference (and sometimes even better group discounts) to >> repeat events as they are typically a known entity and will have somewhat >> known needs so the venue and hotel(s) can staff appropriately. >> >> tl;dr, I personally wouldn't expect any changes to be possible at this >> point. >> >> *removes event planning hat of past life, puts personal scheduling hat on* >> >> I imagine that as a community, it is near impossible to schedule >> something avoiding holidays for everyone in the community. >> > > I'm not taking about everyone. And I'm mostly fine with my holiday, but > the conflicts with Russia and Japan seem huge. This certainly does not help > our effort to engage people outside of NA/EU. > > Quick googling suggests that the week of May 13th would have much fewer > conflicts. > > May is also a fairly popular month for weddings... and I've seen wedding parties book out whole floors of hotel rooms. This complicates room block negotiations sadly. > >> I personally have lost count of the number of holidays and special days >> that I've spent on business trips over the past four years. While I may be >> an out-lier in my feelings on this subject, I'm not upset, annoyed, or even >> bitter about lost times. This community is part of my family. >> > > Sure :) > > But outside of our small nice circle there is a huge world of people who > may not share our feeling and the level of commitment to openstack. These > occasional contributors we talked about when discussing the cycle length. I > don't think asking them to abandon 3-5 days of holidays is a productive way > to engage them. > > And again, as much as I love meeting you all, I think we're outgrowing the > format of these meetings.. > > Dmitry > > It is funny you mention this because I was wondering something similar, but possibly for different reasons. I get there needs to be a marketing aspect to aid in sales and visibility. We need to also work on our local visibilities and broadcast out what individual projects and the community is doing on a smaller, more local scale... but I feel very conflicted with air travel and the state of atmospheric carbon levels. I wouldn't call my perception outgrowing, but perhaps growing more ecologically sensitive. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Nov 6 14:28:19 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Nov 2018 08:28:19 -0600 Subject: [openstack-dev] [goals][upgrade-checkers] FYI on "TypeError: Message objects do not support addition." 
errors In-Reply-To: <231d6a67-e114-a097-7524-57e264d8884e@gmail.com> References: <231d6a67-e114-a097-7524-57e264d8884e@gmail.com> Message-ID: <7c0d80fe-b74c-430f-33d8-f9d93e6d2649@gmail.com> On 11/5/2018 10:43 AM, Matt Riedemann wrote: > If you are seeing this error when implementing and running the upgrade > check command in your project: > > Traceback (most recent call last): >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", > line 184, in main >     return conf.command.action_fn() >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", > line 134, in check >     print(t) >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > line 237, in __str__ >     return self.__unicode__() >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > line 243, in __unicode__ >     return self.get_string() >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > line 995, in get_string >     lines.append(self._stringify_header(options)) >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > line 1066, in _stringify_header >     bits.append(" " * lpad + self._justify(fieldname, width, > self._align[field]) + " " * rpad) >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > line 187, in _justify >     return text + excess * " " >   File > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_i18n/_message.py", > line 230, in __add__ >     raise TypeError(msg) > TypeError: Message objects do not support addition. > > It is due to calling oslo_i18n.enable_lazy() somewhere in the command > import path. That should be removed from the project since lazy > translation is not supported in openstack and as an effort was abandoned > several years ago. It is probably still called in a lot of "big > tent/stackforge" projects because of initially copying it from the more > core projects. Anyway, just remove it. > > I'm talking with the oslo team about deprecating that interface so > projects don't mistakenly use it and expect great things to happen. If anyone is still running into this, require oslo.upgradecheck>=0.1.1 to pick up this workaround: https://review.openstack.org/#/c/615610/ -- Thanks, Matt From doug at doughellmann.com Tue Nov 6 14:32:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 06 Nov 2018 09:32:29 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: <20181106015452.GA14203@sm-workstation> Message-ID: Luke Hinds writes: > On Tue, Nov 6, 2018 at 2:04 PM Julia Kreger > wrote: > >> >> >> On Tue, Nov 6, 2018 at 5:07 AM Doug Hellmann >> wrote: >> >>> Sean McGinnis writes: >>> >>> > I'm interested in some feedback from the community, particularly those >>> running >>> > OpenStack deployments, as to whether FIPS compliance [0][1] is >>> something folks >>> > are looking for. >>> [trim] >>> >>> I know we've had some interest in it at different times. I think some of >>> the changes will end up being backwards-incompatible, so we may need a >>> "FIPS-mode" configuration flag for those, but in other places we could >>> just switch hashing algorithms and be fine. 
>>> >>> I'm not sure if anyone has put together the details of what would be >>> needed to update each project, but this feels like it could be a >>> candidate for a goal for a future cycle once we have that information >>> and can assess the level of effort. >>> >>> Doug >>> >>> >> +1 to a FIPS-mode. I think it would be fair to ask projects, to over the >> course of the next month or three, to evaluate their current standing and >> report what they perceive the effort to be. >> >> I think only then we can really determine if it is the right direction to >> take for a cycle goal. >> >> -Julia >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > Understand it's early to be discussing design, but would like to get it on > record that if we can, we should try to use 'Algorithm Agility' rather > then all moving to the next one up and setting to SHA. That way we can > deal with what might seem unfathomable now, happening later (strong cryptos > getting cracked). > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yeah, it feels like we want 1 place to get the right "current" algorithm. Unfortunately, we still need to support those old ones because we need to be able to validate any existing data like stored signatures. Is there already a python library out in the wild that we could use for providing a consistent API for that? If not, perhaps building that is an independent step someone could be working on now. Doug From dangtrinhnt at gmail.com Tue Nov 6 14:54:05 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 6 Nov 2018 23:54:05 +0900 Subject: [openstack-dev] [goals][upgrade-checkers] FYI on "TypeError: Message objects do not support addition." errors In-Reply-To: <7c0d80fe-b74c-430f-33d8-f9d93e6d2649@gmail.com> References: <231d6a67-e114-a097-7524-57e264d8884e@gmail.com> <7c0d80fe-b74c-430f-33d8-f9d93e6d2649@gmail.com> Message-ID: Hi Matt, Thanks for fixing the upgrade checker patch on Searchlight [1]. It works. 
[1] https://review.openstack.org/#/c/613789/ On Tue, Nov 6, 2018 at 11:28 PM Matt Riedemann wrote: > On 11/5/2018 10:43 AM, Matt Riedemann wrote: > > If you are seeing this error when implementing and running the upgrade > > check command in your project: > > > > Traceback (most recent call last): > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", > > > line 184, in main > > return conf.command.action_fn() > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", > > > line 134, in check > > print(t) > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > > > line 237, in __str__ > > return self.__unicode__() > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > > > line 243, in __unicode__ > > return self.get_string() > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > > > line 995, in get_string > > lines.append(self._stringify_header(options)) > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > > > line 1066, in _stringify_header > > bits.append(" " * lpad + self._justify(fieldname, width, > > self._align[field]) + " " * rpad) > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/prettytable.py", > > > line 187, in _justify > > return text + excess * " " > > File > > > "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_i18n/_message.py", > > > line 230, in __add__ > > raise TypeError(msg) > > TypeError: Message objects do not support addition. > > > > It is due to calling oslo_i18n.enable_lazy() somewhere in the command > > import path. That should be removed from the project since lazy > > translation is not supported in openstack and as an effort was abandoned > > several years ago. It is probably still called in a lot of "big > > tent/stackforge" projects because of initially copying it from the more > > core projects. Anyway, just remove it. > > > > I'm talking with the oslo team about deprecating that interface so > > projects don't mistakenly use it and expect great things to happen. > > If anyone is still running into this, require oslo.upgradecheck>=0.1.1 > to pick up this workaround: > > https://review.openstack.org/#/c/615610/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin.lu at huawei.com Tue Nov 6 15:20:49 2018 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Tue, 6 Nov 2018 15:20:49 +0000 Subject: [openstack-dev] [neutron] os-ken report for last week Message-ID: <0957CD8F4B55C0418161614FEC580D6B30436984@yyzeml705-chm.china.huawei.com> Hi all, I am used to report the os-ken related progress in the neutron team meeting. After switching to winter time, I am not able to attend the meeting at Tuesday 1400 UTC. I sent out the report in the ML instead. Below is the progress in last week. 
* The os-ken 0.2.0 is released: https://review.openstack.org/#/c/614315/ * The os-ken is added to global requirement: https://review.openstack.org/#/c/609018/ * There are experimental patches for testing the switchover from Ryu to os-ken: ** neutron: https://review.openstack.org/#/c/607008/ ** neutron-dynamic-routing: https://review.openstack.org/#/c/608357/ * The os-ken patches in the review queue: https://review.openstack.org/#/q/project:openstack/os-ken+AND+status:open Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Nov 6 16:36:25 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 6 Nov 2018 10:36:25 -0600 Subject: [openstack-dev] [ceilometer][oslo.cache][oslo] ceilometer fails to publish to gnocchi In-Reply-To: References: Message-ID: On 11/6/18 3:25 AM, Eyal B wrote: > Hi, > Because of this fix https://review.openstack.org/#/c/611369/ ceilometer > which uses oslo.cache for redis fails to publish to gnocchi > > see this log: > http://logs.openstack.org/15/615415/1/check/vitrage-dsvm-datasources-py27/8d9e39e/logs/screen-ceilometer-anotification.txt.gz#_Nov_04_13_12_28_656863 Hmm, looks like the problem is that a memached-specific fix is also affecting the redis driver. We probably need to blacklist this release until we can come up with a fix: https://review.openstack.org/615935 I've opened a bug against oslo.cache as well: https://bugs.launchpad.net/oslo.cache/+bug/1801967 -Ben From lbragstad at gmail.com Tue Nov 6 16:58:17 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 6 Nov 2018 10:58:17 -0600 Subject: [openstack-dev] [keystone] Berlin Forum Sessions & Talks Message-ID: Hey all, Here is what's on my radar for keystone-specific sessions and talks next week: *Tuesday* - Change ownership of resources [0] - Keystone Project Update [1] - OpenStack Policy 101 [2] - Keystone Project Onboarding [3] - Gaps between OpenStack and business logic with Adjutant [4] *Wednesday* - Deletion of project and project resources [5] - Enforcing Quota Consistently with Unified Limits [6] *Thursday* - Keystone as an Identity Provider Proxy [7] - Keystone Operator Feedback [8] If you know about a keystone-related session that I've missed, please feel free to follow up. Links to the forum session etherpads are available from the main wiki [9]. 
[0] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22785/change-of-ownership-of-resources [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22728/keystone-project-updates [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21977/openstack-policy-101 [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22727/keystone-project-onboarding [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22184/bridging-the-gaps-between-openstack-and-business-logic-with-adjutant [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22784/deletion-of-project-and-project-resources [6] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22557/enforcing-quota-consistently-with-unified-limits [7] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22791/keystone-as-an-identity-provider-proxy [8] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22792/keystone-operator-feedback [9] https://wiki.openstack.org/wiki/Forum/Berlin2018#Thursday.2C_November_15 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Nov 6 16:59:51 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 6 Nov 2018 10:59:51 -0600 Subject: [openstack-dev] [keystone] No meeting 13 Nov 2018 Message-ID: Just a reminder that we won't be holding a weekly meeting for keystone next week due to the OpenStack Summit in Berlin. Meetings will resume on the 20th of November. Thanks, Lance -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Nov 6 17:16:53 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 6 Nov 2018 11:16:53 -0600 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: On 11/5/18 3:21 AM, Mohammed Naser wrote: > > > Sent from my iPhone > >> On Nov 5, 2018, at 10:19 AM, Thierry Carrez wrote: >> >> Monty Taylor wrote: >>> [...] >>> What if we added support for serving vendor data files from the root of a primary URL as-per RFC 5785. Specifically, support deployers adding a json file to .well-known/openstack/client that would contain what we currently store in the openstacksdk repo and were just discussing splitting out. >>> [...] >>> What do people think? >> >> I love the idea of public clouds serving that file directly, and the user experience you get from it. The only two drawbacks on top of my head would be: >> >> - it's harder to discover available compliant openstack clouds from the client. >> >> - there is no vetting process, so there may be failures with weird clouds serving half-baked files that people may blame the client tooling for. >> >> I still think it's a good idea, as in theory it aligns the incentive of maintaining the file with the most interested stakeholder. It just might need some extra communication to work seamlessly. > > I’m thinking out loud here but perhaps a simple linter that a cloud provider can run will help them make sure that everything is functional. In fact, once we get it fleshed out and support added - perhaps we could add a tempest test that checks for a well-known file - and include it in compliance testing. Basically - if your cloud publishes a vendor profile, then the information in it should be accurate and should work, right? 
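For illustration, client-side discovery could be as small as the sketch below; the two keys it checks ("name" and "profile") are an assumption based on the vendor files openstacksdk ships today, not a settled schema:

    import json
    import urllib.request

    def fetch_vendor_profile(cloud_url):
        # RFC 5785 well-known location discussed in this thread.
        url = cloud_url.rstrip('/') + '/.well-known/openstack/client'
        with urllib.request.urlopen(url) as resp:
            profile = json.loads(resp.read().decode('utf-8'))
        # Minimal sanity check; a real linter or tempest check would
        # validate against whatever schema we eventually agree on.
        for key in ('name', 'profile'):
            if key not in profile:
                raise ValueError('vendor profile is missing %r' % key)
        return profile

The same few lines could back both a deployer-facing linter and a tempest check, so the file gets verified by the cloud publishing it rather than by whoever first trips over a broken one.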
From jcornutt at gmail.com Tue Nov 6 17:19:38 2018 From: jcornutt at gmail.com (Joshua Cornutt) Date: Tue, 6 Nov 2018 12:19:38 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance Message-ID: Doug, I have such a list put together (my various installation documents for getting these clouds working in FIPS mode) but it's hardly ready for public consumption. I planned on releasing each bit as a code change and/or bug ticket and letting the community consume it as it figures some of these things out. I agree that some changes may break backwards compatibility (such as Glance's image checksumming), but one approach I think could ease the transition would be the approach I took for SSH key pair fingerprinting (also MD5-based, as is Glance image checksums) found here - https://review.openstack.org/#/c/615460/ . This allows administrators to choose, hopefully at deployment time, the hashing algorithm with the default of being the existing MD5 algorithm. Another approach would be to make the projects "FIPS aware" where we choose the hashing algorithm based on the system's FIPS-enforcing state. An example of doing so is what I'm proposing for Django (another FIPS-related patch that was needed for OSP 13) - https://github.com/django/django/pull/10605 From bdobreli at redhat.com Tue Nov 6 17:44:42 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 6 Nov 2018 18:44:42 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: <7e97e280-78e4-fe72-7068-49c981d84048@redhat.com> References: <53cf2d33-cee7-3f0d-7f28-74b29091a7ef@redhat.com> <8acb2f9e-9fbd-77be-a274-eb3d54ae2ab4@redhat.com> <39eba4f7-7267-72db-548b-f88dbdcf18e7@redhat.com> <7e97e280-78e4-fe72-7068-49c981d84048@redhat.com> Message-ID: Folks, I have drafted a few more sections [0] for your /proof reading and kind review please. Also left some notes for TBD things, either for the potential co-authors' attention or myself :) [0] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf On 11/5/18 6:50 PM, Bogdan Dobrelya wrote: > Update: I have yet found co-authors, I'll keep drafting that position > paper [0],[1]. Just did some baby steps so far. I'm open for feedback > and contributions! > > PS. Deadline is Nov 9 03:00 UTC, but *may be* it will be extended, if > the event chairs decide to do so. Fingers crossed. > > [0] > https://github.com/bogdando/papers-ieee#in-the-current-development-looking-for-co-authors > > > [1] > https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf > > > On 11/5/18 3:06 PM, Bogdan Dobrelya wrote: >> Thank you for a reply, Flavia: >> >>> Hi Bogdan >>> sorry for the late reply - yesterday was a Holiday here in Brazil! >>> I am afraid I will not be able to engage in this collaboration with >>> such a short time...we had to have started this initiative a little >>> earlier... >> >> That's understandable. >> >> I hoped though a position paper is something we (all who reads that, >> not just you and me) could achieve in a couple of days, without a lot >> of research associated. That's a postion paper, which is not expected >> to contain formal prove or implementation details. The vision for >> tooling is the hardest part though, and indeed requires some time. 
>> >> So let me please [tl;dr] the outcome of that position paper: >> >> * position: given Always Available autonomy support as a starting point, >>    define invariants for both operational and data storage consistency >>    requirements of control/management plane (I've already drafted some in >>    [0]) >> >> * vision: show that in the end that data synchronization and conflict >>    resolving solution just boils down to having a causally >>    consistent KVS (either causal+ or causal-RT, or lazy replication >>    based, or anything like that), and cannot be achieved with *only* >>    transactional distributed database, like Galera cluster. The way how >>    to show that is an open question, we could refer to the existing >>    papers (COPS, causal-RT, lazy replication et al) and claim they fit >>    the defined invariants nicely, while transactional DB cannot fit it >>    by design (it's consensus protocols require majority/quorums to >>    operate and being always available for data put/write operations). >>    We probably may omit proving that obvious thing formally? At least for >>    the postion paper... >> >> * opportunity: that is basically designing and implementing of such a >>    causally-consistent KVS solution (see COPS library as example) for >>    OpenStack, and ideally, unifying it for PaaS operators >>    (OpenShift/Kubernetes) and tenants willing to host their containerized >>    workloads on PaaS distributed over a Fog Cloud of Edge clouds and >>    leverage its data synchronization and conflict resolving solution >>    as-a-service. Like Amazon dynamo DB, for example, except that fitting >>    the edge cases of another cloud stack :) >> >> [0] >> https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md >> >> >>> As for working collaboratively with latex, I would recommend using >>> overleaf - it is not that difficult and has lots of edition resources >>> as markdown and track changes, for instance. >>> Thanks and good luck! >>> Flavia >> >> >> >> On 11/2/18 5:32 PM, Bogdan Dobrelya wrote: >>> Hello folks. >>> Here is an update for today. I crated a draft [0], and spend some >>> time with building LaTeX with live-updating for the compiled PDF... >>> The latter is only informational, if someone wants to contribute, >>> please follow the instructions listed by the link (hint: you need no >>> to have any LaTeX experience, only basic markdown knowledge should be >>> enough!) >>> >>> [0] >>> https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors >>> >>> >>> On 10/31/18 6:54 PM, Ildiko Vancsa wrote: >>>> Hi, >>>> >>>> Thank you for sharing your proposal. >>>> >>>> I think this is a very interesting topic with a list of possible >>>> solutions some of which this group is also discussing. It would also >>>> be great to learn more about the IEEE activities and have experience >>>> about the process in this group on the way forward. >>>> >>>> I personally do not have experience with IEEE conferences, but I’m >>>> happy to help with the paper if I can. >>>> >>>> Thanks, >>>> Ildikó >>>> >>>> >>> >>> (added from the parallel thread) >>>>> On 2018. Oct 31., at 19:11, Mike Bayer >>>> zzzcomputing.com> wrote: >>>>> >>>>> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya >>>> redhat.com> wrote: >>>>>> >>>>>> (cross-posting openstack-dev) >>>>>> >>>>>> Hello. 
>>>>>> [tl;dr] I'm looking for co-author(s) to come up with "Edge clouds >>>>>> data >>>>>> consistency requirements and challenges" a position paper [0] (papers >>>>>> submitting deadline is Nov 8). >>>>>> >>>>>> The problem scope is synchronizing control plane and/or >>>>>> deployments-specific data (not necessary limited to OpenStack) across >>>>>> remote Edges and central Edge and management site(s). Including >>>>>> the same >>>>>> aspects for overclouds and undercloud(s), in terms of TripleO; and >>>>>> other >>>>>> deployment tools of your choice. >>>>>> >>>>>> Another problem is to not go into different solutions for Edge >>>>>> deployments management and control planes of edges. And for >>>>>> tenants as >>>>>> well, if we think of tenants also doing Edge deployments based on >>>>>> Edge >>>>>> Data Replication as a Service, say for Kubernetes/OpenShift on top of >>>>>> OpenStack. >>>>>> >>>>>> So the paper should name the outstanding problems, define data >>>>>> consistency requirements and pose possible solutions for >>>>>> synchronization >>>>>> and conflicts resolving. Having maximum autonomy cases supported for >>>>>> isolated sites, with a capability to eventually catch up its >>>>>> distributed >>>>>> state. Like global database [1], or something different perhaps (see >>>>>> causal-real-time consistency model [2],[3]), or even using git. And >>>>>> probably more than that?.. (looking for ideas) >>>>> >>>>> >>>>> I can offer detail on whatever aspects of the "shared  / global >>>>> database" idea.  The way we're doing it with Galera for now is all >>>>> about something simple and modestly effective for the moment, but it >>>>> doesn't have any of the hallmarks of a long-term, canonical solution, >>>>> because Galera is not well suited towards being present on many >>>>> (dozens) of endpoints.     The concept that the StarlingX folks were >>>>> talking about, that of independent databases that are synchronized >>>>> using some kind of middleware is potentially more scalable, however I >>>>> think the best approach would be API-level replication, that is, you >>>>> have a bunch of Keystone services and there is a process that is >>>>> regularly accessing the APIs of these keystone services and >>>>> cross-publishing state amongst all of them.   Clearly the big >>>>> challenge with that is how to resolve conflicts, I think the answer >>>>> would lie in the fact that the data being replicated would be of >>>>> limited scope and potentially consist of mostly or fully >>>>> non-overlapping records. >>>>> >>>>> That is, I think "global database" is a cheap way to get what would be >>>>> more effective as asynchronous state synchronization between identity >>>>> services. >>>> >>>> Recently we’ve been also exploring federation with an IdP (Identity >>>> Provider) master: >>>> https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users >>>> >>>> >>>> One of the pros is that it removes the need for synchronization and >>>> potentially increases scalability. 
>>>> >>>> Thanks, >>>> Ildikó >>> >>> >>> >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From juliaashleykreger at gmail.com Tue Nov 6 19:16:19 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Nov 2018 11:16:19 -0800 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: Message-ID: On Tue, Nov 6, 2018 at 9:19 AM Joshua Cornutt wrote: > > Another approach would be to make the projects "FIPS aware" where we > choose the hashing algorithm based on the system's FIPS-enforcing > state. An example of doing so is what I'm proposing for Django > (another FIPS-related patch that was needed for OSP 13) - > https://github.com/django/django/pull/10605 > > This was the approach I was thinking. We ideally should allow and enable evolution, but we would still need the hard "FIPS 140-2" operating mode flag which would be a hard break for pre-existing clouds whose data and checksum information had not been updated already. Maybe in any process to collect community impact information, we could also suggest projects submit what they perceive an upgrade path to be to take an existing cloud to a FIPS 140-2 enforcing mode of operation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Tue Nov 6 19:21:01 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 6 Nov 2018 14:21:01 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant wrote: > Hi All, > > I'd like to enable py37 unit tests in the gate. > > == Background == > > I work on OpenStack packaging for Ubuntu. During the Rocky release (Ubuntu > Cosmic) I tried to fix py37 bugs upstream as I came across them. There > ended up being a lot of py37 issues and after a while, due to time > constraints, I resorted to just opening bugs and disabling py37 unit tests > that were failing in our package builds. Luckily enough, even though Cosmic > ships with python3.6 and python3.7, python3.6 ended up being chosen as the > default for Cosmic. > > == Defaulting to python3.7 == > > The next release of Ubuntu opens in just a few weeks. It will default to > python3.7 and will not include python3.6. My hope is that if I can help > enable py37 unit tests upstream now, we can get a wider view at fixing > issues soon. > > == Enabling py37 unit tests == > > Ubuntu Bionic (18.04 LTS) has the 3.7.0 interpreter and I have reviews up > to define the py37 zuul job and templates here: > https://review.openstack.org/#/c/609066 > > I'd like to start submitting reviews to projects to enable > openstack-python37-jobs (or variant) for projects that already have > openstack-python36-jobs in their .zuul.yaml, zuul.yaml, > .zuul.d/project.yaml. > > == Coinciding work == > > There is python3-first work going on now and I completely understand that > this is going to cause more work for some projects. It seems that now is as > good of a time as ever to catch up and test with a recent python3 version. > I'm sure python3.8 and beyond will be here before we know it. > > Any thoughts or concerns? > > Thanks, > Corey > I'd like to start moving forward with enabling py37 unit tests for a subset of projects. Rather than putting too much load on infra by enabling 3 x py3 unit tests for every project, this would just focus on enablement of py37 unit tests for a subset of projects in the Stein cycle. 
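As one concrete (and project-agnostic) illustration of the kind of thing these jobs catch: 3.7 makes PEP 479 the default, so a StopIteration escaping a generator, a fairly common pre-3.7 idiom, turns from a quiet end of iteration into a hard failure:

    def relay(iterator):
        # pre-3.7 idiom: rely on the inner StopIteration to end the generator
        while True:
            yield next(iterator)

    # python3.6: prints 1 2 3 and stops (a DeprecationWarning at most)
    # python3.7: prints 1 2 3, then RuntimeError: generator raised StopIteration
    for item in relay(iter([1, 2, 3])):
        print(item)

That sort of breakage only surfaces once the test suite actually runs under 3.7, which is why enabling the jobs now is the cheapest time to find it.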
And just to be clear, I would not be disabling any unit tests (such as py35). I'd just be enabling py37 unit tests. As some background, this ML thread originally led to updating the python3-first governance goal (https://review.openstack.org/#/c/610708/) but has now led back to this ML thread for a +1 rather than updating the governance goal. I'd like to get an official +1 here on the ML from parties such as the TC and infra in particular but anyone else's input would be welcomed too. Obviously individual projects would have the right to reject proposed changes that enable py37 unit tests. Hopefully they wouldn't, of course, but they could individually vote that way. Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcornutt at gmail.com Tue Nov 6 19:22:16 2018 From: jcornutt at gmail.com (Joshua Cornutt) Date: Tue, 6 Nov 2018 14:22:16 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance Message-ID: The downside of this particular approach is that systems that get promoted to "FIPS mode" will get into a sticky situation as the code originally set hashes to use MD5 but then switches to SHA-x after users may have already used MD5 (and thus have that data stored / recalled). The best way really would be make them as configurable options by the user and only baking in decisions for methods that can handle floating between FIPS and non-FIPS modes. From skaplons at redhat.com Tue Nov 6 21:05:49 2018 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 6 Nov 2018 22:05:49 +0100 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: References: Message-ID: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> Hi, Thanks for bringing this up. On Tue, Nov 06, 2018 at 01:12:55PM +0100, Dr. Jens Harbott (frickler) wrote: > Dear OpenStackers, > > earlier this year Ubuntu released their current LTS version 18.04 > codenamed "Bionic Beaver" and we are now facing the task to migrate > our devstack-based jobs to run on Bionic instead of the previous LTS > version 16.04 "Xenial Xerus". > > The last time this has happened two years ago (migration from 14.04 to > 16.04) and at that time it seems the migration was mostly driven by > the Infra team (see [1]), mostly because all of the job configuration > was still centrally hosted in a single repository > (openstack-infra/project-config). In the meantime, however, our CI > setup has been updated to use Zuul v3 and one of the new features that > come with this development is the introduction of per-project job > definitions. > > So this new flexibility requires us to make a choice between the two > possible options we have for migrating jobs now: > > 1) Change the "devstack" base job to run on Bionic instances > instead of Xenial instances > 2) Create new "devstack-bionic" and "tempest-full-bionic" base > jobs and migrate projects piecewise > > Choosing option 1) would cause all projects that base their own jobs > on this job (possibly indirectly by e.g. being based on the > "tempest-full" job) switch automatically. So there would be the > possibility that some jobs would break and require to be fixed before > patches could be merged again in the affected project(s). To > accomodate those risks, QA team can give some time to projects to test > their jobs on Bionic with WIP patches (QA can provide Bionic base job > as WIP patch). This option does not require any pre/post migration > changes on project's jobs. 
> > Choosing option 2) would avoid this by letting projects switch at > their own pace, but create the risk that some projects would never > migrate. It would also make further migrations, like the one expected > to happen when 20.04 is released, either having to follow the same > scheme or re-introduce the unversioned base job. Other point to note > down with this option is, > - project job definitions need to change their parent job from > "devstack" -> "devstack-bionic" or "tempest-full" -> > "tempest-full-bionic" > - QA needs to maintain existing jobs ("devstack", "tempest-full") and > bionic version jobs ("devstack-bionic", "tempest-full-bionic") How about option 1) and also add jobs like "devstack-xenial" and "tempest-full-xenial" which projects can use still for some time if their job on Bionic would be broken now? > > In order to prepare the decision, we have created a set of patches > that test the Bionic > jobs, you can find them under the common topic "devstack-bionic" [2]. > There is also an > etherpad to give a summarized view of the results of these tests [3]. > > Please respond to this mail if you want to promote either of the above > options or > maybe want to propose an even better solution. You can also find us > for discussion > in the #openstack-qa IRC channel on freenode. > > Infra team has tried both approaches during precise->trusty & > trusty->xenial migration[4]. > > Note that this mailing-list itself will soon be migrated, too, so if > you haven't subscribed > to the new list yet, this is a good time to act and avoid missing the > best parts [5]. > > Yours, > Jens (frickler at IRC) > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html > [2] https://review.openstack.org/#/q/topic:devstack-bionic > [3] https://etherpad.openstack.org/p/devstack-bionic > [4] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2018-11-01.log.html#t2018-11-01T12:40:22 > [5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Tue Nov 6 21:25:33 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Nov 2018 21:25:33 +0000 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> Message-ID: <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: [...] > also add jobs like "devstack-xenial" and "tempest-full-xenial" > which projects can use still for some time if their job on Bionic > would be broken now? [...] That opens the door to piecemeal migration, which (as we similarly saw during the Trusty to Xenial switch) will inevitably lead to projects who no longer gate on Xenial being unable to integration test against projects who don't yet support Bionic. At the same time, projects which have switched to Bionic will start merging changes which only work on Bionic without realizing it, so that projects which test on Xenial can't use them. In short, you'll be broken either way. 
On top of that, you can end up with projects that don't get around to switching completely before release comes, and then they're stuck having to manage a test platform transition on a stable branch. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Tue Nov 6 21:51:32 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 6 Nov 2018 22:51:32 +0100 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> Message-ID: <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> Hi, > Wiadomość napisana przez Jeremy Stanley w dniu 06.11.2018, o godz. 22:25: > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: > [...] >> also add jobs like "devstack-xenial" and "tempest-full-xenial" >> which projects can use still for some time if their job on Bionic >> would be broken now? > [...] > > That opens the door to piecemeal migration, which (as we similarly > saw during the Trusty to Xenial switch) will inevitably lead to > projects who no longer gate on Xenial being unable to integration > test against projects who don't yet support Bionic. At the same > time, projects which have switched to Bionic will start merging > changes which only work on Bionic without realizing it, so that > projects which test on Xenial can't use them. In short, you'll be > broken either way. On top of that, you can end up with projects that > don't get around to switching completely before release comes, and > then they're stuck having to manage a test platform transition on a > stable branch. I understand Your point here but will option 2) from first email lead to the same issues then? > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From gmann at ghanshyammann.com Tue Nov 6 22:02:41 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Nov 2018 07:02:41 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: References: Message-ID: <166eb0c91fa.10e8a5be475294.8657485219639890030@ghanshyammann.com> Thanks Jens. As most of the base jobs are in QA repo, QA team will coordinate this migration based on either of the approach mentioned below. Another point to note - This migration will only target the zuulv3 jobs not the legacy jobs. legacy jobs owner should migrate them to bionic when they will be moved to zuulv3 native. Any risk of keeping the legacy on xenial till zullv3 ? Tempest testing patch found that stable queens/pike jobs failing for bionic due to not supported distro in devstack[1]. Fixing in https://review.openstack.org/#/c/616017/ and will backport to pike too. [1] https://review.openstack.org/#/c/611572/ http://logs.openstack.org/72/611572/1/check/tempest-full-queens/7cd3f21/job-output.txt.gz#_2018-11-01_09_57_07_551538 -gmann ---- On Tue, 06 Nov 2018 21:12:55 +0900 Dr. 
Jens Harbott (frickler) wrote ---- > Dear OpenStackers, > > earlier this year Ubuntu released their current LTS version 18.04 > codenamed "Bionic Beaver" and we are now facing the task to migrate > our devstack-based jobs to run on Bionic instead of the previous LTS > version 16.04 "Xenial Xerus". > > The last time this has happened two years ago (migration from 14.04 to > 16.04) and at that time it seems the migration was mostly driven by > the Infra team (see [1]), mostly because all of the job configuration > was still centrally hosted in a single repository > (openstack-infra/project-config). In the meantime, however, our CI > setup has been updated to use Zuul v3 and one of the new features that > come with this development is the introduction of per-project job > definitions. > > So this new flexibility requires us to make a choice between the two > possible options we have for migrating jobs now: > > 1) Change the "devstack" base job to run on Bionic instances > instead of Xenial instances > 2) Create new "devstack-bionic" and "tempest-full-bionic" base > jobs and migrate projects piecewise > > Choosing option 1) would cause all projects that base their own jobs > on this job (possibly indirectly by e.g. being based on the > "tempest-full" job) switch automatically. So there would be the > possibility that some jobs would break and require to be fixed before > patches could be merged again in the affected project(s). To > accomodate those risks, QA team can give some time to projects to test > their jobs on Bionic with WIP patches (QA can provide Bionic base job > as WIP patch). This option does not require any pre/post migration > changes on project's jobs. > > Choosing option 2) would avoid this by letting projects switch at > their own pace, but create the risk that some projects would never > migrate. It would also make further migrations, like the one expected > to happen when 20.04 is released, either having to follow the same > scheme or re-introduce the unversioned base job. Other point to note > down with this option is, > - project job definitions need to change their parent job from > "devstack" -> "devstack-bionic" or "tempest-full" -> > "tempest-full-bionic" > - QA needs to maintain existing jobs ("devstack", "tempest-full") and > bionic version jobs ("devstack-bionic", "tempest-full-bionic") > > In order to prepare the decision, we have created a set of patches > that test the Bionic > jobs, you can find them under the common topic "devstack-bionic" [2]. > There is also an > etherpad to give a summarized view of the results of these tests [3]. > > Please respond to this mail if you want to promote either of the above > options or > maybe want to propose an even better solution. You can also find us > for discussion > in the #openstack-qa IRC channel on freenode. > > Infra team has tried both approaches during precise->trusty & > trusty->xenial migration[4]. > > Note that this mailing-list itself will soon be migrated, too, so if > you haven't subscribed > to the new list yet, this is a good time to act and avoid missing the > best parts [5]. 
> > Yours, > Jens (frickler at IRC) > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html > [2] https://review.openstack.org/#/q/topic:devstack-bionic > [3] https://etherpad.openstack.org/p/devstack-bionic > [4] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2018-11-01.log.html#t2018-11-01T12:40:22 > [5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Tue Nov 6 22:07:30 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Nov 2018 07:07:30 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> Message-ID: <166eb10f8fb.117c25ba675341.116732651371514382@ghanshyammann.com> ---- On Wed, 07 Nov 2018 06:51:32 +0900 Slawomir Kaplonski wrote ---- > Hi, > > > Wiadomość napisana przez Jeremy Stanley w dniu 06.11.2018, o godz. 22:25: > > > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: > > [...] > >> also add jobs like "devstack-xenial" and "tempest-full-xenial" > >> which projects can use still for some time if their job on Bionic > >> would be broken now? > > [...] > > > > That opens the door to piecemeal migration, which (as we similarly > > saw during the Trusty to Xenial switch) will inevitably lead to > > projects who no longer gate on Xenial being unable to integration > > test against projects who don't yet support Bionic. At the same > > time, projects which have switched to Bionic will start merging > > changes which only work on Bionic without realizing it, so that > > projects which test on Xenial can't use them. In short, you'll be > > broken either way. On top of that, you can end up with projects that > > don't get around to switching completely before release comes, and > > then they're stuck having to manage a test platform transition on a > > stable branch. > > I understand Your point here but will option 2) from first email lead to the same issues then? seems so. approach 1 is less risky for such integrated testing issues and requires less work. In approach 1, we can coordinate the base job migration with project side testing with bionic. 
-gmann > > > -- > > Jeremy Stanley > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cboylan at sapwetik.org Tue Nov 6 22:07:33 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Nov 2018 14:07:33 -0800 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <166eb0c91fa.10e8a5be475294.8657485219639890030@ghanshyammann.com> References: <166eb0c91fa.10e8a5be475294.8657485219639890030@ghanshyammann.com> Message-ID: <1541542053.4055464.1568014040.372F3884@webmail.messagingengine.com> On Tue, Nov 6, 2018, at 2:02 PM, Ghanshyam Mann wrote: > Thanks Jens. > > As most of the base jobs are in QA repo, QA team will coordinate this > migration based on either of the approach mentioned below. > > Another point to note - This migration will only target the zuulv3 jobs > not the legacy jobs. legacy jobs owner should migrate them to bionic > when they will be moved to zuulv3 native. Any risk of keeping the legacy > on xenial till zullv3 ? > > Tempest testing patch found that stable queens/pike jobs failing for > bionic due to not supported distro in devstack[1]. Fixing in > https://review.openstack.org/#/c/616017/ and will backport to pike too. The existing stable branches should continue to test on xenial as that is what they were built on. We aren't asking that everything be ported forward to bionic. Instead the idea is that current development (aka master) switch to bionic and roll forward from that point. This applies to tempest jobs, functional jobs, and unittests, etc. Xenial isn't going away. It is there for the stable branches. > > [1] https://review.openstack.org/#/c/611572/ > > http://logs.openstack.org/72/611572/1/check/tempest-full-queens/7cd3f21/job-output.txt.gz#_2018-11-01_09_57_07_551538 > > > -gmann From gmann at ghanshyammann.com Tue Nov 6 22:15:40 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Nov 2018 07:15:40 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <1541542053.4055464.1568014040.372F3884@webmail.messagingengine.com> References: <166eb0c91fa.10e8a5be475294.8657485219639890030@ghanshyammann.com> <1541542053.4055464.1568014040.372F3884@webmail.messagingengine.com> Message-ID: <166eb187444.10cf316c475405.6656320973643343386@ghanshyammann.com> ---- On Wed, 07 Nov 2018 07:07:33 +0900 Clark Boylan wrote ---- > On Tue, Nov 6, 2018, at 2:02 PM, Ghanshyam Mann wrote: > > Thanks Jens. > > > > As most of the base jobs are in QA repo, QA team will coordinate this > > migration based on either of the approach mentioned below. > > > > Another point to note - This migration will only target the zuulv3 jobs > > not the legacy jobs. legacy jobs owner should migrate them to bionic > > when they will be moved to zuulv3 native. Any risk of keeping the legacy > > on xenial till zullv3 ? 
> > > > Tempest testing patch found that stable queens/pike jobs failing for > > bionic due to not supported distro in devstack[1]. Fixing in > > https://review.openstack.org/#/c/616017/ and will backport to pike too. > > The existing stable branches should continue to test on xenial as that is what they were built on. We aren't asking that everything be ported forward to bionic. Instead the idea is that current development (aka master) switch to bionic and roll forward from that point. Make sense. Thanks. We can keep stable branch jobs running on Tempest master on xenial only. -gmann > > This applies to tempest jobs, functional jobs, and unittests, etc. Xenial isn't going away. It is there for the stable branches. > > > > > [1] https://review.openstack.org/#/c/611572/ > > > > http://logs.openstack.org/72/611572/1/check/tempest-full-queens/7cd3f21/job-output.txt.gz#_2018-11-01_09_57_07_551538 > > > > > > -gmann > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From phanquochien at gmail.com Tue Nov 6 22:51:43 2018 From: phanquochien at gmail.com (Hien Phan) Date: Wed, 7 Nov 2018 05:51:43 +0700 Subject: [openstack-dev] [openstack-community] Asking for suggestion of video conference tool for team and webinar In-Reply-To: References: Message-ID: Hello, You can try Apache OpenMeetings, it's good enough! Cheers, Hien. On Tue, Nov 6, 2018 at 8:36 PM Trinh Nguyen wrote: > Hi, > > I tried several free tools for team meetings, vPTG, and webinars but there > are always drawbacks ( because it's free, of course). For example: > > - Google Hangout: only allow a maximum of 10 people to do the video > calls > - Zoom: limits about 45m per meeting. So for a webinar or conference > call takes more than 45m we have to splits into 2 sessions or so. > > If anyone in the community can suggest some better video conferencing > tools, that would be great. So we can organize better communication for our > team and the community's webinars. > > Many thanks, > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rochelle.grober at huawei.com Tue Nov 6 23:24:19 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Tue, 6 Nov 2018 23:24:19 +0000 Subject: [openstack-dev] [Openstack-operators] [Openstack-sigs] Dropping lazy translation support In-Reply-To: <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com> References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com> Message-ID: I seem to recall list discussion on this quite a ways back. I think most of it happened on the Docs ml, though. Maybe Juno/Kilo timeframe? If possible, it would be good to search over the code bases for places it was called to see its current footprint. I'm pretty sure it was the docs folks working with the oslo folks to make it work. But then the question was put to the ops folks about translations of logs (maybe the New York midcycle) and ops don't use translation. The ops input was broadcast to dev and docs and most efforts stopped at that point. 
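For anyone who needs a refresher on what "lazy" means in code terms: with lazy mode enabled, _() returns a Message object and the actual gettext lookup is deferred until the string is serialized, which is what allows API responses to be translated per-request. A minimal sketch, assuming oslo.i18n still exposes the hooks it documents today, with 'myservice' standing in for a real translation domain:

    import oslo_i18n

    # Turn on lazy mode early, before translated messages are created.
    oslo_i18n.enable_lazy()

    _ = oslo_i18n.TranslatorFactory(domain='myservice').primary

    msg = _('Resource %(id)s not found') % {'id': 'abc123'}
    # msg is a Message object rather than a plain str; the actual
    # catalog lookup only happens at translate/serialization time:
    print(oslo_i18n.translate(msg, 'de'))

Most of the cost of keeping the feature alive comes from everything downstream having to cope with those Message objects instead of plain strings.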
But, I believe some projects had already done some work on lazy translation. I suspect the amount done, though was pretty low. Maybe the fastest way to get info would be to turn it off and see where the code barfs in a long run (to catch as many projects as possible)? --rocky > From: Ben Nemec > Sent: Monday, November 05, 2018 1:40 PM > > On 11/5/18 3:13 PM, Matt Riedemann wrote: > > On 11/5/2018 1:36 PM, Doug Hellmann wrote: > >> I think the lazy stuff was all about the API responses. The log > >> translations worked a completely different way. > > > > Yeah maybe. And if so, I came across this in one of the blueprints: > > > > https://etherpad.openstack.org/p/disable-lazy-translation > > > > Which says that because of a critical bug, the lazy translation was > > disabled in Havana to be fixed in Icehouse but I don't think that ever > > happened before IBM developers dropped it upstream, which is further > > justification for nuking this code from the various projects. > > > > It was disabled last-minute, but I'm pretty sure it was turned back on (hence > why we're hitting issues today). I still see coercion code in oslo.log that was > added to fix the problem[1] (I think). I could be wrong about that since this > code has undergone significant changes over the years, but it looks to me like > we're still forcing things to be unicode.[2] > > 1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py > 2: > https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38 > d72106ea0b8b8f/oslo_log/formatters.py#L414 > From tony at bakeyournoodle.com Tue Nov 6 23:39:11 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 7 Nov 2018 10:39:11 +1100 Subject: [openstack-dev] Asking for suggestion of video conference tool for team and webinar In-Reply-To: References: Message-ID: <20181106233911.GD20576@thor.bakeyournoodle.com> On Tue, Nov 06, 2018 at 06:07:18AM -0800, Julia Kreger wrote: > I don't know how the cost structure works for Bluejeans, but I've found it > works very well for calls with many people on video. I typically have calls > with 14+ people nearly everyone has their video enabled without a problem. Just for the record it has a limit of 100 connections[1] before you need to use 'primetime'. Yours Tony. [1] It's kinda sad that I've hit that :( -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From doug at doughellmann.com Tue Nov 6 23:45:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 06 Nov 2018 18:45:45 -0500 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <166eb10f8fb.117c25ba675341.116732651371514382@ghanshyammann.com> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> <166eb10f8fb.117c25ba675341.116732651371514382@ghanshyammann.com> Message-ID: Ghanshyam Mann writes: > ---- On Wed, 07 Nov 2018 06:51:32 +0900 Slawomir Kaplonski wrote ---- > > Hi, > > > > > Wiadomość napisana przez Jeremy Stanley w dniu 06.11.2018, o godz. 22:25: > > > > > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: > > > [...] > > >> also add jobs like "devstack-xenial" and "tempest-full-xenial" > > >> which projects can use still for some time if their job on Bionic > > >> would be broken now? > > > [...] 
> > > > > > That opens the door to piecemeal migration, which (as we similarly > > > saw during the Trusty to Xenial switch) will inevitably lead to > > > projects who no longer gate on Xenial being unable to integration > > > test against projects who don't yet support Bionic. At the same > > > time, projects which have switched to Bionic will start merging > > > changes which only work on Bionic without realizing it, so that > > > projects which test on Xenial can't use them. In short, you'll be > > > broken either way. On top of that, you can end up with projects that > > > don't get around to switching completely before release comes, and > > > then they're stuck having to manage a test platform transition on a > > > stable branch. > > > > I understand Your point here but will option 2) from first email lead to the same issues then? > > seems so. approach 1 is less risky for such integrated testing issues and requires less work. In approach 1, we can coordinate the base job migration with project side testing with bionic. > > -gmann I like the approach of updating the devstack jobs to move everything to Bionic at one time because it sounds like it presents less risk of us ending up with something that looks like it works together but doesn't actually because it's tested on a different platform, as well as being less likely to cause us to have to do major porting work in stable branches after the release. We'll need to take the same approach when updating the version of Python 3 used inside of devstack. Doug From openstack at fried.cc Tue Nov 6 23:53:28 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 6 Nov 2018 17:53:28 -0600 Subject: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker In-Reply-To: References: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc> <007518d4-daed-4065-7044-40443564f6cb@gmail.com> Message-ID: <964c6579-52c7-ce1f-cb90-3809410a4c64@fried.cc> I do intend to respond to all the excellent discussion on this thread, but right now I just want to offer an update on the code: I've split the effort apart into multiple changes starting at [1]. A few of these are ready for review. One opinion was that a specless blueprint would be appropriate. If there's consensus on this, I'll spin one up. [1] https://review.openstack.org/#/c/615606/ On 11/5/18 03:16, Belmiro Moreira wrote: > Thanks Eric for the patch. > This will help keeping placement calls under control. > > Belmiro > > > On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes > wrote: > > On 11/02/2018 03:22 PM, Eric Fried wrote: > > All- > > > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > > whereby you can set [compute]resource_provider_association_refresh to > > zero and the resource tracker will never* refresh the report client's > > provider cache. Philosophically, we're removing the "healing" > aspect of > > the resource tracker's periodic and trusting that placement won't > > diverge from whatever's in our cache. (If it does, it's because the op > > hit the CLI, in which case they should SIGHUP - see below.) > > > > *except: > > - When we initially create the compute node record and bootstrap its > > resource provider. > > - When the virt driver's update_provider_tree makes a change, > > update_from_provider_tree reflects them in the cache as well as > pushing > > them back to placement. > > - If update_from_provider_tree fails, the cache is cleared and gets > > rebuilt on the next periodic. 
> > - If you send SIGHUP to the compute process, the cache is cleared. > > > > This should dramatically reduce the number of calls to placement from > > the compute service. Like, to nearly zero, unless something is > actually > > changing. > > > > Can I get some initial feedback as to whether this is worth > polishing up > > into something real? (It will probably need a bp/spec if so.) > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03 > > [2] https://review.openstack.org/#/c/614886/ > > > > ========== > > Background > > ========== > > In the Queens release, our friends at CERN noticed a serious spike in > > the number of requests to placement from compute nodes, even in a > > stable-state cloud. Given that we were in the process of adding a > ton of > > infrastructure to support sharing and nested providers, this was not > > unexpected. Roughly, what was previously: > > > >   @periodic_task: > >       GET /resource_providers/$compute_uuid > >       GET /resource_providers/$compute_uuid/inventories > > > > became more like: > > > >   @periodic_task: > >       # In Queens/Rocky, this would still just return the compute RP > >       GET /resource_providers?in_tree=$compute_uuid > >       # In Queens/Rocky, this would return nothing > >       GET /resource_providers?member_of=...&required=MISC_SHARES... > >       for each provider returned above:  # i.e. just one in Q/R > >           GET /resource_providers/$compute_uuid/inventories > >           GET /resource_providers/$compute_uuid/traits > >           GET /resource_providers/$compute_uuid/aggregates > > > > In a cloud the size of CERN's, the load wasn't acceptable. But at the > > time, CERN worked around the problem by disabling refreshing entirely. > > (The fact that this seems to have worked for them is an > encouraging sign > > for the proposed code change.) > > > > We're not actually making use of most of that information, but it sets > > the stage for things that we're working on in Stein and beyond, like > > multiple VGPU types, bandwidth resource providers, accelerators, NUMA, > > etc., so removing/reducing the amount of information we look at isn't > > really an option strategically. > > I support your idea of getting rid of the periodic refresh of the cache > in the scheduler report client. Much of that was added in order to > emulate the original way the resource tracker worked. > > Most of the behaviour in the original resource tracker (and some of the > code still in there for dealing with (surprise!) PCI passthrough > devices > and NUMA topology) was due to doing allocations on the compute node > (the > whole claims stuff). We needed to always be syncing the state of the > compute_nodes and pci_devices table in the cell database with whatever > usage information was being created/modified on the compute nodes [0]. > > All of the "healing" code that's in the resource tracker was basically > to deal with "soft delete", migrations that didn't complete or work > properly, and, again, to handle allocations becoming out-of-sync > because > the compute nodes were responsible for allocating (as opposed to the > current situation we have where the placement service -- via the > scheduler's call to claim_resources() -- is responsible for allocating > resources [1]). 
> > Now that we have generation markers protecting both providers and > consumers, we can rely on those generations to signal to the scheduler > report client that it needs to pull fresh information about a provider > or consumer. So, there's really no need to automatically and blindly > refresh any more. > > Best, > -jay > > [0] We always need to be syncing those tables because those tables, > unlike the placement database's data modeling, couple both inventory > AND > usage in the same table structure... > > [1] again, except for PCI devices and NUMA topology, because of the > tight coupling in place with the different resource trackers those > types > of resources use... > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Wed Nov 7 00:28:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Nov 2018 18:28:14 -0600 Subject: [openstack-dev] [Openstack-operators] [Openstack-sigs] Dropping lazy translation support In-Reply-To: References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com> Message-ID: <5894b2cd-310c-04b4-1c90-859812bd47c9@gmail.com> On 11/6/2018 5:24 PM, Rochelle Grober wrote: > Maybe the fastest way to get info would be to turn it off and see where the code barfs in a long run (to catch as many projects as possible)? There is zero integration testing for lazy translation, so "turning it off and seeing what breaks" wouldn't result in anything breaking. -- Thanks, Matt From mriedemos at gmail.com Wed Nov 7 00:35:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Nov 2018 18:35:04 -0600 Subject: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <53c663d2-7e0f-9a50-4d4b-7ff11ebb02c6@gmail.com> After hacking on the PoC for awhile [1] I have finally pushed up a spec [2]. Behold it in all its dark glory! [1] https://review.openstack.org/#/c/603930/ [2] https://review.openstack.org/#/c/616037/ On 8/22/2018 8:23 PM, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The > main issue in there right now is dealing with cross-cell cold migration > in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would > like to move users off the old flavors/hardware (old cell) to new > flavors in a new cell. > > * There is network isolation between compute hosts in different cells, > so no ssh'ing the disk around like we do today. But the image service is > global to all cells. > > Based on this, for the initial support for cross-cell cold migration, I > am proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and > unshelve in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). 
> > There are, of course, some complications. The main ones that I need help > with right now are what happens with volumes and ports attached to the > server. Today we detach from the source and attach at the target, but > that's assuming the storage backend and network are available to both > hosts involved in the move of the server. Will that be the case across > cells? I am assuming that depends on the network topology (are routed > networks being used?) and storage backend (routed storage?). If the > network and/or storage backend are not available across cells, how do we > migrate volumes and ports? Cinder has a volume migrate API for admins > but I do not know how nova would know the proper affinity per-cell to > migrate the volume to the proper host (cinder does not have a routed > storage concept like routed provider networks in neutron, correct?). And > as far as I know, there is no such thing as port migration in Neutron. > > Could Placement help with the volume/port migration stuff? Neutron > routed provider networks rely on placement aggregates to schedule the VM > to a compute host in the same network segment as the port used to create > the VM, however, if that segment does not span cells we are kind of > stuck, correct? > > To summarize the issues as I see them (today): > > * How to deal with the targeted cell during scheduling? This is so we > can even get out of the source cell in nova. > > * How does the API deal with the same instance being in two DBs at the > same time during the move? > > * How to handle revert resize? > > * How are volumes and ports handled? > > I can get feedback from my company's operators based on what their > deployment will look like for this, but that does not mean it will work > for others, so I need as much feedback from operators, especially those > running with multiple cells today, as possible. Thanks in advance. > > [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells > -- Thanks, Matt From dangtrinhnt at gmail.com Wed Nov 7 00:57:55 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 7 Nov 2018 09:57:55 +0900 Subject: [openstack-dev] Asking for suggestion of video conference tool for team and webinar In-Reply-To: <20181106233911.GD20576@thor.bakeyournoodle.com> References: <20181106233911.GD20576@thor.bakeyournoodle.com> Message-ID: Thanks everyone for the suggestions :) I will try those. On Wed, Nov 7, 2018 at 8:39 AM Tony Breeds wrote: > On Tue, Nov 06, 2018 at 06:07:18AM -0800, Julia Kreger wrote: > > I don't know how the cost structure works for Bluejeans, but I've found > it > > works very well for calls with many people on video. I typically have > calls > > with 14+ people nearly everyone has their video enabled without a > problem. > > Just for the record it has a limit of 100 connections[1] before you need to > use 'primetime'. > > Yours Tony. > > [1] It's kinda sad that I've hit that :( > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liliueecg at gmail.com Wed Nov 7 03:07:34 2018 From: liliueecg at gmail.com (Li Liu) Date: Tue, 6 Nov 2018 22:07:34 -0500 Subject: [openstack-dev] [cyborg]Weekly Meeting Message-ID: Hi Team, We will have our regular weekly meetings tomorrow at usual time UTC 1400. Please be aware that North America has just shifted to winter time this week, which mean it would be 10 pm Beijing time, 6 am PST and 9 am EST. We probably will adjust the time a little bit starting next week. The agenda for this week's meeting is as follows 1. Status Update 2. DB evolution work assignments -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschadin at sbcloud.ru Wed Nov 7 07:23:17 2018 From: aschadin at sbcloud.ru (=?koi8-r?B?/sHEyc4g4czFy9PBzsTSIPPF0sfFxdfJ3g==?=) Date: Wed, 7 Nov 2018 07:23:17 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: <24BD8295-5F7A-454C-8477-9950686AF28B@sbcloud.ru> Team, We will have a meeting today at 8:00 UTC on openstack-meeting-alt channel. Alex From thierry at openstack.org Wed Nov 7 08:54:12 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Nov 2018 09:54:12 +0100 Subject: [openstack-dev] Asking for suggestion of video conference tool for team and webinar In-Reply-To: References: Message-ID: Trinh Nguyen wrote: > I tried several free tools for team meetings, vPTG, and webinars but > there are always drawbacks ( because it's free, of course). For example: > > * Google Hangout: only allow a maximum of 10 people to do the video calls > * Zoom: limits about 45m per meeting. So for a webinar or conference > call takes more than 45m we have to splits into 2 sessions or so. > > If anyone in the community can suggest some better video conferencing > tools, that would be great. So we can organize better communication for > our team and the community's webinars. Jitsi meet is an open source + free as in beer solution based on WebRTC: https://meet.jit.si/ No account needed, no participant limit. -- Thierry Carrez (ttx) From lajos.katona at ericsson.com Wed Nov 7 10:12:31 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Wed, 7 Nov 2018 10:12:31 +0000 Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <20181106021919.GB20576@thor.bakeyournoodle.com> References: <20181030054024.GC2343@thor.bakeyournoodle.com> <20181106021919.GB20576@thor.bakeyournoodle.com> Message-ID: <9676c8e5-8b8d-2b67-e911-ae1a62801864@ericsson.com> Hi, Maybe I missed something but I got the message: "Already voted A vote has already been cast using your voter key. Poll results will be available only to the following users: tony at bakeyournoodle.com" Could you help me to have a correct link? Regards Lajos On 2018. 11. 06. 3:19, Tony Breeds wrote: Hi all, Time is running out for you to have your say in the T release name poll. We have just under 3 days left. If you haven't voted please do! On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote: Hi folks, It is time again to cast your vote for the naming of the T Release. As with last time we'll use a public polling option over per user private URLs for voting. This means, everybody should proceed to use the following URL to cast their vote: https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e We've selected a public poll to ensure that the whole community, not just gerrit change owners get a vote. 
Also the size of our community has grown such that we can overwhelm CIVS if using private urls. A public can mean that users behind NAT, proxy servers or firewalls may receive an message saying that your vote has already been lodged, if this happens please try another IP. Because this is a public poll, results will currently be only viewable by myself until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running. The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be posted shortly after. [1] https://governance.openstack.org/tc/reference/release-naming.html --- According to the Release Naming Process, this poll is to determine the community preferences for the name of the T release of OpenStack. It is possible that the top choice is not viable for legal reasons, so the second or later community preference could wind up being the name. Release Name Criteria --------------------- Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. Exact Geographic Region ----------------------- The Geographic Region from where names for the S release will come is Colorado Proposed Names -------------- * Tarryall * Teakettle * Teller * Telluride * Thomas : the Tank Engine * Thornton * Tiger * Tincup * Timnath * Timber * Tiny Town * Torreys * Trail * Trinidad * Treasure * Troublesome * Trussville * Turret * Tyrone Proposed Names that do not meet the criteria (accepted by the TC) ----------------------------------------------------------------- * Train🚂 : Many Attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name Yours Tony. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yours Tony. 
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Wed Nov 7 10:20:07 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 07 Nov 2018 11:20:07 +0100 Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <9676c8e5-8b8d-2b67-e911-ae1a62801864@ericsson.com> References: <20181030054024.GC2343@thor.bakeyournoodle.com> <20181106021919.GB20576@thor.bakeyournoodle.com> <9676c8e5-8b8d-2b67-e911-ae1a62801864@ericsson.com> Message-ID: <1541586007.943296.1568537056.255BBABA@webmail.messagingengine.com> On Wed, Nov 7, 2018, at 11:12 AM, Lajos Katona wrote: > Hi, > > Maybe I missed something but I got the message: "Already voted > A vote has already been cast using your voter key. > > Poll results will be available only to the following users: > > tony at bakeyournoodle.com" > > Could you help me to have a correct link? The public polling service limits voting to one vote per IP address. If someone in your office space has already voted, the poll won't let anyone else in the office vote. You need to find a different public IP address to vote from, either by tunneling through a proxy or physically going somewhere else with a different network. Colleen > > Regards > Lajos > > On 2018. 11. 06. 3:19, Tony Breeds wrote: > > > Hi all, > > Time is running out for you to have your say in the T release name > poll. We have just under 3 days left. If you haven't voted please do! > > On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote: > > > Hi folks, > > It is time again to cast your vote for the naming of the T Release. > As with last time we'll use a public polling option over per user private URLs > for voting. This means, everybody should proceed to use the following URL to > cast their vote: > > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e > > We've selected a public poll to ensure that the whole community, not just gerrit > change owners get a vote. Also the size of our community has grown such that we > can overwhelm CIVS if using private urls. A public can mean that users > behind NAT, proxy servers or firewalls may receive an message saying > that your vote has already been lodged, if this happens please try > another IP. > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-11-08 00:00:00+00:00[1], and > results will be > posted shortly after. > > [1] https://governance.openstack.org/tc/reference/release-naming.html > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the T release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. 
> > Release Name Criteria > --------------------- > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > ----------------------- > > The Geographic Region from where names for the S release will come is Colorado > > Proposed Names > -------------- > > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas : the Tank Engine > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria (accepted by the TC) > ----------------------------------------------------------------- > > * Train🚂 : Many Attendees of the first Denver PTG have a story to tell > about the trains near the PTG hotel. We could celebrate those stories > with this name > > Yours Tony. > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org? > subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Yours Tony. > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org? 
> subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lajos.katona at ericsson.com Wed Nov 7 10:22:21 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Wed, 7 Nov 2018 10:22:21 +0000 Subject: [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <1541586007.943296.1568537056.255BBABA@webmail.messagingengine.com> References: <20181030054024.GC2343@thor.bakeyournoodle.com> <20181106021919.GB20576@thor.bakeyournoodle.com> <9676c8e5-8b8d-2b67-e911-ae1a62801864@ericsson.com> <1541586007.943296.1568537056.255BBABA@webmail.messagingengine.com> Message-ID: Hi, Thanks, I check via mobile net than. Regards Lajos On 2018. 11. 07. 11:20, Colleen Murphy wrote: > On Wed, Nov 7, 2018, at 11:12 AM, Lajos Katona wrote: >> Hi, >> >> Maybe I missed something but I got the message: "Already voted >> A vote has already been cast using your voter key. >> >> Poll results will be available only to the following users: >> >> tony at bakeyournoodle.com" >> >> Could you help me to have a correct link? > The public polling service limits voting to one vote per IP address. If someone in your office space has already voted, the poll won't let anyone else in the office vote. You need to find a different public IP address to vote from, either by tunneling through a proxy or physically going somewhere else with a different network. > > Colleen > >> Regards >> Lajos >> >> On 2018. 11. 06. 3:19, Tony Breeds wrote: >> >> >> Hi all, >> >> Time is running out for you to have your say in the T release name >> poll. We have just under 3 days left. If you haven't voted please do! >> >> On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote: >> >> >> Hi folks, >> >> It is time again to cast your vote for the naming of the T Release. >> As with last time we'll use a public polling option over per user private URLs >> for voting. This means, everybody should proceed to use the following URL to >> cast their vote: >> >> >> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e >> >> We've selected a public poll to ensure that the whole community, not just gerrit >> change owners get a vote. Also the size of our community has grown such that we >> can overwhelm CIVS if using private urls. A public can mean that users >> behind NAT, proxy servers or firewalls may receive an message saying >> that your vote has already been lodged, if this happens please try >> another IP. >> >> Because this is a public poll, results will currently be only viewable by myself >> until the poll closes. Once closed, I'll post the URL making the results >> viewable to everybody. This was done to avoid everybody seeing the results while >> the public poll is running. >> >> The poll will officially end on 2018-11-08 00:00:00+00:00[1], and >> results will be >> posted shortly after. >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> --- >> >> According to the Release Naming Process, this poll is to determine the >> community preferences for the name of the T release of OpenStack. It is >> possible that the top choice is not viable for legal reasons, so the second or >> later community preference could wind up being the name. 
>> >> Release Name Criteria >> --------------------- >> >> Each release name must start with the letter of the ISO basic Latin alphabet >> following the initial letter of the previous release, starting with the >> initial release of "Austin". After "Z", the next name should start with >> "A" again. >> >> The name must be composed only of the 26 characters of the ISO basic Latin >> alphabet. Names which can be transliterated into this character set are also >> acceptable. >> >> The name must refer to the physical or human geography of the region >> encompassing the location of the OpenStack design summit for the >> corresponding release. The exact boundaries of the geographic region under >> consideration must be declared before the opening of nominations, as part of >> the initiation of the selection process. >> >> The name must be a single word with a maximum of 10 characters. Words that >> describe the feature should not be included, so "Foo City" or "Foo Peak" >> would both be eligible as "Foo". >> >> Names which do not meet these criteria but otherwise sound really cool >> should be added to a separate section of the wiki page and the TC may make >> an exception for one or more of them to be considered in the Condorcet poll. >> The naming official is responsible for presenting the list of exceptional >> names for consideration to the TC before the poll opens. >> >> Exact Geographic Region >> ----------------------- >> >> The Geographic Region from where names for the S release will come is Colorado >> >> Proposed Names >> -------------- >> >> * Tarryall >> * Teakettle >> * Teller >> * Telluride >> * Thomas : the Tank Engine >> * Thornton >> * Tiger >> * Tincup >> * Timnath >> * Timber >> * Tiny Town >> * Torreys >> * Trail >> * Trinidad >> * Treasure >> * Troublesome >> * Trussville >> * Turret >> * Tyrone >> >> Proposed Names that do not meet the criteria (accepted by the TC) >> ----------------------------------------------------------------- >> >> * Train🚂 : Many Attendees of the first Denver PTG have a story to tell >> about the trains near the PTG hotel. We could celebrate those stories >> with this name >> >> Yours Tony. >> >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org? >> subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> Yours Tony. >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org? 
>> subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaosorior at redhat.com Wed Nov 7 12:28:52 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Wed, 7 Nov 2018 14:28:52 +0200 Subject: [openstack-dev] [tripleo] CI is broken In-Reply-To: References: Message-ID: <2a77baf3-c7cf-37d6-e068-d04df7b28380@redhat.com> Hello folks, Please do not attempt to merge or recheck patches until we get this sorted out. We are dealing with several issues that have broken all jobs. https://bugs.launchpad.net/tripleo/+bug/1801769 https://bugs.launchpad.net/tripleo/+bug/1801969 https://bugs.launchpad.net/tripleo/+bug/1802083 https://bugs.launchpad.net/tripleo/+bug/1802085 Best Regards! From doug at doughellmann.com Wed Nov 7 12:30:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 07:30:02 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: Message-ID: Joshua Cornutt writes: > Doug, > > I have such a list put together (my various installation documents for > getting these clouds working in FIPS mode) but it's hardly ready for > public consumption. I planned on releasing each bit as a code change > and/or bug ticket and letting the community consume it as it figures > some of these things out. It's likely that the overall migration will go better if we all have the full context. So I hope you can find some time to publish some of the information you've compiled to help with that. > I agree that some changes may break backwards compatibility (such as > Glance's image checksumming), but one approach I think could ease the > transition would be the approach I took for SSH key pair > fingerprinting (also MD5-based, as is Glance image checksums) found > here - https://review.openstack.org/#/c/615460/ . This allows > administrators to choose, hopefully at deployment time, the hashing > algorithm with the default of being the existing MD5 algorithm. That certainly seems like it would provide the most compatibility in the short term. That said, I honestly don't know the best approach for us to take. We're going to need people who understand the issues around FIPS and the issues around maintaining backwards-compatibility to work together to create a recommended approach. Perhaps a few of the folks on this thread would be interested in forming a team to work on that? Doug > Another approach would be to make the projects "FIPS aware" where we > choose the hashing algorithm based on the system's FIPS-enforcing > state. 
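For the sake of discussion, a rough sketch of how those two ideas could fit
together -- an operator-configurable algorithm that keeps MD5 as the
backwards-compatible default, plus a "FIPS aware" fallback keyed off the
host's enforcing state -- might look like the following. The option handling
and the /proc check are illustrative assumptions, not code taken from the
patches under review:

import hashlib

# Illustrative only: the configuration plumbing and the FIPS detection
# here are assumptions, not the mechanism used in the proposed patches.
FIPS_SYSCTL = '/proc/sys/crypto/fips_enabled'

def fips_mode_enabled():
    # On Linux, a value of "1" in this file means the kernel is
    # enforcing FIPS mode.
    try:
        with open(FIPS_SYSCTL) as f:
            return f.read().strip() == '1'
    except IOError:
        return False

def fingerprint_algorithm(configured=None):
    # Approach 1: let the operator pick the algorithm at deployment
    # time, defaulting to MD5 so existing fingerprints keep matching.
    if configured:
        return configured
    # Approach 2: be "FIPS aware" and switch to an approved digest when
    # the host enforces FIPS mode.
    return 'sha256' if fips_mode_enabled() else 'md5'

def fingerprint(data, configured=None):
    return hashlib.new(fingerprint_algorithm(configured), data).hexdigest()

print(fingerprint(b'example public key blob'))

Whether the default should ever flip automatically, or only ever change by
explicit operator choice, is exactly the kind of compatibility question a
small group could settle in one place.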
An example of doing so is what I'm proposing for Django > (another FIPS-related patch that was needed for OSP 13) - > https://github.com/django/django/pull/10605 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Nov 7 12:37:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 07:37:06 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: Corey Bryant writes: > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant > wrote: > > I'd like to start moving forward with enabling py37 unit tests for a subset > of projects. Rather than putting too much load on infra by enabling 3 x py3 > unit tests for every project, this would just focus on enablement of py37 > unit tests for a subset of projects in the Stein cycle. And just to be > clear, I would not be disabling any unit tests (such as py35). I'd just be > enabling py37 unit tests. > > As some background, this ML thread originally led to updating the > python3-first governance goal (https://review.openstack.org/#/c/610708/) > but has now led back to this ML thread for a +1 rather than updating the > governance goal. > > I'd like to get an official +1 here on the ML from parties such as the TC > and infra in particular but anyone else's input would be welcomed too. > Obviously individual projects would have the right to reject proposed > changes that enable py37 unit tests. Hopefully they wouldn't, of course, > but they could individually vote that way. > > Thanks, > Corey This seems like a good way to start. It lets us make incremental progress while we take the time to think about the python version management question more broadly. We can come back to the other projects to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. Doug From mnaser at vexxhost.com Wed Nov 7 12:47:53 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 7 Nov 2018 13:47:53 +0100 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann wrote: > > Corey Bryant writes: > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant > > wrote: > > > > I'd like to start moving forward with enabling py37 unit tests for a subset > > of projects. Rather than putting too much load on infra by enabling 3 x py3 > > unit tests for every project, this would just focus on enablement of py37 > > unit tests for a subset of projects in the Stein cycle. And just to be > > clear, I would not be disabling any unit tests (such as py35). I'd just be > > enabling py37 unit tests. > > > > As some background, this ML thread originally led to updating the > > python3-first governance goal (https://review.openstack.org/#/c/610708/) > > but has now led back to this ML thread for a +1 rather than updating the > > governance goal. > > > > I'd like to get an official +1 here on the ML from parties such as the TC > > and infra in particular but anyone else's input would be welcomed too. > > Obviously individual projects would have the right to reject proposed > > changes that enable py37 unit tests. Hopefully they wouldn't, of course, > > but they could individually vote that way. > > > > Thanks, > > Corey > > This seems like a good way to start. 
It lets us make incremental > progress while we take the time to think about the python version > management question more broadly. We can come back to the other projects > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. What's the impact on the number of consumption in upstream CI node usage? > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From j.harbott at x-ion.de Wed Nov 7 13:07:06 2018 From: j.harbott at x-ion.de (Dr. Jens Harbott (frickler)) Date: Wed, 7 Nov 2018 13:07:06 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: 2018-11-07 12:47 GMT+00:00 Mohammed Naser : > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann wrote: >> >> Corey Bryant writes: >> >> > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant >> > wrote: >> > >> > I'd like to start moving forward with enabling py37 unit tests for a subset >> > of projects. Rather than putting too much load on infra by enabling 3 x py3 >> > unit tests for every project, this would just focus on enablement of py37 >> > unit tests for a subset of projects in the Stein cycle. And just to be >> > clear, I would not be disabling any unit tests (such as py35). I'd just be >> > enabling py37 unit tests. >> > >> > As some background, this ML thread originally led to updating the >> > python3-first governance goal (https://review.openstack.org/#/c/610708/) >> > but has now led back to this ML thread for a +1 rather than updating the >> > governance goal. >> > >> > I'd like to get an official +1 here on the ML from parties such as the TC >> > and infra in particular but anyone else's input would be welcomed too. >> > Obviously individual projects would have the right to reject proposed >> > changes that enable py37 unit tests. Hopefully they wouldn't, of course, >> > but they could individually vote that way. >> > >> > Thanks, >> > Corey >> >> This seems like a good way to start. It lets us make incremental >> progress while we take the time to think about the python version >> management question more broadly. We can come back to the other projects >> to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. > > What's the impact on the number of consumption in upstream CI node usage? I think the relevant metric here will be nodes_used * time_used. nodes_used will increase by one, time_used for usual unit test jobs seems to be < 10 minutes, so I'd think that the total increase in CI usage should be neglegible compared to full tempest or similar jobs that take 1-2 hours. From zigo at debian.org Wed Nov 7 13:40:39 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 7 Nov 2018 14:40:39 +0100 Subject: [openstack-dev] [barbican] No ca_file in the KeystonePassword class Message-ID: <881815b5-4b1c-4294-9a56-26c79b045489@debian.org> Hi, Trying to implement kms_keymaster in Swift (to enable encryption), I have found out that Castellan's KeystonePassword doesn't include any option for root CA certificates (neither a insecure=True option). In such a configuration, it's not easy to test. 
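For reference, the keystoneauth1 session layer underneath does already accept
a CA bundle (or verify=False for throw-away test certs), so the knob that
seems to be missing from KeystonePassword would presumably only need to be
passed through to something like the following. This is a sketch of the
missing plumbing, assuming Castellan builds a keystoneauth1 session from the
credential; the values are hypothetical:

from keystoneauth1.identity import v3
from keystoneauth1 import session

# Hypothetical values; the point is only where a ca_file / insecure
# option would have to end up: the session's 'verify' argument.
auth = v3.Password(auth_url='https://keystone.example.org/v3',
                   username='swift',
                   password='secret',
                   project_name='service',
                   user_domain_name='Default',
                   project_domain_name='Default')

# verify accepts True, False (insecure) or a path to a CA bundle.
sess = session.Session(auth=auth, verify='/etc/ssl/certs/test-ca.pem')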
So my question is: has anyone from the Barbican thought about this, and/or is there any workaround this? Going to production without any possibility to test with fake certs is a little bit annoying... :P Cheers, Thomas Goirand (zigo) From mnaser at vexxhost.com Wed Nov 7 13:46:38 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 7 Nov 2018 14:46:38 +0100 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: On Wed, Nov 7, 2018 at 2:07 PM Dr. Jens Harbott (frickler) wrote: > > 2018-11-07 12:47 GMT+00:00 Mohammed Naser : > > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann wrote: > >> > >> Corey Bryant writes: > >> > >> > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant > >> > wrote: > >> > > >> > I'd like to start moving forward with enabling py37 unit tests for a subset > >> > of projects. Rather than putting too much load on infra by enabling 3 x py3 > >> > unit tests for every project, this would just focus on enablement of py37 > >> > unit tests for a subset of projects in the Stein cycle. And just to be > >> > clear, I would not be disabling any unit tests (such as py35). I'd just be > >> > enabling py37 unit tests. > >> > > >> > As some background, this ML thread originally led to updating the > >> > python3-first governance goal (https://review.openstack.org/#/c/610708/) > >> > but has now led back to this ML thread for a +1 rather than updating the > >> > governance goal. > >> > > >> > I'd like to get an official +1 here on the ML from parties such as the TC > >> > and infra in particular but anyone else's input would be welcomed too. > >> > Obviously individual projects would have the right to reject proposed > >> > changes that enable py37 unit tests. Hopefully they wouldn't, of course, > >> > but they could individually vote that way. > >> > > >> > Thanks, > >> > Corey > >> > >> This seems like a good way to start. It lets us make incremental > >> progress while we take the time to think about the python version > >> management question more broadly. We can come back to the other projects > >> to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. > > > > What's the impact on the number of consumption in upstream CI node usage? > > I think the relevant metric here will be nodes_used * time_used. > nodes_used will increase by one, time_used for usual unit test jobs > seems to be < 10 minutes, so I'd think that the total increase in CI > usage should be neglegible compared to full tempest or similar jobs > that take 1-2 hours. Indeed it doesn't look too bad: http://zuul.openstack.org/builds?job_name=openstack-tox-py35 It'll be good to try and aim to transition as quickly as possible to avoid extra 'wasted' resources in the infrastructure side > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From zigo at debian.org Wed Nov 7 13:48:11 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 7 Nov 2018 14:48:11 +0100 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <2a5b274c-659a-21e7-d7aa-5f7bbb5fcbd7@suse.com> <20181010210039.GA15538@sm-workstation> <20181010211033.tatylo4fakiymvtq@yuggoth.org> Message-ID: <4dc17f71-d726-e6e0-e9d3-2bfd9a5ab689@debian.org> On 10/11/18 1:35 AM, Goutham Pacha Ravi wrote: > Thanks Corey for starting this effort. I proposed changes to > manila repos to use your template [1] [2], but the interpreter's not > being installed, > do you need to make any bindep changes to enable the "universe" ppa and install > python3.7 and python3.7-dev? Your best luck in Debian based distribution is probably to attempt installing python3-all *or* python3-dev-all. The -all means all available versions (so, in case of Debian Sid currently, it will install both Python 3.6 and 3.7). The -dev-all will also install the -all package, so you never need *both*. You also don't need the -dev if you don't have Python packages that needs Python.h (ie: embedded C code in a Python module). So, just switch to that, and you're good to go *forever*, independently of what python version is available in a given OS version! :) I hope this helps, Cheers, Thomas Goirand (zigo) From doug at doughellmann.com Wed Nov 7 13:49:52 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 08:49:52 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: Mohammed Naser writes: > On Wed, Nov 7, 2018 at 2:07 PM Dr. Jens Harbott (frickler) > wrote: >> >> 2018-11-07 12:47 GMT+00:00 Mohammed Naser : >> > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann wrote: >> >> >> >> Corey Bryant writes: >> >> >> >> > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant >> >> > wrote: >> >> > >> >> > I'd like to start moving forward with enabling py37 unit tests for a subset >> >> > of projects. Rather than putting too much load on infra by enabling 3 x py3 >> >> > unit tests for every project, this would just focus on enablement of py37 >> >> > unit tests for a subset of projects in the Stein cycle. And just to be >> >> > clear, I would not be disabling any unit tests (such as py35). I'd just be >> >> > enabling py37 unit tests. >> >> > >> >> > As some background, this ML thread originally led to updating the >> >> > python3-first governance goal (https://review.openstack.org/#/c/610708/) >> >> > but has now led back to this ML thread for a +1 rather than updating the >> >> > governance goal. >> >> > >> >> > I'd like to get an official +1 here on the ML from parties such as the TC >> >> > and infra in particular but anyone else's input would be welcomed too. >> >> > Obviously individual projects would have the right to reject proposed >> >> > changes that enable py37 unit tests. Hopefully they wouldn't, of course, >> >> > but they could individually vote that way. >> >> > >> >> > Thanks, >> >> > Corey >> >> >> >> This seems like a good way to start. It lets us make incremental >> >> progress while we take the time to think about the python version >> >> management question more broadly. We can come back to the other projects >> >> to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. >> > >> > What's the impact on the number of consumption in upstream CI node usage? >> >> I think the relevant metric here will be nodes_used * time_used. 
>> nodes_used will increase by one, time_used for usual unit test jobs >> seems to be < 10 minutes, so I'd think that the total increase in CI >> usage should be neglegible compared to full tempest or similar jobs >> that take 1-2 hours. > > Indeed it doesn't look too bad: > > http://zuul.openstack.org/builds?job_name=openstack-tox-py35 > > It'll be good to try and aim to transition as quickly as possible to > avoid extra 'wasted' resources in the infrastructure side Right, I think we can live with it for a few weeks. -- Doug From jcornutt at gmail.com Wed Nov 7 14:52:49 2018 From: jcornutt at gmail.com (Joshua Cornutt) Date: Wed, 7 Nov 2018 09:52:49 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: Message-ID: On Wed, Nov 7, 2018 at 7:30 AM Doug Hellmann wrote: > > Joshua Cornutt writes: > > > Doug, > > > > I have such a list put together (my various installation documents for > > getting these clouds working in FIPS mode) but it's hardly ready for > > public consumption. I planned on releasing each bit as a code change > > and/or bug ticket and letting the community consume it as it figures > > some of these things out. > > It's likely that the overall migration will go better if we all have the > full context. So I hope you can find some time to publish some of the > information you've compiled to help with that. > > > I agree that some changes may break backwards compatibility (such as > > Glance's image checksumming), but one approach I think could ease the > > transition would be the approach I took for SSH key pair > > fingerprinting (also MD5-based, as is Glance image checksums) found > > here - https://review.openstack.org/#/c/615460/ . This allows > > administrators to choose, hopefully at deployment time, the hashing > > algorithm with the default of being the existing MD5 algorithm. > > That certainly seems like it would provide the most compatibility in the > short term. > > That said, I honestly don't know the best approach for us to take. We're > going to need people who understand the issues around FIPS and the > issues around maintaining backwards-compatibility to work together to > create a recommended approach. Perhaps a few of the folks on this thread > would be interested in forming a team to work on that? > > Doug > I'd be interested in that. Good idea > > Another approach would be to make the projects "FIPS aware" where we > > choose the hashing algorithm based on the system's FIPS-enforcing > > state. An example of doing so is what I'm proposing for Django > > (another FIPS-related patch that was needed for OSP 13) - > > https://github.com/django/django/pull/10605 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From liliueecg at gmail.com Wed Nov 7 15:02:52 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 7 Nov 2018 10:02:52 -0500 Subject: [openstack-dev] [cyborg]Weekly Meeting In-Reply-To: References: Message-ID: It seems the meeting bot is having issues.. 
I would just paste the meeting log here [09:19] == Li_Liu [b896edde at gateway/web/freenode/ip.184.150.237.222] has joined #openstack-cyborg [09:19] Hi Xinran [09:20] Looks like just we two and maybe Li so far [09:20] Hi guyss [09:20] Let me ask people in wechat group [09:20] Hi Li [09:20] Hi [09:21] Hi [09:21] #wangzhh [09:21] Sundar, Did you have the change to look at the spec I submit last weekend? [09:21] #info wangzhh [09:21] Hi wangzhh [09:22] Hi Sundar. [09:22] I know i must still miss things, but I just wanna get the thing started first [09:22] Li: Yes, I had some comments. I am still writing them. Will post by today. [09:22] sure, thanks a lot [09:22] #info Li_Liu [09:23] ok, let's get started on the business [09:23] I think CoCo is still on her way home. but we can go ahead for the meeting [09:23] #topic status updates [09:23] == Coco_gao [uid312075 at gateway/web/irccloud.com/x-rybjocunfidqrgrs] has joined #openstack-cyborg [09:24] Hi CoCo [09:24] start with https://review.openstack.org/#/q/status:open%20project:openstack/cyborg [09:24] Storyboard has been updated for db and driver-agent [09:24] Some driver-agent tasks probably need tweaks. I'll also file some for the end-to-end workflow. [09:24] regarding "Add Allocation/Deallocation API" [09:25] still blocked right? xinran [09:25] Apart from that, folks are free to add tasks, modify them or sign up [09:25] Thanks a lot Sundar :) [09:25] Li: np :) [09:25] Hi all [09:26] #info Coco_gao [09:26] is xinran in the meeting [09:27] Thanks Sundar,I will check the storyboard to sign up tomorrow. [09:28] Sundar, do you have any input on "Add Allocation/Deallocation API"? [09:29] Yes I'm here [09:29] I think we should abandon Add Allocation/Deallocation API spec, and follow Sundar's new design [09:30] That one I saw you adandon the other day. [09:30] but for https://review.openstack.org/#/c/596187/ [09:30] do you wanna abandon it as well? [09:32] == shaohe_feng [c066cc26 at gateway/web/freenode/ip.192.102.204.38] has joined #openstack-cyborg [09:32] #info shaohe [09:32] hi all [09:32] Hi shaohe. [09:33] I think it doesn't matter, I can commit new version according to new spec. if you guys think it's better to abandon it and submit another one, that's fine [09:33] hi shaohe [09:33] xinran, it's totally up to you [09:33] That is depends on you [09:33] yes, depends on you. [09:34] We should probably align on the db schema first. Li_Liu, do you want to talk about the db spec here ? [09:34] "Add gpu driver" xiao hei, your comments on this? still blocking by the new db design right? [09:35] Sundar, that might involve too much tech details once we start the discussion. [09:35] if you think there are too many diffrience maybe you can add a new patch [09:35] I wanna focus on the status update first, then we can talk about it [09:35] Yes, Li. [09:35] ok, thanks xiaohei [09:36] "Resource report to placement", xinran, you comments on this? [09:36] yes [09:36] my patch is the same problem, still need to implement new design [09:37] thanks coco, got it [09:38] alright, I think I got most of the feedback on the patches status [09:38] #topic New DB scheme planning [09:39] Sundar, you wanna talk about the storyboard items you created? [09:40] Li_Liu: Sure. #link https://storyboard.openstack.org/#!/project/openstack/cyborg [09:40] Storyboard has a openstack/cyborg project. We can create stories for specific features or topics, and tasks within those stories. [09:42] We can optionally create worklists and group them into boards. 
For example, we could create a worklist for Stein. However, i have chosen to keep it simple: Stein is just a tag on every story we want to complete in this release [09:42] The tag is a label. we can add more tags as needed [09:42] Regarding the DB work, I think there might be changes on those tasks. Will keep you guys posted [09:43] Time to talk about the sb spec? [09:43] *db [09:43] Yep [09:44] I think we all agreed that the new db schema will not be PCI-specific. [09:45] I think so. [09:46] Great. I thought we also agreed that Deployables should be the equivalent of resource providers [09:47] "equivalent" might not be precise, I think they are counter-part of each other [09:47] one in the context of Cyborg, the other one in the context of Nova [09:47] That's exactly what I meant by 'equivalent' [09:47] Li,is new db spec availale now? [09:48] We had this doc which we reviewed: https://docs.google.com/document/d/1XLQtvyGJeEgo3ztBQiufWLF-E7S7yGLaYrme8iUPtA0/edit [09:49] agree with Li [09:49] Can we please align the spec to it? Otherwise, we will have lots more discussion. The current spec shows a diagram tying Deployables to PF, VF . [09:50] CoCo, check this out #link https://review.openstack.org/#/c/615462/ [09:50] Sundar, I think after my spec is finalized, we should get all the pieces aligned [09:50] okay thank you。 [09:51] Li, the spec is not as per our past discussion [09:52] Why are we still referring to Deployables as PF, VF? [09:54] I was just giving an example. but we can discuss it seperately. The spec is not final, comments and suggetions are welcome [09:57] OK. We can move on to other topics, if any [09:57] I think that's all we need to talk about. It seems the road blocker for most the stuff is the DB thing [09:58] so guys, please provide review and comments for https://review.openstack.org/#/c/615462/ ------------------------------ [09:58] Let's land this one asap [09:58] ok, will review it soon [10:00] Thank you team. Keep claim and code one :P [10:00] have a good night [10:00] should be "Keep claim and code on" =_+ [10:00] #endmeeting On Tue, Nov 6, 2018 at 10:07 PM Li Liu wrote: > Hi Team, > > We will have our regular weekly meetings tomorrow at usual time UTC 1400. > Please be aware that North America has just shifted to winter time this > week, which mean it would be 10 pm Beijing time, 6 am PST and 9 am EST. We > probably will adjust the time a little bit starting next week. > > The agenda for this week's meeting is as follows > 1. Status Update > 2. DB evolution work assignments > > -- > Thank you > > Regards > > Li > -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Nov 7 15:09:12 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 7 Nov 2018 15:09:12 +0000 (GMT) Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: On Tue, 6 Nov 2018, Corey Bryant wrote: > I'd like to get an official +1 here on the ML from parties such as the TC > and infra in particular but anyone else's input would be welcomed too. > Obviously individual projects would have the right to reject proposed > changes that enable py37 unit tests. Hopefully they wouldn't, of course, > but they could individually vote that way. Speaking as someone on the TC but not "the TC" as well as someone active in a few projects: +1. As shown elsewhere in the thread the impact on node consumption and queue lengths shouldn't be a huge amount and the benefits are high. 
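For anyone who wants to sanity-check that, the data behind the builds page
linked earlier in the thread can be pulled and summed in a few lines; the
API path and field names below are what the Zuul dashboard appears to use,
so treat them as assumptions rather than a documented interface:

import requests

# Same data as http://zuul.openstack.org/builds?job_name=openstack-tox-py35,
# fetched through the REST endpoint the dashboard sits on (assumed).
resp = requests.get('http://zuul.openstack.org/api/builds',
                    params={'job_name': 'openstack-tox-py35', 'limit': 500},
                    timeout=30)
resp.raise_for_status()
builds = resp.json()

durations = [b['duration'] for b in builds if b.get('duration')]
if durations:
    total_hours = sum(durations) / 3600.0
    avg_minutes = sum(durations) / len(durations) / 60.0
    # A rough nodes_used * time_used figure for the sampled builds.
    print('%d builds, avg %.1f min each, %.1f node-hours total'
          % (len(durations), avg_minutes, total_hours))

Swapping the job_name for openstack-tox-py37 once those jobs start running
gives a direct before/after comparison.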
>From an openstack/placement standpoint, please go for it if nobody else beats you to it. To me the benefits are simply that we find bugs sooner. It's bizarre to me that we even need to think about this. The sooner we find them, the less they impact people who want to use our code. Will it cause breakage and extra work us now? Possibly, but it's like making an early payment on the mortgage: We are saving cost later. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From amy at demarco.com Wed Nov 7 15:18:47 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 7 Nov 2018 09:18:47 -0600 Subject: [openstack-dev] Diversity and Inclusion at OpenStack Summit Message-ID: I just wanted to pass on a few things we have going on during Summit that might be of interest! *Diversity and Inclusion GroupMe* - Is it your first summit and you don't know anyone else? Maybe you just don't want to travel to and from the venue alone? In the tradition of WoO, I have created a GroupMe so people can communicate with each other. If you would like to be added to the group please let me know and I'll get you added! *Night Watch Tour *- On Wednesday night at 10PM, members of the community will be meeting up to go on a private Night Watch Tour[0]! This is a non-alcoholic activity for those wanting to get with other Stackers, but don't want to partake in the Pub Crawl! We've been doing these since Boston and they're a lot of fun. The cost is 15 euros cash and I do need you to RSVP to me as we will need to get a second guide if we grow too large! Summit sessions you may wish to attend: Tuesday - *Speed Mentoring Lunch* [1] 12:30 -1:40 - We are still looking for both Mentors and Mentees for the session so please RSVP! This is another great way to meet people in the community, learn more and give back!!! *Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in charge of the Cohort Mentoring program and see how you can get involved as a Mentor or Mentee! *D&I WG Update* [3] 5:10- 5:50 - Learn what we've been up to, how you can get involved, and what's next. Wednesday - *Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some exciting stuff this week but don't know how to get setup to start contributing? This session is for you in that we'll walk you through getting your logins, your system configured and if time allows even how to submit a bug and patch! Thursday - *Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance of mentoring, the changes in the OPenStack mentoring programs and how you can get involved. Hope to see everyone in Berlin next week! Please feel free to contact me or grab me in the hall next week with any questions or to join in the fun! Amy Marrich (spotz) Diversity and Inclusion WG Chair [0] - http://baerentouren.de/nachtwache_en.html [1] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22873/speed-mentoring-lunch [2] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22892/long-term-mentoring-keeping-the-party-going [3] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22893/diversity-and-inclusion-wg-update [4] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21943/git-and-gerrit-hands-on-workshop [5] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22443/mentoring-program-reboot -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dabarren at gmail.com Wed Nov 7 15:39:26 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 7 Nov 2018 16:39:26 +0100 Subject: [openstack-dev] [kolla] Meeting 14-nov-2018 canceled Message-ID: Dear Kolla Team, Due to the OpenStack Summit in Berlin, the Kolla weekly meeting on Wednesday 14th is canceled. Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Nov 7 16:11:04 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Nov 2018 08:11:04 -0800 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: Message-ID: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> On Wed, Nov 7, 2018, at 4:47 AM, Mohammed Naser wrote: > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann wrote: > > > > Corey Bryant writes: > > > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant > > > wrote: > > > > > > I'd like to start moving forward with enabling py37 unit tests for a subset > > > of projects. Rather than putting too much load on infra by enabling 3 x py3 > > > unit tests for every project, this would just focus on enablement of py37 > > > unit tests for a subset of projects in the Stein cycle. And just to be > > > clear, I would not be disabling any unit tests (such as py35). I'd just be > > > enabling py37 unit tests. > > > > > > As some background, this ML thread originally led to updating the > > > python3-first governance goal (https://review.openstack.org/#/c/610708/) > > > but has now led back to this ML thread for a +1 rather than updating the > > > governance goal. > > > > > > I'd like to get an official +1 here on the ML from parties such as the TC > > > and infra in particular but anyone else's input would be welcomed too. > > > Obviously individual projects would have the right to reject proposed > > > changes that enable py37 unit tests. Hopefully they wouldn't, of course, > > > but they could individually vote that way. > > > > > > Thanks, > > > Corey > > > > This seems like a good way to start. It lets us make incremental > > progress while we take the time to think about the python version > > management question more broadly. We can come back to the other projects > > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. > > What's the impact on the number of consumption in upstream CI node usage? > For period from 2018-10-25 15:16:32,079 to 2018-11-07 15:59:04,994, openstack-tox-py35 jobs in aggregate represent 0.73% of our total capacity usage. I don't expect py37 to significantly deviate from that. Again the major resource consumption is dominated by a small number of projects/repos/jobs. Generally testing outside of that bubble doesn't represent a significant resource cost. I see no problem with adding python 3.7 unit testing from an infrastructure perspective. Clark From luka.peschke at objectif-libre.com Wed Nov 7 16:14:03 2018 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Wed, 07 Nov 2018 17:14:03 +0100 Subject: [openstack-dev] [cloudkitty] IRC meetings and community Message-ID: Hello, I'm making this e-mail to announce that the CloudKitty project will be holding IRC meetings from now on. They will be held in the #cloudkitty channel on Freenode. They should (for now) be held on the first Friday of each month at 15h00 UTC. Of course, the time / frequency can be adapted to what suits the community the best. The first meeting will be an exception to this schedule. 
It will be held on Friday the 9th of November at 15h00 UTC. Topics for this meeting can be found and proposed at https://etherpad.openstack.org/p/cloudkitty-meeting-topics These meetings were a thing in the project's early days and have slowly stopped, which we really regret. This is part of a larger effort to get more in touch with the community. We would gladly welcome new contributors, and any contribution of any kind (bug report, review, documentation suggestion/update, commit...), is welcome. There are several points which should be tackled in order to ease interactions with the community. These will be detailled and discussed during the first meeting. For those interested in CloudKitty attending the Berlin summit, the Project Update will happen on the 14/11 at 2:30pm in M-Räum 3 and the onboarding will be held on the 14/11 at 5:10pm in M-Räum 1. Cheers, Luka From fungi at yuggoth.org Wed Nov 7 16:46:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Nov 2018 16:46:12 +0000 Subject: [openstack-dev] [cyborg]Weekly Meeting In-Reply-To: References: Message-ID: <20181107164612.yfqwzokk2csiugww@yuggoth.org> On 2018-11-07 10:02:52 -0500 (-0500), Li Liu wrote: > It seems the meeting bot is having issues.. I would just paste the > meeting log here [...] The meetbot wasn't having any issues. The chair didn't issue an #endmeeting, and only a meeting chair is allowed to do so until an hour has elapsed from the #startmeeting. After an hour, anyone can so I did an #endmeeting in your channel and it wrapped up and wrote the minutes and logs as usual: http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-11-07-14.16.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mordred at inaugust.com Wed Nov 7 17:26:09 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 7 Nov 2018 11:26:09 -0600 Subject: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir In-Reply-To: References: <7fd557b7-b789-ab1b-8175-4b1b506c45b2@inaugust.com> Message-ID: <08fc8957-5fc1-dcf4-d207-f24fb7d2459e@inaugust.com> On 11/5/18 3:21 AM, Mohammed Naser wrote: > > > Sent from my iPhone > >> On Nov 5, 2018, at 10:19 AM, Thierry Carrez wrote: >> >> Monty Taylor wrote: >>> [...] >>> What if we added support for serving vendor data files from the root of a primary URL as-per RFC 5785. Specifically, support deployers adding a json file to .well-known/openstack/client that would contain what we currently store in the openstacksdk repo and were just discussing splitting out. >>> [...] >>> What do people think? >> >> I love the idea of public clouds serving that file directly, and the user experience you get from it. The only two drawbacks on top of my head would be: >> >> - it's harder to discover available compliant openstack clouds from the client. >> >> - there is no vetting process, so there may be failures with weird clouds serving half-baked files that people may blame the client tooling for. >> >> I still think it's a good idea, as in theory it aligns the incentive of maintaining the file with the most interested stakeholder. It just might need some extra communication to work seamlessly. > > I’m thinking out loud here but perhaps a simple linter that a cloud provider can run will help them make sure that everything is functional. 
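To make that concrete, consuming such a profile needs nothing more exotic
than an HTTPS GET against the well-known path. A rough sketch follows; the
host is made up, and the JSON keys are assumptions based on what openstacksdk
vendor files carry today, not a settled schema:

import requests

# Hypothetical cloud; only the /.well-known/openstack/client path
# follows the RFC 5785 layout proposed in this thread.
url = 'https://cloud.example.com/.well-known/openstack/client'
resp = requests.get(url, timeout=10)
resp.raise_for_status()
vendor = resp.json()

# Assumed keys, mirroring the static vendor profiles shipped today.
profile = vendor.get('profile', {})
print(vendor.get('name'))
print(profile.get('auth', {}).get('auth_url'))
print(profile.get('regions'))

A linter, as suggested above, could be little more than the same fetch plus a
schema check run before a cloud publishes the file.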
I've got an initial patch up: WIP Support remote vendor profiles https://review.openstack.org/616228 it works with vexxhost's published vendor file. From doug at doughellmann.com Wed Nov 7 17:35:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 12:35:07 -0500 Subject: [openstack-dev] [Openstack-operators] FIPS Compliance In-Reply-To: References: Message-ID: Joshua Cornutt writes: > On Wed, Nov 7, 2018 at 7:30 AM Doug Hellmann wrote: >> >> Joshua Cornutt writes: >> >> > Doug, >> > >> > I have such a list put together (my various installation documents for >> > getting these clouds working in FIPS mode) but it's hardly ready for >> > public consumption. I planned on releasing each bit as a code change >> > and/or bug ticket and letting the community consume it as it figures >> > some of these things out. >> >> It's likely that the overall migration will go better if we all have the >> full context. So I hope you can find some time to publish some of the >> information you've compiled to help with that. >> >> > I agree that some changes may break backwards compatibility (such as >> > Glance's image checksumming), but one approach I think could ease the >> > transition would be the approach I took for SSH key pair >> > fingerprinting (also MD5-based, as is Glance image checksums) found >> > here - https://review.openstack.org/#/c/615460/ . This allows >> > administrators to choose, hopefully at deployment time, the hashing >> > algorithm with the default of being the existing MD5 algorithm. >> >> That certainly seems like it would provide the most compatibility in the >> short term. >> >> That said, I honestly don't know the best approach for us to take. We're >> going to need people who understand the issues around FIPS and the >> issues around maintaining backwards-compatibility to work together to >> create a recommended approach. Perhaps a few of the folks on this thread >> would be interested in forming a team to work on that? >> >> Doug >> > > I'd be interested in that. Good idea I added "FIPS compliance" to the list of community goal ideas in https://etherpad.openstack.org/p/community-goals (see number 35, currently at the bottom of the etherpad). Please add more detail there about what exactly is involved, references, etc. -- whatever you think would be useful to someone learning about what this is. > >> > Another approach would be to make the projects "FIPS aware" where we >> > choose the hashing algorithm based on the system's FIPS-enforcing >> > state. An example of doing so is what I'm proposing for Django >> > (another FIPS-related patch that was needed for OSP 13) - >> > https://github.com/django/django/pull/10605 >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Doug From doug at doughellmann.com Wed Nov 7 17:43:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 12:43:30 -0500 Subject: [openstack-dev] [cloudkitty] IRC meetings and community In-Reply-To: References: Message-ID: Luka Peschke writes: > Hello, > > I'm making this e-mail to announce that the CloudKitty project will be > holding IRC meetings from now on. They will be held in the #cloudkitty > channel on Freenode. They should (for now) be held on the first Friday > of each month at 15h00 UTC. 
Of course, the time / frequency can be > adapted to what suits the community the best. > > The first meeting will be an exception to this schedule. It will be > held on Friday the 9th of November at 15h00 UTC. Topics for this meeting > can be found and proposed at > https://etherpad.openstack.org/p/cloudkitty-meeting-topics > > These meetings were a thing in the project's early days and have slowly > stopped, which we really regret. > > This is part of a larger effort to get more in touch with the > community. We would gladly welcome new contributors, and any > contribution of any kind (bug report, review, documentation > suggestion/update, commit...), is welcome. There are several points > which should be tackled in order to ease interactions with the > community. These will be detailled and discussed during the first > meeting. > > For those interested in CloudKitty attending the Berlin summit, the > Project Update will happen on the 14/11 at 2:30pm in M-Räum 3 and the > onboarding will be held on the 14/11 at 5:10pm in M-Räum 1. > > Cheers, > > Luka > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Thanks, Luka. Please propose an update to the openstack-infra/irc-meetings repository to register the meeting info so it shows up on eavesdrop.openstack.org and the calendar published there. That will make it easier for other folks to find the meeting and keep it on their schedule. -- Doug From emilien at redhat.com Wed Nov 7 19:22:27 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 7 Nov 2018 14:22:27 -0500 Subject: [openstack-dev] [tripleo] CI is broken In-Reply-To: <2a77baf3-c7cf-37d6-e068-d04df7b28380@redhat.com> References: <2a77baf3-c7cf-37d6-e068-d04df7b28380@redhat.com> Message-ID: I updated the bugs, and so far we have one alert left: https://bugs.launchpad.net/tripleo/+bug/1801969 The patch is in gate, be patient and then we'll be able to +A/recheck stuff again. On Wed, Nov 7, 2018 at 7:30 AM Juan Antonio Osorio Robles < jaosorior at redhat.com> wrote: > Hello folks, > > > Please do not attempt to merge or recheck patches until we get this > sorted out. > > We are dealing with several issues that have broken all jobs. > > https://bugs.launchpad.net/tripleo/+bug/1801769 > https://bugs.launchpad.net/tripleo/+bug/1801969 > https://bugs.launchpad.net/tripleo/+bug/1802083 > https://bugs.launchpad.net/tripleo/+bug/1802085 > > Best Regards! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From luka.peschke at objectif-libre.com Wed Nov 7 19:35:02 2018 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Wed, 07 Nov 2018 20:35:02 +0100 Subject: [openstack-dev] [cloudkitty] IRC meetings and community In-Reply-To: References: Message-ID: <165f2ea13952e7cb13a799abe0b4e9e8@objectif-libre.com> Hello Doug, Here's the patch: https://review.openstack.org/#/c/616205/. If it is not merged by Friday (date of the first meeting), I'll send the log of the first meeting to this ML. 
Luka Le 2018-11-07 18:43, Doug Hellmann a écrit : > Luka Peschke writes: > >> Hello, >> >> I'm making this e-mail to announce that the CloudKitty project will >> be >> holding IRC meetings from now on. They will be held in the >> #cloudkitty >> channel on Freenode. They should (for now) be held on the first >> Friday >> of each month at 15h00 UTC. Of course, the time / frequency can be >> adapted to what suits the community the best. >> >> The first meeting will be an exception to this schedule. It will be >> held on Friday the 9th of November at 15h00 UTC. Topics for this >> meeting >> can be found and proposed at >> https://etherpad.openstack.org/p/cloudkitty-meeting-topics >> >> These meetings were a thing in the project's early days and have >> slowly >> stopped, which we really regret. >> >> This is part of a larger effort to get more in touch with the >> community. We would gladly welcome new contributors, and any >> contribution of any kind (bug report, review, documentation >> suggestion/update, commit...), is welcome. There are several points >> which should be tackled in order to ease interactions with the >> community. These will be detailled and discussed during the first >> meeting. >> >> For those interested in CloudKitty attending the Berlin summit, the >> Project Update will happen on the 14/11 at 2:30pm in M-Räum 3 and the >> onboarding will be held on the 14/11 at 5:10pm in M-Räum 1. >> >> Cheers, >> >> Luka >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Thanks, Luka. > > Please propose an update to the openstack-infra/irc-meetings > repository > to register the meeting info so it shows up on eavesdrop.openstack.org > and the calendar published there. That will make it easier for other > folks to find the meeting and keep it on their schedule. From doug at doughellmann.com Wed Nov 7 20:02:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 15:02:08 -0500 Subject: [openstack-dev] [cloudkitty] IRC meetings and community In-Reply-To: <165f2ea13952e7cb13a799abe0b4e9e8@objectif-libre.com> References: <165f2ea13952e7cb13a799abe0b4e9e8@objectif-libre.com> Message-ID: Luka Peschke writes: > Hello Doug, > > Here's the patch: https://review.openstack.org/#/c/616205/. If it is > not merged by Friday (date of the first meeting), I'll send the log of > the first meeting to this ML. > > Luka I was mostly concerned with future meetings, so that sounds like a good plan. Thanks! > > Le 2018-11-07 18:43, Doug Hellmann a écrit : >> Luka Peschke writes: >> >>> Hello, >>> >>> I'm making this e-mail to announce that the CloudKitty project will >>> be >>> holding IRC meetings from now on. They will be held in the >>> #cloudkitty >>> channel on Freenode. They should (for now) be held on the first >>> Friday >>> of each month at 15h00 UTC. Of course, the time / frequency can be >>> adapted to what suits the community the best. >>> >>> The first meeting will be an exception to this schedule. It will be >>> held on Friday the 9th of November at 15h00 UTC. Topics for this >>> meeting >>> can be found and proposed at >>> https://etherpad.openstack.org/p/cloudkitty-meeting-topics >>> >>> These meetings were a thing in the project's early days and have >>> slowly >>> stopped, which we really regret. 
>>> >>> This is part of a larger effort to get more in touch with the >>> community. We would gladly welcome new contributors, and any >>> contribution of any kind (bug report, review, documentation >>> suggestion/update, commit...), is welcome. There are several points >>> which should be tackled in order to ease interactions with the >>> community. These will be detailled and discussed during the first >>> meeting. >>> >>> For those interested in CloudKitty attending the Berlin summit, the >>> Project Update will happen on the 14/11 at 2:30pm in M-Räum 3 and the >>> onboarding will be held on the 14/11 at 5:10pm in M-Räum 1. >>> >>> Cheers, >>> >>> Luka >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> Thanks, Luka. >> >> Please propose an update to the openstack-infra/irc-meetings >> repository >> to register the meeting info so it shows up on eavesdrop.openstack.org >> and the calendar published there. That will make it easier for other >> folks to find the meeting and keep it on their schedule. -- Doug From james.slagle at gmail.com Wed Nov 7 20:03:19 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 7 Nov 2018 15:03:19 -0500 Subject: [openstack-dev] [TripleO] Edge squad meeting this week and next week Message-ID: I won't be around to run the Edge squad meeting this week and next week. If someone else wants to pick it up, that would be great. Otherwise, consider it cancelled :). Thanks! https://etherpad.openstack.org/p/tripleo-edge-squad-status -- -- James Slagle -- From openstack at nemebean.com Wed Nov 7 20:19:20 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Nov 2018 14:19:20 -0600 Subject: [openstack-dev] [oslo] Next two meetings Message-ID: <5842030a-2ce9-77bd-0685-2c0efb926f7d@nemebean.com> Hi, Next week is summit and a lot of our contributors will be traveling there on Monday, so let's skip the meeting. The following week I will also be out, so if anyone wants to run the meeting please let me know. Otherwise we'll skip that one too and reconvene after Thanksgiving. If you need your Oslo fix for next week, come see us in Berlin: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22952/oslo-project-onboarding (I just noticed that is listed as a project onboarding - it's actually a project update. I'll look into getting that fixed) -Ben From openstack at fried.cc Wed Nov 7 20:25:16 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 7 Nov 2018 14:25:16 -0600 Subject: [openstack-dev] [nova][placement] No n-sch meeting next week Message-ID: ...due to summit. -efried From johnsomor at gmail.com Wed Nov 7 20:27:34 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 7 Nov 2018 12:27:34 -0800 Subject: [openstack-dev] [octavia] Weekly meeting is cancelled 11/7/18 Message-ID: We decided to cancel the weekly Octavia IRC meeting next week due to the OpenStack Summit in Berlin. Some of the Octavia related sessions: Octavia - Project Onboarding - Tue 13, 3:20pm - 4:00pm Extending Your OpenStack Troubleshooting Tool Box - Digging deeper into network operations - Wed 14, 11:00am - 12:30pm Migrate from neutron LBaaS to Octavia LoadBalancing - Wed 14, 1:40pm - 2:20pm How to make a Kubernetes app from an OpenStack service? 
Tale of kuryr-kubernetes' "kubernetization" - Thu 15, 10:50am - 11:30am Octavia - Project Update - Thu 15, 2:35pm - 2:55pm Michael From emilien at redhat.com Wed Nov 7 21:23:03 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 7 Nov 2018 16:23:03 -0500 Subject: [openstack-dev] [tripleo] CI is broken In-Reply-To: References: <2a77baf3-c7cf-37d6-e068-d04df7b28380@redhat.com> Message-ID: No alert anymore, gate is green. recheck if needed. On Wed, Nov 7, 2018 at 2:22 PM Emilien Macchi wrote: > I updated the bugs, and so far we have one alert left: > https://bugs.launchpad.net/tripleo/+bug/1801969 > > The patch is in gate, be patient and then we'll be able to +A/recheck > stuff again. > > On Wed, Nov 7, 2018 at 7:30 AM Juan Antonio Osorio Robles < > jaosorior at redhat.com> wrote: > >> Hello folks, >> >> >> Please do not attempt to merge or recheck patches until we get this >> sorted out. >> >> We are dealing with several issues that have broken all jobs. >> >> https://bugs.launchpad.net/tripleo/+bug/1801769 >> https://bugs.launchpad.net/tripleo/+bug/1801969 >> https://bugs.launchpad.net/tripleo/+bug/1802083 >> https://bugs.launchpad.net/tripleo/+bug/1802085 >> >> Best Regards! >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Nov 7 22:40:05 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 7 Nov 2018 16:40:05 -0600 Subject: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ... Message-ID: All, I am working on scheduling a dinner for the Cinder team (and our extended family that work on and around Cinder) during the Summit in Berlin.  I have created an etherpad for people to RSVP for dinner [1]. It seemed like Tuesday night after the Marketplace Mixer was the best time for most people. So, it will be a little later dinner ... 8 pm.  Here is the place: Location: http://www.dicke-wirtin.de/ Address:  Carmerstraße 9, 10623 Berlin, Germany It looks like the kind of place that will fit for our usual group. If planning to attend please add your name to the etherpad and I will get a reservation in over the weekend. Hope to see you all on Tuesday! Jay [1] https://etherpad.openstack.org/p/BER-cinder-outing-planning -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Nov 8 00:48:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Nov 2018 11:48:20 +1100 Subject: [openstack-dev] [all] Results of the T release naming poll. open Message-ID: <20181108004819.GF20576@thor.bakeyournoodle.com> Hello all! The results of the naming poll are in! **PLEASE REMEMBER** that these now have to go through legal vetting. So it is too soon to say 'OpenStack Train' is our next release, given that previous polls have had some issues with the top choice. In any case, the names will be sent off to legal for vetting. As soon as we have a final winner, I'll let you all know. https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_aac97f1cbb6c61df&rkey=7c8b5588574494c1 Result 1. Train (Condorcet winner: wins contests with all other choices) 2. Tiger loses to Train by 142–70 3. 
Timber loses to Train by 142–72, loses to Tiger by 100–76
4. Trail loses to Train by 150–55, loses to Timber by 93–62
5. Telluride loses to Train by 155–56, loses to Trail by 81–69
6. Teller loses to Train by 158–46, loses to Telluride by 70–67
7. Treasure loses to Train by 151–52, loses to Teller by 68–67
8. Teakettle loses to Train by 158–49, loses to Treasure by 75–67
9. Tincup loses to Train by 157–47, loses to Teakettle by 67–60
10. Turret loses to Train by 158–48, loses to Tincup by 75–56
11. Thomas loses to Train by 159–42, loses to Turret by 66–63
12. Trinidad loses to Train by 153–44, loses to Thomas by 70–56
13. Troublesome loses to Train by 165–41, loses to Trinidad by 69–62
14. Thornton loses to Train by 163–35, loses to Troublesome by 62–59
15. Tyrone loses to Train by 163–35, loses to Thornton by 58–38
16. Tarryall loses to Train by 170–31, loses to Tyrone by 54–50
17. Timnath loses to Train by 170–23, loses to Tarryall by 60–32
18. Tiny Town loses to Train by 168–29, loses to Timnath by 45–43
19. Torreys loses to Train by 167–29, loses to Tiny Town by 48–40
20. Trussville loses to Train by 169–25, loses to Torreys by 43–34
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From tony at bakeyournoodle.com Thu Nov 8 01:17:06 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Nov 2018 12:17:06 +1100 Subject: [openstack-dev] [cloudkitty] IRC meetings and community In-Reply-To: <165f2ea13952e7cb13a799abe0b4e9e8@objectif-libre.com> References: <165f2ea13952e7cb13a799abe0b4e9e8@objectif-libre.com> Message-ID: <20181108011705.GG20576@thor.bakeyournoodle.com> On Wed, Nov 07, 2018 at 08:35:02PM +0100, Luka Peschke wrote: > Here's the patch: https://review.openstack.org/#/c/616205/. If it is not > merged by Friday (date of the first meeting), I'll send the log of the first > meeting to this ML. Even if the above doesn't merge you can still use #startmeeting in #cloudkitty and the logs will be published to eavesdrop. The change you submitted is just about raising visibility, not access control. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From dangtrinhnt at gmail.com Thu Nov 8 08:39:14 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 8 Nov 2018 17:39:14 +0900 Subject: [openstack-dev] [Searchlight] Team meeting cancelled today Message-ID: Hi team, There hasn't been much done since last week, so we will cancel the team meeting today. Just a reminder that we now focus on developing the use cases for the Stein-2 milestone. See you in the next two weeks. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lijie at unitedstack.com Thu Nov 8 09:45:20 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Thu, 8 Nov 2018 17:45:20 +0800 Subject: [openstack-dev] [nova] about resize the instance Message-ID: Hi all, When we resize/migrate an instance, if an error occurs on the source compute node, the instance state currently rolls back to active. But if an error occurs in the "finish_resize" function on the destination compute node, the instance state does not roll back to active. Is this a bug, and does anyone plan to change it? Can you tell me more about this? Thank you very much.
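(As an aside, not from the original thread: a minimal operator-side sketch of how a stuck resize is usually inspected and recovered, assuming the python-novaclient CLI; whether the revert path is usable after a "finish_resize" failure depends on the release and on how far the migration got:

  $ openstack server show <server> -c status -c OS-EXT-STS:task_state -c OS-EXT-STS:vm_state
  # If the server reached VERIFY_RESIZE, roll back to the source host:
  $ nova resize-revert <server>
  # If it is stuck in ERROR on the destination, reset the state before cleaning up or retrying:
  $ nova reset-state --active <server>
)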
Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From lenikmutungi at gmail.com Thu Nov 8 10:40:18 2018 From: lenikmutungi at gmail.com (Leni Kadali Mutungi) Date: Thu, 8 Nov 2018 13:40:18 +0300 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: <20181019145536.n6ca4dxjwlywb2s3@barron.net> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> <20181019145536.n6ca4dxjwlywb2s3@barron.net> Message-ID: <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> Hi Tom Thanks for the warm welcome. I've gone through the material and I would like to understand a few things: 1. What's the role of devstack in the development workflow? 2. Where can I find good-first-bugs? A bug that is simple to do (relatively ;)) and allows me to practice what I've read up on in Developer's Guide. I looked through the manila bugs on Launchpad but I didn't see anything marked easy or good-first-bug or its equivalent for manila. I am a bit unfamiliar with Launchpad so that may have played a role :). Your guidance is appreciated. On 10/19/18 5:55 PM, Tom Barron wrote: > On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: >> Hi all. >> >> I've downloaded the manila project from GitHub as a zip file, unpacked >> it and have run `git fetch --depth=1` and been progressively running >> `git fetch --deepen=5` to get the commit history I need. For future >> reference, would a shallow clone e.g. `git clone depth=1` be enough to >> start working on the project or should one have the full commit >> history of the project? >> >> -- >> -- Kind regards, >> Leni Kadali Mutungi > > Hi Leni, > > First I'd like to extend a warm welcome to you as a new manila project > contributor!  We have some contributor/developer documentation [1] that > you may find useful. If you find any gaps or misinformation, we will be > happy to work with you to address these.  In addition to this email > list, the #openstack-manila IRC channel on freenode is a good place to > ask questions.  Many of us run irc bouncers so we'll see the question > even if we're not looking right when it is asked.  Finally, we have a > meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt -- > agendas are posted here [2].  Also, here is our work-plan for the > current Stein development cycle [3]. > > Now for your question about shallow clones.  I hope others who know more > will chime in but here are my thoughts ... > > Although having the full commit history for the project is useful, it is > certainly possible to get started with a shallow clone of the project. > That said, I'm not sure if the space and download-time/bandwidth gains > are going to be that significant because once you have the workspace you > will want to run unit tests, pep8, etc. using tox as explained in the > developer documentation mentioned earlier.   That will download virtual > environments for manila's dependencies in your workspace (under .tox > directory) that dwarf the space used for manila proper. > > $ git clone --depth=1 git at github.com:openstack/manila.git shallow-manila > Cloning into 'shallow-manila'... > ... > $ git clone git at github.com:openstack/manila.git deep-manila > Cloning into 'deep-manila'... > ... 
> $ du -sh shallow-manila deep-manila/ > 20M    shallow-manila > 35M    deep-manila/ > > But after we run tox inside shallow-manila and deep-manila we see: > > $ du -sh shallow-manila deep-manila/ > 589M    shallow-manila > 603M    deep-manila/ > > Similarly, you are likely to want to run devstack locally and that will > clone the repositories for the other openstack components you need and > the savings from shallow clones won't be that significant relative to > the total needed. > > Happy developing! > > -- Tom Barron (Manila PTL) irc: tbarron > > [1] https://docs.openstack.org/manila/rocky/contributor/index.html > [2] https://wiki.openstack.org/wiki/Manila/Meetings > [3] https://wiki.openstack.org/wiki/Manila/SteinCycle > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Kind regards, Leni Kadali Mutungi From lijie at unitedstack.com Thu Nov 8 11:30:03 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Thu, 8 Nov 2018 19:30:03 +0800 Subject: [openstack-dev] [Openstack-operators] [nova] about resize the instance In-Reply-To: References: Message-ID: When I resize the instance, the compute node report that "libvirtError: internal error: qemu unexpectedly closed the monitor: 2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory".Has anyone seen this situation?And the ram_allocation_ratio is set 3 in nova.conf.The total memory is 125G.When I use the "nova hypervisor-show server" command to show the compute node's free_ram_mb is -45G.If it is the result of excessive use of memory? Can you give me some suggestions about this?Thank you very much. ------------------ Original ------------------ From: "Rambo"; Date: Thu, Nov 8, 2018 05:45 PM To: "OpenStack Developmen"; Subject: [openstack-dev] [nova] about resize the instance Hi,all When we resize/migrate instance, if error occurs on source compute node, the instance state can rollback to active currently.But if error occurs in "finish_resize" function on destination compute node, the instance state would not rollback to active. Is there a bug, or if anyone plans to change this?Can you tell me more about this ?Thank you very much. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Thu Nov 8 11:47:00 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 8 Nov 2018 13:47:00 +0200 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, We solved the timestamp bug. There are two patches for master [1] and stable/rocky [2]. I'll check the other issues next week. Regards, Ifat [1] https://review.openstack.org/#/c/616468/ [2] https://review.openstack.org/#/c/616469/ On Wed, Oct 31, 2018 at 10:59 AM Won wrote: > >>>> [image: image.png] >>>> The time stamp is recorded well in log(vitrage-graph,collect etc), but >>>> in vitrage-dashboard it is marked 2001-01-01. >>>> However, it seems that the time stamp is recognized well internally >>>> because the alarm can be resolved and is recorded well in log. >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 42202 bytes Desc: not available URL: From tenobreg at redhat.com Thu Nov 8 13:50:33 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Thu, 8 Nov 2018 10:50:33 -0300 Subject: [openstack-dev] [sahara] No meeting today Message-ID: Sorry for the late notice but we are not having sahara meeting today. Thanks folks, -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Thu Nov 8 14:07:18 2018 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 8 Nov 2018 11:07:18 -0300 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> <20181019145536.n6ca4dxjwlywb2s3@barron.net> <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> Message-ID: Hi Leni, welcome! 1) Devstack[1] plays a *main *role in the development workflow. It's an easier way to get a full environment to work in Manila, we use it every day. I recommend you to use it in a VM. You can find many tutorials about how to use Devstack, I just let you one [2] 2) I can't find *low-hanging-fruit* bugs in Manila. However, good-first-bugs are tagged as *low-hanging-fruit *for example, cinder's[3] Today at *15:00 UTC *It's Weekly Manila Team Meeting at IRC (channel #openstack-meeting-alt) on Freenode. Have fun! Sofia irc: enriquetaso [1] https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack [2] https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ [3] https://bugs.launchpad.net/cinder/+bugs?field.tag=low-hanging-fruit On Thu, Nov 8, 2018 at 7:41 AM Leni Kadali Mutungi wrote: > Hi Tom > > Thanks for the warm welcome. I've gone through the material and I would > like to understand a few things: > > 1. What's the role of devstack in the development workflow? > 2. Where can I find good-first-bugs? A bug that is simple to do > (relatively ;)) and allows me to practice what I've read up on in > Developer's Guide. I looked through the manila bugs on Launchpad but I > didn't see anything marked easy or good-first-bug or its equivalent for > manila. I am a bit unfamiliar with Launchpad so that may have played a > role :). > > Your guidance is appreciated. > > On 10/19/18 5:55 PM, Tom Barron wrote: > > On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: > >> Hi all. > >> > >> I've downloaded the manila project from GitHub as a zip file, unpacked > >> it and have run `git fetch --depth=1` and been progressively running > >> `git fetch --deepen=5` to get the commit history I need. For future > >> reference, would a shallow clone e.g. `git clone depth=1` be enough to > >> start working on the project or should one have the full commit > >> history of the project? > >> > >> -- > >> -- Kind regards, > >> Leni Kadali Mutungi > > > > Hi Leni, > > > > First I'd like to extend a warm welcome to you as a new manila project > > contributor! We have some contributor/developer documentation [1] that > > you may find useful. If you find any gaps or misinformation, we will be > > happy to work with you to address these. 
In addition to this email > > list, the #openstack-manila IRC channel on freenode is a good place to > > ask questions. Many of us run irc bouncers so we'll see the question > > even if we're not looking right when it is asked. Finally, we have a > > meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt -- > > agendas are posted here [2]. Also, here is our work-plan for the > > current Stein development cycle [3]. > > > > Now for your question about shallow clones. I hope others who know more > > will chime in but here are my thoughts ... > > > > Although having the full commit history for the project is useful, it is > > certainly possible to get started with a shallow clone of the project. > > That said, I'm not sure if the space and download-time/bandwidth gains > > are going to be that significant because once you have the workspace you > > will want to run unit tests, pep8, etc. using tox as explained in the > > developer documentation mentioned earlier. That will download virtual > > environments for manila's dependencies in your workspace (under .tox > > directory) that dwarf the space used for manila proper. > > > > $ git clone --depth=1 git at github.com:openstack/manila.git shallow-manila > > Cloning into 'shallow-manila'... > > ... > > $ git clone git at github.com:openstack/manila.git deep-manila > > Cloning into 'deep-manila'... > > ... > > $ du -sh shallow-manila deep-manila/ > > 20M shallow-manila > > 35M deep-manila/ > > > > But after we run tox inside shallow-manila and deep-manila we see: > > > > $ du -sh shallow-manila deep-manila/ > > 589M shallow-manila > > 603M deep-manila/ > > > > Similarly, you are likely to want to run devstack locally and that will > > clone the repositories for the other openstack components you need and > > the savings from shallow clones won't be that significant relative to > > the total needed. > > > > Happy developing! > > > > -- Tom Barron (Manila PTL) irc: tbarron > > > > [1] https://docs.openstack.org/manila/rocky/contributor/index.html > > [2] https://wiki.openstack.org/wiki/Manila/Meetings > > [3] https://wiki.openstack.org/wiki/Manila/SteinCycle > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > -- Kind regards, > Leni Kadali Mutungi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Laura Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Nov 8 15:01:16 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Nov 2018 00:01:16 +0900 Subject: [openstack-dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <166f3d7791e.bb0edaa5128883.3333554287981015277@ghanshyammann.com> Hi All, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. 
- TC From amy at demarco.com Thu Nov 8 15:06:19 2018 From: amy at demarco.com (Amy Marrich) Date: Thu, 8 Nov 2018 09:06:19 -0600 Subject: [openstack-dev] Diversity and Inclusion at OpenStack Summit In-Reply-To: References: Message-ID: Forgot one important one on Wednesday, 12:30 - 1:40!!! We are very pleased to have the very first *Diversity Networking Lunch* which os being sponsored by Intel[0]. In the past there was feedback that allies and others didn't wish to intrude on the WoO lunch and Intel was all for this change to a more open Diversity lunch! So please come and join us on Wednesday for lunch!! See you all soon, Amy Marrich (spotz) Diversity and Inclusion WG Chair [0] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22850/diversity-networking-lunch-sponsored-by-intel On Wed, Nov 7, 2018 at 9:18 AM, Amy Marrich wrote: > I just wanted to pass on a few things we have going on during Summit that > might be of interest! > > *Diversity and Inclusion GroupMe* - Is it your first summit and you don't > know anyone else? Maybe you just don't want to travel to and from the venue > alone? In the tradition of WoO, I have created a GroupMe so people can > communicate with each other. If you would like to be added to the group > please let me know and I'll get you added! > > *Night Watch Tour *- On Wednesday night at 10PM, members of the community > will be meeting up to go on a private Night Watch Tour[0]! This is a > non-alcoholic activity for those wanting to get with other Stackers, but > don't want to partake in the Pub Crawl! We've been doing these since Boston > and they're a lot of fun. The cost is 15 euros cash and I do need you to > RSVP to me as we will need to get a second guide if we grow too large! > > Summit sessions you may wish to attend: > Tuesday - > *Speed Mentoring Lunch* [1] 12:30 -1:40 - We are still looking for both > Mentors and Mentees for the session so please RSVP! This is another great > way to meet people in the community, learn more and give back!!! > *Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in > charge of the Cohort Mentoring program and see how you can get involved as > a Mentor or Mentee! > *D&I WG Update* [3] 5:10- 5:50 - Learn what we've been up to, how you can > get involved, and what's next. > > Wednesday - > *Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some > exciting stuff this week but don't know how to get setup to start > contributing? This session is for you in that we'll walk you through > getting your logins, your system configured and if time allows even how to > submit a bug and patch! > > Thursday - > *Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance > of mentoring, the changes in the OPenStack mentoring programs and how you > can get involved. > > Hope to see everyone in Berlin next week! Please feel free to contact me > or grab me in the hall next week with any questions or to join in the fun! 
> > Amy Marrich (spotz) > Diversity and Inclusion WG Chair > > [0] - http://baerentouren.de/nachtwache_en.html > [1] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22873/speed-mentoring-lunch > [2] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22892/long-term-mentoring-keeping-the-party-going > [3] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22893/diversity-and-inclusion-wg-update > [4] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/21943/git-and-gerrit-hands-on-workshop > [5] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22443/mentoring-program-reboot > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Nov 8 17:31:19 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 8 Nov 2018 11:31:19 -0600 Subject: [openstack-dev] [Openstack-operators] [nova] about resize the instance In-Reply-To: References: Message-ID: <9f3b9524-c5a4-2a5f-720f-a749e128f42e@windriver.com> On 11/8/2018 5:30 AM, Rambo wrote: > >  When I resize the instance, the compute node report that > "libvirtError: internal error: qemu unexpectedly closed the monitor: > 2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory > 'pc.ram': Cannot allocate memory".Has anyone seen this situation?And > the ram_allocation_ratio is set 3 in nova.conf.The total memory is > 125G.When I use the "nova hypervisor-show server" command to show the > compute node's free_ram_mb is -45G.If it is the result of excessive use > of memory? > Can you give me some suggestions about this?Thank you very much. I suspect that you simply don't have any available memory on that system. What is your kernel overcommit setting on the host? If /proc/sys/vm/overcommit_memory is set to 2, then try either changing the overcommit ratio or setting it to 1 to see if that makes a difference. Chris From bdobreli at redhat.com Thu Nov 8 17:58:50 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 8 Nov 2018 18:58:50 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching Message-ID: <5d1b7e47-ebd9-089e-30e9-15464ae6e49b@redhat.com> Hi folks. The deadline for papers seems to be extended till Nov 17, so that's a great news! I finished drafting the position paper [0],[1]. Please /proof read and review. There is also open questions placed there and it would be really nice to have a co-author here for any of those items remaining... I'm also looking for some help with... **uploading PDF** to EDAS system! :) It throws on me: pdf notstampable The PDF file is not compliant with PDF standards and cannot be stamped (see FAQ)... And FAQ says: > "First, try using the most current version of dvipdf for LaTeX or the most current version of Word. You can also distill the file by using Acrobat (Pro, not Acrobat Reader): > * Open the PDF file in Acrobat Pro; > * Go to the File Menu > Save As or File > Export To... (in Adobe DC Pro) or File > Save As Other... > More Options > Postscript (in Adobe Pro version 11) > * Give the file a new name (do not overwrite the original file); > * Under "Save As Type", choose "PostScript File (*.ps)" > * Open Distiller and browse for this file or go to the directory where the file exists and double click on the file - this will open and run Distiller and regenerate the PDF file. 
> > If you do not have Acrobat Pro, you can also try to save the PostScript version via Apple Preview, using the "Print..." menu and the "PDF v" selector in the lower left hand corner to pick "Save as PostScript...". Unfortunately, Apple Preview saves PDF files as version 1.3, which is not acceptable to PDF Xpress, but tools such as docupub appear to produce compliant PDF." I have yet tried those MS word/adobe pro and distutils dances ( I think I should try that as well... ), but neither docupub [2] nor dvipdf(m) for LaTeX helped to produce a pdf edas can eat :-( [0] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf [1] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.tex [2] https://docupub.com/pdfconvert/ > Folks, I have drafted a few more sections [0] for your /proof reading > and kind review please. Also left some notes for TBD things, either for > the potential co-authors' attention or myself :) > > [0] > https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando From openstack at nemebean.com Thu Nov 8 19:01:03 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Nov 2018 13:01:03 -0600 Subject: [openstack-dev] [oslo] Next two meetings In-Reply-To: <5842030a-2ce9-77bd-0685-2c0efb926f7d@nemebean.com> References: <5842030a-2ce9-77bd-0685-2c0efb926f7d@nemebean.com> Message-ID: On 11/7/18 2:19 PM, Ben Nemec wrote: > Hi, > > Next week is summit and a lot of our contributors will be traveling > there on Monday, so let's skip the meeting. The following week I will > also be out, so if anyone wants to run the meeting please let me know. > Otherwise we'll skip that one too and reconvene after Thanksgiving. > > If you need your Oslo fix for next week, come see us in Berlin: > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22952/oslo-project-onboarding > > > (I just noticed that is listed as a project onboarding - it's actually a > project update. I'll look into getting that fixed) This is fixed, so you can now find the session at https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22952/oslo-project-update From cboylan at sapwetik.org Thu Nov 8 19:42:19 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 08 Nov 2018 11:42:19 -0800 Subject: [openstack-dev] OpenDev, the future of OpenStack Infra Message-ID: <1541706139.393458.1570439656.675141A3@webmail.messagingengine.com> Hello everyone, Sorry for another cross post so soon. In the land before time we had Stackforge. Stackforge gave non-OpenStack projects a place to live with their own clearly defined "not OpenStack" namespacing. As the wheel of time spun we realized that many Stackforge projects were becoming OpenStack projects and we would have to migrate them. This involved Gerrit downtimes to rename things safely. To ease the pain of this, the TC decided that all projects developed in the OpenStack Infrastructure could live under the OpenStack git namespace to simplify migrations. Unfortunately this had the effect of creating confusion over which projects were officially a part of OpenStack, and whether or not projects that were not OpenStack could use our project hosting. Stackforge lived on under a different name, "unofficial project hosting", but many potential infrastructure users either didn't understand this or didn't want that strong association to OpenStack for their project hosting [0]. 
Turns out that we want to be able to host OpenStack and non-OpenStack projects together without confusion in a way that makes all of the projects involved happy. In an effort to make this a reality the OpenStack Infra team has been working through a process to rename itself to make it clear that our awesome project infrastructure and open collaboration tooling is community run, not just for OpenStack, but for others that want to be involved. To this end we've acquired the opendev.org domain which will allow us to host services under a neutral name as the OpenDev Infrastructure team. The OpenStack community will continue to be the largest and a primary user for the OpenDev Infrastructure team, but our hope in making our infrastructure services more inclusive is that we'll also attract new contributors, which will ultimately benefit OpenStack and other open infrastructure projects.

Our goals for OpenDev are to:

* Encourage additional active infrastructure contributors to help us scale. Make it clear that this is community-run tooling & infrastructure and everyone can get involved.
* Make open source collaboration tools and project infrastructure more accessible to those that want it.
* Have exposure to and dogfooding of OpenStack clouds as viable open source cloud providers.
* Enable more projects to take advantage of the OpenStack-pioneered model of development and collaboration, including recommended practices like code review and gating.
* Help build relationships with new and adjacent open source projects and create an inclusive space for collaboration and open source development.

Much of this is still in the early planning stages. This is the infrastructure team's current thinking on the subject, but understand we have an existing community from which we'd like to see buy-in and involvement. To that end we have tried to compile a list of expected FAQ/Q&A information below, but feel free to follow up either on this thread or with myself for anything we haven't considered already. Any transition will be slow and considered so don't expect everything to change overnight. But don't be surprised if you run into some new HTTP redirects as we reorganize the names under which services run. We'll also be sure to keep you informed on any major (and probably minor) transition steps so that they won't surprise you.

Thank you, Clark

[0] It should be noted that some projects did not mind this and hosted with OpenStack Infra anyway. ARA is an excellent example of this.

FAQ

* What is OpenDev? OpenDev is community-run tools and infrastructure services for collaboratively developing open source software. The OpenDev infrastructure team is the community of people who operate the infrastructure under the umbrella of the OpenStack Foundation.

* What services are you offering? What is the expected timeline? In the near-term we expect to transition simple services like etherpad hosting to the OpenDev domain. It will take us months and potentially up to a year to transition key infrastructure pieces like Git and Gerrit. Example services managed by the team today include etherpad, wiki, the zuul and nodepool CI system, git and gerrit, and other minor systems like pbx conferencing and survey tools.

* Where will these services live? We've acquired opendev.org and are planning to set up DNS hosting very soon. We will post a simple information page and FAQ on the website and build it out as necessary over time.

* Why are you changing from the OpenStack infrastructure team to the OpenDev infrastructure team?
In the same way we want to signal that our services are not strictly for OpenStack projects, and that not every project using our services is an official part of OpenStack, we want to make it clear that our team also serves this larger community.

* Who should use OpenDev services? Does it have to be projects related to OpenStack, or any open source projects? In short, open source contributors who share our community values, especially those who might want to help contribute to improving and maintaining OpenDev infrastructure over time. Projects using OpenDev hosted git and gerrit services should have an OSI-approved license.

* Will the OpenStack projects live at git.opendev.org? All projects hosted with OpenDev will live at git.opendev.org. For backwards compatibility reasons, at the very least git.openstack.org will be an alias for git.opendev.org for the foreseeable future. The same is true of the other existing whitelabel git domains such as git.starlingx.io and git.zuul-ci.org. Whether or not other 'whitelabel' domains are created is an open question. Given a neutral domain name, the desire for such sites may not seem as necessary.

* Does this mean the infrastructure team will be spending less time on OpenStack? OpenStack will continue to be the largest and a primary user for the OpenDev Infrastructure team, and we expect that our work will benefit all users. There will be additional effort required as we transition to the new namespace and reorganize, but over the long term we hope this inclusive approach will help us attract new contributors and ultimately benefit OpenStack.

* Are OpenStack cloud test resources the only resources that will be used? At the present time all of the donated resources come to us from a combination of OpenStack Public and Private clouds. Nobody from any of the proprietary clouds has asked to donate resources to us. It is conceivable that the shift to OpenDev could open the door to those cloud providers wanting to donate some cloud resources. Assuming nodepool supports talking to those clouds, it is certainly a possibility, but at the moment it's all speculation.

* Is this name associated with the OpenDev Conferences (opendevconf.com) that OpenStack Foundation has previously organized? Yes! They are related, albeit indirectly. To quote from the conference's promotional site, "the focus is on bringing together composable open infrastructure technologies across communities and industries." The possibility of cross-promotional tie-ins could prove synergistic, since the conference and the collaboratory we're building share a lot of similar values and goals, and are ultimately supported by the same donors, community and foundation.

* How will OpenDev be governed? Will the OpenStack TC retain oversight over it? The OpenDev governance discussion is just getting started, but like all OSF-supported initiatives, OpenDev follows the Four Opens, so it will ultimately be directly governed by OpenDev contributors. While it won't be under the sole oversight of the OpenStack Technical Committee anymore, OpenDev users (in particular OpenStack) should be represented in the governance model so that they can feed back their requirements to the OpenDev team.

From melwittt at gmail.com Thu Nov 8 21:50:35 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 8 Nov 2018 13:50:35 -0800 Subject: [openstack-dev] [nova] no meeting the next two weeks Message-ID: Howdy all, This is a heads up that we will not hold a nova meeting next week November 15 because of summit week.
And we will also not hold a nova meeting the following week November 22 because of the US holiday of Thanksgiving, as we're unlikely to find anyone to run it during the 2100 UTC time slot. Meetings will resume on November 29 at 1400 UTC. I'm looking forward to seeing all of you at the summit next week! Cheers, -melanie From najoy at cisco.com Thu Nov 8 22:04:40 2018 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Thu, 8 Nov 2018 22:04:40 +0000 Subject: [openstack-dev] networking-vpp 18.10 for VPP 18.10 is now available Message-ID: Hello All, In conjunction with the release of VPP 18.10, we'd like to invite you all to try out networking-vpp 18.10 for VPP 18.10. As many of you may already know, VPP is a fast user space forwarder based on the DPDK toolkit. VPP uses vector packet processing algorithms to minimize the CPU time spent on each packet to maximize throughput. Networking-vpp is a ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. In this release, we have made improvements to fully support the network trunk service plugin. Using this plugin, you can attach multiple networks to an instance by binding it to a single vhostuser trunk port. The APIs are the same as the OpenStack Neutron trunk service APIs. You can also now bind and unbind subports to a bound network trunk. Another feature we have improved in this release is the Tap-as-a-service(TaaS). The TaaS code has been updated to handle any out of order etcd messages received during agent restarts. You can use this service to create remote port mirroring capability for tenant virtual networks. Besides the above, this release also has several bug fixes, VPP 18.10 API compatibility and stability related improvements. The README [1] explains how you can try out VPP using devstack: the devstack plugin will deploy the mechanism driver and VPP 18.10 and should give you a working system with a minimum of hassle. We will be continuing our development between now and VPP's 19.01 release. There are several features we're planning to work on and we will keep you updated through our bugs list [2]. We welcome anyone who would like to come help us. Everyone is welcome to join our biweekly IRC meetings, every other Monday (the next one is due this Monday at 0800 PT = 1600 GMT. -- Ian & Naveen [1]https://github.com/openstack/networking-vpp/blob/master/README.rst [2]http://goo.gl/i3TzAt -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Thu Nov 8 22:39:02 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 8 Nov 2018 14:39:02 -0800 Subject: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration In-Reply-To: References: <20180927022738.GA22304@rcp.sl.cloud9.ibm.com> <5d7129ce-39c8-fb9e-517c-64d030f71963@redhat.com> <20181009062143.GA32130@rcp.sl.cloud9.ibm.com> Message-ID: Thanks for getting the forum setup. I have added some high-level autoscaling use case to the etherpad that will help during the discussion. Duc (dtruong) On Wed, Oct 24, 2018 at 2:10 AM Rico Lin wrote: > Hi all, I'm glad to notify you all that our forum session has been > accepted [1] and that forum time schedule (Thursday, November 15, > 9:50am-10:30am) should be stable by now. So please save your schedule for > it!! > Any feedback are welcome! 
> > > > [1] > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22753/autoscaling-integration-improvement-and-feedback > > On Tue, Oct 9, 2018 at 3:07 PM Rico Lin wrote: > >> a reminder for all, please put your ideas/thoughts/suggest actions in our >> etherpad [1], >> which we gonna use for further discussion in Forum, or in PTG if we got >> no forum for it. >> So we won't be missing anything. >> >> >> >> [1] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback >> >> On Tue, Oct 9, 2018 at 2:22 PM Qiming Teng wrote: >> >>> > >One approach would be to switch the underlying Heat AutoScalingGroup >>> > >implementation to use Senlin and then deprecate the AutoScalingGroup >>> > >resource type in favor of the Senlin resource type over several >>> > >cycles. >>> > >>> > The hard part (or one hard part, at least) of that is migrating the >>> existing >>> > data. >>> >>> Agreed. In an ideal world, we can transparently transplant the "scaling >>> group" resource implementation onto something (e.g. a library or an >>> interface). This sounds like an option for both teams to brainstorm >>> together. >>> >>> - Qiming >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin*irc: ricolin >> >> > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Nov 9 01:19:22 2018 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 8 Nov 2018 17:19:22 -0800 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> <20181019145536.n6ca4dxjwlywb2s3@barron.net> <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> Message-ID: On Thu, Nov 8, 2018 at 6:07 AM Sofia Enriquez wrote: > Hi Leni, welcome! > > 1) Devstack[1] plays a *main *role in the development workflow. > It's an easier way to get a full environment to work in Manila, we use it > every day. I recommend you to use it in a VM. > You can find many tutorials about how to use Devstack, I just let you one > [2] > > 2) I can't find *low-hanging-fruit* bugs in Manila. However, > good-first-bugs are tagged as *low-hanging-fruit *for example, cinder's[3] > > Today at *15:00 UTC *It's Weekly Manila Team Meeting at IRC (channel > #openstack-meeting-alt) on Freenode. > > Thank you so much for looking Sofia, and thanks for reaching out Leni! I just reported a low-hanging-fruit bug in manila https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit I've added some steps to get you started to land a bug fix on the bug report. Please feel free to join #openstack-manila on IRC ( https://docs.openstack.org/infra/manual/irc.html) and reach out in case you're stuck. 
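(As an illustration only - the bug number and branch name below are placeholders, and the tox environment names should be checked against manila's tox.ini - the usual Gerrit workflow for landing a fix for one of these low-hanging-fruit bugs looks roughly like this, assuming git-review is installed and your Gerrit account is set up:

  $ git clone https://git.openstack.org/openstack/manila && cd manila
  $ git checkout -b bug/1802xxx
  # ...edit the code, add or adjust a unit test...
  $ tox -e pep8 && tox -e py36
  $ git commit -a
  # include a "Closes-Bug: #1802xxx" line in the commit message footer
  $ git review
)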
I'd also encourage you to look at low-hanging-fruit in the python-manilaclient and manila-ui projects as well: https://bugs.launchpad.net/python-manilaclient/+bugs?field.tag=low-hanging-fruit https://bugs.launchpad.net/manila-ui/+bugs?field.tag=low-hanging-fruit Happy coding, Goutham (gouthamr) > Have fun! > Sofia > irc: enriquetaso > [1] > https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack > [2] > https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ > [3] https://bugs.launchpad.net/cinder/+bugs?field.tag=low-hanging-fruit > > On Thu, Nov 8, 2018 at 7:41 AM Leni Kadali Mutungi > wrote: > >> Hi Tom >> >> Thanks for the warm welcome. I've gone through the material and I would >> like to understand a few things: >> >> 1. What's the role of devstack in the development workflow? >> 2. Where can I find good-first-bugs? A bug that is simple to do >> (relatively ;)) and allows me to practice what I've read up on in >> Developer's Guide. I looked through the manila bugs on Launchpad but I >> didn't see anything marked easy or good-first-bug or its equivalent for >> manila. I am a bit unfamiliar with Launchpad so that may have played a >> role :). >> >> Your guidance is appreciated. >> >> On 10/19/18 5:55 PM, Tom Barron wrote: >> > On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: >> >> Hi all. >> >> >> >> I've downloaded the manila project from GitHub as a zip file, unpacked >> >> it and have run `git fetch --depth=1` and been progressively running >> >> `git fetch --deepen=5` to get the commit history I need. For future >> >> reference, would a shallow clone e.g. `git clone depth=1` be enough to >> >> start working on the project or should one have the full commit >> >> history of the project? >> >> >> >> -- >> >> -- Kind regards, >> >> Leni Kadali Mutungi >> > >> > Hi Leni, >> > >> > First I'd like to extend a warm welcome to you as a new manila project >> > contributor! We have some contributor/developer documentation [1] that >> > you may find useful. If you find any gaps or misinformation, we will be >> > happy to work with you to address these. In addition to this email >> > list, the #openstack-manila IRC channel on freenode is a good place to >> > ask questions. Many of us run irc bouncers so we'll see the question >> > even if we're not looking right when it is asked. Finally, we have a >> > meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt -- >> > agendas are posted here [2]. Also, here is our work-plan for the >> > current Stein development cycle [3]. >> > >> > Now for your question about shallow clones. I hope others who know >> more >> > will chime in but here are my thoughts ... >> > >> > Although having the full commit history for the project is useful, it >> is >> > certainly possible to get started with a shallow clone of the project. >> > That said, I'm not sure if the space and download-time/bandwidth gains >> > are going to be that significant because once you have the workspace >> you >> > will want to run unit tests, pep8, etc. using tox as explained in the >> > developer documentation mentioned earlier. That will download virtual >> > environments for manila's dependencies in your workspace (under .tox >> > directory) that dwarf the space used for manila proper. >> > >> > $ git clone --depth=1 git at github.com:openstack/manila.git >> shallow-manila >> > Cloning into 'shallow-manila'... >> > ... 
>> > $ git clone git at github.com:openstack/manila.git deep-manila >> > Cloning into 'deep-manila'... >> > ... >> > $ du -sh shallow-manila deep-manila/ >> > 20M shallow-manila >> > 35M deep-manila/ >> > >> > But after we run tox inside shallow-manila and deep-manila we see: >> > >> > $ du -sh shallow-manila deep-manila/ >> > 589M shallow-manila >> > 603M deep-manila/ >> > >> > Similarly, you are likely to want to run devstack locally and that will >> > clone the repositories for the other openstack components you need and >> > the savings from shallow clones won't be that significant relative to >> > the total needed. >> > >> > Happy developing! >> > >> > -- Tom Barron (Manila PTL) irc: tbarron >> > >> > [1] https://docs.openstack.org/manila/rocky/contributor/index.html >> > [2] https://wiki.openstack.org/wiki/Manila/Meetings >> > [3] https://wiki.openstack.org/wiki/Manila/SteinCycle >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> -- Kind regards, >> Leni Kadali Mutungi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > > Laura Sofia Enriquez > > Associate Software Engineer > Red Hat PnT > > Ingeniero Butty 240, Piso 14 > > (C1001AFB) Buenos Aires - Argentina > > senrique at redhat.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duc.openstack at gmail.com Fri Nov 9 01:33:17 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 8 Nov 2018 17:33:17 -0800 Subject: [openstack-dev] [senlin] Meeting today at 530 UTC Message-ID: Everyone, As a reminder, we will be having our regular meeting today at 530 UTC. The meetings for Nov 16 and Nov 23 will be cancelled due to the summit next week and the U.S. Thanksgiving holiday the following week, Duc From gmann at ghanshyammann.com Fri Nov 9 02:02:54 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 09 Nov 2018 11:02:54 +0900 Subject: [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit Message-ID: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com> Hello everyone, Along with project updates & onboarding sessions, QA team will host QA feedback sessions in berlin summit. Feel free to catch us next week for any QA related questions or if you need help to contribute in QA (we are really looking forward to onbaord new contributor in QA). Below are the QA related sessions, feel free to append the list if i missed anything. I am working on onboarding/forum sessions etherpad and will send the link tomorrow. Tuesday: 1. OpenStack QA - Project Update. [1] 2. OpenStack QA - Project Onboarding. [2] 3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3] Wednesday: 4. 
Forum: Users / Operators adoption of QA tools / plugins. [4] Thursday: 5. Using Rally/Tempest for change validation (OPS session) [5] [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session -gmann From gdubreui at redhat.com Fri Nov 9 05:35:22 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 9 Nov 2018 16:35:22 +1100 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: <55ba80c3-bbfc-76dd-affb-f0c4023af33e@redhat.com> References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> <55ba80c3-bbfc-76dd-affb-f0c4023af33e@redhat.com> Message-ID: Hi Miguel, Could you please provide permission [1] to upload commit merges? I'm getting the following error after merging HEAD: $ git review -R feature/graphql remote: remote: Processing changes: refs: 1 remote: Processing changes: refs: 1, done To ssh://review.openstack.org:29418/openstack/neutron.git  ! [remote rejected]       HEAD -> refs/publish/feature/graphql/bug/1802101 (you are not allowed to upload merges) error: failed to push some refs to 'ssh://q-1illes-a at review.openstack.org:29418/openstack/neutron.git' Thanks, Gilles [1] https://github.com/openstack-infra/gerrit/blob/master/Documentation/error-not-allowed-to-upload-merges.txt On 23/10/18 7:30 pm, Gilles Dubreuil wrote: > > Hi Miguel, > > Thank you for your help. > > I'll use those precious instructions next time. > > Cheers, > Gilles > > On 16/10/18 1:32 am, Miguel Lavalle wrote: >> Hi Gilles, >> >> The merge of master into feature/graphql  has been approved: >> https://review.openstack.org/#/c/609455. In the future, you can >> create your own merge patch following the instructions here: >> https://docs.openstack.org/infra/manual/drivers.html#merge-master-into-feature-branch. >> The Neutron team will catch it in Gerrit and review it >> >> Regards >> >> Miguel >> >> On Thu, Oct 4, 2018 at 11:44 PM Gilles Dubreuil > > wrote: >> >> Hey Neutron folks, >> >> I'm just reiterating the request. >> >> Thanks >> >> >> On 20/06/18 11:34, Gilles Dubreuil wrote: >> > Could someone from the Neutron release group rebase >> feature/graphql >> > branch against master/HEAD branch please? >> > >> > Regards, >> > Gilles >> > >> > >> > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email:gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjzhjing at linux.vnet.ibm.com Fri Nov 9 05:56:35 2018 From: bjzhjing at linux.vnet.ibm.com (zhangjing) Date: Fri, 9 Nov 2018 13:56:35 +0800 Subject: [openstack-dev] Fw: Re: [nova] Announcing new Focal Point for s390x libvirt/kvm Nova In-Reply-To: References: Message-ID: Thanks Andreas! also thanks Melanie's warm welcome! 
It's a great honor for us to join Openstack project, hope to have more communications with all of you in the future, and do more contributions in the community! > ----- Original message ----- > From: melanie witt > To: openstack-dev at lists.openstack.org > Cc: > Subject: Re: [openstack-dev] [nova] Announcing new Focal Point for > s390x libvirt/kvm Nova > Date: Tue, Nov 6, 2018 2:53 AM > On Fri, 2 Nov 2018 09:47:42 +0100, Andreas Scheuring wrote: > > Dear Nova Community, > > I want to announce the new focal point for Nova s390x libvirt/kvm. > > > > Please welcome "Cathy Zhang” to the Nova team. She and her team > will be responsible for maintaining the s390x libvirt/kvm > Thirdparty CI  [1] and any s390x specific code in nova and os-brick. > > I personally took a new opportunity already a few month ago but > kept maintaining the CI as good as possible. With new manpower we > can hopefully contribute more to the community again. > > > > You can reach her via > > * email: bjzhjing at linux.vnet.ibm.com > > * IRC: Cathyz > > > > Cathy, I wish you and your team all the best for this exciting > role! I also want to say thank you for the last years. It was a > great time, I learned a lot from you all, will miss it! > > > > Cheers, > > > > Andreas (irc: scheuran) > > > > > > [1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_zKVM_CI > > Thanks Andreas, for sending this note. It has been a pleasure working > with you over these years. We wish you the best of luck in your new > opportunity! > > Welcome to the Nova community, Cathy! We look forward to working with > you. Please feel free to reach out to us on IRC in the #openstack-nova > channel and on this mailing list with the [nova] tag to ask questions > and share info. > > Best, > -melanie > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Nov 9 07:00:49 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Nov 2018 01:00:49 -0600 Subject: [openstack-dev] [release] Release countdown for week R-21 and R-20, November 12-23 Message-ID: <20181109070048.GA18128@sm-workstation> Development Focus ----------------- Teams should now be focused on feature development and preparing for Berlin Forum discussions. General Information ------------------- The OpenStack Summit and Forum start on Tuesday. Hopefully many teams will have an opportunity to work together as well as get input from folks they don't normally get to interact with as often. This is a great opportunity to get feedback from operators and other users to help shape the direction of the project. Upcoming Deadlines & Dates -------------------------- Forum at OpenStack Summit in Berlin: November 13-15 Start using openstack-discuss ML: November 19 Stein-2 Milestone: January 10 -- Sean McGinnis (smcginnis) From sungbogo28 at gmail.com Fri Nov 9 07:49:59 2018 From: sungbogo28 at gmail.com (=?UTF-8?B?67Cx7Zi47ISx?=) Date: Fri, 9 Nov 2018 16:49:59 +0900 Subject: [openstack-dev] [Tacker] Redundancy VNFFG module in tacker Message-ID: Dear tacker project members, Hello. I am Hosung Baek and Ph.D.student at Korea Univ. 
I am interested in the VNFFG feature in Tacker project, and I want to develop the "Redundancy VNFFG module" in tacker. As far as I know. it is difficult to apply the VNFFG configuration where there is low loss requirement such as DetNet and URLLC. Because there is no module for reliable VNFFG. The module to be developed is described as follows. It is a function to construct a disjoint VNFFG by adding redundancy VNFFG module to the tacker VNFFG feature and to transmit the data to two disjoint paths. To implement this module, two VNFFGs must be configured for one network service and the function to remove redundant packets in two different paths is required. I want to develop this module and contribute this to tacker project. So could you tell me the contribution process in OpenStack tacker project if possible? Yours sincerely, Hosung Baek. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Nov 9 08:28:48 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 9 Nov 2018 09:28:48 +0100 Subject: [openstack-dev] [neutron] CI meeting on 13.11.2018 cancelled Message-ID: Hi, Due to summit in Berlin Neutron CI meeting on 13.11.2018 is cancelled. Next meeting will be as usual on 20.11.2018. — Slawek Kaplonski Senior software engineer Red Hat From dangtrinhnt at gmail.com Fri Nov 9 09:03:31 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 9 Nov 2018 18:03:31 +0900 Subject: [openstack-dev] [Tacker] Redundancy VNFFG module in tacker In-Reply-To: References: Message-ID: Hi Hosung, Here are a couple suggestions you can follow. 1. You can first follow the OpenStack contributor guidelines and set up the needed accounts [1]. 2. After you have the launchpad account, register a blueprint for your feature in Tacker's launchpad [2]. 3. Clone the Tacker Specs repository to your local dev environment [3]. Write a spec (detail about your feature, implementation plan, etc.) for your feature. Commit. and then 'git review' to push your spec to Gerrit [4] for the Tacker team to review and comment. 4. After the spec has been approved by the core reviewers and Tacker's PTL, you can start working on the implementation the same way when you write the spec: - Clone the needed repositories (tacker, python-tackerclient, or tacker-horizon) to your local dev environment - Make changes - Commit and review - Wait for other to comments and refine your changes. You can join the Tacker's IRC chat channel (#tacker) to talk to other Tacker developers. Hope that will help. [1] https://docs.openstack.org/contributors/code-and-documentation/index.html [2] https://blueprints.launchpad.net/tacker [3] https://github.com/openstack/tacker-specs [4] http://review.openstack.org On Fri, Nov 9, 2018 at 4:50 PM 백호성 wrote: > Dear tacker project members, > > Hello. I am Hosung Baek and Ph.D.student at Korea Univ. > > I am interested in the VNFFG feature in Tacker project, and I want to > develop the "Redundancy VNFFG module" in tacker. > > As far as I know. it is difficult to apply the VNFFG configuration where > there is low loss requirement such as DetNet and URLLC. > > Because there is no module for reliable VNFFG. > > The module to be developed is described as follows. > > It is a function to construct a disjoint VNFFG by adding redundancy VNFFG > module to the tacker VNFFG feature and to transmit the data to two disjoint > paths. 
> > To implement this module, two VNFFGs must be configured for one network > service and the function to remove redundant packets in two different paths > is required. > > I want to develop this module and contribute this to tacker project. > > So could you tell me the contribution process in OpenStack tacker project > if possible? > > Yours sincerely, > > Hosung Baek. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Fri Nov 9 09:32:40 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 9 Nov 2018 04:32:40 -0500 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> <20181019145536.n6ca4dxjwlywb2s3@barron.net> <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> Message-ID: <20181109093240.yk7sls3hjacfqjx5@barron.net> On 08/11/18 11:07 -0300, Sofia Enriquez wrote: >Hi Leni, welcome! > >1) Devstack[1] plays a *main *role in the development workflow. >It's an easier way to get a full environment to work in Manila, we use it >every day. I recommend you to use it in a VM. >You can find many tutorials about how to use Devstack, I just let you one >[2] Nice blog/guide, Sofia! I'll just add [4] as a followup for anyone particularly wanting to install devstack with manila and a cephfs with nfs back end. > >2) I can't find *low-hanging-fruit* bugs in Manila. However, >good-first-bugs are tagged as *low-hanging-fruit *for example, cinder's[3] And Goutham followed up with some manila low-hanging-fruit too. Thanks, Goutham! > >Today at *15:00 UTC *It's Weekly Manila Team Meeting at IRC (channel >#openstack-meeting-alt) on Freenode. And as we may have mentioned, you can ask questions on irc [5] [6] #openstack-manila any time. Ask even if no one is responding right then, most of us have bouncers and will see and get back to you. -- Tom Barron [4] https://github.com/tombarron/vagrant-libvirt-devstack [5] https://docs.openstack.org/contributors/common/irc.html [6] https://docs.openstack.org/infra/manual/irc.html > >Have fun! >Sofia >irc: enriquetaso >[1] >https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack >[2] >https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ >[3] https://bugs.launchpad.net/cinder/+bugs?field.tag=low-hanging-fruit > >On Thu, Nov 8, 2018 at 7:41 AM Leni Kadali Mutungi >wrote: > >> Hi Tom >> >> Thanks for the warm welcome. I've gone through the material and I would >> like to understand a few things: >> >> 1. What's the role of devstack in the development workflow? >> 2. Where can I find good-first-bugs? A bug that is simple to do >> (relatively ;)) and allows me to practice what I've read up on in >> Developer's Guide. I looked through the manila bugs on Launchpad but I >> didn't see anything marked easy or good-first-bug or its equivalent for >> manila. I am a bit unfamiliar with Launchpad so that may have played a >> role :). >> >> Your guidance is appreciated. >> >> On 10/19/18 5:55 PM, Tom Barron wrote: >> > On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: >> >> Hi all. 
>> >> >> >> I've downloaded the manila project from GitHub as a zip file, unpacked >> >> it and have run `git fetch --depth=1` and been progressively running >> >> `git fetch --deepen=5` to get the commit history I need. For future >> >> reference, would a shallow clone e.g. `git clone depth=1` be enough to >> >> start working on the project or should one have the full commit >> >> history of the project? >> >> >> >> -- >> >> -- Kind regards, >> >> Leni Kadali Mutungi >> > >> > Hi Leni, >> > >> > First I'd like to extend a warm welcome to you as a new manila project >> > contributor! We have some contributor/developer documentation [1] that >> > you may find useful. If you find any gaps or misinformation, we will be >> > happy to work with you to address these. In addition to this email >> > list, the #openstack-manila IRC channel on freenode is a good place to >> > ask questions. Many of us run irc bouncers so we'll see the question >> > even if we're not looking right when it is asked. Finally, we have a >> > meeting most weeks on Thursdays at 1500UTC in #openstack-meeting-alt -- >> > agendas are posted here [2]. Also, here is our work-plan for the >> > current Stein development cycle [3]. >> > >> > Now for your question about shallow clones. I hope others who know more >> > will chime in but here are my thoughts ... >> > >> > Although having the full commit history for the project is useful, it is >> > certainly possible to get started with a shallow clone of the project. >> > That said, I'm not sure if the space and download-time/bandwidth gains >> > are going to be that significant because once you have the workspace you >> > will want to run unit tests, pep8, etc. using tox as explained in the >> > developer documentation mentioned earlier. That will download virtual >> > environments for manila's dependencies in your workspace (under .tox >> > directory) that dwarf the space used for manila proper. >> > >> > $ git clone --depth=1 git at github.com:openstack/manila.git shallow-manila >> > Cloning into 'shallow-manila'... >> > ... >> > $ git clone git at github.com:openstack/manila.git deep-manila >> > Cloning into 'deep-manila'... >> > ... >> > $ du -sh shallow-manila deep-manila/ >> > 20M shallow-manila >> > 35M deep-manila/ >> > >> > But after we run tox inside shallow-manila and deep-manila we see: >> > >> > $ du -sh shallow-manila deep-manila/ >> > 589M shallow-manila >> > 603M deep-manila/ >> > >> > Similarly, you are likely to want to run devstack locally and that will >> > clone the repositories for the other openstack components you need and >> > the savings from shallow clones won't be that significant relative to >> > the total needed. >> > >> > Happy developing! 
>> > >> > -- Tom Barron (Manila PTL) irc: tbarron >> > >> > [1] https://docs.openstack.org/manila/rocky/contributor/index.html >> > [2] https://wiki.openstack.org/wiki/Manila/Meetings >> > [3] https://wiki.openstack.org/wiki/Manila/SteinCycle >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> -- Kind regards, >> Leni Kadali Mutungi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >-- > >Laura Sofia Enriquez > >Associate Software Engineer >Red Hat PnT > >Ingeniero Butty 240, Piso 14 > >(C1001AFB) Buenos Aires - Argentina > >senrique at redhat.com > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nguyentrihai93 at gmail.com Fri Nov 9 09:48:10 2018 From: nguyentrihai93 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLDrSBI4bqjaQ==?=) Date: Fri, 9 Nov 2018 18:48:10 +0900 Subject: [openstack-dev] [Tacker] Redundancy VNFFG module in tacker In-Reply-To: References: Message-ID: This is the document how to contribute to Tacker: https://docs.openstack.org/tacker/latest/contributor/index.html On Fri, Nov 9, 2018, 4:50 PM 백호성 wrote: > Dear tacker project members, > > Hello. I am Hosung Baek and Ph.D.student at Korea Univ. > > I am interested in the VNFFG feature in Tacker project, and I want to > develop the "Redundancy VNFFG module" in tacker. > > As far as I know. it is difficult to apply the VNFFG configuration where > there is low loss requirement such as DetNet and URLLC. > > Because there is no module for reliable VNFFG. > > The module to be developed is described as follows. > > It is a function to construct a disjoint VNFFG by adding redundancy VNFFG > module to the tacker VNFFG feature and to transmit the data to two disjoint > paths. > > To implement this module, two VNFFGs must be configured for one network > service and the function to remove redundant packets in two different paths > is required. > > I want to develop this module and contribute this to tacker project. > > So could you tell me the contribution process in OpenStack tacker project > if possible? > > Yours sincerely, > > Hosung Baek. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Nguyen Tri Hai */ Ph.D. Student ANDA Lab., Soongsil Univ., Seoul, South Korea -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Fri Nov 9 13:03:55 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Nov 2018 13:03:55 +0000 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> <55ba80c3-bbfc-76dd-affb-f0c4023af33e@redhat.com> Message-ID: <20181109130355.2pphqjgsfvnymx52@yuggoth.org> On 2018-11-09 16:35:22 +1100 (+1100), Gilles Dubreuil wrote: > Could you please provide permission [1] to upload commit merges? > > I'm getting the following error after merging HEAD: > > $ git review -R feature/graphql > remote: > remote: Processing changes: refs: 1 > remote: Processing changes: refs: 1, done > To ssh://review.openstack.org:29418/openstack/neutron.git >  ! [remote rejected]       HEAD -> refs/publish/feature/graphql/bug/1802101 > (you are not allowed to upload merges) > error: failed to push some refs to > 'ssh://q-1illes-a at review.openstack.org:29418/openstack/neutron.git' [...] Per openstack/neutron's ACL[*] you need to be made a member of the neutron-release group in Gerrit[**]. (This permission is tightly controlled to avoid people accidentally pushing merge commits, which is all too easy if you're not careful to keep your branches clean.) [*] https://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/neutron.config [**] https://review.openstack.org/#/admin/groups/neutron-release -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Fri Nov 9 15:02:38 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 09 Nov 2018 16:02:38 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 5 November 2018 Message-ID: <1541775758.1782694.1571414760.203BF0FB@webmail.messagingengine.com> # Keystone Team Update - Week of 5 November 2018 ## News ### Community Goals Status Python3-first[1]: completed Upgrade status check[2]: scaffolding is completed but we don't have actual checks yet Mutable config[3] (goal from last cycle): review in progress but needs work [1] https://governance.openstack.org/tc/goals/stein/python3-first.html [2] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html [3] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html ### Berlin Summit The Berlin summit is next week so we won't be holding the weekly meeting or office hours. 
As mentioned last week, we a few keystone-related forum sessions and talks: * Operator feedback https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22792/keystone-operator-feedback * Keystone as an Identity Provider Proxy - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22791/keystone-as-an-identity-provider-proxy * Keystone project update - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22728/keystone-project-updates * Keystone project onboarding - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22727/keystone-project-onboarding * Deletion of project and project resources - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22784/deletion-of-project-and-project-resources * Enforcing quota consistently with Unified Limits - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22557/enforcing-quota-consistently-with-unified-limits * Towards an Open Cloud Exchange - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22320/towards-an-open-cloud-exchange * Pushing keystone over the edge - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22044/pushing-keystone-over-the-edge * A seamlessly federated multi-cloud - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22607/a-seamlessly-federated-multi-cloud * OpenStack policy 101 - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21977/openstack-policy-101 * Dynamic policy for OpenStack with Open Policy Agent - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21976/dynamic-policy-for-openstack-with-open-policy-agent * Identity integration between OpenStack and Kubernetes - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22123/identity-integration-between-openstack-and-kubernetes ## Open Specs Stein specs: https://bit.ly/2Pi6dGj Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 34 changes this week. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 40 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 4 new bugs and closed 3. 
Bugs opened (4) Bug #1801873 (keystone:Undecided) opened by Izak A Eygelaar https://bugs.launchpad.net/keystone/+bug/1801873 Bug #1802035 (keystone:Undecided) opened by Joshua Cornutt https://bugs.launchpad.net/keystone/+bug/1802035 Bug #1802136 (keystone:Undecided) opened by Simon Reinkemeier https://bugs.launchpad.net/keystone/+bug/1802136 Bug #1802357 (oslo.policy:Undecided) opened by Alfredo Moralejo https://bugs.launchpad.net/oslo.policy/+bug/1802357 Bugs fixed (3) Bug #1800124 (keystone:Critical) fixed by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1800124 Bug #1797584 (keystonemiddleware:Medium) fixed by Michael Johnson https://bugs.launchpad.net/keystonemiddleware/+bug/1797584 Bug #1361743 (keystonemiddleware:Wishlist) fixed by Morgan Fainberg https://bugs.launchpad.net/keystonemiddleware/+bug/1361743 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From lbragstad at gmail.com Fri Nov 9 15:28:11 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 9 Nov 2018 09:28:11 -0600 Subject: [openstack-dev] [nova][cinder] about unified limits In-Reply-To: References: Message-ID: Sending a follow up here since there has been some movement on this recently. There is a nova specification up for review that goes through the work to consume unified limits out of keystone [0]. John and Jay have also been working through the oslo.limit integration, which is forcing us to think about the interface. There are a few patches up that take different approaches [1][2]. If anyone is still interested in helping out with this work, please don't hesitate to reach out. [0] https://review.openstack.org/#/c/602201/ [1] https://review.openstack.org/#/c/615180/ [2] https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open On Tue, Sep 11, 2018 at 8:10 AM Lance Bragstad wrote: > Extra eyes on the API would be appreciated. We're also close to the point > where we can start incorporating oslo.limit into services, so preparing > those changes might be useful, too. > > One of the outcomes from yesterday's session was that Jay and Mel (from > nova) were going to work out some examples we could use to finish up the > enforcement code in oslo.limit. Helping out with that or picking it up > would certainly help move the ball forward in nova. > > > > > On Tue, Sep 11, 2018 at 1:15 AM Jaze Lee wrote: > >> I recommend lijie at unitedstack.com to join in to help to work forward. >> May be first we should the keystone unified limits api really ok or >> something else ? >> >> Lance Bragstad 于2018年9月8日周六 上午2:35写道: >> > >> > That would be great! I can break down the work a little bit to help >> describe where we are at with different parts of the initiative. Hopefully >> it will be useful for your colleagues in case they haven't been closely >> following the effort. >> > >> > # keystone >> > >> > Based on the initial note in this thread, I'm sure you're aware of >> keystone's status with respect to unified limits. But to recap, the initial >> implementation landed in Queens and targeted flat enforcement [0]. During >> the Rocky PTG we sat down with other services and a few operators to >> explain the current status in keystone and if either developers or >> operators had feedback on the API specifically. 
Notes were captured in >> etherpad [1]. We spent the Rocky cycle fixing usability issues with the API >> [2] and implementing support for a hierarchical enforcement model [3]. >> > >> > At this point keystone is ready for services to start consuming the >> unified limits work. The unified limits API is still marked as stable and >> it will likely stay that way until we have at least one project using >> unified limits. We can use that as an opportunity to do a final flush of >> any changes that need to be made to the API before fully supporting it. The >> keystone team expects that to be a quick transition, as we don't want to >> keep the API hanging in an experimental state. It's really just a safe >> guard to make sure we have the opportunity to use it in another service >> before fully committing to the API. Ultimately, we don't want to >> prematurely mark the API as supported when other services aren't even using >> it yet, and then realize it has issues that could have been fixed prior to >> the adoption phase. >> > >> > # oslo.limit >> > >> > In parallel with the keystone work, we created a new library to aid >> services in consuming limits. Currently, the sole purpose of oslo.limit is >> to abstract project and project hierarchy information away from the >> service, so that services don't have to reimplement client code to >> understand project trees, which could arguably become complex and lead to >> inconsistencies in u-x across services. >> > >> > Ideally, a service should be able to pass some relatively basic >> information to oslo.limit and expect an answer on whether or not usage for >> that claim is valid. For example, here is a project ID, resource name, and >> resource quantity, tell me if this project is over it's associated limit or >> default limit. >> > >> > We're currently working on implementing the enforcement bits of >> oslo.limit, which requires making API calls to keystone in order to >> retrieve the deployed enforcement model, limit information, and project >> hierarchies. Then it needs to reason about those things and calculate usage >> from the service in order to determine if the request claim is valid or >> not. There are patches up for this work, and reviews are always welcome [4]. >> > >> > Note that we haven't released oslo.limit yet, but once the basic >> enforcement described above is implemented we will. Then services can >> officially pull it into their code as a dependency and we can work out >> remaining bugs in both keystone and oslo.limit. Once we're confident in >> both the API and the library, we'll bump oslo.limit to version 1.0 at the >> same time we graduate the unified limits API from "experimental" to >> "supported". Note that oslo libraries <1.0 are considered experimental, >> which fits nicely with the unified limit API being experimental as we shake >> out usability issues in both pieces of software. >> > >> > # services >> > >> > Finally, we'll be in a position to start integrating oslo.limit into >> services. I imagine this to be a coordinated effort between keystone, oslo, >> and service developers. I do have a patch up that adds a conceptual >> overview for developers consuming oslo.limit [5], which renders into [6]. >> > >> > To be honest, this is going to be a very large piece of work and it's >> going to require a lot of communication. In my opinion, I think we can use >> the first couple iterations to generate some well-written usage >> documentation. 
Any questions coming from developers in this phase should >> probably be answered in documentation if we want to enable folks to pick >> this up and run with it. Otherwise, I could see the handful of people >> pushing the effort becoming a bottle neck in adoption. >> > >> > Hopefully this helps paint the landscape of where things are currently >> with respect to each piece. As always, let me know if you have any >> additional questions. If people want to discuss online, you can find me, >> and other contributors familiar with this topic, in #openstack-keystone or >> #openstack-dev on IRC (nic: lbragstad). >> > >> > [0] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >> > [1] https://etherpad.openstack.org/p/unified-limits-rocky-ptg >> > [2] https://tinyurl.com/y6ucarwm >> > [3] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html >> > [4] >> https://review.openstack.org/#/q/project:openstack/oslo.limit+status:open >> > [5] https://review.openstack.org/#/c/600265/ >> > [6] >> http://logs.openstack.org/65/600265/3/check/openstack-tox-docs/a6bcf38/html/user/usage.html >> > >> > On Thu, Sep 6, 2018 at 8:56 PM Jaze Lee wrote: >> >> >> >> Lance Bragstad 于2018年9月6日周四 下午10:01写道: >> >> > >> >> > I wish there was a better answer for this question, but currently >> there are only a handful of us working on the initiative. If you, or >> someone you know, is interested in getting involved, I'll happily help >> onboard people. >> >> >> >> Well,I can recommend some my colleges to work on this. I wish in S, >> >> all service can use unified limits to do quota job. >> >> >> >> > >> >> > On Wed, Sep 5, 2018 at 8:52 PM Jaze Lee wrote: >> >> >> >> >> >> On Stein only one service? >> >> >> Is there some methods to move this more fast? >> >> >> Lance Bragstad 于2018年9月5日周三 下午9:29写道: >> >> >> > >> >> >> > Not yet. Keystone worked through a bunch of usability >> improvements with the unified limits API last release and created the >> oslo.limit library. We have a patch or two left to land in oslo.limit >> before projects can really start using unified limits [0]. >> >> >> > >> >> >> > We're hoping to get this working with at least one resource in >> another service (nova, cinder, etc...) in Stein. >> >> >> > >> >> >> > [0] >> https://review.openstack.org/#/q/status:open+project:openstack/oslo.limit+branch:master+topic:limit_init >> >> >> > >> >> >> > On Wed, Sep 5, 2018 at 5:20 AM Jaze Lee >> wrote: >> >> >> >> >> >> >> >> Hello, >> >> >> >> Does nova and cinder use keystone's unified limits api to >> do quota job? >> >> >> >> If not, is there a plan to do this? >> >> >> >> Thanks a lot. 
>> >> >> >> >> >> >> >> -- >> >> >> >> 谦谦君子 >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> >> >> > >> __________________________________________________________________________ >> >> >> > OpenStack Development Mailing List (not for usage questions) >> >> >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> >> >> >> >> -- >> >> >> 谦谦君子 >> >> >> >> >> >> >> __________________________________________________________________________ >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> > >> __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> -- >> >> 谦谦君子 >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> 谦谦君子 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Fri Nov 9 16:29:13 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 9 Nov 2018 18:29:13 +0200 Subject: [openstack-dev] [TripleO] No weekly meeting next week Message-ID: There will be no meeting next Tuesday 13th of November since there's the OpenStack summit. Best Regards From jaosorior at redhat.com Fri Nov 9 16:30:01 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 9 Nov 2018 18:30:01 +0200 Subject: [openstack-dev] [TripleO] No meeting next week for the security squad Message-ID: <51123fac-ea26-df07-d384-b31db10ba0cf@redhat.com> There will be no meeting for the security squad next Tuesday 13th of November since there's the OpenStack summit. 
Best Regards From liliueecg at gmail.com Fri Nov 9 16:49:06 2018 From: liliueecg at gmail.com (Li Liu) Date: Fri, 9 Nov 2018 11:49:06 -0500 Subject: [openstack-dev] [cyborg] [weekly-meeting] No Weekly Meeting Next week Message-ID: Hi Team, There is no weekly meeting for the week of Nov 12th due to the OpenStack Summit. -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Nov 9 18:14:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Nov 2018 18:14:47 +0000 Subject: [openstack-dev] [all] We're combining the lists! In-Reply-To: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> Message-ID: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. See my previous notice[1] for details. For those wondering, we have 207 subscribers so far on openstack-discuss with a little over a week to go before it will be put into use (and less than a month before the old lists are closed down for good). [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lenikmutungi at gmail.com Fri Nov 9 18:17:30 2018 From: lenikmutungi at gmail.com (Leni Kadali Mutungi) Date: Fri, 9 Nov 2018 21:17:30 +0300 Subject: [openstack-dev] [manila] [contribute] In-Reply-To: <20181109093240.yk7sls3hjacfqjx5@barron.net> References: <4374516d-8e61-baf8-440b-e76cafd84874@redhat.com> <6b827f3b-3804-8d53-3eb5-900cb7cc5c1d@gmail.com> <20181019145536.n6ca4dxjwlywb2s3@barron.net> <8e9facf2-d24b-da19-45da-13de4db75359@gmail.com> <20181109093240.yk7sls3hjacfqjx5@barron.net> Message-ID: <0ed25aae-17da-3a97-22fa-957d11e8408c@gmail.com> Hi all Thanks Sofia, Tom and Goutham for responding. Will checkout all the links you've shared. On 11/9/18 12:32 PM, Tom Barron wrote: > On 08/11/18 11:07 -0300, Sofia Enriquez wrote: >> Hi Leni, welcome! >> >> 1) Devstack[1] plays a *main *role in the development workflow. >> It's an easier way to get a full environment to work in Manila, we use it >> every day. I recommend you to use it in a VM. >> You can find many tutorials about how to use Devstack, I just let you one >> [2] > > Nice blog/guide, Sofia!  I'll just add [4] as a followup for anyone > particularly wanting to install devstack with manila and a cephfs with > nfs back end. > >> >> 2) I can't find *low-hanging-fruit* bugs in Manila. 
However, >> good-first-bugs are tagged as *low-hanging-fruit *for example, >> cinder's[3] > > And Goutham followed up with some manila low-hanging-fruit too. Thanks, > Goutham! > >> >> Today at *15:00 UTC *It's  Weekly Manila Team Meeting at IRC (channel >> #openstack-meeting-alt) on Freenode. > > And as we may have mentioned, you can ask questions on irc [5] [6] > #openstack-manila any time.  Ask even if no one is responding right > then, most of us have bouncers and will see and get back to you. > > -- Tom Barron > > [4] https://github.com/tombarron/vagrant-libvirt-devstack > > [5] https://docs.openstack.org/contributors/common/irc.html > > [6] https://docs.openstack.org/infra/manual/irc.html > >> >> Have fun! >> Sofia >> irc: enriquetaso >> [1] >> https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack >> >> [2] >> https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ >> >> [3] https://bugs.launchpad.net/cinder/+bugs?field.tag=low-hanging-fruit >> >> On Thu, Nov 8, 2018 at 7:41 AM Leni Kadali Mutungi >> >> wrote: >> >>> Hi Tom >>> >>> Thanks for the warm welcome. I've gone through the material and I would >>> like to understand a few things: >>> >>> 1. What's the role of devstack in the development workflow? >>> 2. Where can I find good-first-bugs? A bug that is simple to do >>> (relatively ;)) and allows me to practice what I've read up on in >>> Developer's Guide. I looked through the manila bugs on Launchpad but I >>> didn't see anything marked easy or good-first-bug or its equivalent for >>> manila. I am a bit unfamiliar with Launchpad so that may have played a >>> role :). >>> >>> Your guidance is appreciated. >>> >>> On 10/19/18 5:55 PM, Tom Barron wrote: >>> > On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote: >>> >> Hi all. >>> >> >>> >> I've downloaded the manila project from GitHub as a zip file, >>> unpacked >>> >> it and have run `git fetch --depth=1` and been progressively running >>> >> `git fetch --deepen=5` to get the commit history I need. For future >>> >> reference, would a shallow clone e.g. `git clone depth=1` be >>> enough to >>> >> start working on the project or should one have the full commit >>> >> history of the project? >>> >> >>> >> -- >>> >> -- Kind regards, >>> >> Leni Kadali Mutungi >>> > >>> > Hi Leni, >>> > >>> > First I'd like to extend a warm welcome to you as a new manila project >>> > contributor!  We have some contributor/developer documentation [1] >>> that >>> > you may find useful. If you find any gaps or misinformation, we >>> will be >>> > happy to work with you to address these.  In addition to this email >>> > list, the #openstack-manila IRC channel on freenode is a good place to >>> > ask questions.  Many of us run irc bouncers so we'll see the question >>> > even if we're not looking right when it is asked.  Finally, we have a >>> > meeting most weeks on Thursdays at 1500UTC in >>> #openstack-meeting-alt -- >>> > agendas are posted here [2].  Also, here is our work-plan for the >>> > current Stein development cycle [3]. >>> > >>> > Now for your question about shallow clones.  I hope others who know >>> more >>> > will chime in but here are my thoughts ... >>> > >>> > Although having the full commit history for the project is useful, >>> it is >>> > certainly possible to get started with a shallow clone of the project. 
>>> > That said, I'm not sure if the space and download-time/bandwidth gains >>> > are going to be that significant because once you have the >>> workspace you >>> > will want to run unit tests, pep8, etc. using tox as explained in the >>> > developer documentation mentioned earlier.   That will download >>> virtual >>> > environments for manila's dependencies in your workspace (under .tox >>> > directory) that dwarf the space used for manila proper. >>> > >>> > $ git clone --depth=1 git at github.com:openstack/manila.git >>> shallow-manila >>> > Cloning into 'shallow-manila'... >>> > ... >>> > $ git clone git at github.com:openstack/manila.git deep-manila >>> > Cloning into 'deep-manila'... >>> > ... >>> > $ du -sh shallow-manila deep-manila/ >>> > 20M    shallow-manila >>> > 35M    deep-manila/ >>> > >>> > But after we run tox inside shallow-manila and deep-manila we see: >>> > >>> > $ du -sh shallow-manila deep-manila/ >>> > 589M    shallow-manila >>> > 603M    deep-manila/ >>> > >>> > Similarly, you are likely to want to run devstack locally and that >>> will >>> > clone the repositories for the other openstack components you need and >>> > the savings from shallow clones won't be that significant relative to >>> > the total needed. >>> > >>> > Happy developing! >>> > >>> > -- Tom Barron (Manila PTL) irc: tbarron >>> > >>> > [1] https://docs.openstack.org/manila/rocky/contributor/index.html >>> > [2] https://wiki.openstack.org/wiki/Manila/Meetings >>> > [3] https://wiki.openstack.org/wiki/Manila/SteinCycle >>> > >>> > >>> __________________________________________________________________________ >>> >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> -- >>> -- Kind regards, >>> Leni Kadali Mutungi >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> >> Laura Sofia Enriquez >> >> Associate Software Engineer >> Red Hat PnT >> >> Ingeniero Butty 240, Piso 14 >> >> (C1001AFB) Buenos Aires - Argentina >> >> senrique at redhat.com >> > >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Kind regards, Leni Kadali Mutungi From rfolco at redhat.com Fri Nov 9 19:49:28 2018 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 9 Nov 2018 17:49:28 -0200 Subject: [openstack-dev] [tripleo] TripleO CI Summary: Sprint 21 Message-ID: Greetings, The TripleO CI team has just completed Sprint 21 (Oct-18 thru Nov-07). The following is a summary of completed work during this sprint cycle: - Created an initial base Standalone job for Fedora 28. - Added initial support for installing Tempest rpm in openstack-ansible_os-tempest. 
- Started running project specific tempest tests against puppet-projects in tripleo-standalone gates. - Added initial support to python-tempestconf on openstack-ansible_os-tempest. - Prepared grounds to make all required variables for the zuulv3 workflow available for the reproducer. The sprint task board for CI team has moved from Trello to Taiga [1]. The Ruck and Rover notes for this sprint has been tracked in the etherpad [2]. The planned work for the next sprint focuses in iterating on the upstream standalone job for Fedora 28 to bring it to completion. This includes moving the multinode scenarios to the standalone jobs. The team continues to work on the reproducer, enabling Tempest coverage in puppet-* projects, and preparing a CI environment for OVB. The Ruck and Rover for this sprint are Gabriele Cerami (panda) and Chandan Kumar (chkumar). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Thanks, Folco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/sprint-21-175 [2] https://review.rdoproject.org/etherpad/p/ruckrover-sprint21 -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sat Nov 10 07:41:13 2018 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 10 Nov 2018 20:41:13 +1300 Subject: [openstack-dev] [all] We're combining the lists! In-Reply-To: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: On Sat, 10 Nov 2018 at 07:15, Jeremy Stanley wrote: > > REMINDER: The openstack, openstack-dev, openstack-sigs and > openstack-operators mailing lists (to which this is being sent) will > be replaced by a new openstack-discuss at lists.openstack.org mailing > list. The new list is open for subscriptions[0] now, but is not yet > accepting posts until Monday November 19 and it's strongly > recommended to subscribe before that date so as not to miss any > messages posted there. The old lists will be configured to no longer > accept posts starting on Monday December 3, but in the interim posts > to the old lists will also get copied to the new list so it's safe > to unsubscribe from them any time after the 19th and not miss any > messages. See my previous notice[1] for details. > > For those wondering, we have 207 subscribers so far on > openstack-discuss with a little over a week to go before it will be > put into use (and less than a month before the old lists are closed > down for good). There don't seem to be any topics defined for -discuss yet, I hope there will be, as I'm certainly not in a position of enough bandwidth to handle everything *stack related. I'd suggest one for each previously list, at minimum. -Rob From thierry at openstack.org Sat Nov 10 10:02:15 2018 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 10 Nov 2018 11:02:15 +0100 Subject: [openstack-dev] [all] We're combining the lists! 
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: Robert Collins wrote: > There don't seem to be any topics defined for -discuss yet, I hope > there will be, as I'm certainly not in a position of enough bandwidth > to handle everything *stack related. > > I'd suggest one for each previously list, at minimum. As we are ultimately planning to move lists to mailman3 (which decided to drop the "topics" concept altogether), I don't think we planned to add serverside mailman topics to the new list. We'll still have standardized subject line topics. The current list lives at: https://etherpad.openstack.org/p/common-openstack-ml-topics -- Thierry Carrez (ttx) From e0ne at e0ne.info Sat Nov 10 14:07:51 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Sat, 10 Nov 2018 16:07:51 +0200 Subject: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ... In-Reply-To: References: Message-ID: Thanks for organizing this, Jay, Just in case if you missed it, Matrix Party hosted by Trilio + Red Hat will be on Tuesday too. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant wrote: > All, > > I am working on scheduling a dinner for the Cinder team (and our extended > family that work on and around Cinder) during the Summit in Berlin. I have > created an etherpad for people to RSVP for dinner [1]. > > It seemed like Tuesday night after the Marketplace Mixer was the best time > for most people. > > So, it will be a little later dinner ... 8 pm. Here is the place: > Location: http://www.dicke-wirtin.de/ > Address: Carmerstraße 9, 10623 Berlin, Germany > > It looks like the kind of place that will fit for our usual group. > > If planning to attend please add your name to the etherpad and I will get > a reservation in over the weekend. > > Hope to see you all on Tuesday! > > Jay > > [1] https://etherpad.openstack.org/p/BER-cinder-outing-planning > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Nov 10 15:39:51 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 11 Nov 2018 00:39:51 +0900 Subject: [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit In-Reply-To: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com> References: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com> Message-ID: <166fe478287.eab83f3e160854.6859983257172890856@ghanshyammann.com> Hello Everyone, I have created the below etherpads to use during QA Forum sessions: - Users / Operators adoption of QA tools: https://etherpad.openstack.org/p/BER-qa-ops-user-feedback - QA Onboarding: https://etherpad.openstack.org/p/BER-qa-onboarding-vancouver -gmann ---- On Fri, 09 Nov 2018 11:02:54 +0900 Ghanshyam Mann wrote ---- > Hello everyone, > > Along with project updates & onboarding sessions, QA team will host QA feedback sessions in berlin summit. 
Feel free to catch us next week for any QA related questions or if you need help to contribute in QA (we are really looking forward to onbaord new contributor in QA). > > Below are the QA related sessions, feel free to append the list if i missed anything. I am working on onboarding/forum sessions etherpad and will send the link tomorrow. > > Tuesday: > 1. OpenStack QA - Project Update. [1] > 2. OpenStack QA - Project Onboarding. [2] > 3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3] > > Wednesday: > 4. Forum: Users / Operators adoption of QA tools / plugins. [4] > > Thursday: > 5. Using Rally/Tempest for change validation (OPS session) [5] > > [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update > [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding > [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment > [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins > [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session > > -gmann > From fungi at yuggoth.org Sat Nov 10 15:53:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 10 Nov 2018 15:53:22 +0000 Subject: [openstack-dev] [all] We're combining the lists! In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: <20181110155322.nedu4jiammpi537x@yuggoth.org> On 2018-11-10 11:02:15 +0100 (+0100), Thierry Carrez wrote: [...] > As we are ultimately planning to move lists to mailman3 (which decided > to drop the "topics" concept altogether), I don't think we planned to > add serverside mailman topics to the new list. Correct, that was covered in more detail in the longer original announcement linked from my past couple of reminders: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html In short, we're recommending client-side filtering because server-side topic selection/management was not retained in Mailman 3 as Thierry indicates and we hope we might move our lists to an MM3 instance sometime in the not-too-distant future. > We'll still have standardized subject line topics. The current list > lives at: > > https://etherpad.openstack.org/p/common-openstack-ml-topics Which is its initial location for crowd-sourcing/brainstorming, but will get published to a more durable location like on lists.openstack.org itself or perhaps the Project-Team Guide once the list is in use. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From robertc at robertcollins.net Sat Nov 10 19:06:24 2018 From: robertc at robertcollins.net (Robert Collins) Date: Sun, 11 Nov 2018 08:06:24 +1300 Subject: [openstack-dev] [all] We're combining the lists! 
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: On Sat, 10 Nov 2018 at 23:03, Thierry Carrez wrote: > > Robert Collins wrote: > > There don't seem to be any topics defined for -discuss yet, I hope > > there will be, as I'm certainly not in a position of enough bandwidth > > to handle everything *stack related. > > > > I'd suggest one for each previously list, at minimum. > > As we are ultimately planning to move lists to mailman3 (which decided > to drop the "topics" concept altogether), I don't think we planned to > add serverside mailman topics to the new list. Ah, fair enough, I'll unsubscribe from the new list then; if folk need me, you know where to find me. -Rob From mriedemos at gmail.com Sun Nov 11 06:17:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 11 Nov 2018 07:17:15 +0100 Subject: [openstack-dev] [goals][upgrade-checkers] Week R-22 Update Message-ID: <46bd666d-4de5-9a72-e6bb-592f8052786c@gmail.com> No major updates this week, but there is some decent progress in more projects getting their framework patch merged [1]. Thanks again to Rajat and Akhil for their persistent effort. There are more open reviews available for adding the framework to projects [2]. Some projects, like cloudkitty [3], are going beyond the initial placeholder framework check and adding a real upgrade check which is nice to see. [1] https://review.openstack.org/#/q/topic:upgrade-checkers+status:merged [2] https://review.openstack.org/#/q/topic:upgrade-checkers+status:open [3] https://review.openstack.org/#/c/613076/ -- Thanks, Matt From ssbarnea at redhat.com Sun Nov 11 11:29:43 2018 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Sun, 11 Nov 2018 11:29:43 +0000 Subject: [openstack-dev] [tripleo] using molecule to test ansible playbooks and roles Message-ID: I recently came across molecule, a project originated at Cisco which recently became an official Ansible project, at the same time as ansible-lint. Both projects were transferred from their former locations to the Ansible github organization -- I guess as a confirmation that they are now officially supported by the core team. I have used ansible-lint for years and it did save me a lot of time; molecule is still new to me. A few weeks back I started to play with molecule as, at least on paper, it was supposed to resolve the problem of testing roles on multiple platforms and usage scenarios. While working on enabling tripleo-quickstart to support fedora-28 (py3), I was trying to find a faster way to test these changes locally --- and avoid increasing the load on CI before I get the confirmation that the code works locally. The results of my testing, which started about two weeks ago, are very positive and can be seen at: https://review.openstack.org/#/c/613672/ There you can find a job named opentstack-tox-molecule which runs in ~15 minutes, but this is only because on CI docker caching does not work as well as locally; locally it re-runs in ~2-3 minutes. I would like to hear your thoughts on this, and if you also have some time to check out that change and run it yourself it would be wonderful. Once you download the change you only have to run "tox -e molecule" (or "make", which also clones the sister extras repo if needed). Feel free to send questions to the change itself, on #oooq or by email.
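For anyone who has not used molecule before, a scenario is just a directory holding a molecule.yml that describes the driver, the platforms to converge against and the verifier. A minimal sketch of a docker-based scenario (illustrative only, with made-up platform names, and not the exact layout used in the change above) looks roughly like:

    # molecule/default/molecule.yml - illustrative sketch only
    driver:
      name: docker              # molecule starts each platform as a container
    platforms:
      - name: centos7
        image: centos:7
      - name: fedora28
        image: fedora:28
    provisioner:
      name: ansible             # converges the role's playbook on every platform
      lint:
        name: ansible-lint
    verifier:
      name: testinfra           # asserts on the converged state

A tox environment then only needs to install molecule (plus the docker Python bindings) and invoke "molecule test", so the same command behaves the same locally and in CI.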
Cheers Sorin Sbarnea -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Mon Nov 12 00:28:33 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 12 Nov 2018 11:28:33 +1100 Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase In-Reply-To: <20181109130355.2pphqjgsfvnymx52@yuggoth.org> References: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com> <649c9384-9d5c-3859-f893-71b48ecd675a@redhat.com> <55ba80c3-bbfc-76dd-affb-f0c4023af33e@redhat.com> <20181109130355.2pphqjgsfvnymx52@yuggoth.org> Message-ID: <6b63468b-07c0-3072-96d0-6e5a22938232@redhat.com> On 10/11/18 12:03 am, Jeremy Stanley wrote: > On 2018-11-09 16:35:22 +1100 (+1100), Gilles Dubreuil wrote: >> Could you please provide permission [1] to upload commit merges? >> >> I'm getting the following error after merging HEAD: >> >> $ git review -R feature/graphql >> remote: >> remote: Processing changes: refs: 1 >> remote: Processing changes: refs: 1, done >> To ssh://review.openstack.org:29418/openstack/neutron.git >>  ! [remote rejected]       HEAD -> refs/publish/feature/graphql/bug/1802101 >> (you are not allowed to upload merges) >> error: failed to push some refs to >> 'ssh://q-1illes-a at review.openstack.org:29418/openstack/neutron.git' > [...] > > Per openstack/neutron's ACL[*] you need to be made a member of the > neutron-release group in Gerrit[**]. (This permission is tightly > controlled to avoid people accidentally pushing merge commits, which > is all too easy if you're not careful to keep your branches clean.) That's fair enough. I'll ask the neutron-release group then. Thanks > [*] https://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/neutron.config > [**] https://review.openstack.org/#/admin/groups/neutron-release > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Mon Nov 12 01:20:42 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Sun, 11 Nov 2018 20:20:42 -0500 Subject: [openstack-dev] [tripleo] using molecule to test ansible playbooks and roles In-Reply-To: References: Message-ID: <20181112012042.GA15111@localhost.localdomain> On Sun, Nov 11, 2018 at 11:29:43AM +0000, Sorin Sbarnea wrote: > I recently came across molecule a project originated at Cisco which recently become an officially Ansible project, at the same time as ansible-lint. Both projects were transferred from their former locations to Ansible github organization -- I guess as a confirmation that they are now officially supported by the core team. I used ansible-lint for years and it did same me a lot of time, molecule is still new to me. > > Few weeks back I started to play with molecule as at least on paper it was supposed to resolve the problem of testing roles on multiple platforms and usage scenarios and while the work done for enabling tripleo-quickstart to support fedora-28 (py3). I was trying to find a faster way to test these changes faster and locally --- and avoid increasing the load on CI before I get the confirmation that code works locally. 
> > The results of my testing that started about two weeks ago are very positive and can be seen on: > https://review.openstack.org/#/c/613672/ > You can find there a job names opentstack-tox-molecule which runs in ~15minutes but this is only because on CI docker caching does not work as well as locally, locally it re-runs in ~2-3minutes. > > I would like to hear your thoughts on this and if you also have some time to checkout that change and run it yourself it would be wonderful. > > Once you download the change you only have to run "tox -e molecule", (or "make" which also clones sister extras repo if needed) > > Feel free to send questions to the change itself, on #oooq or by email. > I've been doing this for a while with ansible-role-nodepool[1], same idea you run tox -emolecule and the role will use the docker backend to validate. I also run it in the gate (with docker backend) however this is online to validate that end users will not be broken locally if they run tox -emolecule. There is a downside with docker, no systemd integration, which is fine for me as I have other tests that are able to provide coverage. With zuul, it really isn't needed to run nested docker for linters and smoke testing, as it mostly creates unneeded overhead. However, if you do want to standardize on molecule, I recommend you don't use docker backend but use the delegated and reused the inventory provided by zuul. Then you still use molecule but get the bonus of using the VMs presented by zuul / nodepool. - Paul [1] http://git.openstack.org/cgit/openstack/ansible-role-nodepool From manuel.sb at garvan.org.au Mon Nov 12 01:22:20 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Mon, 12 Nov 2018 01:22:20 +0000 Subject: [openstack-dev] [kolla][iconic] baremetal network Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAE35C9@MXDB2.ad.garvan.unsw.edu.au> Dear Kolla-ansible team, I am trying to deploy ironic through kolla-ansible. According to ironic documentation https://docs.openstack.org/ironic/rocky/install/configure-networking.html we need bare metal network with tenant_network_types = flat. However kolla-ansible configures: [root at TEST-openstack-controller ~]# grep -R -i "baremetal" -R /etc/kolla/* /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:mechanism_drivers = openvswitch,baremetal,l2population /etc/kolla/neutron-server/ml2_conf.ini:mechanism_drivers = openvswitch,baremetal,l2population [root at TEST-openstack-controller ~]# grep -R -i "tenant_network_types" -R /etc/kolla/* /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:tenant_network_types = vxlan /etc/kolla/neutron-server/ml2_conf.ini:tenant_network_types = vxlan This is my filtered globals.yml: [root at openstack-deployment ~]# grep -E -i "(^[^#]|ironic)" /etc/kolla/globals.yml --- openstack_release: "rocky" kolla_internal_vip_address: "192.168.1.51" neutron_external_interface: "ens161" enable_cinder: "yes" enable_cinder_backend_nfs: "yes" #enable_horizon_ironic: "{{ enable_ironic | bool }}" enable_ironic: "yes" #enable_ironic_ipxe: "no" #enable_ironic_neutron_agent: "no" #enable_ironic_pxe_uefi: "no" glance_enable_rolling_upgrade: "no" # Ironic options # following value must be set when enable ironic, the value format ironic_dnsmasq_dhcp_range: "192.168.1.100,192.168.1.150" # PXE bootloader file for Ironic Inspector, relative to /tftpboot. 
ironic_dnsmasq_boot_file: "pxelinux.0" ironic_cleaning_network: "ens224" #ironic_dnsmasq_default_gateway: 192.168.1.255 # Configure ironic upgrade option, due to currently kolla support # two upgrade ways for ironic: legacy_upgrade and rolling_upgrade # The variable "ironic_enable_rolling_upgrade: yes" is meaning legacy_upgrade #ironic_enable_rolling_upgrade: "yes" #ironic_inspector_kernel_cmdline_extras: [] tempest_image_id: tempest_flavor_ref_id: tempest_public_network_id: tempest_floating_network_name: ens224 is a my management network for admins to ssh and install and manage the physical nodes. Any idea why tenant_network_types = vxlan and not flat as suggested by the ironic documentation? Thank you Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Nov 12 08:02:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Nov 2018 01:02:26 -0700 Subject: [openstack-dev] [ironic] Meetings cancelled this and next week - resumes November 26th Message-ID: Greetings everyone! We're cancelling this week's meeting and next week's meeting due to the OpenStack Summit and US holidays the following week where some of our core reviewers will also be on vacation. If there are any questions, please feel to ask in #openstack-ironic. See you all in IRC. -Julia -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Nov 12 08:18:39 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 12 Nov 2018 02:18:39 -0600 Subject: [openstack-dev] [cinder] No meeting this week ... Message-ID: <4be43dc8-d473-c304-7fa3-5a042e3f5f11@gmail.com> Team, Just a friendly reminder that we will not have our weekly meeting this week due to the OpenStack Summit. Hope to see some of you here.  Otherwise, talk to you next week! Thanks, Jay From mnaser at vexxhost.com Mon Nov 12 08:23:39 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 12 Nov 2018 09:23:39 +0100 Subject: [openstack-dev] [openstack-ansible] meeting cancelled Message-ID: Hi everyone, Due to most of us being at the OpenStack Summit, we're cancelling the meeting tomorrow. Thanks everyone and see you in Berlin. Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Nov 12 08:23:56 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 12 Nov 2018 02:23:56 -0600 Subject: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ... 
In-Reply-To: References: Message-ID: <98b376ea-631c-1e8a-4cda-31957837941f@gmail.com> Ivan, Yeah, I saw that was the case but it seems like there is not a point in time where there isn't a conflict.  Need to get some food at some point so anyone who wants to join can, and then we can head to the party if people want. Jay On 11/10/2018 8:07 AM, Ivan Kolodyazhny wrote: > Thanks for organizing this, Jay, > > Just in case if you missed it, Matrix Party hosted by Trilio + Red Hat > will be on Tuesday too. > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > > On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant > wrote: > > All, > > I am working on scheduling a dinner for the Cinder team (and our > extended family that work on and around Cinder) during the Summit > in Berlin.  I have created an etherpad for people to RSVP for > dinner [1]. > > It seemed like Tuesday night after the Marketplace Mixer was the > best time for most people. > > So, it will be a little later dinner ... 8 pm.  Here is the place: > > Location: http://www.dicke-wirtin.de/ > Address: Carmerstraße 9, 10623 Berlin, Germany > > It looks like the kind of place that will fit for our usual group. > > If planning to attend please add your name to the etherpad and I > will get a reservation in over the weekend. > > Hope to see you all on Tuesday! > > Jay > > [1] https://etherpad.openstack.org/p/BER-cinder-outing-planning > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Mon Nov 12 09:27:22 2018 From: chris at openstack.org (Chris Hoge) Date: Mon, 12 Nov 2018 10:27:22 +0100 Subject: [openstack-dev] [docs] New Four Opens Project Message-ID: <19E45C45-2E7E-454B-8743-7B41B21B4701@openstack.org> Earlier this year, the OpenStack Foundation staff had the opportunity to brainstorm some ideas about how to express the values behind The Four Opens and how they are applied in practice. As the Foundation grows in scope to include new strategic focus areas and new projects, we felt it was important to provide explanation and guidance on the principles that guide our community. We’ve collected these notes and have written some seeds to start this document. I’ve staged this work into github and have prepared a review to move the work into OpenStack hosting, turning this over to the community to help guide and shape the document. This is very much a work in progress, but we have a goal to polish this up and make it an important document that captures our vision and values for the OpenStack development community, guides the establishment of governance for new top-level projects, and is a reference for the open-source development community as a whole. I also want to be clear that the original Four Opens, as listed in the OpenStack governance page, is an OpenStack TC document. This project doesn’t change that. Instead, it is meant to be applied to the Foundation as a whole and be a reference to the new projects that land both as pilot top-level projects and projects hosted by our new infrastructure efforts. 
Thanks to all of the original authors of the Four-Opens for your visionary work that started this process, and thanks in advance to the community members who will continue to grow and evolve this document. Chris Hoge OpenStack Foundation Four Opens: https://governance.openstack.org/tc/reference/opens.html New Project Review Patch: https://review.openstack.org/#/c/617005/ Four Opens Document Staging: https://github.com/hogepodge/four-opens -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Nov 12 09:33:41 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 12 Nov 2018 09:33:41 +0000 Subject: [openstack-dev] [kolla][iconic] baremetal network In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BAE35C9@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BAE35C9@MXDB2.ad.garvan.unsw.edu.au> Message-ID: Hi Manuel, You can configure the neutron tenant network types in kolla-ansible via the 'neutron_tenant_network_types' variable in globals.yml. It's a comma-separated list. The default for that variable is vxlan, it's expected that you set it to match your requirements. The 'ironic_cleaning_network' variable should be the name of a network in neutron to be used for cleaning, rather than an interface name. If you're using flat networking, this will just be 'the' network. Regards, Mark On Mon, 12 Nov 2018 at 01:23, Manuel Sopena Ballesteros < manuel.sb at garvan.org.au> wrote: > Dear Kolla-ansible team, > > > > I am trying to deploy ironic through kolla-ansible. According to ironic > documentation > https://docs.openstack.org/ironic/rocky/install/configure-networking.html > we need bare metal network with tenant_network_types = flat. However > kolla-ansible configures: > > > > [root at TEST-openstack-controller ~]# grep -R -i "baremetal" -R /etc/kolla/* > > /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:mechanism_drivers = > openvswitch,baremetal,l2population > > /etc/kolla/neutron-server/ml2_conf.ini:mechanism_drivers = > openvswitch,baremetal,l2population > > > > > > [root at TEST-openstack-controller ~]# grep -R -i "tenant_network_types" -R > /etc/kolla/* > > /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:tenant_network_types = > vxlan > > /etc/kolla/neutron-server/ml2_conf.ini:tenant_network_types = vxlan > > > > This is my filtered globals.yml: > > > > [root at openstack-deployment ~]# grep -E -i "(^[^#]|ironic)" > /etc/kolla/globals.yml > > --- > > openstack_release: "rocky" > > kolla_internal_vip_address: "192.168.1.51" > > neutron_external_interface: "ens161" > > enable_cinder: "yes" > > enable_cinder_backend_nfs: "yes" > > #enable_horizon_ironic: "{{ enable_ironic | bool }}" > > enable_ironic: "yes" > > #enable_ironic_ipxe: "no" > > #enable_ironic_neutron_agent: "no" > > #enable_ironic_pxe_uefi: "no" > > glance_enable_rolling_upgrade: "no" > > # Ironic options > > # following value must be set when enable ironic, the value format > > ironic_dnsmasq_dhcp_range: "192.168.1.100,192.168.1.150" > > # PXE bootloader file for Ironic Inspector, relative to /tftpboot. 
> > ironic_dnsmasq_boot_file: "pxelinux.0" > > ironic_cleaning_network: "ens224" > > #ironic_dnsmasq_default_gateway: 192.168.1.255 > > # Configure ironic upgrade option, due to currently kolla support > > # two upgrade ways for ironic: legacy_upgrade and rolling_upgrade > > # The variable "ironic_enable_rolling_upgrade: yes" is meaning > legacy_upgrade > > #ironic_enable_rolling_upgrade: "yes" > > #ironic_inspector_kernel_cmdline_extras: [] > > tempest_image_id: > > tempest_flavor_ref_id: > > tempest_public_network_id: > > tempest_floating_network_name: > > > > ens224 is a my management network for admins to ssh and install and manage > the physical nodes. > > > > Any idea why tenant_network_types = vxlan and not flat as suggested by > the ironic documentation? > > > > Thank you > > > > *Manuel Sopena Ballesteros *| Big data Engineer > *Garvan Institute of Medical Research * > The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 > *T:* + 61 (0)2 9355 5760 | *F:* +61 (0)2 9295 8507 | *E:* > manuel.sb at garvan.org.au > > > NOTICE > Please consider the environment before printing this email. This message > and any attachments are intended for the addressee named and may contain > legally privileged/confidential/copyright information. If you are not the > intended recipient, you should not read, use, disclose, copy or distribute > this communication. If you have received this message in error please > notify us at once by return email and then delete both messages. We accept > no liability for the distribution of viruses or similar in electronic > communications. This notice should not be removed. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Mon Nov 12 10:35:32 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 12 Nov 2018 04:35:32 -0600 Subject: [openstack-dev] [neutron] Cancelling weekly meeting on November 12th Message-ID: Dear Neutron Team, Due to the OpenStack Summit in Berlin and the activities around it, let's cancel the weekly IRC meeting on November 12th. We will resume normally on the 20th. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Mon Nov 12 12:44:12 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 12 Nov 2018 12:44:12 +0000 Subject: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question Message-ID: An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Nov 12 17:05:44 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 12 Nov 2018 18:05:44 +0100 Subject: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question In-Reply-To: References: Message-ID: <4886F27A-F0E9-4EA2-92BE-87B87A1F27AD@redhat.com> Hi, You can choose which subnet (and even IP address) should be used, see „fixed_ips” field in [1]. If You will not provide anything Neutron will choose for You one IPv4 address and one IPv6 address and in both cases it will be chosen randomly from available IPs from all subnets. 
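For example, one way to do this from the CLI (a sketch with placeholder names and IDs, shown with the openstack client rather than the raw API) is to pre-create a port on the subnet you want and boot from that port:

    # let Neutron pick an address, but only from the chosen subnet
    openstack port create --network private --fixed-ip subnet=<subnet-id> port-on-subnet
    openstack server create --nic port-id=<port-id> --image <image> --flavor <flavor> vm1

Passing ip-address=<ip> together with subnet= in the same --fixed-ip argument pins the exact address as well.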
[1] https://developer.openstack.org/api-ref/network/v2/?expanded=create-port-detail#create-port > Wiadomość napisana przez Chen CH Ji w dniu 12.11.2018, o godz. 13:44: > > I have a network created like below: > > 1 network with 3 subnets (1 ipv6 and 2 ipv4) ,when boot, whether I can select subnet to boot from or the subnet will be force selected by the order the subnet created? Any document or code can be referred? Thanks > > | fd0e2078-044d-4c5c-b114-3858631e6328 | private | a8184e4f-5165-4ea8-8ed8-b776d619af6e fd9b:c245:1aaa::/64 | > | | | b3ee7cad-c672-4172-a183-8e9f069bea31 10.0.0.0/26 | > | | | 9439abfd-afa2-4264-8422-977d725a7166 10.0.2.0/24 | > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From MM9745 at att.com Mon Nov 12 20:45:38 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Mon, 12 Nov 2018 20:45:38 +0000 Subject: [openstack-dev] [qa] [containers] [airship] [berlin] Berlin Airship Forums Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896CD9DD8C@MOSTLS1MSGUSRFF.ITServices.sbc.com> I wanted to make sure that all interested folks are aware of the Airship-related Forums that will be held on Tuesday: Cross-project container security discussion: https://etherpad.openstack.org/p/BER-container-security Airship Quality Assurance use cases: https://etherpad.openstack.org/p/BER-airship-qa Airship Bare Metal provisioning brainstorming & design: https://etherpad.openstack.org/p/BER-airship-bare-metal We welcome all participation and discussion - please add any topics you'd like to discuss to the etherpads! I look forward to some good sessions tomorrow. Thanks, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Nov 13 02:55:14 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 13 Nov 2018 11:55:14 +0900 Subject: [openstack-dev] [Searchlight] Report for the week of Stein R-22 Message-ID: Hi team, This is the report for last week, Stein R-22 [1]. Please follow to know what going on with Searchlight. [1] https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-22.html Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Tue Nov 13 03:45:17 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 13 Nov 2018 03:45:17 +0000 Subject: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question In-Reply-To: <4886F27A-F0E9-4EA2-92BE-87B87A1F27AD@redhat.com> References: <4886F27A-F0E9-4EA2-92BE-87B87A1F27AD@redhat.com>, Message-ID: An HTML attachment was scrubbed... URL: From sam47priya at gmail.com Tue Nov 13 03:56:36 2018 From: sam47priya at gmail.com (Sam P) Date: Tue, 13 Nov 2018 04:56:36 +0100 Subject: [openstack-dev] [masakari] No masakari meeting on 11/12 Message-ID: Hi all!, Sorry for the late announcement. Since most of us in Berlin summit, there will be no IRC meeting at 11/12. --- Regards, Sampath From aschultz at redhat.com Tue Nov 13 04:20:17 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 12 Nov 2018 21:20:17 -0700 Subject: [openstack-dev] [tripleo] puppet5 has broken the master gate Message-ID: Just a heads up but we recently updated to puppet5 in the master dependencies. 
It appears that this has completely hosed the master scenarios and containers-multinode jobs. Please do not recheck/approve anything until we get this resolved. See https://bugs.launchpad.net/tripleo/+bug/1803024 I have a possible fix (https://review.openstack.org/#/c/617441/) but it's probably a better idea to roll back the puppet package if possible. Thanks, -Alex From chkumar246 at gmail.com Tue Nov 13 04:32:13 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 13 Nov 2018 10:02:13 +0530 Subject: [openstack-dev] [tripleo][openstack-ansible] Updates on collaboration on os_tempest role Message-ID: Hello, At the start of the Denver 2018 PTG [1], we started collaborating on using the openstack-ansible-os_tempest role [2] as a unified tempest role for the TripleO and openstack-ansible projects within the OpenStack community. It will help us to improve the testing strategies between the two projects, which can be further expanded to other OpenStack deployment tools. We will be sharing bi-weekly updates through the mailing lists. We are tracking/planning all the work here: Proposal doc: https://etherpad.openstack.org/p/ansible-tempest-role Work item collaboration doc: https://etherpad.openstack.org/p/openstack-ansible-tempest Here is the update till now: openstack-ansible-os_tempest project: * Enable stackviz support - https://review.openstack.org/603100 * Added support for installing tempest from distro - https://review.openstack.org/591424 * Fixed missing ; from if statement in tempest_run - https://review.openstack.org/614521 * Added task to list tempest plugins - https://review.openstack.org/615837 * Remove apt_package_pinning dependency from os_tempest role - https://review.openstack.org/609992 * Enable python-tempestconf support - https://review.openstack.org/612968 Support added to openstack/rpm-packaging project (will be consumed in os_tempest role): * Added spec file for stackviz - https://review.openstack.org/609337 * Add initial spec for python-tempestconf - https://review.openstack.org/598143 Upcoming improvements: * Finish the integration of python-tempestconf in os_tempest role. Have queries? Feel free to ping us on the #tripleo or #openstack-ansible channel. Links: [1.] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html [2.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest Thanks, Chandan Kumar From rico.lin.guanyu at gmail.com Tue Nov 13 05:49:57 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Nov 2018 06:49:57 +0100 Subject: [openstack-dev] [heat] Heat sessions & forum in Berlin Summit Message-ID: Dear all Here are some Heat-related sessions at the OpenStack Summit for you this week. Welcome everyone to join us and check it out!
*Orchestration Ops/Users feedback session* Wed 14, 1:40pm - 2:20pm CityCube Berlin - Level 3 - M-Räume 6 https://etherpad.openstack.org/p/heat-user-berlin *Heat - Project Update* Wed 14, 3:45pm - 4:05pm CityCube Berlin - Level 3 - M-Räume 3 https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22739/heat-project-update *Autoscaling Integration, improvement, and feedback* Thu 15, 9:00am - 9:40am CityCube Berlin - Level 3 - M-Räume 8 https://etherpad.openstack.org/p/autoscaling-integration-and-feedback *Heat - Project Onboarding* Thu 15, 10:50am - 11:30am CityCube Berlin - Level 3 - M-Räume 1 https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22733/heat-project-onboarding -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Tue Nov 13 07:28:48 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 13 Nov 2018 12:58:48 +0530 Subject: [openstack-dev] [tripleo] puppet5 has broken the master gate In-Reply-To: References: Message-ID: Hello Alex, On Tue, Nov 13, 2018 at 9:53 AM Alex Schultz wrote: > > Just a heads up but we recently updated to puppet5 in the master > dependencies. It appears that this has completely hosed the master > scenarios and containers-multinode jobs. Please do recheck/approve > anything until we get this resolved. > > See https://bugs.launchpad.net/tripleo/+bug/1803024 > > I have a possible fix (https://review.openstack.org/#/c/617441/) but > it's probably a better idea to roll back the puppet package if > possible. > In RDO, we have reverted Revert "Stein: push puppet 5.5.6" -> https://review.rdoproject.org/r/#/c/17333/1 Thanks for the heads up! Thanks, Chandan Kumar From e0ne at e0ne.info Tue Nov 13 08:55:08 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 13 Nov 2018 09:55:08 +0100 Subject: [openstack-dev] [horizon] No meeting this week Message-ID: Hi team, Let's skip the meeting tomorrow due to the OpenStack Summit. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Nov 13 11:27:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 13 Nov 2018 12:27:04 +0100 Subject: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question In-Reply-To: References: <4886F27A-F0E9-4EA2-92BE-87B87A1F27AD@redhat.com> Message-ID: <46aa5eaa-0339-a8a7-eb61-bdda70768422@gmail.com> On 11/13/2018 4:45 AM, Chen CH Ji wrote: > Got it, this is what I am looking for .. thank you Regarding that you can do with server create, I believe it's: 1. don't specify anything for networking, you get a port on the network available to you; if there are multiple networks, it's a failure and the user has to specify one. 2. specify a network, nova creates a port on that network 3. specify a port, nova uses that port and doesn't create anything in neutron 4. specify a network and fixed IP, nova creates a port on that network using that fixed IP. It sounds like you want #3 or #4. -- Thanks, Matt From ifatafekn at gmail.com Tue Nov 13 11:36:44 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Tue, 13 Nov 2018 13:36:44 +0200 Subject: [openstack-dev] [vitrage] No IRC meeting this week Message-ID: Hi, We will not hold the Vitrage IRC meeting tomorrow, since some of our contributors are in Berlin. Our next meeting will be Next Wednesday, November 21th. Thanks, Ifat. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Nov 13 12:14:18 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 13 Nov 2018 06:14:18 -0600 Subject: [openstack-dev] [cinder][manila] PLEASE READ: Change of location for dinner ... Message-ID: <9eaf0c6f-2f26-7370-c090-a98711ec5811@gmail.com> Team, The dinner has had to change locations.  Dicke Wirtin didn't get my online reservation and they are full. NEW LOCATION: Joe's Restaurant and Wirsthaus -- Theodor-Heuss-Platz 10, 14052 Berlin The time is still 8 pm. Please pass the word on! Jay From smooney at redhat.com Tue Nov 13 12:17:13 2018 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Nov 2018 12:17:13 +0000 Subject: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question In-Reply-To: <46aa5eaa-0339-a8a7-eb61-bdda70768422@gmail.com> References: <4886F27A-F0E9-4EA2-92BE-87B87A1F27AD@redhat.com> <46aa5eaa-0339-a8a7-eb61-bdda70768422@gmail.com> Message-ID: <02b8bbf780ce6cd5600833e14ef3bcf1cdad0c1a.camel@redhat.com> On Tue, 2018-11-13 at 12:27 +0100, Matt Riedemann wrote: > On 11/13/2018 4:45 AM, Chen CH Ji wrote: > > Got it, this is what I am looking for .. thank you > > Regarding that you can do with server create, I believe it's: > > 1. don't specify anything for networking, you get a port on the network > available to you; if there are multiple networks, it's a failure and the > user has to specify one. > > 2. specify a network, nova creates a port on that network in this case i belive neutron alocate an 1 ipv4 adress and 1 ipv6 addres assumeing the network has a subnet for each type. > > 3. specify a port, nova uses that port and doesn't create anything in > neutron in this case nova just reads the ips the neutron has already allocated to the port and list those for the instace > > 4. specify a network and fixed IP, nova creates a port on that network > using that fixed IP. and in this case nova will create the port in neutron using the fixed ip you supplied which will cause neutron to attach the prot to the correct subnet > > It sounds like you want #3 or #4. > i think what is actully wanted is "openstack server create --nic net-id=,v4-fixed-ip=" we do not have a subnet-id option for --nic so if you want to select the subnet as part of the boot you have to supply the ip. similary if you want neutron to select the ip you have to precreate teh port and use the --port option when creating the vm. so as matt saidn #3 or #4 are the best solutions for your request. From smooney at redhat.com Tue Nov 13 13:42:49 2018 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Nov 2018 13:42:49 +0000 Subject: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set In-Reply-To: <8BB43193-8F89-46E6-B075-5E7432D4F61C@gmail.com> References: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> <8BB43193-8F89-46E6-B075-5E7432D4F61C@gmail.com> Message-ID: <9b221d32e322a7fe4e97b29d1ef3369240d51a67.camel@redhat.com> On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote: > Mike, > > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920 actully this is a releated but different bug based in the description below. thanks for highlighting this to me. > > Cc'ing: Sean > > Sent from my iPhone > > On Nov 12, 2018, at 8:27 AM, Satish Patel wrote: > > > Mike, > > > > I had same issue month ago when I roll out sriov in my cloud and this is what I did to solve this issue. 
Set > > following in flavor > > > > hw:numa_nodes=2 > > > > It will spread out instance vcpu across numa, yes there will be little penalty but if you tune your application > > according they you are good > > > > Yes this is bug I have already open ticket and I believe folks are working on it but its not simple fix. They may > > release new feature in coming oprnstack release. > > > > Sent from my iPhone > > > > On Nov 11, 2018, at 9:25 PM, Mike Joseph wrote: > > > > > Hi folks, > > > > > > It appears that the numa_policy attribute of a PCI alias is ignored for flavors referencing that alias if the > > > flavor also has hw:cpu_policy=dedicated set. The alias config is: > > > > > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", "product_id": "1004", "numa_policy": > > > "preferred" } > > > > > > And the flavor config is: > > > > > > { > > > "OS-FLV-DISABLED:disabled": false, > > > "OS-FLV-EXT-DATA:ephemeral": 0, > > > "access_project_ids": null, > > > "disk": 10, > > > "id": "221e1bcd-2dde-48e6-bd09-820012198908", > > > "name": "vm-2", > > > "os-flavor-access:is_public": true, > > > "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'", > > > "ram": 8192, > > > "rxtx_factor": 1.0, > > > "swap": "", > > > "vcpus": 2 > > > } Satish in your case you were trying to use neutrons sriov vnic types such that the VF would be connected to a neutron network. In this case the mellanox connectx 3 virtual funcitons are being passed to the guest using the pci alias via the flavor which means they cannot be used to connect to neutron networks but they should be able to use affinity poileices. > > > > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. We wish to expose > > > these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of > > > the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't > > > need that, since we are okay with cross-region access to the PCI device. > > > > > > However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec > > > for numa_policy seem to indicate that this is exactly the intent of the option: > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html > > > > > > But, with the above config, we still get PCI affinity scheduling errors: > > > > > > 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit > > > the given host NUMA topology.' > > > > > > This strikes me as a bug, but perhaps I am missing something here? yes this does infact seam like a new bug. can you add myself and stephen to the bug once you file it. in the bug please include the version of opentack you were deploying. in the interim setting hw:numa_nodes=2 will allow you to pin the guest without the error however the flavor and alias you have provided should have been enough. 
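As a concrete sketch of that interim workaround, reusing the flavor and alias names already quoted in this thread, something like:

    openstack flavor set vm-2 \
      --property hw:cpu_policy=dedicated \
      --property hw:numa_nodes=2 \
      --property pci_passthrough:alias=mlx:1

should let the instance schedule, at the cost of the guest topology being split across two host NUMA nodes.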
I'm hoping that we can fix both the alias and neutron based cases this cycle, but to do so we will need to re-propose the original Queens spec for Stein and discuss whether we can backport any of the fixes or if this would only be completed in Stein+. I would hope we could backport fixes for the flavor based use case, but the neutron based use case would likely be Stein+. regards sean > > > > > > Thanks, > > > MJ > > > _______________________________________________ > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From satish.txt at gmail.com Tue Nov 13 16:46:20 2018 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 13 Nov 2018 11:46:20 -0500 Subject: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set In-Reply-To: <9b221d32e322a7fe4e97b29d1ef3369240d51a67.camel@redhat.com> References: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> <8BB43193-8F89-46E6-B075-5E7432D4F61C@gmail.com> <9b221d32e322a7fe4e97b29d1ef3369240d51a67.camel@redhat.com> Message-ID: Sean, Thank you for the detailed explanation. I really hope we can backport to Queens; otherwise it would be harder for me to upgrade the cluster! On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney wrote: > > On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote: > > Mike, > > > > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920 > actully this is a releated but different bug based in the description below. > thanks for highlighting this to me. > > > > Cc'ing: Sean > > > > Sent from my iPhone > > > > On Nov 12, 2018, at 8:27 AM, Satish Patel wrote: > > > > > Mike, > > > > > > I had same issue month ago when I roll out sriov in my cloud and this is what I did to solve this issue.
In this case the mellanox connectx 3 virtual funcitons are being passed to the guest using the pci alias via > the flavor which means they cannot be used to connect to neutron networks but they should be able to use affinity > poileices. > > > > > > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. We wish to expose > > > > these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of > > > > the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't > > > > need that, since we are okay with cross-region access to the PCI device. > > > > > > > > However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we > > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec > > > > for numa_policy seem to indicate that this is exactly the intent of the option: > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html > > > > > > > > But, with the above config, we still get PCI affinity scheduling errors: > > > > > > > > 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit > > > > the given host NUMA topology.' > > > > > > > > This strikes me as a bug, but perhaps I am missing something here? > yes this does infact seam like a new bug. > can you add myself and stephen to the bug once you file it. > in the bug please include the version of opentack you were deploying. > > in the interim setting hw:numa_nodes=2 will allow you to pin the guest without the error > however the flavor and alias you have provided should have been enough. > > im hoping that we can fix both the alisa and neutorn based case this cycle but to do so we > will need to reporpose original queens spec for stein and disucss if we can backport any of the > fixes or if this would be only completed in stein+ i would hope we coudl backport fixes for the flavor > based use case but the neutron based sue case would likely be stein+ > > regards > sean > > > > > > > > Thanks, > > > > MJ > > > > _______________________________________________ > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From corey.bryant at canonical.com Tue Nov 13 18:26:46 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Tue, 13 Nov 2018 13:26:46 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> Message-ID: On Wed, Nov 7, 2018 at 11:12 AM Clark Boylan wrote: > On Wed, Nov 7, 2018, at 4:47 AM, Mohammed Naser wrote: > > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann > wrote: > > > > > > Corey Bryant writes: > > > > > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant < > corey.bryant at canonical.com> > > > > wrote: > > > > > > > > I'd like to start moving forward with enabling py37 unit tests for a > subset > > > > of projects. Rather than putting too much load on infra by enabling > 3 x py3 > > > > unit tests for every project, this would just focus on enablement of > py37 > > > > unit tests for a subset of projects in the Stein cycle. 
And just to > be > > > > clear, I would not be disabling any unit tests (such as py35). I'd > just be > > > > enabling py37 unit tests. > > > > > > > > As some background, this ML thread originally led to updating the > > > > python3-first governance goal ( > https://review.openstack.org/#/c/610708/) > > > > but has now led back to this ML thread for a +1 rather than updating > the > > > > governance goal. > > > > > > > > I'd like to get an official +1 here on the ML from parties such as > the TC > > > > and infra in particular but anyone else's input would be welcomed > too. > > > > Obviously individual projects would have the right to reject proposed > > > > changes that enable py37 unit tests. Hopefully they wouldn't, of > course, > > > > but they could individually vote that way. > > > > > > > > Thanks, > > > > Corey > > > > > > This seems like a good way to start. It lets us make incremental > > > progress while we take the time to think about the python version > > > management question more broadly. We can come back to the other > projects > > > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. > > > > What's the impact on the number of consumption in upstream CI node usage? > > > > For period from 2018-10-25 15:16:32,079 to 2018-11-07 15:59:04,994, > openstack-tox-py35 jobs in aggregate represent 0.73% of our total capacity > usage. > > I don't expect py37 to significantly deviate from that. Again the major > resource consumption is dominated by a small number of projects/repos/jobs. > Generally testing outside of that bubble doesn't represent a significant > resource cost. > > I see no problem with adding python 3.7 unit testing from an > infrastructure perspective. > > Clark > > > Thanks all for the input on this. It seems like we have no objections to moving forward so I'll plan on getting started soon. Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Nov 13 23:00:40 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 13 Nov 2018 18:00:40 -0500 Subject: [openstack-dev] [tripleo] no recheck / no workflow until gate is stable Message-ID: We have serious issues with the gate at this time, we believe it is a mix of mirrors errors (infra) and tempest timeouts (see https://review.openstack.org/617845). Until the situation is resolved, do not recheck or approve any patch for now. Thanks for your understanding, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Nov 14 04:42:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 14 Nov 2018 05:42:35 +0100 Subject: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train Message-ID: <20181114044233.GA10706@thor.bakeyournoodle.com> Hi everybody! As the subject reads, the "T" release of OpenStack is officially "Train". Unlike recent choices Train was the popular choice so congrats! Thanks to everybody who participated and help with the naming process. Lets make OpenStack Train the release so awesome that people can't help but choo-choo-choose to run it[1]! Yours Tony. [1] Too soon? Too much? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jeremyfreudberg at gmail.com Wed Nov 14 05:12:43 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 14 Nov 2018 00:12:43 -0500 Subject: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train In-Reply-To: <20181114044233.GA10706@thor.bakeyournoodle.com> References: <20181114044233.GA10706@thor.bakeyournoodle.com> Message-ID: Hey Tony, What's the reason for the results of the poll not being public? Thanks, Jeremy On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds wrote: > > > Hi everybody! > > As the subject reads, the "T" release of OpenStack is officially > "Train". Unlike recent choices Train was the popular choice so > congrats! > > Thanks to everybody who participated and help with the naming process. > > Lets make OpenStack Train the release so awesome that people can't help > but choo-choo-choose to run it[1]! > > > Yours Tony. > [1] Too soon? Too much? > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From skaplons at redhat.com Wed Nov 14 07:37:09 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 14 Nov 2018 08:37:09 +0100 Subject: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train In-Reply-To: References: <20181114044233.GA10706@thor.bakeyournoodle.com> Message-ID: <7064638E-4B0A-4572-A194-4FF6327DC6B1@redhat.com> Hi, I think it was published, see http://lists.openstack.org/pipermail/openstack/2018-November/047172.html > Wiadomość napisana przez Jeremy Freudberg w dniu 14.11.2018, o godz. 06:12: > > Hey Tony, > > What's the reason for the results of the poll not being public? > > Thanks, > Jeremy > On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds wrote: >> >> >> Hi everybody! >> >> As the subject reads, the "T" release of OpenStack is officially >> "Train". Unlike recent choices Train was the popular choice so >> congrats! >> >> Thanks to everybody who participated and help with the naming process. >> >> Lets make OpenStack Train the release so awesome that people can't help >> but choo-choo-choose to run it[1]! >> >> >> Yours Tony. >> [1] Too soon? Too much? >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From duc.openstack at gmail.com Wed Nov 14 12:12:31 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 14 Nov 2018 13:12:31 +0100 Subject: [openstack-dev] [senlin] No meeting this week and next week Message-ID: Everyone, This is a reminder that there won't be a meeting this week due to the summit and next week due to the Thanksgiving holiday The next meeting will be on Friday, November 30. 
Regards, Duc From jeremyfreudberg at gmail.com Wed Nov 14 14:43:33 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 14 Nov 2018 08:43:33 -0600 Subject: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train In-Reply-To: <7064638E-4B0A-4572-A194-4FF6327DC6B1@redhat.com> References: <20181114044233.GA10706@thor.bakeyournoodle.com> <7064638E-4B0A-4572-A194-4FF6327DC6B1@redhat.com> Message-ID: Thanks Slawomir-- you're right. My mistake. (I had been looking at http://lists.openstack.org/pipermail/openstack-dev/2018-November/136309.html , whose Condorcet link indicates private results.) On Wed, Nov 14, 2018 at 1:45 AM Slawomir Kaplonski wrote: > > Hi, > > I think it was published, see http://lists.openstack.org/pipermail/openstack/2018-November/047172.html > > > Wiadomość napisana przez Jeremy Freudberg w dniu 14.11.2018, o godz. 06:12: > > > > Hey Tony, > > > > What's the reason for the results of the poll not being public? > > > > Thanks, > > Jeremy > > On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds wrote: > >> > >> > >> Hi everybody! > >> > >> As the subject reads, the "T" release of OpenStack is officially > >> "Train". Unlike recent choices Train was the popular choice so > >> congrats! > >> > >> Thanks to everybody who participated and help with the naming process. > >> > >> Lets make OpenStack Train the release so awesome that people can't help > >> but choo-choo-choose to run it[1]! > >> > >> > >> Yours Tony. > >> [1] Too soon? Too much? > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From eumel at arcor.de Wed Nov 14 14:02:57 2018 From: eumel at arcor.de (Frank Kloeker) Date: Wed, 14 Nov 2018 15:02:57 +0100 Subject: [openstack-dev] [docs][i18n] Team dinner Berlin Summit Message-ID: Hello, thanks to all participants of the OpenStack Summit in Berlin. Hopefully you have an event with enough success and fun so far. I really appreciate that you are here in Berlin. To round up I would like to have the Docs- and I18n team for a team dinner on Thursday, after the Onboarding Session. I reserved a table for 18:30 on Restaurant Scheune (https://www.google.com/maps/reserve/v/dine/m/DH2FjXOtEXw). We can start on the CityCube possibly on foot. I want to skip the I18n Office Hour on Thursday. kind regards Frank menu: http://www.scheune-restaurant.de/ From ekcs.openstack at gmail.com Wed Nov 14 15:35:35 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 14 Nov 2018 07:35:35 -0800 Subject: [openstack-dev] [congress] temporary change of meeting time Message-ID: Sorry for the rather late announcement. 
We're moving this week's Congress team meeting from Friday 4AM UTC to Thursday 10AM UTC in order to accommodate the summit local time zone. Thanks! Eric 
From rfolco at redhat.com Wed Nov 14 18:21:21 2018 From: rfolco at redhat.com (Rafael Folco) Date: Wed, 14 Nov 2018 16:21:21 -0200 Subject: [openstack-dev] [tripleo] Proposal: Remove newton/ocata CI jobs Message-ID: Greetings, The non-containerized multinode scenario jobs were active up to the Ocata release and are no longer supported. I'm proposing a cleanup [1] of these old jobs, so I've added this topic to the next tripleo meeting agenda [2] to discuss with the tripleo team. Since this may affect multiple projects, these jobs need to be deleted from their respective zuul configs before the cleanup on tripleo-ci. Thanks, --Folco [1] https://review.openstack.org/#/c/617999/ [2] https://etherpad.openstack.org/p/tripleo-meeting-items -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From tpb at dyncloud.net Wed Nov 14 21:08:33 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 14 Nov 2018 22:08:33 +0100 Subject: [openstack-dev] manila community meeting this week is *on* Message-ID: <20181114210833.pis2ibrd2jsdfn22@barron.net> As we discussed last week, we *will* have our normal weekly manila community meeting this week, at the regular time and place: Thursday, 15 November, 1500 UTC, #openstack-meetings-alt on freenode. Some of us are at Summit but we need to continue to discuss/review outstanding specs, links for which can be found in our next meeting agenda [1]. It was great meeting new folks at the project onboarding and project update sessions at Summit this week -- please feel free to join the IRC meeting tomorrow! Cheers, -- Tom Barron (tbarron) [1] https://wiki.openstack.org/w/index.php?title=Manila/Meetings&action=edit&section=2 
From emccormick at cirrusseven.com Thu Nov 15 01:29:56 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 15 Nov 2018 02:29:56 +0100 Subject: [openstack-dev] manila community meeting this week is *on* In-Reply-To: <20181114210833.pis2ibrd2jsdfn22@barron.net> References: <20181114210833.pis2ibrd2jsdfn22@barron.net> Message-ID: Are you gathering somewhere in the building for in-person discussion? On Wed, Nov 14, 2018, 10:11 PM Tom Barron As we discussed last week, we *will* have our normal weekly manila > community meeting this week, at the regular time and place > > Thursday, 15 November, 1500 UTC, #openstack-meetings-alt on > freenode > > Some of us are at Summit but we need to continue to discuss/review > outstanding specs, links for which can be found in our next meeting > agenda [1]. > > It was great meeting new folks at the project onboarding and project > update sessions at Summit this week -- please feel free to join the > IRC meeting tomorrow! > > Cheers, > > -- Tom Barron (tbarron) > > [1] > https://wiki.openstack.org/w/index.php?title=Manila/Meetings&action=edit&section=2 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From lujinluo at gmail.com Thu Nov 15 05:55:43 2018 From: lujinluo at gmail.com (Lujin Luo) Date: Wed, 14 Nov 2018 21:55:43 -0800 Subject: [openstack-dev] [Neutron] [Upgrade] No meetings on Nov. 15th and 22nd
Message-ID: Hi everyone, I won't be able to chair the Neutron Upgrade subteam meeting on the 15th, and some team members are at the summit. Let's skip it. Also, since the 22nd is Thanksgiving in the US, let's skip the meeting on that day too. We will resume the meeting on the 29th. Happy early Thanksgiving! Best regards, Lujin 
From tpb at dyncloud.net Thu Nov 15 08:11:26 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 15 Nov 2018 09:11:26 +0100 Subject: [openstack-dev] manila community meeting this week is *on* In-Reply-To: References: <20181114210833.pis2ibrd2jsdfn22@barron.net> Message-ID: <20181115081126.jsaseibuafamyyyq@barron.net> On 15/11/18 02:29 +0100, Erik McCormick wrote: >Are you gathering somewhere in the building for in-person discussion? We have a Forum session at 9:50 where we'll cover similar material: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22830/setting-the-compass-for-manila-rwx-cloud-storage The weekly meeting will be on irc, as usual, with meeting minutes logged there, with no concurrent sidebar conversations. It'd be great to meet you in person here, so if you can't get to the Forum session please ping me at tpb at dyncloud.net or tbarron on irc and we can chat a bit. Cheers, -- Tom (tbarron) > >On Wed, Nov 14, 2018, 10:11 PM Tom Barron >> As we discussed last week, we *will* have our normal weekly manila >> community meeting this week, at the regular time and place >> >> Thursday, 15 November, 1500 UTC, #openstack-meetings-alt on >> freenode >> >> Some of us are at Summit but we need to continue to discuss/review >> outstanding specs, links for which can be found in our next meeting >> agenda [1]. >> >> It was great meeting new folks at the project onboarding and project >> update sessions at Summit this week -- please feel free to join the >> IRC meeting tomorrow! >> >> Cheers, >> >> -- Tom Barron (tbarron) >> >> [1] >> https://wiki.openstack.org/w/index.php?title=Manila/Meetings&action=edit&section=2 >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
From wjstk16 at gmail.com Thu Nov 15 08:24:12 2018 From: wjstk16 at gmail.com (Won) Date: Thu, 15 Nov 2018 17:24:12 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, > We solved the timestamp bug. There are two patches for master [1] and > stable/rocky [2]. I applied the patch and checked that it worked well. > Looking at the logs, I see two issues: > 1. On ubuntu server, you get a notification about the vm deletion, while > on compute1 you don't get it. > Please make sure that Nova sends notifications to 'vitrage_notifications' > - it should be configured in /etc/nova/nova.conf. > 2. Once in 10 minutes (by default) nova.instance datasource queries all > instances. The deleted vm is supposed to be deleted in Vitrage at this > stage, even if the notification was lost.
> Please check in your collector log for a message of "novaclient.v2.client > [-] RESP BODY" before and after the deletion, and send me its content. I attached two log files. I created a VM on compute1, which is a compute node, and deleted it a few minutes later. The logs cover 30 minutes starting from the VM creation. The first is the vitrage-collector log, grepped for the instance name. The second is the novaclient.v2.client [-] RESP BODY log. After I deleted the VM, no log line for the instance appeared in the collector log, no matter how long I waited. I added the following to nova.conf on the compute1 node (attached file 'compute_node_local_conf.txt'): 
notification_topics = notifications,vitrage_notifications 
notification_driver = messagingv2 
vif_plugging_timeout = 300 
notify_on_state_change = vm_and_task_state 
instance_usage_audit_period = hour 
instance_usage_audit = True 
However, the problem has not been resolved. I tried to test Vitrage for the Prometheus alarm recognition problem and for the problem where instances on the multinode setup do not disappear from the entity graph, but I have not yet found the cause. Br, Won 
On Thu, Nov 8, 2018 at 8:47 PM, Ifat Afek wrote: > Hi, > > We solved the timestamp bug. There are two patches for master [1] and > stable/rocky [2]. > I'll check the other issues next week. > > Regards, > Ifat > > [1] https://review.openstack.org/#/c/616468/ > [2] https://review.openstack.org/#/c/616469/ > > > On Wed, Oct 31, 2018 at 10:59 AM Won wrote: > >> >>>>> [image: image.png] >>>>> The time stamp is recorded well in log(vitrage-graph,collect etc), but >>>>> in vitrage-dashboard it is marked 2001-01-01. >>>>> However, it seems that the time stamp is recognized well internally >>>>> because the alarm can be resolved and is recorded well in log. >>>>> >>>> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
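For reference, below is a minimal sketch of how notification settings like the ones listed in the message above are commonly grouped in a Rocky-era nova.conf. The section placement is an assumption and varies by release (notification_driver and notification_topics are older spellings that recent releases replace with the [oslo_messaging_notifications] group), so this is a starting point for checking the compute node's config rather than a confirmed fix: 

[DEFAULT]
vif_plugging_timeout = 300
instance_usage_audit = True
instance_usage_audit_period = hour

[notifications]
# Newer releases read this option from the [notifications] group;
# older releases expect notify_on_state_change in [DEFAULT].
notify_on_state_change = vm_and_task_state

[oslo_messaging_notifications]
# Emit notifications over the messaging bus (RabbitMQ by default).
driver = messagingv2
# Vitrage listens on its own topic in addition to the default one.
topics = notifications,vitrage_notifications

Whichever spelling applies, the settings have to be present in the nova.conf that the services on compute1 actually read, and nova-compute has to be restarted for the change to take effect.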
Name: image.png Type: image/png Size: 42202 bytes Desc: not available URL: -------------- next part -------------- root at ubuntu:~# journalctl -f -u devstack at vitrage-collector.service |grep deltesting Nov 15 16:37:10 ubuntu vitrage-collector[764]: 2018-11-15 16:37:10.223 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6e:1e:12", "version": 4, "addr": "192.168.12.160", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/072dc77d-5e72-4849-aadd-69a6ec741bdf", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/072dc77d-5e72-4849-aadd-69a6ec741bdf", "rel": "bookmark"}], "image": {"id": "46b39d12-d1cd-48e4-a55c-fb0d364ae0f2", "links": [{"href": "http://192.168.12.201/compute/images/46b39d12-d1cd-48e4-a55c-fb0d364ae0f2", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-11-15T07:36:18.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "072dc77d-5e72-4849-aadd-69a6ec741bdf", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "deltesting", "OS-DCF:diskConfig": "AUTO", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0101dh0f", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-15T07:36:19Z", "hostId": "8ea30fd4c789f058735fa34c49eae562bb4905bbe014baccacb77862", "OS-EXT-SRV-ATTR:host": "compute1", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute1", "name": "deltesting", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-11-15T07:36:11Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:9a:84", "versio -------------- next part -------------- root at ubuntu:~# journalctl -f -u devstack at vitrage-collector.service |grep OS-EXT Nov 15 16:36:27 ubuntu vitrage-collector[764]: 2018-11-15 16:36:27.851 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], 
"user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:36:28 ubuntu vitrage-collector[764]: 2018-11-15 16:36:28.197 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:37:10 ubuntu vitrage-collector[764]: 2018-11-15 16:37:10.223 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6e:1e:12", "version": 4, "addr": "192.168.12.160", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/072dc77d-5e72-4849-aadd-69a6ec741bdf", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/072dc77d-5e72-4849-aadd-69a6ec741bdf", "rel": "bookmark"}], "image": 
{"id": "46b39d12-d1cd-48e4-a55c-fb0d364ae0f2", "links": [{"href": "http://192.168.12.201/compute/images/46b39d12-d1cd-48e4-a55c-fb0d364ae0f2", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-11-15T07:36:18.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "072dc77d-5e72-4849-aadd-69a6ec741bdf", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "deltesting", "OS-DCF:diskConfig": "AUTO", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0101dh0f", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-15T07:36:19Z", "hostId": "8ea30fd4c789f058735fa34c49eae562bb4905bbe014baccacb77862", "OS-EXT-SRV-ATTR:host": "compute1", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute1", "name": "deltesting", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-11-15T07:36:11Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:9a:84", "versio Nov 15 16:37:10 ubuntu vitrage-collector[764]: n": 4, "addr": "192.168.12.170", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "bookmark"}], "image": {"id": "a6629907-6e98-443f-8c83-3461e7f22c7e", "links": [{"href": "http://192.168.12.201/compute/images/a6629907-6e98-443f-8c83-3461e7f22c7e", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:16:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "b5ba9d6e-4530-410f-b972-04df0709fe32", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "getlist", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-2xqqebpn", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:19Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "GetList", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:12:46Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:49:bd", "version": 4, "addr": "192.168.12.152", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": 
"http://192.168.12.201/compute/v2.1/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "self"}, {"href": "http://192.1 Nov 15 16:37:10 ubuntu vitrage-collector[764]: 68.12.201/compute/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "bookmark"}], "image": {"id": "6bfd7b4f-0397-400a-bdde-e6f932090f16", "links": [{"href": "http://192.168.12.201/compute/images/6bfd7b4f-0397-400a-bdde-e6f932090f16", "rel": "bookmark"}]}, OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:15:43.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "1be42baf-b310-4ddf-856e-425285a89fed", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "search", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y3cl03a2", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Search", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:11:54Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:58:ef:16", "version": 4, "addr": "192.168.12.176", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "bookmark"}], "image": {"id": "31de911e-270c-407c-ace8-bd93f39dbf5c", "links": [{"href": "http://192.168.12.201/compute/images/31de911 Nov 15 16:37:10 ubuntu vitrage-collector[764]: e-270c-407c-ace8-bd93f39dbf5c", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:13:34.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "editproduct", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0jm0oask", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "EditProduct", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:08Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", 
"os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:00:f3:6f", "version": 4, "addr": "192.168.12.165", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "bookmark"}], "image": {"id": "0b384453-9377-4ed9-aef0-fcb55c124894", "links": [{"href": "http://192.168.12.201/compute/images/0b384453-9377-4ed9-aef0-fcb55c124894", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:roo Nov 15 16:37:10 ubuntu vitrage-collector[764]: t_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:12:41.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "85c08394-b8c7-4761-bb84-f56a37d3a56b", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "login", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-wr4lika5", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Login", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:01Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:1f:7f", "version": 4, "addr": "192.168.12.166", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "bookmark"}], "image": {"id": "88fd8588-3658-4e3c-96b1-f7662218a793", "links": [{"href": "http://192.168.12.201/compute/images/88fd8588-3658-4e3c-96b1-f7662218a793", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:10:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "32de61af-a9 Nov 15 16:37:10 ubuntu vitrage-collector[764]: 22-4d18-8c6f-053b27038571", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "signup", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-iheedd6i", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:24Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, 
"OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "SignUp", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:56Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "" Nov 15 16:37:10 ubuntu vitrage-collector[764]: , "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EX Nov 15 16:37:10 ubuntu 
vitrage-collector[764]: T-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:37:42 ubuntu vitrage-collector[764]: 2018-11-15 16:37:42.930 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:37:43 ubuntu vitrage-collector[764]: 2018-11-15 16:37:43.203 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", 
"OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:38:57 ubuntu vitrage-collector[764]: 2018-11-15 16:38:57.952 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:38:58 ubuntu vitrage-collector[764]: 2018-11-15 16:38:58.518 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", 
"version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:40:13 ubuntu vitrage-collector[764]: 2018-11-15 16:40:13.010 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", 
"locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:40:13 ubuntu vitrage-collector[764]: 2018-11-15 16:40:13.458 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:41:28 ubuntu vitrage-collector[764]: 2018-11-15 16:41:28.016 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], 
"user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:41:28 ubuntu vitrage-collector[764]: 2018-11-15 16:41:28.296 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:42:42 ubuntu vitrage-collector[764]: 2018-11-15 16:42:42.934 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": 
{"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:42:43 ubuntu vitrage-collector[764]: 2018-11-15 16:42:43.210 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:43:57 ubuntu vitrage-collector[764]: 2018-11-15 16:43:57.908 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:43:58 ubuntu vitrage-collector[764]: 2018-11-15 16:43:58.198 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", 
"config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:45:12 ubuntu vitrage-collector[764]: 2018-11-15 16:45:12.997 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:45:13 ubuntu vitrage-collector[764]: 2018-11-15 16:45:13.374 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", 
"OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:46:27 ubuntu vitrage-collector[764]: 2018-11-15 16:46:27.923 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:46:28 ubuntu vitrage-collector[764]: 2018-11-15 16:46:28.267 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", 
"version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:47:10 ubuntu vitrage-collector[764]: 2018-11-15 16:47:10.158 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:9a:84", "version": 4, "addr": "192.168.12.170", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "bookmark"}], "image": {"id": "a6629907-6e98-443f-8c83-3461e7f22c7e", "links": [{"href": "http://192.168.12.201/compute/images/a6629907-6e98-443f-8c83-3461e7f22c7e", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:16:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "b5ba9d6e-4530-410f-b972-04df0709fe32", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "getlist", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-2xqqebpn", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:19Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": 
false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "GetList", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:12:46Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:49:bd", "version": 4, " Nov 15 16:47:10 ubuntu vitrage-collector[764]: addr": "192.168.12.152", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "bookmark"}], "image": {"id": "6bfd7b4f-0397-400a-bdde-e6f932090f16", "links": [{"href": "http://192.168.12.201/compute/images/6bfd7b4f-0397-400a-bdde-e6f932090f16", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:15:43.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "1be42baf-b310-4ddf-856e-425285a89fed", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "search", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y3cl03a2", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Search", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:11:54Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:58:ef:16", "version": 4, "addr": "192.168.12.176", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "self"}, {"href": "http://192.168.12.201/ Nov 15 16:47:10 ubuntu vitrage-collector[764]: compute/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "bookmark"}], "image": {"id": "31de911e-270c-407c-ace8-bd93f39dbf5c", "links": [{"href": "http://192.168.12.201/compute/images/31de911e-270c-407c-ace8-bd93f39dbf5c", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:13:34.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "editproduct", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0jm0oask", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", 
"updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "EditProduct", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:08Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:00:f3:6f", "version": 4, "addr": "192.168.12.165", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "bookmark"}], "image": {"id": "0b384453-9377-4ed9-aef0-fcb55c124894", "links": [{"href": "http://192.168.12.201/compute/images/0b38445 Nov 15 16:47:10 ubuntu vitrage-collector[764]: 3-9377-4ed9-aef0-fcb55c124894", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:12:41.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "85c08394-b8c7-4761-bb84-f56a37d3a56b", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "login", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-wr4lika5", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Login", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:01Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:1f:7f", "version": 4, "addr": "192.168.12.166", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "bookmark"}], "image": {"id": "88fd8588-3658-4e3c-96b1-f7662218a793", "links": [{"href": "http://192.168.12.201/compute/images/88fd8588-3658-4e3c-96b1-f7662218a793", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:root_device_nam Nov 15 16:47:10 ubuntu vitrage-collector[764]: e": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:10:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "32de61af-a922-4d18-8c6f-053b27038571", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "signup", "OS-DCF:diskConfig": "MANUAL", 
"accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-iheedd6i", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:24Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "SignUp", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:56Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd Nov 15 16:47:10 ubuntu vitrage-collector[764]: 72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": 
"25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", Nov 15 16:47:10 ubuntu vitrage-collector[764]: "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:47:42 ubuntu vitrage-collector[764]: 2018-11-15 16:47:42.927 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:47:43 ubuntu vitrage-collector[764]: 2018-11-15 16:47:43.229 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", 
"rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:48:58 ubuntu vitrage-collector[764]: 2018-11-15 16:48:58.060 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": 
"6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:48:58 ubuntu vitrage-collector[764]: 2018-11-15 16:48:58.553 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:50:13 ubuntu vitrage-collector[764]: 2018-11-15 16:50:13.006 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", 
"OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:50:13 ubuntu vitrage-collector[764]: 2018-11-15 16:50:13.502 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:51:27 ubuntu vitrage-collector[764]: 2018-11-15 16:51:27.908 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": 
"bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:51:28 ubuntu vitrage-collector[764]: 2018-11-15 16:51:28.281 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:52:42 ubuntu vitrage-collector[764]: 2018-11-15 16:52:42.883 793 DEBUG novaclient.v2.client [-] 
RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:52:43 ubuntu vitrage-collector[764]: 2018-11-15 16:52:43.331 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": 
"caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:53:57 ubuntu vitrage-collector[764]: 2018-11-15 16:53:57.893 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:53:58 ubuntu vitrage-collector[764]: 2018-11-15 16:53:58.413 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": 
"2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:55:13 ubuntu vitrage-collector[764]: 2018-11-15 16:55:13.142 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:55:13 ubuntu vitrage-collector[764]: 2018-11-15 16:55:13.818 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": 
"http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:56:27 ubuntu vitrage-collector[764]: 2018-11-15 16:56:27.948 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", 
"OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:56:28 ubuntu vitrage-collector[764]: 2018-11-15 16:56:28.382 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:57:10 ubuntu vitrage-collector[764]: 2018-11-15 16:57:10.117 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:9a:84", "version": 4, "addr": "192.168.12.170", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/b5ba9d6e-4530-410f-b972-04df0709fe32", "rel": "bookmark"}], "image": {"id": "a6629907-6e98-443f-8c83-3461e7f22c7e", "links": [{"href": "http://192.168.12.201/compute/images/a6629907-6e98-443f-8c83-3461e7f22c7e", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:16:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "b5ba9d6e-4530-410f-b972-04df0709fe32", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "getlist", 
"OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-2xqqebpn", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:19Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "GetList", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:12:46Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:49:bd", "version": 4, " Nov 15 16:57:10 ubuntu vitrage-collector[764]: addr": "192.168.12.152", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/1be42baf-b310-4ddf-856e-425285a89fed", "rel": "bookmark"}], "image": {"id": "6bfd7b4f-0397-400a-bdde-e6f932090f16", "links": [{"href": "http://192.168.12.201/compute/images/6bfd7b4f-0397-400a-bdde-e6f932090f16", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:15:43.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "1be42baf-b310-4ddf-856e-425285a89fed", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "search", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-y3cl03a2", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Search", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:11:54Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:58:ef:16", "version": 4, "addr": "192.168.12.176", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "self"}, {"href": "http://192.168.12.201/ Nov 15 16:57:10 ubuntu vitrage-collector[764]: compute/servers/04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "rel": "bookmark"}], "image": {"id": "31de911e-270c-407c-ace8-bd93f39dbf5c", "links": [{"href": "http://192.168.12.201/compute/images/31de911e-270c-407c-ace8-bd93f39dbf5c", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:13:34.000000", "flavor": {"id": "2", "links": [{"href": 
"http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "04a2a360-8dd2-49cd-9589-05e6e22b9ae9", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "editproduct", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-0jm0oask", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:20Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "EditProduct", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:08Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:00:f3:6f", "version": 4, "addr": "192.168.12.165", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/85c08394-b8c7-4761-bb84-f56a37d3a56b", "rel": "bookmark"}], "image": {"id": "0b384453-9377-4ed9-aef0-fcb55c124894", "links": [{"href": "http://192.168.12.201/compute/images/0b38445 Nov 15 16:57:10 ubuntu vitrage-collector[764]: 3-9377-4ed9-aef0-fcb55c124894", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:12:41.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "85c08394-b8c7-4761-bb84-f56a37d3a56b", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "login", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-wr4lika5", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Login", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:06:01Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:1f:7f", "version": 4, "addr": "192.168.12.166", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/32de61af-a922-4d18-8c6f-053b27038571", "rel": "bookmark"}], "image": {"id": "88fd8588-3658-4e3c-96b1-f7662218a793", "links": [{"href": "http://192.168.12.201/compute/images/88fd8588-3658-4e3c-96b1-f7662218a793", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", 
"OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:root_device_nam Nov 15 16:57:10 ubuntu vitrage-collector[764]: e": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:10:05.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "32de61af-a922-4d18-8c6f-053b27038571", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "signup", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-iheedd6i", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:24Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "SignUp", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:56Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd Nov 15 16:57:10 ubuntu vitrage-collector[764]: 72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": 
{"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", Nov 15 16:57:10 ubuntu vitrage-collector[764]: "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:57:42 ubuntu vitrage-collector[764]: 2018-11-15 16:57:42.880 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": 
{}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:57:43 ubuntu vitrage-collector[764]: 2018-11-15 16:57:43.170 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:58:57 ubuntu vitrage-collector[764]: 2018-11-15 16:58:57.927 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, 
"OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 16:58:58 ubuntu vitrage-collector[764]: 2018-11-15 16:58:58.384 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 17:00:12 ubuntu vitrage-collector[764]: 2018-11-15 17:00:12.997 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:0a:cd", "version": 4, "addr": "192.168.12.154", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/77e40f2c-a1a1-454d-bd72-7fec899844e5", "rel": "bookmark"}], "image": {"id": "63c30309-7bac-46d5-94b5-cbd4dea45157", "links": [{"href": "http://192.168.12.201/compute/images/63c30309-7bac-46d5-94b5-cbd4dea45157", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", 
"OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:11:04.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "77e40f2c-a1a1-454d-bd72-7fec899844e5", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "apigateway", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-ym069n3q", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:22Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Apigateway", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:51Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 Nov 15 17:00:13 ubuntu vitrage-collector[764]: 2018-11-15 17:00:13.236 793 DEBUG novaclient.v2.client [-] RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {"public": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:23:2e", "version": 4, "addr": "192.168.12.164", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.12.201/compute/v2.1/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "self"}, {"href": "http://192.168.12.201/compute/servers/25fcb462-0393-493e-95b6-1977a9793a3e", "rel": "bookmark"}], "image": {"id": "f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "links": [{"href": "http://192.168.12.201/compute/images/f4d31abf-4dba-42b9-b4ba-e7ba4b734835", "rel": "bookmark"}]}, "OS-EXT-SRV-ATTR:user_data": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda", "OS-SRV-USG:launched_at": "2018-10-30T13:08:13.000000", "flavor": {"id": "2", "links": [{"href": "http://192.168.12.201/compute/flavors/2", "rel": "bookmark"}]}, "id": "25fcb462-0393-493e-95b6-1977a9793a3e", "security_groups": [{"name": "default"}], "user_id": "a5843b50270f4457ab1f69943a480fee", "OS-EXT-SRV-ATTR:hostname": "kube-master", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-SRV-ATTR:reservation_id": "r-dljj03fr", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ACTIVE", "OS-EXT-SRV-ATTR:ramdisk_id": "", "updated": "2018-11-03T01:50:21Z", "hostId": "caccdbab50744ac513f229a25f9a7948df1ef8693f3a4d8c2acce436", "OS-EXT-SRV-ATTR:host": "ubuntu", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:kernel_id": "", "locked": false, "OS-EXT-SRV-ATTR:hypervisor_hostname": "ubuntu", "name": "Kube-Master", "OS-EXT-SRV-ATTR:launch_index": 0, "created": "2018-10-30T13:05:47Z", "tenant_id": "6508f093fc7a44f980f2f77f145c212e", "os-extended-volumes:volumes_attached": [], "metadata": {}}]} _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:511 -------------- next part -------------- [DEFAULT] notification_topics = notifications,vitrage_notifications notification_driver = messagingv2 vif_plugging_timeout = 300 vif_plugging_is_fatal 
= True
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
default_ephemeral_format = ext4
pointer_model = ps2mouse
graceful_shutdown_timeout = 5
metadata_workers = 2
osapi_compute_workers = 2
transport_url = rabbit://stackrabbit:root at 192.168.12.201:5672/
notify_on_state_change = vm_and_task_state
instance_usage_audit_period = hour
instance_usage_audit = True
logging_exception_prefix = ERROR %(name)s ^[[01;35m%(instance)s^[[00m
logging_default_format_string = %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_context_format_string = %(color)s%(levelname)s %(name)s [^[[01;36m%(global_request_id)s %(request_id)s ^[[00;36m%(project_name)s %(user_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_debug_format_suffix = ^[[00;33m{{(pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d}}^[[00m
send_arp_for_ha = True
multi_host = True
instances_path = /opt/stack/data/nova/instances
state_path = /opt/stack/data/nova
metadata_listen = 0.0.0.0
osapi_compute_listen = 0.0.0.0
instance_name_template = instance-%08x
my_ip = 192.168.12.236
default_floating_pool = public
rootwrap_config = /etc/nova/rootwrap.conf
allow_resize_to_same_host = True
debug = True

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

[scheduler]
workers = 2
driver = filter_scheduler

[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter

[key_manager]
fixed_key = bae3516cc1c0eb18b05440eba8012a4a880a2ee04d584a9c1579445e675b12defdc716ec
backend = nova.keymgr.conf_key_mgr.ConfKeyManager

[oslo_concurrency]
lock_path = /opt/stack/data/nova

[upgrade_levels]
compute = auto

[oslo_messaging_notifications]
transport_url = rabbit://stackrabbit:root at 192.168.12.201:5672/
driver = messagingv2

[conductor]
workers = 2

[cinder]
os_region_name = RegionOne

[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = root
username = placement
auth_url = http://192.168.12.201/identity
auth_type = password

[neutron]
region_name = RegionOne
auth_strategy = keystone
project_domain_name = Default
project_name = service
user_domain_name = Default
password = root
username = neutron
auth_url = http://192.168.12.201/identity
auth_type = password

From miguel at mlavalle.com Thu Nov 15 10:15:48 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 15 Nov 2018 04:15:48 -0600 Subject: [openstack-dev] [neutron] [drivers] Cancelling weekly meeting on November 16th Message-ID: Hi Neutron Team, The majority of the drivers team (Akihiro, Takashi and myself) are attending the Berlin Summit and will be travelling the day of the meeting. As a consequence, let's cancel it. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ifatafekn at gmail.com Thu Nov 15 13:06:25 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 15 Nov 2018 15:06:25 +0200 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: On Thu, Nov 15, 2018 at 10:28 AM Won wrote: > Looking at the logs, I see two issues: >> 1. On ubuntu server, you get a notification about the vm deletion, while >> on compute1 you don't get it. >> Please make sure that Nova sends notifications to 'vitrage_notifications' >> - it should be configured in /etc/nova/nova.conf. >> 2. Once in 10 minutes (by default) nova.instance datasource queries all >> instances. The deleted vm is supposed to be deleted in Vitrage at this >> stage, even if the notification was lost. >> Please check in your collector log for a message of "novaclient.v2.client >> [-] RESP BODY" before and after the deletion, and send me its content. > > > I attached two log files. I created a VM in compute1, which is a compute > node, and deleted it a few minutes later. The log covers 30 minutes from VM > creation. > The first is the vitrage-collector log, grepped for the instance name. > The second is the novaclient.v2.client [-] RESP BODY log. > After I deleted the VM, no log of the instance appeared in the collector > log no matter how long I waited. > > I added the following to nova.conf on the compute1 node (attached file > 'compute_node_local_conf.txt') > notification_topics = notifications,vitrage_notifications > notification_driver = messagingv2 > vif_plugging_timeout = 300 > notify_on_state_change = vm_and_task_state > instance_usage_audit_period = hour > instance_usage_audit = True > Hi, From the collector log RESP BODY messages I understand that in the beginning there were the following servers: compute1: deltesting ubuntu: Apigateway, KubeMaster and others After ~20 minutes, there was only Apigateway. Does it make sense? Did you delete the instances on ubuntu, in addition to deltesting? In any case, I would expect to see the instances deleted from the graph at this stage, since they were not returned by get_all. Can you please send me the log of vitrage-graph at the same time (Nov 15, 16:35-17:10)? There is still the question of why we don't see a notification from Nova, but let's try to solve the issues one by one. Thanks, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Thu Nov 15 14:01:30 2018 From: eumel at arcor.de (Frank Kloeker) Date: Thu, 15 Nov 2018 15:01:30 +0100 Subject: [openstack-dev] [OpenStack-I18n] [docs][i18n] Team dinner Berlin Summit In-Reply-To: References: Message-ID: Hello again, if you don't want to participate in the Onboarding session we may meet around 6 at the registration. I think there is still a possibility to make a team picture on the right side near the sponsor wall. kind regards Frank Am 2018-11-14 15:02, schrieb Frank Kloeker: > Hello, > > thanks to all participants of the OpenStack Summit in Berlin. > Hopefully you have an event with enough success and fun so far. I > really appreciate that you are here in Berlin. > To round up I would like to have the Docs- and I18n team for a team > dinner on Thursday, after the Onboarding Session. I reserved a table > for 18:30 on Restaurant Scheune > (https://www.google.com/maps/reserve/v/dine/m/DH2FjXOtEXw). We can > start on the CityCube possibly on foot.
> I want to skip the I18n Office Hour on Thursday. > > kind regards > > Frank > > menu: http://www.scheune-restaurant.de/ > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n From sshnaidm at redhat.com Thu Nov 15 15:50:37 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Thu, 15 Nov 2018 17:50:37 +0200 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO Message-ID: Hi, I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique is actively involved in improvements and development of TripleO and TripleO CI. He also helps in other projects including but not limited to Infrastructure. He shows a very good understanding how TripleO and CI works and I'd like suggest him as core reviewer of TripleO in CI related code. Please vote! My +1 is here :) Thanks -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Nov 15 15:54:50 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 15 Nov 2018 08:54:50 -0700 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: On Thu, Nov 15, 2018 at 8:52 AM Sagi Shnaidman wrote: > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO and > TripleO CI. He also helps in other projects including but not limited to > Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like > suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) > +1 for tripleo-ci core, I don't think we're proposing tripleo core atm. Thanks for proposing and sending this Sagi! > > Thanks > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Thu Nov 15 16:03:51 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 15 Nov 2018 17:03:51 +0100 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: <9267037d-89e2-4c10-5c28-934e543b6144@redhat.com> On 15. 11. 18 16:54, Wesley Hayutin wrote: > On Thu, Nov 15, 2018 at 8:52 AM Sagi Shnaidman wrote: > >> Hi, >> I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. >> Quique is actively involved in improvements and development of TripleO and >> TripleO CI. He also helps in other projects including but not limited to >> Infrastructure. >> He shows a very good understanding how TripleO and CI works and I'd like >> suggest him as core reviewer of TripleO in CI related code. >> >> Please vote! >> My +1 is here :) >> > > +1 for tripleo-ci core, I don't think we're proposing tripleo core atm. > Thanks for proposing and sending this Sagi! +1! 
> > >> >> Thanks >> -- >> Best regards >> Sagi Shnaidman >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aschultz at redhat.com Thu Nov 15 16:32:25 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 15 Nov 2018 09:32:25 -0700 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: +1 On Thu, Nov 15, 2018 at 8:51 AM Sagi Shnaidman wrote: > > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique is actively involved in improvements and development of TripleO and TripleO CI. He also helps in other projects including but not limited to Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) > > Thanks > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Thu Nov 15 16:37:00 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 15 Nov 2018 11:37:00 -0500 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: +1 to have him part of TripleO CI core team, thanks for your dedication and hard work. I'm glad to see you're learning fast. Keep your motivation and thanks again! On Thu, Nov 15, 2018 at 11:33 AM Alex Schultz wrote: > +1 > On Thu, Nov 15, 2018 at 8:51 AM Sagi Shnaidman > wrote: > > > > Hi, > > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO and > TripleO CI. He also helps in other projects including but not limited to > Infrastructure. > > He shows a very good understanding how TripleO and CI works and I'd like > suggest him as core reviewer of TripleO in CI related code. > > > > Please vote! > > My +1 is here :) > > > > Thanks > > -- > > Best regards > > Sagi Shnaidman > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cjeanner at redhat.com Thu Nov 15 16:39:54 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 15 Nov 2018 17:39:54 +0100 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: Of course +1 :). On 11/15/18 5:37 PM, Emilien Macchi wrote: > +1 to have him part of TripleO CI core team, thanks for your dedication > and hard work. I'm glad to see you're learning fast. Keep your > motivation and thanks again! > > On Thu, Nov 15, 2018 at 11:33 AM Alex Schultz > wrote: > > +1 > On Thu, Nov 15, 2018 at 8:51 AM Sagi Shnaidman > wrote: > > > > Hi, > > I'd like to propose Quique (@quiquell) as a core reviewer for > TripleO. Quique is actively involved in improvements and development > of TripleO and TripleO CI. He also helps in other projects including > but not limited to Infrastructure. > > He shows a very good understanding how TripleO and CI works and > I'd like suggest him as core reviewer of TripleO in CI related code. > > > > Please vote! > > My +1 is here :) > > > > Thanks > > -- > > Best regards > > Sagi Shnaidman > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From marios at redhat.com Thu Nov 15 16:58:32 2018 From: marios at redhat.com (Marios Andreou) Date: Thu, 15 Nov 2018 18:58:32 +0200 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: On Thu, Nov 15, 2018 at 5:51 PM Sagi Shnaidman wrote: > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO and > TripleO CI. He also helps in other projects including but not limited to > Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like > suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) > > +1++ > Thanks > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From songbaisen at szzt.com.cn Fri Nov 16 02:04:31 2018 From: songbaisen at szzt.com.cn (songbaisen at szzt.com.cn) Date: Fri, 16 Nov 2018 10:04:31 +0800 Subject: [openstack-dev] [neutron] hi neutron core team would you help to review this pr. it has blocked our pr from being merged. Thanks in advance! Message-ID: <201811161004310696366@szzt.com.cn> https://review.openstack.org/#/c/616211/ songbaisen at szzt.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.andre at redhat.com Fri Nov 16 07:21:37 2018 From: m.andre at redhat.com (Martin André) Date: Fri, 16 Nov 2018 08:21:37 +0100 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: On Thu, Nov 15, 2018 at 5:00 PM Wesley Hayutin wrote: > > > > On Thu, Nov 15, 2018 at 8:52 AM Sagi Shnaidman wrote: >> >> Hi, >> I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique is actively involved in improvements and development of TripleO and TripleO CI. He also helps in other projects including but not limited to Infrastructure. >> He shows a very good understanding how TripleO and CI works and I'd like suggest him as core reviewer of TripleO in CI related code. >> >> Please vote! >> My +1 is here :) > > > +1 for tripleo-ci core, I don't think we're proposing tripleo core atm. > Thanks for proposing and sending this Sagi! +1 Martin >> >> >> Thanks >> -- >> Best regards >> Sagi Shnaidman >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From viktor.tikkanen at nokia.com Fri Nov 16 09:40:04 2018 From: viktor.tikkanen at nokia.com (Tikkanen, Viktor (Nokia - FI/Espoo)) Date: Fri, 16 Nov 2018 09:40:04 +0000 Subject: [openstack-dev] [horizon] how to get horizon related logs to syslog? Message-ID: Hi! For most openstack parts (e.g. Aodh, Cinder, Glance, Heat, Ironic, Keystone, Neutron, Nova) it was easy to get logs to syslog. For example for Heat something similar to this (into file templates/heat.conf.j2):

# Syslog usage
{% if heat_syslog_enabled %}
use_syslog = True
syslog_log_facility = LOG_LOCAL3
{% else %}
log_file = /var/log/heat/heat.log
{% endif %}

But how to get Horizon related logs to syslog? -V. -------------- next part -------------- An HTML attachment was scrubbed... URL:
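A possible answer, sketched here rather than taken from the thread: Horizon is a Django application, so the oslo.log style use_syslog and syslog_log_facility options do not apply to it. Its logging is driven by the standard Django/Python LOGGING dictionary in openstack_dashboard/local/local_settings.py (the exact path depends on how Horizon is packaged). A minimal sketch, assuming a local /dev/log socket and the local3 facility; the handler address, facility and levels below are placeholders to adapt:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'syslog': {
            # Standard Python syslog handler; the address may also be a
            # ('remote-collector', 514) tuple for a remote rsyslog.
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',
            'facility': 'local3',
        },
    },
    'loggers': {
        # Horizon's usual logger namespaces; the levels are assumptions.
        'django': {'handlers': ['syslog'], 'level': 'INFO'},
        'horizon': {'handlers': ['syslog'], 'level': 'INFO'},
        'openstack_dashboard': {'handlers': ['syslog'], 'level': 'INFO'},
    },
}

With something like the above, whatever the dashboard's WSGI processes log ends up in syslog under the chosen facility; Apache's own access and error logs still have to be redirected separately.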
From eumel at arcor.de Fri Nov 16 10:22:59 2018 From: eumel at arcor.de (Frank Kloeker) Date: Fri, 16 Nov 2018 11:22:59 +0100 Subject: [openstack-dev] [docs][i18n] Team photos Berlin Summit In-Reply-To: References: Message-ID: <4e3e3296294af8c536329331e1e95407@arcor.de> Hello, I would like to share my photos from the poster exhibition which I18n provided for the Summit, some impressions from our sessions, the team photos and the afterstack team dinner in Grunewald. Feel free to contribute more pictures for people who weren't with us. Have a safe trip home and hopefully you enjoyed the Berlin Summit. Frank link: https://photos.app.goo.gl/i9Wnq7QyukpG29VF6 From ellorent at redhat.com Fri Nov 16 13:33:12 2018 From: ellorent at redhat.com (Felix Enrique Llorente Pastora) Date: Fri, 16 Nov 2018 14:33:12 +0100 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: Thanks guys! On Fri, Nov 16, 2018 at 8:26 AM Martin André wrote: > On Thu, Nov 15, 2018 at 5:00 PM Wesley Hayutin > wrote: > > > > > > > > On Thu, Nov 15, 2018 at 8:52 AM Sagi Shnaidman > wrote: > >> > >> Hi, > >> I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO and > TripleO CI. He also helps in other projects including but not limited to > Infrastructure. > >> He shows a very good understanding how TripleO and CI works and I'd > like suggest him as core reviewer of TripleO in CI related code. > >> > >> Please vote! > >> My +1 is here :) > > > > > > +1 for tripleo-ci core, I don't think we're proposing tripleo core atm. > > Thanks for proposing and sending this Sagi! > > +1 > > Martin > > >> > >> > >> Thanks > >> -- > >> Best regards > >> Sagi Shnaidman > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Quique Llorente Openstack TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Fri Nov 16 14:25:23 2018 From: ykarel at redhat.com (Yatin Karel) Date: Fri, 16 Nov 2018 19:55:23 +0530 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: +1 On Thu, Nov 15, 2018 at 9:24 PM Sagi Shnaidman wrote: > > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique is actively involved in improvements and development of TripleO and TripleO CI. He also helps in other projects including but not limited to Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like suggest him as core reviewer of TripleO in CI related code. > > Please vote!
> My +1 is here :) > > Thanks > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Fri Nov 16 16:11:43 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 16 Nov 2018 17:11:43 +0100 Subject: [openstack-dev] [Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching In-Reply-To: <5d1b7e47-ebd9-089e-30e9-15464ae6e49b@redhat.com> References: <5d1b7e47-ebd9-089e-30e9-15464ae6e49b@redhat.com> Message-ID: <220a9ac3-fa24-3791-f30c-faf77b96c70f@redhat.com> Hello. The final version of the position paper "Edge Clouds Multiple Control Planes Data Replication Challenges" [0],[1] drafted, and have been uploaded to EDAS. The deadline expires today and I'm afraid there is no time left for more of amendments. Thank you all for reviews and inputs, and those edge sessions at the summit in Berlin were really mind opening! PS. I wish I could have kept working on that draft while was attending the summit, but that was not the case :) [0] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf [1] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.tex On 11/8/18 6:58 PM, Bogdan Dobrelya wrote: > Hi folks. > The deadline for papers seems to be extended till Nov 17, so that's a > great news! > I finished drafting the position paper [0],[1]. > > Please /proof read and review. There is also open questions placed there > and it would be really nice to have a co-author here for any of those > items remaining... > >... -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Fri Nov 16 16:35:00 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 16 Nov 2018 17:35:00 +0100 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: +1 On 11/15/18 4:50 PM, Sagi Shnaidman wrote: > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO > and TripleO CI. He also helps in other projects including but not > limited to Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like > suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) > > Thanks > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jaypipes at gmail.com Fri Nov 16 19:52:48 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 16 Nov 2018 14:52:48 -0500 Subject: [openstack-dev] [nova] Can we deprecate the server backup API please? Message-ID: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> The server backup API was added 8 years ago. It has Nova basically implementing a poor-man's cron for some unknown reason (probably because the original RAX Cloud Servers API had some similar or identical functionality, who knows...). 
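For illustration, the same behaviour can be sketched entirely outside of Nova as a plain cron job driving the CLI; the script below is a hypothetical example with made-up names and a made-up rotation policy, not an existing tool:

#!/usr/bin/env python
# Hypothetical backup rotation script, run from cron, e.g.:
#   0 2 * * * /usr/local/bin/server_backup_rotate.py <server-id>
import json
import subprocess
import sys
from datetime import datetime

ROTATION = 7  # assumed policy: keep at most 7 backup images per server


def osc(*args):
    """Run an openstack CLI command and return its parsed JSON output."""
    out = subprocess.check_output(('openstack',) + args + ('-f', 'json'))
    return json.loads(out)


def main(server):
    # The snapshot half of createBackup: just a server image create.
    name = 'backup-%s-%s' % (server, datetime.utcnow().strftime('%Y%m%d%H%M%S'))
    osc('server', 'image', 'create', '--name', name, server)

    # The rotation half: delete the oldest backups beyond ROTATION.
    backups = [img for img in osc('image', 'list')
               if img['Name'].startswith('backup-%s-' % server)]
    backups.sort(key=lambda img: img['Name'])  # timestamp suffix sorts oldest first
    for img in backups[:-ROTATION]:
        subprocess.check_call(['openstack', 'image', 'delete', img['ID']])


if __name__ == '__main__':
    main(sys.argv[1])

That is more or less all the createBackup action amounts to today.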
Can we deprecate this functionality please? It's confusing for end users to have an `openstack server image create` and `openstack server backup create` command where the latter does virtually the same thing as the former only sets up some whacky cron-like thing and deletes images after some number of rotations. If a cloud provider wants to offer some backup thing as a service, they could implement this functionality separately IMHO, store the user's requested cronjob state in their own system (or in glance which is kind of how the existing Nova createBackup functionality works), and run a simple cronjob executor that ran `openstack server image create` and `openstack image delete` as needed. This is a perfect example of an API that should never have been added to the Compute API, in my opinion, and removing it would be a step in the right direction if we're going to get serious about cleaning the Compute API up. Thoughts? -jay From dangtrinhnt at gmail.com Sat Nov 17 00:59:22 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 17 Nov 2018 09:59:22 +0900 Subject: [openstack-dev] [openstack-community] Asking for suggestion of video conference tool for team and webinar In-Reply-To: References: Message-ID: Thank Thierry, Arun, and everybody for all the suggestions. My team will try these tools. On Thu, Nov 8, 2018 at 3:23 AM Arun Kumar Khan wrote: > > On Tue, Nov 6, 2018 at 5:36 AM Trinh Nguyen wrote: > >> Hi, >> >> I tried several free tools for team meetings, vPTG, and webinars but >> there are always drawbacks ( because it's free, of course). For example: >> >> - Google Hangout: only allow a maximum of 10 people to do the video >> calls >> - Zoom: limits about 45m per meeting. So for a webinar or conference >> call takes more than 45m we have to splits into 2 sessions or so. >> >> If anyone in the community can suggest some better video conferencing >> tools, that would be great. So we can organize better communication for our >> team and the community's webinars. >> > > All good suggestions. > > A couple of other options: > > 1. https://hubl.in -- open source i.e. you can host your own or use the > hosted version at https://hubl.in/ > 2. Wire.com -- open source with mobile apps. > > Let us know which solution works out for you. > > -- Arun Khan > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From yashpaldevelop at gmail.com Sat Nov 17 05:43:09 2018 From: yashpaldevelop at gmail.com (Yash Pal) Date: Sat, 17 Nov 2018 11:13:09 +0530 Subject: [openstack-dev] TechEvent Telegram Channel Message-ID: Hi Folks, I writing the mail on behalf of GlobalTechEvent community. We just started Telegram Channel where we are trying to post all Technical Related event which is happening in India right now. If anyone would like to join this channel. Please feel free to join the channel. One more thing if anyone wants to post your Technical related event on our channel you can also connect with @kavingates (on Telegram). Our channel link - https://t.me/GlobalTechEvents Kavin Gates - http://t.me/kavingates Thanks in advance Regards Yash TechEvents Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Sat Nov 17 10:52:12 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 17 Nov 2018 10:52:12 +0000 Subject: [openstack-dev] [nova] Can we deprecate the server backup API please? 
In-Reply-To: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> References: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> Message-ID: <7A3EAFE2-E0EB-47FC-95FD-54BCD923E8CE@cern.ch> Mistral can schedule the executions and then a workflow to do the server image create. The CERN implementation of this is described at http://openstack-in-production.blogspot.com/2017/08/scheduled-snapshots.html with the implementation at https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows. It is pretty generic but I don't know if anyone has tried to run it elsewhere. A few features - Schedule can be chosen - Logs visible in Horizon - Option to shutdown instances before and restart after - Mails can be sent on success and/or failure - Rotation of backups to keep a maximum number of copies There are equivalent restore and clone functions in the workflow also. Tim -----Original Message----- From: Jay Pipes Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, 16 November 2018 at 20:58 To: "openstack-dev at lists.openstack.org" Subject: [openstack-dev] [nova] Can we deprecate the server backup API please? The server backup API was added 8 years ago. It has Nova basically implementing a poor-man's cron for some unknown reason (probably because the original RAX Cloud Servers API had some similar or identical functionality, who knows...). Can we deprecate this functionality please? It's confusing for end users to have an `openstack server image create` and `openstack server backup create` command where the latter does virtually the same thing as the former only sets up some whacky cron-like thing and deletes images after some number of rotations. If a cloud provider wants to offer some backup thing as a service, they could implement this functionality separately IMHO, store the user's requested cronjob state in their own system (or in glance which is kind of how the existing Nova createBackup functionality works), and run a simple cronjob executor that ran `openstack server image create` and `openstack image delete` as needed. This is a perfect example of an API that should never have been added to the Compute API, in my opinion, and removing it would be a step in the right direction if we're going to get serious about cleaning the Compute API up. Thoughts? -jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From soulxu at gmail.com Sun Nov 18 12:51:51 2018 From: soulxu at gmail.com (Alex Xu) Date: Sun, 18 Nov 2018 20:51:51 +0800 Subject: [openstack-dev] [nova] Can we deprecate the server backup API please? In-Reply-To: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> References: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> Message-ID: Sounds make sense to me, and then we needn't fix this strange behaviour also https://review.openstack.org/#/c/409644/ Jay Pipes 于2018年11月17日周六 上午3:56写道: > The server backup API was added 8 years ago. It has Nova basically > implementing a poor-man's cron for some unknown reason (probably because > the original RAX Cloud Servers API had some similar or identical > functionality, who knows...). > > Can we deprecate this functionality please? 
It's confusing for end users > to have an `openstack server image create` and `openstack server backup > create` command where the latter does virtually the same thing as the > former only sets up some whacky cron-like thing and deletes images after > some number of rotations. > > If a cloud provider wants to offer some backup thing as a service, they > could implement this functionality separately IMHO, store the user's > requested cronjob state in their own system (or in glance which is kind > of how the existing Nova createBackup functionality works), and run a > simple cronjob executor that ran `openstack server image create` and > `openstack image delete` as needed. > > This is a perfect example of an API that should never have been added to > the Compute API, in my opinion, and removing it would be a step in the > right direction if we're going to get serious about cleaning the Compute > API up. > > Thoughts? > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 19 00:03:52 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Nov 2018 00:03:52 +0000 Subject: [openstack-dev] IMPORTANT: We're combining the lists! In-Reply-To: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: <20181119000352.lgrg57kyjylbrmx6@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this was sent) are being replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list[0] is open for posts from subscribers starting now, and the old lists will be configured to no longer accept posts starting on Monday December 3. In the interim, posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them now and not miss any messages. See my previous notice[1] for details. As of the time of this announcement, we have 280 subscribers on openstack-discuss with three weeks to go before the old lists are closed down for good). At the recommendation of David Medberry at the OpenStack Summit last week, this reminder is being sent individually to each of the old lists (not as a cross-post), and without any topic tag in case either might be resulting in subscribers missing it. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From yjf1970231893 at gmail.com Mon Nov 19 01:45:21 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Mon, 19 Nov 2018 09:45:21 +0800 Subject: [openstack-dev] [octavia] Please give me some comments about my patchs Message-ID: Hi, Octavia Team: There are two patches I committed: https://review.openstack.org/#/c/590620/ https://review.openstack.org/#/c/594040/ The first implement l7policy and l7rule's quota management. The second provides some restrictions about the protocol when listener is associated with pool. I think these functions are useful for users. I hope to receive some suggestions from you. Thinks. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Nov 19 08:17:34 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 19 Nov 2018 09:17:34 +0100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches Message-ID: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Hello, We've been talking for a while about the deprecation and removal of the stable/newton branches. I think it's time now that we get rid of them, we have no open patches and Newton is considered EOL. Could cores get back with a quick feedback and then the stable team can get rid of those whenever they have time. Best regards Tobias From melwittt at gmail.com Mon Nov 19 09:17:08 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 19 Nov 2018 10:17:08 +0100 Subject: [openstack-dev] [nova] Stein forum session notes Message-ID: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> Hey all, Here's some notes I took in forum sessions I attended -- feel free to add notes on sessions I missed. Etherpad links: https://wiki.openstack.org/wiki/Forum/Berlin2018 Cheers, -melanie TUE --- Cells v2 updates ================ - Went over the etherpad, no objections to anything - Not directly related to the session, but CERN (hallway track) and NeCTAR (dev ML) have both given feedback and asked that the policy-driven idea for handling quota for down cells be avoided. Revived the "propose counting quota in placement" spec to see if there's any way forward here Getting users involved in the project ===================================== - Disconnect between SIGs/WGs and project teams - Too steep a first step to get involved by subscribing to ML - People confused about how to participate Community outreach when culture, time zones, and language differ ================================================================ - Most discussion around how to synchronize real-time communication considering different time zones - Best to emphasize asynchronous communication. Discussion on ML and gerrit reviews - Helpful to create weekly meeting agenda in advance so contributors from other time zones can add notes/response to discussion items WED --- NFV/HPC pain points =================== Top issues for immediate action: NUMA-aware live migration (spec just needs re-approval), improved scheduler logging (resurrect cfriesen's patch and clean it up), distant third is SRIOV live migration BFV improvements ================ - Went over the etherpad, no major objections to anything - Agree: we should expose boot_index from the attachments API - Unclear what to do about post-create delete_on_termination. Being able to specify it for attach sounds reasonable, but is it enough for those asking? Or would it end up serving no one? Better expose what we produce ============================= - Project teams should propose patches to openstack/openstack-map to improve their project pages - Would be ideal if project pages included a longer paragraph explaining the project, have a diagram, list SIGs/WGs related to the project, etc Blazar reservations to new resource types ========================================= - For nova compute hosts, reservations are done by putting reserved hosts into "blazar" host aggregate and then a special scheduler filter is used to exclude those hosts from scheduling. But how to extend that concept to other projects? 
- Note: the nova approach will change from scheduler filter => placement request filter Edge use cases and requirements =============================== - Showed the reference architectures again - Most popular use case was "Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X" with seven +1s on the etherpad Deletion of project and project resources ========================================= - What is wanted: a delete API per service that takes a project_id and force deletes all resources owned by it with --dry-run component - Challenge to work out the dependencies for the order of deletion of all resources in all projects. Disable project, then delete things in order of dependency - Idea: turn os-purge into a REST API and each project implement a plugin for it Getting operators' bug fixes upstreamed ======================================= - Problem: operator reports a bug and provides a solution, for example, pastes a diff in launchpad or otherwise describes how to fix the bug. How can we increase the chances of those fixes making it to gerrit? - Concern: are there legal issues with accepting patches pasted into launchpad by someone who hasn't signed the ICLA? - Possible actions: create a best practices guide tailored for operators and socialize it among the ops docs/meetup/midcycle group. Example: guidance on how to indicate you don't have time to add test coverage, etc when you propose a patch THU --- Bug triage: why not all the community? ====================================== - Cruft and mixing tasks with defect reports makes triage more difficult to manage. Example: difference between a defect reported by a user vs an effective TODO added by a developer. If New bugs were reliably from end users, would we be more likely to triage? - Bug deputy weekly ML reporting could help - Action: copy the generic portion of the nova bug triage wiki doc into the contributor guide docs. The idea/hope being that easy-to-understand instructions available to the wider community might increase the chances of people outside of the project team being capable of triaging bugs, so all of it doesn't fall on project teams - Idea: should we remove the bug supervisor requirement from nova to allow people who haven't joined the bug team to set Status and Importance? Current state of volume encryption ================================== - Feedback: public clouds can't offer encryption because keys are stored in the cloud. Telcos are required to make sure admin can't access secrets. Action: SecuStack has a PoC for E2E key transfer, mnaser to help see what could be upstreamed - Features needed: ability for users to provide keys or use customer barbican or other key store. Thread: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136258.html Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) ======================================================================== - Took down the structure of how leadership positions work in each project on the etherpad, look at differences - StarlingX taking a new approach for upstreaming, New strategy: align with master, analyze what they need, and address the gaps (as opposed to pushing all the deltas up). Bug fixes still need to be brought forward, that won't change Concurrency limits for service instance creation ================================================ - Looking for ways to test and detect changes in performance as a community. 
Not straightforward because test hardware must stay consistent in order to detect performance deltas, release to release. Infra can't provide such an environment - Idea: it could help to write up a doc per project with a list of the usual tunables and basic info about how to use them Change of ownership of resources ================================ - Ignore the network piece for now, it's the most complicated. Being able to transfer everything else would solve 90% of City Network's use cases - Some ideas around having this be a keystone auth-based access granting instead of an update of project/user, but if keystone could hand user A a token for user B, that token would apply to all resources of user B's, not just the ones desired for transfer Update on placement extraction from nova ======================================== - Upgrade step additions from integrated placement to extracted placement in TripleO and OpenStackAnsible are being worked on now - Reshaper patches for libvirt and xenapi drivers are up for review - Lab test for vGPU upgrade and reshape + new schedule for libvirt driver patch has been done already - FFU script work needs an owner. Will need to query libvirtd to get mdevs and use PlacementDirect to populate placement Python bindings for the placement API ===================================== - Placement client code replicated in different projects: nova, blazar, neutron, cyborg. Want to commonize into python bindings lib - Consensus was that the placement bindings should go into openstacksdk and then projects will consume it from there T series community goal discussion ================================== - Most popular goal ideas: Finish moving legacy python-*client CLIs to python-openstackclient, Deletion of project resources as discussed in forum session earlier in the week, ensure all projects use ServiceTokens when calling one another with incoming token From jaosorior at redhat.com Mon Nov 19 10:15:13 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Mon, 19 Nov 2018 12:15:13 +0200 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: <756fbdc1-124c-7540-c769-badd57859988@redhat.com> +1 on making him tripleo-ci core. Great work! On 11/15/18 5:50 PM, Sagi Shnaidman wrote: > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO > and TripleO CI. He also helps in other projects including but not > limited to Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd > like suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) > > Thanks > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gcerami at redhat.com Mon Nov 19 11:44:29 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Mon, 19 Nov 2018 11:44:29 +0000 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: <20181119114429.6qoeir3bquor74tt@localhost> On 15 Nov, Sagi Shnaidman wrote: > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. > Quique is actively involved in improvements and development of TripleO and > TripleO CI. He also helps in other projects including but not limited to > Infrastructure. It'll be grand. +1 From amotoki at gmail.com Mon Nov 19 12:19:21 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 19 Nov 2018 21:19:21 +0900 Subject: [openstack-dev] [horizon] how to get horizon related logs to syslog? In-Reply-To: References: Message-ID: Hi, Horizon logging depends on Django configuration which supports full python logging [1]. [1] does not provide enough examples. Perhaps [2] will help you (though I haven't tested it). [1] https://docs.djangoproject.com/en/2.0/topics/logging/ [2] https://www.simonkrueger.com/2015/05/27/logging-django-apps-to-syslog.html Thanks, Akihiro 2018年11月16日(金) 18:47 Tikkanen, Viktor (Nokia - FI/Espoo) < viktor.tikkanen at nokia.com>: > Hi! > > For most openstack parts (e.g. Aodh, Cinder, Glance, Heat, Ironic, > Keystone, Neutron, Nova) it was easy to get logs to syslog. > > For example for Heat something similar to this (into file > templates/heat.conf.j2): > > # Syslog usage > {% if heat_syslog_enabled %} > use_syslog = True > syslog_log_facility = LOG_LOCAL3 > {% else %} > log_file = /var/log/heat/heat.log > {% endif %} > > But how to get Horizon related logs to syslog? > > -V. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Nov 19 12:42:30 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 19 Nov 2018 07:42:30 -0500 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: +1 for me, I haven't seen much interest in keeping these branches for puppet modules. I also would like to hear from our users though. On Mon, Nov 19, 2018 at 3:18 AM Tobias Urdin wrote: > Hello, > > We've been talking for a while about the deprecation and removal of the > stable/newton branches. > I think it's time now that we get rid of them, we have no open patches > and Newton is considered EOL. > > Could cores get back with a quick feedback and then the stable team can > get rid of those whenever they have time. > > Best regards > Tobias > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Mon Nov 19 12:48:29 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 19 Nov 2018 07:48:29 -0500 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: With my core hat, I would give it a +1. At this point, it has no open patches and the last one we merged was 7 months ago. https://review.openstack.org/#/q/projects:openstack/puppet-+-project:openstack/puppet-tripleo+branch:stable/newton+is:open https://review.openstack.org/#/q/projects:openstack/puppet-+-project:openstack/puppet-tripleo+branch:stable/newton+is:merged I can't speak about it as an operator as we don't run anything that old. On Mon, Nov 19, 2018 at 7:43 AM Emilien Macchi wrote: > +1 for me, I haven't seen much interest in keeping these branches for > puppet modules. > I also would like to hear from our users though. > > On Mon, Nov 19, 2018 at 3:18 AM Tobias Urdin > wrote: > >> Hello, >> >> We've been talking for a while about the deprecation and removal of the >> stable/newton branches. >> I think it's time now that we get rid of them, we have no open patches >> and Newton is considered EOL. >> >> Could cores get back with a quick feedback and then the stable team can >> get rid of those whenever they have time. >> >> Best regards >> Tobias >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Nov 19 12:55:30 2018 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 19 Nov 2018 13:55:30 +0100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: +1 for me Em seg, 19 de nov de 2018 às 13:51, Mohammed Naser escreveu: > With my core hat, I would give it a +1. At this point, it has no open > patches and the last one we merged was 7 months ago. > > > https://review.openstack.org/#/q/projects:openstack/puppet-+-project:openstack/puppet-tripleo+branch:stable/newton+is:open > > https://review.openstack.org/#/q/projects:openstack/puppet-+-project:openstack/puppet-tripleo+branch:stable/newton+is:merged > > I can't speak about it as an operator as we don't run anything that old. > > On Mon, Nov 19, 2018 at 7:43 AM Emilien Macchi wrote: > >> +1 for me, I haven't seen much interest in keeping these branches for >> puppet modules. >> I also would like to hear from our users though. >> >> On Mon, Nov 19, 2018 at 3:18 AM Tobias Urdin >> wrote: >> >>> Hello, >>> >>> We've been talking for a while about the deprecation and removal of the >>> stable/newton branches. 
>>> I think it's time now that we get rid of them, we have no open patches >>> and Newton is considered EOL. >>> >>> Could cores get back with a quick feedback and then the stable team can >>> get rid of those whenever they have time. >>> >>> Best regards >>> Tobias >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Mon Nov 19 12:58:31 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 19 Nov 2018 20:58:31 +0800 Subject: [openstack-dev] [glance] about use shared image with each other Message-ID: Hi,all Recently, I want to use the shared image with each other.I find it isn't convenient that the producer notifies the consumer via email which the image has been shared and what its UUID is. In other words, why the image api v2 is no provision for producer-consumer communication? To make it is more convenient, if we can add a task to change the member_status from "pending" to "accepted" when we share the image with each other. It is similar to the resize_confirm in Nova, we can control the time interval in config. Can you tell me more about this?Thank you very much! Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Nov 19 13:15:39 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 19 Nov 2018 07:15:39 -0600 Subject: [openstack-dev] [nova] Can we deprecate the server backup API please? In-Reply-To: References: <7778dd25-53f8-ca5c-55c5-0c0e6db29be4@gmail.com> Message-ID: On 11/18/2018 6:51 AM, Alex Xu wrote: > Sounds make sense to me, and then we needn't fix this strange behaviour > also https://review.openstack.org/#/c/409644/ The same discussion was had in the spec for that change: https://review.openstack.org/#/c/511825/ Ultimately it amounted to a big "meh, let's just not fix the bug but also no one really cares about deprecating the API either". The only thing deprecating the API would do is signal that it probably shouldn't be used. We would still support it on older microversions. 
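For anyone who mainly wants the rotation behaviour without the API, the "simple cronjob executor" idea mentioned earlier in this thread can be approximated from the client side. A rough sketch (server name, image prefix and rotation count are made up for the example):

    #!/bin/bash
    # Snapshot a server and keep only the last ROTATION images, roughly what
    # the createBackup action does server-side. Names below are illustrative.
    SERVER=my-server
    PREFIX=${SERVER}-backup
    ROTATION=7

    # Take a new snapshot with a timestamp so the names sort chronologically.
    openstack server image create --name "${PREFIX}-$(date +%Y%m%d%H%M%S)" "${SERVER}"

    # Drop the oldest snapshots beyond the rotation count.
    openstack image list -f value -c Name \
      | grep "^${PREFIX}-" | sort | head -n -"${ROTATION}" \
      | xargs --no-run-if-empty -n1 openstack image delete

Run from cron, that covers the common case; it obviously will not capture anything fancier an external backup service might want to do.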
If all anyone cares about is signalling not to use the API then deprecation is probably fine, but I personally don't feel too strongly about it either way. -- Thanks, Matt From mriedemos at gmail.com Mon Nov 19 13:38:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 19 Nov 2018 07:38:46 -0600 Subject: [openstack-dev] [nova] Stein forum session notes In-Reply-To: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> References: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> Message-ID: <6061a8e5-ad61-2075-223c-b540a911ebe1@gmail.com> On 11/19/2018 3:17 AM, melanie witt wrote: > - Not directly related to the session, but CERN (hallway track) and > NeCTAR (dev ML) have both given feedback and asked that the > policy-driven idea for handling quota for down cells be avoided. Revived > the "propose counting quota in placement" spec to see if there's any way > forward here Should this be abandoned then? https://review.openstack.org/#/c/614783/ Since there is no microversion impact to that change, it could be added separately as a bug fix for the down cell case if other operators want that functionality. But maybe we don't know what other operators want since no one else is at multi-cell cells v2 yet. -- Thanks, Matt From rosmaita.fossdev at gmail.com Mon Nov 19 14:26:00 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 19 Nov 2018 09:26:00 -0500 Subject: [openstack-dev] [glance] about use shared image with each other In-Reply-To: References: Message-ID: <6d53c9e9-98f6-d594-bbfb-837c74d5fbae@gmail.com> On 11/19/18 7:58 AM, Rambo wrote: > Hi,all > >      Recently, I want to use the shared image with each other.I find it > isn't convenient that the producer notifies the consumer via email which > the image has been shared and what its UUID is. In other words, why the > image api v2 is no provision for producer-consumer communication? The design goal for Image API v2 image sharing was to provide an infrastructure for an "image marketplace" in an OpenStack cloud by (a) making it easy for cloud end users to share images, and (b) making it easy for end users not to be spammed by other end users taking advantage of (a). When v2 image sharing was introduced in the Grizzly release, we did not want to dictate how producer-consumer communication would work (because we had no idea how it would develop), so we left it up to operators and end users to figure this out. The advantage of email communication is that client side message filtering is available for whatever client a particular cloud end-user employs, and presumably that end-user knows how to manipulate the filters without learning some new scheme (or, if the end-user doesn't know, learning how to filter messages will apply beyond just image sharing, which is a plus). Also, email communication is just one way to handle producer-consumer communication. Some operators have adjusted their web interfaces so that when an end-user looks at the list of images available, a notification pops up if the end-user has any images that have been shared with them and are still in "pending" status. There are various other creative things you can do using the normal API calls with regular user credentials. In brief, we figured that if an image marketplace evolved in a particular cloud, producers and consumers would forge their own relationships in whatever way made the most sense for their particular use cases. So we left producer-consumer communication out-of-band. 
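For readers who have not used it, the in-band part of the workflow is small; only the notification is left out-of-band. A minimal sketch with the v2 CLI (the IDs are placeholders):

    # Producer: add the consumer project as a member of the image.
    glance member-create $IMAGE_ID $CONSUMER_PROJECT_ID

    # Consumer: after being told (by whatever channel) which image was shared,
    # accept it so that it shows up in the default image list.
    glance member-update $IMAGE_ID $CONSUMER_PROJECT_ID accepted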
>       To make it is more convenient,  if we can add a task to change the > member_status from "pending" to "accepted" when we share the image with > each other. It is similar to the resize_confirm in Nova, we can control > the time interval in config. You could do this, but that would defeat the entire purpose of the member statuses implementation, and hence I do not recommend it. See OSSN-0005 [1] for more about this issue. Additionally, since the Ocata release, "community" images have been available. These do not have to be accepted by an end user (but they also don't show up in the default image-list response). Who can "communitize" an image is governed by policy. See [2] for a discussion of the various types of image sharing currently available in the Image API v2. The Image Service API v2 api-ref [3] contains a brief discussion of image visibility and image sharing that may also be useful. Finally, the Glance Ocata release notes [4] have an extensive discussion of image visibility. >        Can you tell me more about this?Thank you very much! The original design page on the wiki [5] has a list of 14 use cases we wanted to address; looking through those will give you a better idea of why we made the design choices we did. Hope this helps! cheers, brian [0] http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html [1] https://wiki.openstack.org/wiki/OSSN/1226078 [2] http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html [3] https://developer.openstack.org/api-ref/image/v2/ [4] https://docs.openstack.org/releasenotes/glance/ocata.html [5] https://wiki.openstack.org/wiki/Glance-api-v2-image-sharing > > Best Regards > Rambo > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dangtrinhnt at gmail.com Mon Nov 19 14:39:57 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 19 Nov 2018 23:39:57 +0900 Subject: [openstack-dev] [Searchlight] New team meeting time Message-ID: Hello team, In order to welcome new contributors to the Searchlight project, we decided to change the meeting time [1]: - Time: 13:30 UCT - Meeting Channel: #openstack-searchlight - Meeting recurrence: bi-weekly starting from 19th Nov. 2018 [1] http://eavesdrop.openstack.org/irclogs/%23openstack-searchlight/latest.log.html Hope that there will be new blood into the project :D -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Nov 19 14:44:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Nov 2018 09:44:59 -0500 Subject: [openstack-dev] [horizon] how to get horizon related logs to syslog? In-Reply-To: References: Message-ID: Akihiro Motoki writes: > Hi, > > Horizon logging depends on Django configuration which supports full python > logging [1]. > [1] does not provide enough examples. > Perhaps [2] will help you (though I haven't tested it). Based on https://docs.djangoproject.com/en/2.1/topics/logging/ it looks like it should be possible to have Horizon's settings module disable Django's logging configuration and call oslo.log to do that configuration. I wonder if it would make sense to do that, for consistency with the other services? 
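In the meantime, a minimal sketch of what routing Horizon's logging to syslog can look like today in local_settings.py, using the stock Django/Python logging machinery (the facility, levels and logger names below are only examples):

    # Sketch for openstack_dashboard/local/local_settings.py: send the Django
    # and Horizon loggers to syslog via the standard SysLogHandler.
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'syslog': {
                'class': 'logging.handlers.SysLogHandler',
                'address': '/dev/log',
                'facility': 'local3',
            },
        },
        'loggers': {
            'django': {'handlers': ['syslog'], 'level': 'INFO'},
            'horizon': {'handlers': ['syslog'], 'level': 'INFO'},
            'openstack_dashboard': {'handlers': ['syslog'], 'level': 'INFO'},
        },
    }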
Doug > > [1] https://docs.djangoproject.com/en/2.0/topics/logging/ > [2] > https://www.simonkrueger.com/2015/05/27/logging-django-apps-to-syslog.html > > Thanks, > Akihiro > > 2018年11月16日(金) 18:47 Tikkanen, Viktor (Nokia - FI/Espoo) < > viktor.tikkanen at nokia.com>: > >> Hi! >> >> For most openstack parts (e.g. Aodh, Cinder, Glance, Heat, Ironic, >> Keystone, Neutron, Nova) it was easy to get logs to syslog. >> >> For example for Heat something similar to this (into file >> templates/heat.conf.j2): >> >> # Syslog usage >> {% if heat_syslog_enabled %} >> use_syslog = True >> syslog_log_facility = LOG_LOCAL3 >> {% else %} >> log_file = /var/log/heat/heat.log >> {% endif %} >> >> But how to get Horizon related logs to syslog? >> >> -V. >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Doug From dangtrinhnt at gmail.com Mon Nov 19 15:09:54 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 20 Nov 2018 00:09:54 +0900 Subject: [openstack-dev] [Searchlight] Weekly report, Stein R-21 Message-ID: Hello team, Here is the report for last week, Stein R-21. Please have a look: https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-21.html Thanks, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Nov 19 15:11:43 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 19 Nov 2018 08:11:43 -0700 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: On Mon, Nov 19, 2018 at 1:18 AM Tobias Urdin wrote: > > Hello, > > We've been talking for a while about the deprecation and removal of the > stable/newton branches. > I think it's time now that we get rid of them, we have no open patches > and Newton is considered EOL. > > Could cores get back with a quick feedback and then the stable team can > get rid of those whenever they have time. > yes please. 
let's EOL them > Best regards > Tobias > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From surya.seetharaman9 at gmail.com Mon Nov 19 16:19:22 2018 From: surya.seetharaman9 at gmail.com (Surya Seetharaman) Date: Mon, 19 Nov 2018 17:19:22 +0100 Subject: [openstack-dev] [nova] Stein forum session notes In-Reply-To: <6061a8e5-ad61-2075-223c-b540a911ebe1@gmail.com> References: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> <6061a8e5-ad61-2075-223c-b540a911ebe1@gmail.com> Message-ID: On Mon, Nov 19, 2018 at 2:39 PM Matt Riedemann wrote: > On 11/19/2018 3:17 AM, melanie witt wrote: > > - Not directly related to the session, but CERN (hallway track) and > > NeCTAR (dev ML) have both given feedback and asked that the > > policy-driven idea for handling quota for down cells be avoided. Revived > > the "propose counting quota in placement" spec to see if there's any > way > > forward here > > Should this be abandoned then? > > https://review.openstack.org/#/c/614783/ > > Since there is no microversion impact to that change, it could be added > separately as a bug fix for the down cell case if other operators want > that functionality. But maybe we don't know what other operators want > since no one else is at multi-cell cells v2 yet. > > I thought the policy check was needed until the "propose counting quota in placement" has been implemented as a workaround and that is what the "handling down cell" spec also proposed, unless the former spec would be implemented within this cycle in which case we do not need the policy check. -- Regards, Surya. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Nov 19 18:10:35 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Nov 2018 18:10:35 +0000 Subject: [openstack-dev] [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> References: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> Message-ID: <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> adding openstack dev. On Mon, 2018-11-19 at 18:08 +0000, Sean Mooney wrote: > On Mon, 2018-11-19 at 11:57 -0500, Ken D'Ambrosio wrote: > > On 2018-11-19 11:25, Yedhu Sastri wrote: > > > Hello All, > > > > > > I have some use-cases which I want to test in PowerPC architecture(ppc64). As I dont have any Power machines I > > > would > > > like to try it with ppc64 VM's. Is it possible to run these kind of VM's on my OpenStack cluster(Queens) which > > > runs > > > on X86_64 architecture nodes(OS RHEL 7)?? > > > > > > > I'm not 100% sure, but I'm 95% sure that the answer to your question is "No." While there's much emulation that > > occurs, the CPU isn't so much emulated, but more abstracted. Constructing and running a modern CPU in software > > would > > be non-trivial. > > you can do this but not with kvm. > you need to revert the virt dirver in the nova.conf back to qemu to disable kvm to allow emulation of other > plathforms. > that said i have only emulated power on x86_64 using libvirt or qemu idrectly never with openstack but i belive we > used > to support this i just hav enot done it personally. 
> > > > -Ken > > > > > I set the image property architecture=ppc64 to the ppc64 image I uploaded to glance but no success in launching VM > > > with those images. I am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think it is not built to > > > support power architecture. For testing without OpenStack I manually built qemu on a x86_64 host with ppc64 > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont know how to do this on my OpenStack > > > cluster. > > > Whether I need to manually build qemu on compute nodes with ppc64 support or I need to add some lines in my > > > nova.conf to do this?? Any help to solve this issue would be much appreciated. > > > > > > > > > -- > > > Thank you for your time and have a nice day, > > > > > > With kind regards, > > > Yedhu Sastri > > > > > > _______________________________________________ > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > _______________________________________________ > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From shardy at redhat.com Mon Nov 19 19:07:12 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 19 Nov 2018 19:07:12 +0000 Subject: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO In-Reply-To: References: Message-ID: On Thu, Nov 15, 2018 at 3:54 PM Sagi Shnaidman wrote: > > Hi, > I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique is actively involved in improvements and development of TripleO and TripleO CI. He also helps in other projects including but not limited to Infrastructure. > He shows a very good understanding how TripleO and CI works and I'd like suggest him as core reviewer of TripleO in CI related code. > > Please vote! > My +1 is here :) +1! From rfolco at redhat.com Mon Nov 19 19:25:43 2018 From: rfolco at redhat.com (Rafael Folco) Date: Mon, 19 Nov 2018 17:25:43 -0200 Subject: [openstack-dev] [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> References: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> Message-ID: (not sure if my answer has been sent, sorry if duplicate) I don't touch ppc for a while but AFAIK this should work as long as you run full emulation (qemu, not kvm) as libvirt_type in nova.conf and get the qemu-system-ppc64le installed in the compute node. Assume also you get the ppc64le image to launch your instance. Expect poor performance though. On Mon, Nov 19, 2018 at 4:12 PM Sean Mooney wrote: > adding openstack dev. > On Mon, 2018-11-19 at 18:08 +0000, Sean Mooney wrote: > > On Mon, 2018-11-19 at 11:57 -0500, Ken D'Ambrosio wrote: > > > On 2018-11-19 11:25, Yedhu Sastri wrote: > > > > Hello All, > > > > > > > > I have some use-cases which I want to test in PowerPC > architecture(ppc64). As I dont have any Power machines I > > > > would > > > > like to try it with ppc64 VM's. Is it possible to run these kind of > VM's on my OpenStack cluster(Queens) which > > > > runs > > > > on X86_64 architecture nodes(OS RHEL 7)?? 
> > > > > > > > > > I'm not 100% sure, but I'm 95% sure that the answer to your question > is "No." While there's much emulation that > > > occurs, the CPU isn't so much emulated, but more abstracted. > Constructing and running a modern CPU in software > > > would > > > be non-trivial. > > > > you can do this but not with kvm. > > you need to revert the virt dirver in the nova.conf back to qemu to > disable kvm to allow emulation of other > > plathforms. > > that said i have only emulated power on x86_64 using libvirt or qemu > idrectly never with openstack but i belive we > > used > > to support this i just hav enot done it personally. > > > > > > -Ken > > > > > > > I set the image property architecture=ppc64 to the ppc64 image I > uploaded to glance but no success in launching VM > > > > with those images. I am using KVM as hypervisor(qemu 2.10.0) in my > compute nodes and I think it is not built to > > > > support power architecture. For testing without OpenStack I manually > built qemu on a x86_64 host with ppc64 > > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I > dont know how to do this on my OpenStack > > > > cluster. > > > > Whether I need to manually build qemu on compute nodes with ppc64 > support or I need to add some lines in my > > > > nova.conf to do this?? Any help to solve this issue would be much > appreciated. > > > > > > > > > > > > -- > > > > Thank you for your time and have a nice day, > > > > > > > > With kind regards, > > > > Yedhu Sastri > > > > > > > > _______________________________________________ > > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > _______________________________________________ > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Rafael Folco Senior Software Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Nov 19 19:56:08 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Nov 2018 19:56:08 +0000 Subject: [openstack-dev] [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: References: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> Message-ID: <6e4346f08e047edf5d0d1a5722943b9fb5d0f477.camel@redhat.com> On Mon, 2018-11-19 at 17:25 -0200, Rafael Folco wrote: > (not sure if my answer has been sent, sorry if duplicate) > I don't touch ppc for a while but AFAIK this should work as long as you run full emulation (qemu, not kvm) as > libvirt_type in nova.conf and get the qemu-system-ppc64le installed in the compute node. Assume also you get the > ppc64le image to launch your instance. Expect poor performance though. that is my understanding also. 
the perfromace is accepable but not great if you can enabled the multithread tcg backend instead of the default singel tread tcg backend that is used when in qemu only mode but it still wont match kvm accleration or power on power performance. i dont think we have a way to enabel the multi thread accleration for qemu only backend via nova unfortunetly it would be triavial to add but no one has asked as far as i am aware. > > On Mon, Nov 19, 2018 at 4:12 PM Sean Mooney wrote: > > adding openstack dev. > > On Mon, 2018-11-19 at 18:08 +0000, Sean Mooney wrote: > > > On Mon, 2018-11-19 at 11:57 -0500, Ken D'Ambrosio wrote: > > > > On 2018-11-19 11:25, Yedhu Sastri wrote: > > > > > Hello All, > > > > > > > > > > I have some use-cases which I want to test in PowerPC architecture(ppc64). As I dont have any Power machines I > > > > > would > > > > > like to try it with ppc64 VM's. Is it possible to run these kind of VM's on my OpenStack cluster(Queens) which > > > > > runs > > > > > on X86_64 architecture nodes(OS RHEL 7)?? > > > > > > > > > > > > > I'm not 100% sure, but I'm 95% sure that the answer to your question is "No." While there's much emulation that > > > > occurs, the CPU isn't so much emulated, but more abstracted. Constructing and running a modern CPU in software > > > > would > > > > be non-trivial. > > > > > > you can do this but not with kvm. > > > you need to revert the virt dirver in the nova.conf back to qemu to disable kvm to allow emulation of other > > > plathforms. > > > that said i have only emulated power on x86_64 using libvirt or qemu idrectly never with openstack but i belive we > > > used > > > to support this i just hav enot done it personally. > > > > > > > > -Ken > > > > > > > > > I set the image property architecture=ppc64 to the ppc64 image I uploaded to glance but no success in > > launching VM > > > > > with those images. I am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think it is not built > > to > > > > > support power architecture. For testing without OpenStack I manually built qemu on a x86_64 host with ppc64 > > > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont know how to do this on my OpenStack > > > > > cluster. > > > > > Whether I need to manually build qemu on compute nodes with ppc64 support or I need to add some lines in my > > > > > nova.conf to do this?? Any help to solve this issue would be much appreciated. 
> > > > > > > > > > > > > > > -- > > > > > Thank you for your time and have a nice day, > > > > > > > > > > With kind regards, > > > > > Yedhu Sastri > > > > > > > > > > _______________________________________________ > > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > Post to : openstack at lists.openstack.org > > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > > > _______________________________________________ > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From andrea.franceschini.rm at gmail.com Tue Nov 20 00:13:35 2018 From: andrea.franceschini.rm at gmail.com (Andrea Franceschini) Date: Tue, 20 Nov 2018 01:13:35 +0100 Subject: [openstack-dev] [omni] Has the Platform9's hybrid multicloud project been abandoned? Message-ID: Hello All, I'm trying to figure out if the project openstack/omni https://github.com/openstack/omni has been abandoned or superseded by another project, as it is no longer updated (at least not in the last year). Generally speaking, which is, if any, a project that aims at the same goal? In other words, which way should hybrid cloud be approached in Openstack? Do you have any idea/advice? Thanks a lot, Andrea From david.ames at canonical.com Tue Nov 20 00:30:27 2018 From: david.ames at canonical.com (David Ames) Date: Mon, 19 Nov 2018 16:30:27 -0800 Subject: [openstack-dev] [charms] 18.11 OpenStack Charms release Message-ID: Announcing the 18.11 release of the OpenStack Charms. The 18.11 charms have support for Nova cells v2 for deployments of Queens and later. The 18.11 release also has preview features including series upgrades and the Octavia load balancer charm. 47 bugs have been fixed and released across the OpenStack charms. For full details of the release, please refer to the release notes: https://docs.openstack.org/charm-guide/latest/1811.html Thanks go to the following contributors for this release: Alex Kavanagh Andrey Grebennikov Aymen Frikha Billy Olsen Chris MacNaughton Chris Sanders Corey Bryant David Ames Dmitrii Shcherbakov Drew Freiberger Edward Hope-Morley Felipe Reyes Frode Nordahl Fulvio Galeazzi James Page Liam Young Neiloy Mukerjee Pedro Guimarães Pete Vander Giessen Ryan Beisner Vladimir Grevtsev dongdong tao viswesuwara nathan From xuchao at chinacloud.com.cn Tue Nov 20 02:26:12 2018 From: xuchao at chinacloud.com.cn (xuchao at chinacloud.com.cn) Date: Tue, 20 Nov 2018 10:26:12 +0800 Subject: [openstack-dev] =?gb2312?b?W1Nlbmxpbl0gRG9lcyBTZW5saW4gc3VwcG9y?= =?gb2312?b?dCBQcm9tZXRoZXVzo78=?= Message-ID: <201811201022041324405@chinacloud.com.cn> Hi, Senlin, Ceilometer/Aodh is currently integrated, does it support Prometheus ? thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lijie at unitedstack.com Tue Nov 20 03:32:59 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 20 Nov 2018 11:32:59 +0800 Subject: [openstack-dev] [nova] about filter the flavor Message-ID: Hi,all I have an idea.Now we can't filter the special flavor according to the property.Can we achieve it?If we achieved this,we can filter the flavor according the property's key and value to filter the flavor. What do you think of the idea?Can you tell me more about this ?Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blake at platform9.com Tue Nov 20 06:08:39 2018 From: blake at platform9.com (Blake Covarrubias) Date: Mon, 19 Nov 2018 22:08:39 -0800 Subject: [openstack-dev] [omni] Has the Platform9's hybrid multicloud project been abandoned? In-Reply-To: References: Message-ID: Hi Andrea, Omni has not been abandoned by Platform9. We're still developing Omni internally, and are working to open source additional code as well as improve docs so that others may more easily test & contribute. With that said, I too would be interested in hearing how others are enabling, or looking to enable, hybrid cloud use cases with OpenStack. I'm not aware of any other projects with similar goals as Omni, however its possible I just haven't been looking in the right places. Regards, On Mon, Nov 19, 2018 at 7:40 PM Andrea Franceschini < andrea.franceschini.rm at gmail.com> wrote: > Hello All, > > I'm trying to figure out if the project openstack/omni > > https://github.com/openstack/omni > > has been abandoned or superseded by another project, as it is no > longer updated (at least not in the last year). > > Generally speaking, which is, if any, a project that aims at the same goal? > > In other words, which way should hybrid cloud be approached in Openstack? > > Do you have any idea/advice? > > Thanks a lot, > > Andrea > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Blake Covarrubias | Product Manager -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Tue Nov 20 08:33:00 2018 From: shardy at redhat.com (Steven Hardy) Date: Tue, 20 Nov 2018 08:33:00 +0000 Subject: [openstack-dev] [omni] Has the Platform9's hybrid multicloud project been abandoned? In-Reply-To: References: Message-ID: On Tue, Nov 20, 2018 at 6:13 AM Blake Covarrubias wrote: > > Hi Andrea, > > Omni has not been abandoned by Platform9. We're still developing Omni internally, and are working to open source additional code as well as improve docs so that others may more easily test & contribute. It sounds like you're approaching this using an internal design process, so you may like to check: https://governance.openstack.org/tc/reference/opens.html The most recent commit was over a year ago, so it's understandably confusing to have an apparently unmaintained project, hosted under the openstack github org, which doesn't actually follow our comunity design/development process? > With that said, I too would be interested in hearing how others are enabling, or looking to enable, hybrid cloud use cases with OpenStack. I'm not aware of any other projects with similar goals as Omni, however its possible I just haven't been looking in the right places. 
Having design discussions in the open, and some way for others in the community to understand the goals and roadmap of the project is really the first step in engaging folks for this sort of discussion IME. Steve From melwittt at gmail.com Tue Nov 20 11:28:03 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 20 Nov 2018 12:28:03 +0100 Subject: [openstack-dev] [nova] Stein forum session notes In-Reply-To: References: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> <6061a8e5-ad61-2075-223c-b540a911ebe1@gmail.com> Message-ID: <325ffcf6-b9a8-d7e4-fffd-4c9119c348bb@gmail.com> On Mon, 19 Nov 2018 17:19:22 +0100, Surya Seetharaman wrote: > > > On Mon, Nov 19, 2018 at 2:39 PM Matt Riedemann > wrote: > > On 11/19/2018 3:17 AM, melanie witt wrote: > > - Not directly related to the session, but CERN (hallway track) and > > NeCTAR (dev ML) have both given feedback and asked that the > > policy-driven idea for handling quota for down cells be avoided. > Revived > > the "propose counting quota in placement" spec to see if there's > any way > > forward here > > Should this be abandoned then? > > https://review.openstack.org/#/c/614783/ > > Since there is no microversion impact to that change, it could be added > separately as a bug fix for the down cell case if other operators want > that functionality. But maybe we don't know what other operators want > since no one else is at multi-cell cells v2 yet. > > > I thought the policy check was needed until the "propose counting quota > in placement" > has been implemented as a workaround and that is what the "handling down > cell" spec also proposed, > unless the former spec would be implemented within this cycle in which > case we do not need the > policy check. Right, I don't think that anyone _wants_ the policy check approach. That was just the workaround, last resort idea we had for dealing with down cells in the absence of being able to count quota usage from placement. The operators we've discussed with (CERN, NeCTAR, Oath) would like quota counting not to depend on cell databases, if possible. But they are understanding and will accept the policy-driven workaround if we can't move forward with counting quota usage from placement. If we can get agreement on the count quota usage from placement spec (I have updated it with new proposed details), then we should abandon the policy-driven behavior patch. I am eager to find out what everyone thinks of the latest proposal. Cheers, -melanie From duc.openstack at gmail.com Tue Nov 20 17:25:14 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Tue, 20 Nov 2018 09:25:14 -0800 Subject: [openstack-dev] =?utf-8?q?=5BSenlin=5D_Does_Senlin_support_Promet?= =?utf-8?b?aGV1c++8nw==?= In-Reply-To: <201811201022041324405@chinacloud.com.cn> References: <201811201022041324405@chinacloud.com.cn> Message-ID: Senlin integrates with Aodh using webhooks. As shown in [1], you create a receiver of type webhook and then create a Aodh alarm to call this webhook [2] on the desired condition. Prometheus AlertManager supports notifications to webhooks [3]. While I have not tried this myself, I believe that it should be as simple as setting up Prometheus AlertManager to notify the webhook created in Senlin (see [1]). 
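On the Alertmanager side, a rough sketch of what that wiring might look like (the URL is a placeholder for the alarm/channel URL reported by the Senlin receiver created in [1]; untested, as noted above):

    # alertmanager.yml (sketch): fire scaling alerts at a Senlin webhook receiver.
    route:
      receiver: 'senlin-scale-out'
    receivers:
      - name: 'senlin-scale-out'
        webhook_configs:
          - url: 'http://SENLIN_API_HOST:PORT/v1/webhooks/RECEIVER_ID/trigger?V=2'
            send_resolved: false

The pieces are all standard webhook plumbing on both ends, so no Senlin-side changes should be needed.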
Regards, Duc (dtruong) [1] https://docs.openstack.org/senlin/latest/scenarios/autoscaling_ceilometer.html#step-2-create-receivers [2] https://docs.openstack.org/senlin/latest/scenarios/autoscaling_ceilometer.html#step-3-creating-aodh-alarms [3] https://prometheus.io/docs/alerting/configuration/#webhook_config On Mon, Nov 19, 2018 at 6:33 PM xuchao at chinacloud.com.cn wrote: > > Hi, > > Senlin, Ceilometer/Aodh is currently integrated, does it support Prometheus ? > > > thanks > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From raniaadouni at gmail.com Tue Nov 20 22:01:58 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Tue, 20 Nov 2018 23:01:58 +0100 Subject: [openstack-dev] [zun] Failed to create Container Message-ID: Hi , I am starting to use zun on openstack pike , and when I try to lunch container with cirros image , I get this reason : Docker internal error: 500 Server Error: Internal Server Error ("legacy plugin: Post http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: connect: connection refused"). you can find here the log on my compute node :https://pastebin.com/nZTiTZiV. some help with this will be great . Best Regards, Rania Adouni -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Nov 20 22:07:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Nov 2018 16:07:14 -0600 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: References: Message-ID: On 11/19/2018 9:32 PM, Rambo wrote: >       I have an idea.Now we can't filter the special flavor according > to the property.Can we achieve it?If we achieved this,we can filter the > flavor according the property's key and value to filter the flavor. What > do you think of the idea?Can you tell me more about this ?Thank you very > much. To be clear, you want to filter flavors by extra spec key and/or value? So something like: GET /flavors?key=hw%3Acpu_policy would return all flavors with an extra spec with key "hw:cpu_policy". And: GET /flavors?key=hw%3Acpu_policy&value=dedicated would return all flavors with extra spec "hw:cpu_policy" with value "dedicated". The query parameter semantics are probably what gets messiest about this. Because I could see wanting to couple the key and value together, but I'm not sure how you do that, because I don't think you can do this: GET /flavors?spec=hw%3Acpu_policy=dedicated Maybe you'd do: GET /flavors?hw%3Acpu_policy=dedicated The problem with that is we wouldn't be able to perform any kind of request schema validation of it, especially since flavor extra specs are not standardized. -- Thanks, Matt From jburckhardt at pdvwireless.com Wed Nov 21 00:44:09 2018 From: jburckhardt at pdvwireless.com (Jacob Burckhardt) Date: Wed, 21 Nov 2018 00:44:09 +0000 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: Your error message says it cannot connect to port 23750. That's the kuryr-libnetwork default listen port. On 192.168.1.10, run: lsof -iTCP -sTCP:LISTEN -P On my compute node, it outputs: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME … kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP localhost:23750 (LISTEN) So, you can see that kuryr is listening on 23750. 
If lsof does not list a process listening on 23750, then check if kuryr is up by running: systemctl status kuryr-libnetwork You can also try restarting it by running: systemctl restart kuryr-libnetwork I installed that service by following: https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html That is the queens install guide which might help you since zun has no install guide in pike. If you see it listening, you can try this command: telnet 192.168.1.10 23750 If it fails to connect, then try: telnet localhost 23750 If it works with localhost but not 192.168.1.10, then I think that means you need to tell docker to connect to localhost instead of 192.168.1.10. Can you get the commands at the following web page to succeed? https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html Jacob Burckhardt Senior Software Engineer [pdvWireless] From: Rania Adouni Sent: Tuesday, November 20, 2018 2:02 PM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [zun] Failed to create Container [https://mailtrack.io/trace/mail/17bd40a2e38fcf449f8e80fdc1e7d9474dfbed19.png?u=2497697] Hi , I am starting to use zun on openstack pike , and when I try to lunch container with cirros image , I get this reason : Docker internal error: 500 Server Error: Internal Server Error ("legacy plugin: Post http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: connect: connection refused"). you can find here the log on my compute node :https://pastebin.com/nZTiTZiV. some help with this will be great . Best Regards, Rania Adouni -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2499 bytes Desc: image003.jpg URL: From raniaadouni at gmail.com Wed Nov 21 00:58:11 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Wed, 21 Nov 2018 01:58:11 +0100 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: hi , Thanks for your reply and yes the problem that kuryr-libnetwork is failed to start with the error : ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 CET; 287ms ago Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE) Main PID: 13974 (code=exited, status=1/FAILURE) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr value = self._do_get(name, group, namespace) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr info = self._get_opt_info(name, group) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr raise NoSuchOptError(opt_name, group) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr nov. 
21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main process exited, code=exited, status=1/FAILURE nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit entered failed state. nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed with result 'exit-code'. and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr service : COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dockerd 1144 root 5u IPv4 20120 0t0 TCP compute:2375 (LISTEN) sshd 1288 root 3u IPv4 19532 0t0 TCP *:22 (LISTEN) sshd 1288 root 4u IPv6 19534 0t0 TCP *:22 (LISTEN) dnsmasq 1961 nobody 6u IPv4 23450 0t0 TCP 192.168.122.1:53 (LISTEN) sshd 4163 adouni 9u IPv6 34950 0t0 TCP localhost:6010 (LISTEN) sshd 4163 adouni 10u IPv4 34951 0t0 TCP localhost:6010 (LISTEN) qemu-syst 4621 libvirt-qemu 18u IPv4 37121 0t0 TCP *:5900 (LISTEN) !!! Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt a écrit : > Your error message says it cannot connect to port 23750. That's the > kuryr-libnetwork default listen port. On 192.168.1.10, run: > > > > lsof -iTCP -sTCP:LISTEN -P > > > > On my compute node, it outputs: > > > > COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME > > … > > kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP > localhost:23750 (LISTEN) > > > > So, you can see that kuryr is listening on 23750. If lsof does not list a > process listening on 23750, then check if kuryr is up by running: > > > > systemctl status kuryr-libnetwork > > > > You can also try restarting it by running: > > > > systemctl restart kuryr-libnetwork > > > > I installed that service by following: > > > > > https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html > > > > That is the queens install guide which might help you since zun has no > install guide in pike. > > > > If you see it listening, you can try this command: > > > > telnet 192.168.1.10 23750 > > > > If it fails to connect, then try: > > > > telnet localhost 23750 > > > > If it works with localhost but not 192.168.1.10, then I think that means > you need to tell docker to connect to localhost instead of 192.168.1.10. > > > > Can you get the commands at the following web page to succeed? > > > > https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html > > > > *Jacob Burckhardt* > > Senior Software Engineer > > [image: pdvWireless] > > > > *From:* Rania Adouni > *Sent:* Tuesday, November 20, 2018 2:02 PM > *To:* openstack-dev at lists.openstack.org > *Subject:* [openstack-dev] [zun] Failed to create Container > > > > > > Hi , > > > > I am starting to use zun on openstack pike , and when I try to lunch > container with cirros image , I get this reason : Docker internal error: > 500 Server Error: Internal Server Error ("legacy plugin: Post > http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: > connect: connection refused"). > > you can find here the log on my compute node : > https://pastebin.com/nZTiTZiV. > > some help with this will be great . > > Best Regards, > > Rania Adouni > > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.jpg Type: image/jpeg Size: 2499 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2499 bytes Desc: not available URL: From jburckhardt at pdvwireless.com Wed Nov 21 01:20:19 2018 From: jburckhardt at pdvwireless.com (Jacob Burckhardt) Date: Wed, 21 Nov 2018 01:20:19 +0000 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: The important part of the error message seems to be: NoSuchOptError: no such option driver in group [binding] In my /etc/kuryr/kuryr.conf file, all the text is commented out in the [binding] section. Some of the commented options have "driver" in the name. Since they are commented, I guess they have defaults. I suggest commenting any option that mentions "driver" and then restart kuryr-libnetwork. -Jacob Burckhardt From: Rania Adouni Sent: Tuesday, November 20, 2018 4:58 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [zun] Failed to create Container hi ,  Thanks for your reply and yes the problem that kuryr-libnetwork  is failed to start with the error : ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron    Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: enabled)    Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 CET; 287ms ago   Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE)  Main PID: 13974 (code=exited, status=1/FAILURE) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr     value = self._do_get(name, group, namespace) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr     info = self._get_opt_info(name, group) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr     raise NoSuchOptError(opt_name, group) nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 13974 ERROR kuryr nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main process exited, code=exited, status=1/FAILURE nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit entered failed state. nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed with result 'exit-code'. 
and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr service : COMMAND    PID         USER   FD   TYPE DEVICE SIZE/OFF NODE NAME dockerd   1144         root    5u  IPv4  20120      0t0  TCP compute:2375 (LISTEN) sshd      1288         root    3u  IPv4  19532      0t0  TCP *:22 (LISTEN) sshd      1288         root    4u  IPv6  19534      0t0  TCP *:22 (LISTEN) dnsmasq   1961       nobody    6u  IPv4  23450      0t0  TCP http://192.168.122.1:53 (LISTEN) sshd      4163       adouni    9u  IPv6  34950      0t0  TCP localhost:6010 (LISTEN) sshd      4163       adouni   10u  IPv4  34951      0t0  TCP localhost:6010 (LISTEN) qemu-syst 4621 libvirt-qemu   18u  IPv4  37121      0t0  TCP *:5900 (LISTEN)   !!! Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt a écrit : Your error message says it cannot connect to port 23750.  That's the kuryr-libnetwork default listen port.  On 192.168.1.10, run:   lsof -iTCP -sTCP:LISTEN -P   On my compute node, it outputs:   COMMAND    PID        USER   FD   TYPE DEVICE SIZE/OFF NODE NAME … kuryr-ser 1976        root    6u  IPv4  30896      0t0  TCP localhost:23750 (LISTEN)   So, you can see that kuryr is listening on 23750.  If lsof does not list a process listening on 23750, then check if kuryr is up by running:   systemctl status kuryr-libnetwork   You can also try restarting it by running:   systemctl restart kuryr-libnetwork   I installed that service by following:   https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html   That is the queens install guide which might help you since zun has no install guide in pike.   If you see it listening, you can try this command:   telnet 192.168.1.10 23750   If it fails to connect, then try:   telnet localhost 23750   If it works with localhost but not 192.168.1.10, then I think that means you need to tell docker to connect to localhost instead of 192.168.1.10.   Can you get the commands at the following web page to succeed?   https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html   Jacob Burckhardt Senior Software Engineer   From: Rania Adouni Sent: Tuesday, November 20, 2018 2:02 PM To: mailto:openstack-dev at lists.openstack.org Subject: [openstack-dev] [zun] Failed to create Container     Hi ,   I am starting to use zun on openstack pike , and when I try to lunch container with cirros image , I get this reason : Docker internal error: 500 Server Error: Internal Server Error ("legacy plugin: Post http://192.168.1.10:23750/Plugin.Activate: dial tcp http://192.168.1.10:23750/: connect: connection refused").  you can find here the log on my compute node :https://pastebin.com/nZTiTZiV. some help with this will be great . Best Regards, Rania Adouni         __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hongbin034 at gmail.com Wed Nov 21 01:26:22 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 20 Nov 2018 20:26:22 -0500 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: Hi Rania, The config option 'driver' was recently renamed, so I guess the problem is the version of kuryr-lib is not right. The Pike release of Zun is matched to kuryr-lib 0.6.0 so you might want to confirm the version of kuryr-lib. 
$ sudo pip freeze | grep kuryr If the version is not right, uninstall and re-install the kuryr-lib, then restart kuryr-libnetwork. Let me know if it still doesn't work. Best regards, Hongbin On Tue, Nov 20, 2018 at 8:01 PM Rania Adouni wrote: > hi , > Thanks for your reply and yes the problem that kuryr-libnetwork is failed > to start with the error : > ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for > Neutron > Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; > vendor preset: enabled) > Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 CET; > 287ms ago > Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file > /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE) > Main PID: 13974 (code=exited, status=1/FAILURE) > > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr value = self._do_get(name, group, namespace) > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr File > "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr info = self._get_opt_info(name, group) > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr File > "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr raise NoSuchOptError(opt_name, group) > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] > nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 > 13974 ERROR kuryr > nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main > process exited, code=exited, status=1/FAILURE > nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit > entered failed state. > nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed with > result 'exit-code'. > > > and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr > service : > COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME > dockerd 1144 root 5u IPv4 20120 0t0 TCP compute:2375 > (LISTEN) > sshd 1288 root 3u IPv4 19532 0t0 TCP *:22 (LISTEN) > sshd 1288 root 4u IPv6 19534 0t0 TCP *:22 (LISTEN) > dnsmasq 1961 nobody 6u IPv4 23450 0t0 TCP > 192.168.122.1:53 (LISTEN) > sshd 4163 adouni 9u IPv6 34950 0t0 TCP > localhost:6010 (LISTEN) > sshd 4163 adouni 10u IPv4 34951 0t0 TCP > localhost:6010 (LISTEN) > qemu-syst 4621 libvirt-qemu 18u IPv4 37121 0t0 TCP *:5900 > (LISTEN) > > !!! > > > Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt < > jburckhardt at pdvwireless.com> a écrit : > >> Your error message says it cannot connect to port 23750. That's the >> kuryr-libnetwork default listen port. On 192.168.1.10, run: >> >> >> >> lsof -iTCP -sTCP:LISTEN -P >> >> >> >> On my compute node, it outputs: >> >> >> >> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >> >> … >> >> kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP >> localhost:23750 (LISTEN) >> >> >> >> So, you can see that kuryr is listening on 23750. 
If lsof does not list >> a process listening on 23750, then check if kuryr is up by running: >> >> >> >> systemctl status kuryr-libnetwork >> >> >> >> You can also try restarting it by running: >> >> >> >> systemctl restart kuryr-libnetwork >> >> >> >> I installed that service by following: >> >> >> >> >> https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html >> >> >> >> That is the queens install guide which might help you since zun has no >> install guide in pike. >> >> >> >> If you see it listening, you can try this command: >> >> >> >> telnet 192.168.1.10 23750 >> >> >> >> If it fails to connect, then try: >> >> >> >> telnet localhost 23750 >> >> >> >> If it works with localhost but not 192.168.1.10, then I think that means >> you need to tell docker to connect to localhost instead of 192.168.1.10. >> >> >> >> Can you get the commands at the following web page to succeed? >> >> >> >> https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html >> >> >> >> *Jacob Burckhardt* >> >> Senior Software Engineer >> >> [image: pdvWireless] >> >> >> >> *From:* Rania Adouni >> *Sent:* Tuesday, November 20, 2018 2:02 PM >> *To:* openstack-dev at lists.openstack.org >> *Subject:* [openstack-dev] [zun] Failed to create Container >> >> >> >> >> >> Hi , >> >> >> >> I am starting to use zun on openstack pike , and when I try to lunch >> container with cirros image , I get this reason : Docker internal error: >> 500 Server Error: Internal Server Error ("legacy plugin: Post >> http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: >> connect: connection refused"). >> >> you can find here the log on my compute node : >> https://pastebin.com/nZTiTZiV. >> >> some help with this will be great . >> >> Best Regards, >> >> Rania Adouni >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raniaadouni at gmail.com Wed Nov 21 01:34:35 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Wed, 21 Nov 2018 02:34:35 +0100 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: Hi Hongbin, Yes the version of kuryr-lib is :0.8.0 , So I think I should remove it ! I should remove only the kuryr-lib ! if you have some command they can help me to unisntall it , it will be nice to provide it . Best Regards, Rania Le mer. 21 nov. 2018 à 02:27, Hongbin Lu a écrit : > Hi Rania, > > The config option 'driver' was recently renamed, so I guess the problem is > the version of kuryr-lib is not right. The Pike release of Zun is matched > to kuryr-lib 0.6.0 so you might want to confirm the version of kuryr-lib. > > $ sudo pip freeze | grep kuryr > > If the version is not right, uninstall and re-install the kuryr-lib, then > restart kuryr-libnetwork. Let me know if it still doesn't work. 
> > Best regards, > Hongbin > > On Tue, Nov 20, 2018 at 8:01 PM Rania Adouni > wrote: > >> hi , >> Thanks for your reply and yes the problem that kuryr-libnetwork is >> failed to start with the error : >> ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for >> Neutron >> Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; >> vendor preset: enabled) >> Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 CET; >> 287ms ago >> Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file >> /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE) >> Main PID: 13974 (code=exited, status=1/FAILURE) >> >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr value = self._do_get(name, group, namespace) >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr File >> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr info = self._get_opt_info(name, group) >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr File >> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr raise NoSuchOptError(opt_name, group) >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] >> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >> 13974 ERROR kuryr >> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main >> process exited, code=exited, status=1/FAILURE >> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit >> entered failed state. >> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed >> with result 'exit-code'. >> >> >> and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr >> service : >> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >> dockerd 1144 root 5u IPv4 20120 0t0 TCP compute:2375 >> (LISTEN) >> sshd 1288 root 3u IPv4 19532 0t0 TCP *:22 (LISTEN) >> sshd 1288 root 4u IPv6 19534 0t0 TCP *:22 (LISTEN) >> dnsmasq 1961 nobody 6u IPv4 23450 0t0 TCP >> 192.168.122.1:53 (LISTEN) >> sshd 4163 adouni 9u IPv6 34950 0t0 TCP >> localhost:6010 (LISTEN) >> sshd 4163 adouni 10u IPv4 34951 0t0 TCP >> localhost:6010 (LISTEN) >> qemu-syst 4621 libvirt-qemu 18u IPv4 37121 0t0 TCP *:5900 >> (LISTEN) >> >> !!! >> >> >> Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt < >> jburckhardt at pdvwireless.com> a écrit : >> >>> Your error message says it cannot connect to port 23750. That's the >>> kuryr-libnetwork default listen port. On 192.168.1.10, run: >>> >>> >>> >>> lsof -iTCP -sTCP:LISTEN -P >>> >>> >>> >>> On my compute node, it outputs: >>> >>> >>> >>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>> >>> … >>> >>> kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP >>> localhost:23750 (LISTEN) >>> >>> >>> >>> So, you can see that kuryr is listening on 23750. 
If lsof does not list >>> a process listening on 23750, then check if kuryr is up by running: >>> >>> >>> >>> systemctl status kuryr-libnetwork >>> >>> >>> >>> You can also try restarting it by running: >>> >>> >>> >>> systemctl restart kuryr-libnetwork >>> >>> >>> >>> I installed that service by following: >>> >>> >>> >>> >>> https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html >>> >>> >>> >>> That is the queens install guide which might help you since zun has no >>> install guide in pike. >>> >>> >>> >>> If you see it listening, you can try this command: >>> >>> >>> >>> telnet 192.168.1.10 23750 >>> >>> >>> >>> If it fails to connect, then try: >>> >>> >>> >>> telnet localhost 23750 >>> >>> >>> >>> If it works with localhost but not 192.168.1.10, then I think that means >>> you need to tell docker to connect to localhost instead of 192.168.1.10. >>> >>> >>> >>> Can you get the commands at the following web page to succeed? >>> >>> >>> >>> https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html >>> >>> >>> >>> *Jacob Burckhardt* >>> >>> Senior Software Engineer >>> >>> [image: pdvWireless] >>> >>> >>> >>> *From:* Rania Adouni >>> *Sent:* Tuesday, November 20, 2018 2:02 PM >>> *To:* openstack-dev at lists.openstack.org >>> *Subject:* [openstack-dev] [zun] Failed to create Container >>> >>> >>> >>> >>> >>> Hi , >>> >>> >>> >>> I am starting to use zun on openstack pike , and when I try to lunch >>> container with cirros image , I get this reason : Docker internal >>> error: 500 Server Error: Internal Server Error ("legacy plugin: Post >>> http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: >>> connect: connection refused"). >>> >>> you can find here the log on my compute node : >>> https://pastebin.com/nZTiTZiV. >>> >>> some help with this will be great . >>> >>> Best Regards, >>> >>> Rania Adouni >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raniaadouni at gmail.com Wed Nov 21 01:55:18 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Wed, 21 Nov 2018 02:55:18 +0100 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: Hi Hongbin , thanks for your advice I did fix it with changing the kuryr-lib to 0.6.0 and my service just work fine and also with lsof -iTCP -sTCP:LISTEN -P I can see it in the list : *kuryr-ser 16581 root 5u IPv4 91929 0t0 TCP localhost:23750 (LISTEN)* ------------> that is great. 
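(For anyone else hitting the same NoSuchOptError: the fix boils down to the pip downgrade suggested earlier in this thread -- assuming kuryr-lib is managed by pip rather than a distro package -- roughly:
$ sudo pip uninstall kuryr-lib
$ sudo pip install kuryr-lib==0.6.0
$ sudo systemctl restart kuryr-libnetwork )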
+ but I have two questions : 1- How I can change the localhost to my compute address ( 192.168.1.10:23750) because it is still the same error when trying to create container " Docker internal error: 500 Server Error: Internal Server Error ("legacy plugin: Post http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: connect: connection refused"). ? 2- my other question is about the log of zun why I can find it under my directory /var/log ? Best Regards, Rania Adouni Le mer. 21 nov. 2018 à 02:34, Rania Adouni a écrit : > Hi Hongbin, > Yes the version of kuryr-lib is :0.8.0 , So I think I should remove it ! > I should remove only the kuryr-lib ! if you have some command they can > help me to unisntall it , it will be nice to provide it . > > Best Regards, > Rania > > > Le mer. 21 nov. 2018 à 02:27, Hongbin Lu a écrit : > >> Hi Rania, >> >> The config option 'driver' was recently renamed, so I guess the problem >> is the version of kuryr-lib is not right. The Pike release of Zun is >> matched to kuryr-lib 0.6.0 so you might want to confirm the version of >> kuryr-lib. >> >> $ sudo pip freeze | grep kuryr >> >> If the version is not right, uninstall and re-install the kuryr-lib, then >> restart kuryr-libnetwork. Let me know if it still doesn't work. >> >> Best regards, >> Hongbin >> >> On Tue, Nov 20, 2018 at 8:01 PM Rania Adouni >> wrote: >> >>> hi , >>> Thanks for your reply and yes the problem that kuryr-libnetwork is >>> failed to start with the error : >>> ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin >>> for Neutron >>> Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; >>> enabled; vendor preset: enabled) >>> Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 >>> CET; 287ms ago >>> Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file >>> /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE) >>> Main PID: 13974 (code=exited, status=1/FAILURE) >>> >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr value = self._do_get(name, group, namespace) >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr File >>> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr info = self._get_opt_info(name, group) >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr File >>> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr raise NoSuchOptError(opt_name, group) >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] >>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>> 13974 ERROR kuryr >>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main >>> process exited, code=exited, status=1/FAILURE >>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit >>> entered failed state. >>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed >>> with result 'exit-code'. 
>>> >>> >>> and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr >>> service : >>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>> dockerd 1144 root 5u IPv4 20120 0t0 TCP >>> compute:2375 (LISTEN) >>> sshd 1288 root 3u IPv4 19532 0t0 TCP *:22 >>> (LISTEN) >>> sshd 1288 root 4u IPv6 19534 0t0 TCP *:22 >>> (LISTEN) >>> dnsmasq 1961 nobody 6u IPv4 23450 0t0 TCP >>> 192.168.122.1:53 (LISTEN) >>> sshd 4163 adouni 9u IPv6 34950 0t0 TCP >>> localhost:6010 (LISTEN) >>> sshd 4163 adouni 10u IPv4 34951 0t0 TCP >>> localhost:6010 (LISTEN) >>> qemu-syst 4621 libvirt-qemu 18u IPv4 37121 0t0 TCP *:5900 >>> (LISTEN) >>> >>> !!! >>> >>> >>> Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt < >>> jburckhardt at pdvwireless.com> a écrit : >>> >>>> Your error message says it cannot connect to port 23750. That's the >>>> kuryr-libnetwork default listen port. On 192.168.1.10, run: >>>> >>>> >>>> >>>> lsof -iTCP -sTCP:LISTEN -P >>>> >>>> >>>> >>>> On my compute node, it outputs: >>>> >>>> >>>> >>>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>>> >>>> … >>>> >>>> kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP >>>> localhost:23750 (LISTEN) >>>> >>>> >>>> >>>> So, you can see that kuryr is listening on 23750. If lsof does not >>>> list a process listening on 23750, then check if kuryr is up by running: >>>> >>>> >>>> >>>> systemctl status kuryr-libnetwork >>>> >>>> >>>> >>>> You can also try restarting it by running: >>>> >>>> >>>> >>>> systemctl restart kuryr-libnetwork >>>> >>>> >>>> >>>> I installed that service by following: >>>> >>>> >>>> >>>> >>>> https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html >>>> >>>> >>>> >>>> That is the queens install guide which might help you since zun has no >>>> install guide in pike. >>>> >>>> >>>> >>>> If you see it listening, you can try this command: >>>> >>>> >>>> >>>> telnet 192.168.1.10 23750 >>>> >>>> >>>> >>>> If it fails to connect, then try: >>>> >>>> >>>> >>>> telnet localhost 23750 >>>> >>>> >>>> >>>> If it works with localhost but not 192.168.1.10, then I think that >>>> means you need to tell docker to connect to localhost instead of >>>> 192.168.1.10. >>>> >>>> >>>> >>>> Can you get the commands at the following web page to succeed? >>>> >>>> >>>> >>>> https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html >>>> >>>> >>>> >>>> *Jacob Burckhardt* >>>> >>>> Senior Software Engineer >>>> >>>> [image: pdvWireless] >>>> >>>> >>>> >>>> *From:* Rania Adouni >>>> *Sent:* Tuesday, November 20, 2018 2:02 PM >>>> *To:* openstack-dev at lists.openstack.org >>>> *Subject:* [openstack-dev] [zun] Failed to create Container >>>> >>>> >>>> >>>> >>>> >>>> Hi , >>>> >>>> >>>> >>>> I am starting to use zun on openstack pike , and when I try to lunch >>>> container with cirros image , I get this reason : Docker internal >>>> error: 500 Server Error: Internal Server Error ("legacy plugin: Post >>>> http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: >>>> connect: connection refused"). >>>> >>>> you can find here the log on my compute node : >>>> https://pastebin.com/nZTiTZiV. >>>> >>>> some help with this will be great . 
>>>> >>>> Best Regards, >>>> >>>> Rania Adouni >>>> >>>> >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Nov 21 02:18:23 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 20 Nov 2018 21:18:23 -0500 Subject: [openstack-dev] [tripleo] Feedback about our project at Summit Message-ID: Hi folks, I wasn't at the Summit but I was interested in the feedback people gave about TripleO so I've discussed with a few people who made the trip. I would like to see what actions we can take in the short and long term to address it. I collected some thoughts here: https://etherpad.openstack.org/p/BER-tripleo-feedback Which is based on https://etherpad.openstack.org/p/BER-deployment-tools-feedback initially. Feel free to add more feedback if missing, and also comment on what was written. If you're aware of some WIP that addresses the feedback, adjust some wording if needed or just put some links if something exists and is documented already. I believe it is critical for us to listen to this feedback and include some of it in our short-term roadmap, so we can reduce some of the frustration that we hear sometimes. Thanks for your help, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Wed Nov 21 05:00:13 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 21 Nov 2018 00:00:13 -0500 Subject: [openstack-dev] [cyborg] [weekly-meeting] Message-ID: There is no weekly IRC meeting for the week of Nov 21st as most of the people are still recovering from the jetlag after the summit. I will follow up with individuals offline. -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From wjstk16 at gmail.com Wed Nov 21 11:18:23 2018 From: wjstk16 at gmail.com (Won) Date: Wed, 21 Nov 2018 20:18:23 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, I attached four log files. I collected the logs from about 17:14 to 17:42. I created an instance of 'deltesting3' at 17:17. 7 minutes later, at 17:24, the entity graph showed deltesting3, and the vitrage collector and graph logs appeared. When creating an instance in ubuntu server, it appears immediately in the entity graph and logs, but when creating an instance in computer1 (multi node), it appears about 5~10 minutes later. I deleted the 'deltesting3' instance around 17:26. > After ~20minutes, there was only Apigateway. Does it make sense?
did you > delete the instances on ubuntu, in addition to deltesting? > I only deleted 'deltesting'. After that, only the logs from 'apigateway' and 'kube-master' were collected. But other instances were working well. I don't know why only two instances are collected in the log. NOV 19 In this log, 'agigateway' and 'kube-master' were continuously collected in a short period of time, but other instances were sometimes collected in long periods. In any case, I would expect to see the instances deleted from the graph at > this stage, since they were not returned by get_all. > Can you please send me the log of vitrage-graph at the same time (Nov 15, > 16:35-17:10)? > Information 'deldtesting3' that has already been deleted continues to be collected in vitrage-graph.service. Br, Won 2018년 11월 15일 (목) 오후 10:13, Ifat Afek 님이 작성: > > On Thu, Nov 15, 2018 at 10:28 AM Won wrote: > >> Looking at the logs, I see two issues: >>> 1. On ubuntu server, you get a notification about the vm deletion, while >>> on compute1 you don't get it. >>> Please make sure that Nova sends notifications to >>> 'vitrage_notifications' - it should be configured in /etc/nova/nova.conf. >>> 2. Once in 10 minutes (by default) nova.instance datasource queries all >>> instances. The deleted vm is supposed to be deleted in Vitrage at this >>> stage, even if the notification was lost. >>> Please check in your collector log for the a message of "novaclient.v2.client >>> [-] RESP BODY" before and after the deletion, and send me its content. >> >> >> I attached two log files. I created a VM in computer1 which is a >> computer node and deleted it a few minutes later. Log for 30 minutes from >> VM creation. >> The first is the log of the vitrage-collect that grep instance name. >> The second is the noovaclient.v2.clinet [-] RESP BODY log. >> After I deleted the VM, no log of the instance appeared in the collector >> log no matter how long I waited. >> >> I added the following to Nova.conf on the computer1 node.(attached file >> 'compute_node_local_conf.txt') >> notification_topics = notifications,vitrage_notifications >> notification_driver = messagingv2 >> vif_plugging_timeout = 300 >> notify_on_state_change = vm_and_task_state >> instance_usage_audit_period = hour >> instance_usage_audit = True >> > > Hi, > > From the collector log RESP BODY messages I understand that in the > beginning there were the following servers: > compute1: deltesting > ubuntu: Apigateway, KubeMaster and others > > After ~20minutes, there was only Apigateway. Does it make sense? did you > delete the instances on ubuntu, in addition to deltesting? > In any case, I would expect to see the instances deleted from the graph at > this stage, since they were not returned by get_all. > Can you please send me the log of vitrage-graph at the same time (Nov 15, > 16:35-17:10)? > > There is still the question of why we don't see a notification from Nova, > but let's try to solve the issues one by one. > > Thanks, > Ifat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: vitrage_collect_logs_grep_Instance_name Type: application/octet-stream Size: 2182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage_collect_logs_grep_RESPBODY Type: application/octet-stream Size: 105514 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage_graph_logs_grep_Instance_name Type: application/octet-stream Size: 810274 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vitrage_graph_logs Type: application/octet-stream Size: 16992590 bytes Desc: not available URL: From lijie at unitedstack.com Wed Nov 21 12:16:20 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Wed, 21 Nov 2018 20:16:20 +0800 Subject: [openstack-dev] [glance] about use shared image with each other In-Reply-To: <6d53c9e9-98f6-d594-bbfb-837c74d5fbae@gmail.com> References: <6d53c9e9-98f6-d594-bbfb-837c74d5fbae@gmail.com> Message-ID: yes, but I also have a question, Do we have the quota limit for requests to share the image to each other? For example, someone shares the image with me without stop, how do we deal with it? ------------------ Original ------------------ From: "Brian Rosmaita"; Date: Mon, Nov 19, 2018 10:26 PM To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [glance] about use shared image with each other On 11/19/18 7:58 AM, Rambo wrote: > Hi,all > > Recently, I want to use the shared image with each other.I find it > isn't convenient that the producer notifies the consumer via email which > the image has been shared and what its UUID is. In other words, why the > image api v2 is no provision for producer-consumer communication? The design goal for Image API v2 image sharing was to provide an infrastructure for an "image marketplace" in an OpenStack cloud by (a) making it easy for cloud end users to share images, and (b) making it easy for end users not to be spammed by other end users taking advantage of (a). When v2 image sharing was introduced in the Grizzly release, we did not want to dictate how producer-consumer communication would work (because we had no idea how it would develop), so we left it up to operators and end users to figure this out. The advantage of email communication is that client side message filtering is available for whatever client a particular cloud end-user employs, and presumably that end-user knows how to manipulate the filters without learning some new scheme (or, if the end-user doesn't know, learning how to filter messages will apply beyond just image sharing, which is a plus). Also, email communication is just one way to handle producer-consumer communication. Some operators have adjusted their web interfaces so that when an end-user looks at the list of images available, a notification pops up if the end-user has any images that have been shared with them and are still in "pending" status. There are various other creative things you can do using the normal API calls with regular user credentials. In brief, we figured that if an image marketplace evolved in a particular cloud, producers and consumers would forge their own relationships in whatever way made the most sense for their particular use cases. So we left producer-consumer communication out-of-band. > To make it is more convenient, if we can add a task to change the > member_status from "pending" to "accepted" when we share the image with > each other. 
It is similar to the resize_confirm in Nova, we can control > the time interval in config. You could do this, but that would defeat the entire purpose of the member statuses implementation, and hence I do not recommend it. See OSSN-0005 [1] for more about this issue. Additionally, since the Ocata release, "community" images have been available. These do not have to be accepted by an end user (but they also don't show up in the default image-list response). Who can "communitize" an image is governed by policy. See [2] for a discussion of the various types of image sharing currently available in the Image API v2. The Image Service API v2 api-ref [3] contains a brief discussion of image visibility and image sharing that may also be useful. Finally, the Glance Ocata release notes [4] have an extensive discussion of image visibility. > Can you tell me more about this?Thank you very much! The original design page on the wiki [5] has a list of 14 use cases we wanted to address; looking through those will give you a better idea of why we made the design choices we did. Hope this helps! cheers, brian [0] http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html [1] https://wiki.openstack.org/wiki/OSSN/1226078 [2] http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html [3] https://developer.openstack.org/api-ref/image/v2/ [4] https://docs.openstack.org/releasenotes/glance/ocata.html [5] https://wiki.openstack.org/wiki/Glance-api-v2-image-sharing > > Best Regards > Rambo > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From raniaadouni at gmail.com Wed Nov 21 12:24:40 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Wed, 21 Nov 2018 13:24:40 +0100 Subject: [openstack-dev] [ZUN] [kuryr-libnetwork] Message-ID: Hi everyone, I was trying to create container but I just ended up by this error : *Docker internal error: 500 Server Error: Internal Server Error ("failed to update store for object type *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 : connect: connection refused").* Then I back to Verify operation of the kuryr-libnetwork, if it work fine or not with command : *docker network create --driver kuryr --ipam-driver kuryr --subnet 10.10.0.0/16 --gateway=10.10.0.1 test_net * But i get this error : *Error response from daemon: failed to update store for object type *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 : connect: connection refused * *Ps: 192.168.1.20 is the address of my controller Node !!* some help will be nice and thanks . Best Regards, Rania Adouni -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From srikanthkumar.lingala at gmail.com Wed Nov 21 13:08:40 2018 From: srikanthkumar.lingala at gmail.com (Srikanth Kumar Lingala) Date: Wed, 21 Nov 2018 18:38:40 +0530 Subject: [openstack-dev] VM stalls with the console log "Exiting boot services and installing virtual address map" Message-ID: Hi, I am working with OpenStack stable/pike release in Ubuntu 18.04. When I am trying to spawn VM, which is worked in the earlier OpenStack releases, VM is stalled with the following log in the VM console logs: EFI stub: Booting Linux Kernel... EFI stub: Generating empty DTB EFI stub: Exiting boot services and installing virtual address map... I am observing that VM is in Running status, without any error logs. And, VM is booting with UEFI support. But, I am not able to get VM Console. Earlier I didn't observe this. How can I disable UEFI support, while spawning VM's in OpenStack Nova-Compute. My Libvirt version is 4.0.0. Can someone help me to fix the issue. Regards, Srikanth. -- ---- Srikanth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Wed Nov 21 14:30:49 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 21 Nov 2018 16:30:49 +0200 Subject: [openstack-dev] [horizon] Do we want new meeting time? In-Reply-To: References: Message-ID: Hi all, Due to the low activity on our weekly meeting, I would like to raise this question again. Do we want a new meeting day and/or time? I'm available any day except Tuesday and Wednesday. I'm going to get some feedback before I create a new poll. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, Apr 5, 2018 at 1:42 PM Ivan Kolodyazhny wrote: > Hi team, > > It's a friendly reminder that we've got voting open [1] until next > meeting. If you would like to attend Horizon meetings, please, select > comfortable options for you. > > [1] https://doodle.com/poll/ei5gstt73d8v3a35 > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Wed, Mar 21, 2018 at 6:40 PM, Ivan Kolodyazhny wrote: > >> Hi team, >> >> As was discussed at PTG, usually we've got a very few participants on our >> weekly meetings. I hope, mostly it's because of not comfort meeting time >> for many of us. >> >> Let's try to re-schedule Horizon weekly meetings and get more attendees >> there. I've created a doodle for it [1]. Please, vote for the best time for >> you. >> >> >> [1] https://doodle.com/poll/ei5gstt73d8v3a35 >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Wed Nov 21 15:44:05 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 21 Nov 2018 10:44:05 -0500 Subject: [openstack-dev] [zun] Failed to create Container In-Reply-To: References: Message-ID: Rania, For question #1, you can do the followings in *each compute node*: * In your kuryr config file (i.e. /etc/kuryr/kuryr.conf), change the config [DEFAULT]kuryr_uri . For example: [DEFAULT] kuryr_uri = http://192.168.1.10:23750 This config decides the IP address and port that the kuryr-libnetwork process will listen to. By default, it listens to localhost which should be good enough for most of the cases. Therefore, change it only if it is necessary. * In the file: /usr/lib/docker/plugins/kuryr/kuryr.spec , change the URL so that docker will know how to connect to kuryr-libnetwork. The value must match the kuryr_uri config above. Again, change it only if it is necessary. Then, restart the docker and kuryr-libnetwork process. 
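To make that concrete, on a compute node whose IP is 192.168.1.10 the two files would end up looking roughly like this (a sketch only: the kuryr.spec path is the one mentioned above and can differ depending on how kuryr-libnetwork was installed, and 23750 is just the default port):
# /etc/kuryr/kuryr.conf
[DEFAULT]
kuryr_uri = http://192.168.1.10:23750
# /usr/lib/docker/plugins/kuryr/kuryr.spec (must contain the same URL)
http://192.168.1.10:23750
$ sudo systemctl restart docker kuryr-libnetwork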
Let me know if it still doesn't work for you. For question #2, I remembered all processes are running in systemd, so you can retrieve logs by using journalctl. For example: $ journalctl -u kuryr-libnetwork > kuryr.log $ journalctl -u zun-compute > zun-compute.log $ journalctl -u zun-api > zun-api.log Best regards, Hongbin On Tue, Nov 20, 2018 at 9:01 PM Rania Adouni wrote: > Hi Hongbin , > thanks for your advice I did fix it with changing the kuryr-lib to 0.6.0 > and my service just work fine and also with lsof -iTCP -sTCP:LISTEN -P > > I can see it in the list : > *kuryr-ser 16581 root 5u IPv4 91929 0t0 TCP > localhost:23750 (LISTEN)* ------------> that is great. > > + but I have two questions : > 1- How I can change the localhost to my compute address ( > 192.168.1.10:23750) because it is still the same error when trying to > create container " Docker internal error: 500 Server Error: Internal > Server Error ("legacy plugin: Post > http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: > connect: connection refused"). ? > > 2- my other question is about the log of zun why I can find it under > my directory /var/log ? > > Best Regards, > Rania Adouni > > > Le mer. 21 nov. 2018 à 02:34, Rania Adouni a > écrit : > >> Hi Hongbin, >> Yes the version of kuryr-lib is :0.8.0 , So I think I should remove it ! >> I should remove only the kuryr-lib ! if you have some command they can >> help me to unisntall it , it will be nice to provide it . >> >> Best Regards, >> Rania >> >> >> Le mer. 21 nov. 2018 à 02:27, Hongbin Lu a écrit : >> >>> Hi Rania, >>> >>> The config option 'driver' was recently renamed, so I guess the problem >>> is the version of kuryr-lib is not right. The Pike release of Zun is >>> matched to kuryr-lib 0.6.0 so you might want to confirm the version of >>> kuryr-lib. >>> >>> $ sudo pip freeze | grep kuryr >>> >>> If the version is not right, uninstall and re-install the kuryr-lib, >>> then restart kuryr-libnetwork. Let me know if it still doesn't work. >>> >>> Best regards, >>> Hongbin >>> >>> On Tue, Nov 20, 2018 at 8:01 PM Rania Adouni >>> wrote: >>> >>>> hi , >>>> Thanks for your reply and yes the problem that kuryr-libnetwork is >>>> failed to start with the error : >>>> ● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin >>>> for Neutron >>>> Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; >>>> enabled; vendor preset: enabled) >>>> Active: failed (Result: exit-code) since mer. 2018-11-21 01:48:48 >>>> CET; 287ms ago >>>> Process: 13974 ExecStart=/usr/local/bin/kuryr-server --config-file >>>> /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE) >>>> Main PID: 13974 (code=exited, status=1/FAILURE) >>>> >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr value = self._do_get(name, group, namespace) >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr File >>>> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2942, in _ >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr info = self._get_opt_info(name, group) >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr File >>>> "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3099, in _ >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr raise NoSuchOptError(opt_name, group) >>>> nov. 
21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr NoSuchOptError: no such option driver in group [binding] >>>> nov. 21 01:48:48 compute kuryr-server[13974]: 2018-11-21 01:48:48.542 >>>> 13974 ERROR kuryr >>>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Main >>>> process exited, code=exited, status=1/FAILURE >>>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Unit >>>> entered failed state. >>>> nov. 21 01:48:48 compute systemd[1]: kuryr-libnetwork.service: Failed >>>> with result 'exit-code'. >>>> >>>> >>>> and also the command lsof -iTCP -sTCP:LISTEN -P doesn't show the kuryr >>>> service : >>>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>>> dockerd 1144 root 5u IPv4 20120 0t0 TCP >>>> compute:2375 (LISTEN) >>>> sshd 1288 root 3u IPv4 19532 0t0 TCP *:22 >>>> (LISTEN) >>>> sshd 1288 root 4u IPv6 19534 0t0 TCP *:22 >>>> (LISTEN) >>>> dnsmasq 1961 nobody 6u IPv4 23450 0t0 TCP >>>> 192.168.122.1:53 (LISTEN) >>>> sshd 4163 adouni 9u IPv6 34950 0t0 TCP >>>> localhost:6010 (LISTEN) >>>> sshd 4163 adouni 10u IPv4 34951 0t0 TCP >>>> localhost:6010 (LISTEN) >>>> qemu-syst 4621 libvirt-qemu 18u IPv4 37121 0t0 TCP *:5900 >>>> (LISTEN) >>>> >>>> !!! >>>> >>>> >>>> Le mer. 21 nov. 2018 à 01:45, Jacob Burckhardt < >>>> jburckhardt at pdvwireless.com> a écrit : >>>> >>>>> Your error message says it cannot connect to port 23750. That's the >>>>> kuryr-libnetwork default listen port. On 192.168.1.10, run: >>>>> >>>>> >>>>> >>>>> lsof -iTCP -sTCP:LISTEN -P >>>>> >>>>> >>>>> >>>>> On my compute node, it outputs: >>>>> >>>>> >>>>> >>>>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME >>>>> >>>>> … >>>>> >>>>> kuryr-ser 1976 root 6u IPv4 30896 0t0 TCP >>>>> localhost:23750 (LISTEN) >>>>> >>>>> >>>>> >>>>> So, you can see that kuryr is listening on 23750. If lsof does not >>>>> list a process listening on 23750, then check if kuryr is up by running: >>>>> >>>>> >>>>> >>>>> systemctl status kuryr-libnetwork >>>>> >>>>> >>>>> >>>>> You can also try restarting it by running: >>>>> >>>>> >>>>> >>>>> systemctl restart kuryr-libnetwork >>>>> >>>>> >>>>> >>>>> I installed that service by following: >>>>> >>>>> >>>>> >>>>> >>>>> https://docs.openstack.org/kuryr-libnetwork/queens/install/compute-install-ubuntu.html >>>>> >>>>> >>>>> >>>>> That is the queens install guide which might help you since zun has no >>>>> install guide in pike. >>>>> >>>>> >>>>> >>>>> If you see it listening, you can try this command: >>>>> >>>>> >>>>> >>>>> telnet 192.168.1.10 23750 >>>>> >>>>> >>>>> >>>>> If it fails to connect, then try: >>>>> >>>>> >>>>> >>>>> telnet localhost 23750 >>>>> >>>>> >>>>> >>>>> If it works with localhost but not 192.168.1.10, then I think that >>>>> means you need to tell docker to connect to localhost instead of >>>>> 192.168.1.10. >>>>> >>>>> >>>>> >>>>> Can you get the commands at the following web page to succeed? 
>>>>> >>>>> >>>>> >>>>> https://docs.openstack.org/kuryr-libnetwork/queens/install/verify.html >>>>> >>>>> >>>>> >>>>> *Jacob Burckhardt* >>>>> >>>>> Senior Software Engineer >>>>> >>>>> [image: pdvWireless] >>>>> >>>>> >>>>> >>>>> *From:* Rania Adouni >>>>> *Sent:* Tuesday, November 20, 2018 2:02 PM >>>>> *To:* openstack-dev at lists.openstack.org >>>>> *Subject:* [openstack-dev] [zun] Failed to create Container >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Hi , >>>>> >>>>> >>>>> >>>>> I am starting to use zun on openstack pike , and when I try to lunch >>>>> container with cirros image , I get this reason : Docker internal >>>>> error: 500 Server Error: Internal Server Error ("legacy plugin: Post >>>>> http://192.168.1.10:23750/Plugin.Activate: dial tcp 192.168.1.10:23750: >>>>> connect: connection refused"). >>>>> >>>>> you can find here the log on my compute node : >>>>> https://pastebin.com/nZTiTZiV. >>>>> >>>>> some help with this will be great . >>>>> >>>>> Best Regards, >>>>> >>>>> Rania Adouni >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hongbin034 at gmail.com Wed Nov 21 15:46:20 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 21 Nov 2018 10:46:20 -0500 Subject: Re: [openstack-dev] [ZUN] [kuryr-libnetwork] In-Reply-To: References: Message-ID: Hi Rania, See if this helps: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136547.html Best regards, Hongbin On Wed, Nov 21, 2018 at 7:27 AM Rania Adouni wrote: > Hi everyone, > > I was trying to create container but I just ended up by this error : > *Docker internal error: 500 Server Error: Internal Server Error ("failed > to update store for object type *libnetwork.endpointCnt: dial tcp > 192.168.1.20:2379 : connect: connection > refused").* > > Then I back to Verify operation of the kuryr-libnetwork, if it work fine > or not with command : > *docker network create --driver kuryr --ipam-driver kuryr --subnet > 10.10.0.0/16 --gateway=10.10.0.1 test_net * > > But i get this error : > *Error response from daemon: failed to update store for object type > *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 > : connect: connection refused * > *Ps: 192.168.1.20 is the address of my controller Node !!* > > some help will be nice and thanks . > > Best Regards, > Rania Adouni > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 21 17:07:40 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 21 Nov 2018 18:07:40 +0100 Subject: [openstack-dev] [kolla] Berlin summit resume Message-ID: Hi kollagues, During the Berlin Summit the kolla team had a few talks and forum discussions, as well as other cross-project related topics [0] The first session was ``Kolla project onboarding``; the room was full of people interested in contributing to kolla, many of them already using kolla in production environments and willing to upstream some work they've done downstream. I can say this talk was a total success and we hope to see many new faces during this release putting features and bug fixes into kolla. Slides of the session at [1] The second session was ``Kolla project update``, a brief summary of what work has been done during the rocky release and some items that will be implemented in the future. The number of attendees at this session was massive; no more people could enter the room. Slides at [2] Then the forum sessions.. The first one was ``Kolla user feedback``; many users came to the room. We've noticed a big increase in production deployments and some PoCs migrating to production soon, and many of those environments are huge. Overall the impression was that kolla is great and doesn't have any big issue or requirement; ``it works great`` became a common phrase to hear. Here's a summary of the user feedback needs [3]
- Database backup and recovery - Lack of documentation is the bigger request, users need to read the code to know how to configure other than core/default services - Multi cells_v2 - New services request, cyborg, masakari and tricircle were the most requested - SElinux enabled - More SDN services such as Contrail and calico - Possibility to include user's ansible tasks during deploy as well as support custom config.json - HTTPS for internal networks Second one was about ``kolla for the edge``, we've meet with Edge computing group and others interested in edge deployments to identify what's missing in kolla and where we can help. Things we've identified are: - Kolla seems good at how the service split can be done, tweaking inventory file and config values can deploy independent environments easily. - Missing keystone federation - Glance cache support is not a hard requirement but improves efficiency (already merged) - Multi cells v2 - Multi storage per edge/far-edge - A documentation or architecture reference would be nice to have. Last one was ``kolla for NFV``, few people came over to discuss about NUMA, GPU, SRIOV. Nothing noticiable from this session, mainly was support DPDK for CentOS/RHEL,OracleLinux and few service addition covered by previous discussions. [0] https://etherpad.openstack.org/p/kolla-stein-summit [1] https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 [2] https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Nov 21 17:28:03 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 21 Nov 2018 11:28:03 -0600 Subject: [openstack-dev] [neutron] [drivers] Cancelling weekly meeting on November 23rd Message-ID: Dear Neutron team, Due to the Thanksgiving Holiday in the USA, several members of the team will not be able to attend the weekly meeting. Let's cancel it and resume normally on the 30th of November Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Waines Wed Nov 21 17:54:41 2018 From: Waines (Waines) Date: Thu, 22 Nov 2018 03:54:41 +1000 Subject: [openstack-dev] New invoice Message-ID: <11357192714329319416.305B9BA845132358@lists.openstack.org> Dear Valued Waines, Greg Customer: Please advise payment actual date for this invoice. Thank you for your valued custom. Waines, Greg Direct #: 836-338-6252 870-188-1601 x247 e Greg.Waines at windriver.com - This email transmission is intended only for the use of the individual or entity named above and may contain information that is confidential, privileged, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, or use of any of the information contained in this transmission is strictly prohibited. All email and attachments are scanned for viruses before transmission. However, I recommend that you perform such scans as are necessary before opening any attachments, as we accept no liability any loss or damage caused in the process of communication. 
From miguel at mlavalle.com  Wed Nov 21 18:11:50 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 21 Nov 2018 12:11:50 -0600
Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master
Message-ID: 

Hi Stackers,

One of the key goals of StarlingX during the current cycle is to converge
with the OpenStack projects master branches. In order to accomplish this
goal, the Technical Steering Committee put together a gap analysis that
shows the specs and patches that need to merge in the different OpenStack
projects by the end of Stein. The attached PDF document shows this analysis.
Although other projects are involved, most of the work has to be done in
Nova, Neutron and Horizon. Hopefully all the involved projects will help
StarlingX achieve this important goal.

It has to be noted that work is still on-going to refine this gap analysis,
so there might be some updates to it in the near future.

Best regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: stx_openstack_gaps.pdf
Type: application/pdf
Size: 152044 bytes
Desc: not available
URL: 
From mark at stackhpc.com  Wed Nov 21 18:44:56 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 21 Nov 2018 18:44:56 +0000
Subject: [openstack-dev] [kolla] Berlin summit resume
In-Reply-To: References: Message-ID: 

Thanks for the write up Eduardo. I thought you and Surya did a good job of
presenting and moderating those sessions.
Mark

On Wed, 21 Nov 2018 at 17:08, Eduardo Gonzalez wrote:

> Hi kollagues,
>
> During the Berlin Summit kolla team had a few talks and forum discussions,
> as well as other cross-project related topics [0]
>
> First session was ``Kolla project onboarding``, the room was full of
> people interested in contribute to kolla, many of them already using kolla
> in production environments whiling to make upstream some work they've done
> downstream. I can say this talk was a total success and we hope to see many
> new faces during this release putting features and bug fixes into kolla.
> Slides of the session at [1]
>
> Second session was ``Kolla project update``, was a brief resume of what
> work has been done during rocky release and some items will be implemented
> in the future. Number of attendees to this session was massive, no more
> people could enter the room. Slides at [2]
>
>
> Then forum sessions..
>
> First one was ``Kolla user feedback``, many users came over the room.
> We've notice a big increase in production deployments and some PoC
> migrating to production soon, many of those environments are huge.
> Overall the impressions was that kolla is great and don't have any big
> issue or requirement, ``it works great`` became a common phrase to listen.
> Here's a resume of the user feedback needs [3] > > - Improve operational usage for add, remove, change and stop/start nodes > and services. > - Database backup and recovery > - Lack of documentation is the bigger request, users need to read the code > to know how to configure other than core/default services > - Multi cells_v2 > - New services request, cyborg, masakari and tricircle were the most > requested > - SElinux enabled > - More SDN services such as Contrail and calico > - Possibility to include user's ansible tasks during deploy as well as > support custom config.json > - HTTPS for internal networks > > Second one was about ``kolla for the edge``, we've meet with Edge > computing group and others interested in edge deployments to identify > what's missing in kolla and where we can help. > Things we've identified are: > > - Kolla seems good at how the service split can be done, tweaking > inventory file and config values can deploy independent environments easily. > - Missing keystone federation > - Glance cache support is not a hard requirement but improves efficiency > (already merged) > - Multi cells v2 > - Multi storage per edge/far-edge > - A documentation or architecture reference would be nice to have. > > Last one was ``kolla for NFV``, few people came over to discuss about > NUMA, GPU, SRIOV. > Nothing noticiable from this session, mainly was support DPDK for > CentOS/RHEL,OracleLinux and few service addition covered by previous > discussions. > > [0] https://etherpad.openstack.org/p/kolla-stein-summit > [1] > https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 > [2] > https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release > [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback > [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge > [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Nov 21 19:54:59 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 21 Nov 2018 14:54:59 -0500 Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master In-Reply-To: References: Message-ID: Thanks for doing this in the open and working with the upstream teams to reduce divergence! On Wed, Nov 21, 2018 at 1:17 PM Miguel Lavalle wrote: > Hi Stackers, > > One of the key goals of StarlingX during the current cycle is to converge > with the OpenStack projects master branches. In order to accomplish this > goal, the Technical Steering Committee put together a gap analysis that > shows the specs and patches that need to merge in the different OpenStack > projects by the end of Stein. The attached PDF document shows this > analysis. Although other projects are involved, most of the work has to be > done in Nova, Neutron and Horizon. Hopefully all the involved projects will > help StarlingX achieve this important goal. > > It has to be noted that work is still on-going to refine this gap > analysis, so there might be some updates to it in the near future. 
> > Best regards > > Miguel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Wed Nov 21 18:47:42 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 21 Nov 2018 13:47:42 -0500 Subject: [openstack-dev] [manila] no meeting this week Message-ID: <20181121184742.zyefgxpwptihl6b2@barron.net> Just a reminder that there will be no manila community meeting this week. Next manila meeting will be Thursday, 29 November, at 1500 UTC in #openstack-meeting-alt on freenode. Agenda here [1] - Feel free to add ... -- Tom Barron (tbarron) [1] https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting From melwittt at gmail.com Wed Nov 21 20:23:51 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 21 Nov 2018 21:23:51 +0100 Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master In-Reply-To: References: Message-ID: <7566c269-c7e9-c803-fe23-21840004a58e@gmail.com> On Wed, 21 Nov 2018 12:11:50 -0600, Miguel Lavalle wrote: > One of the key goals of StarlingX during the current cycle is to > converge with the OpenStack projects master branches. In order to > accomplish this goal, the Technical Steering Committee put together a > gap analysis that shows the specs and patches that need to merge in the > different OpenStack projects by the end of Stein. The attached PDF > document shows this analysis. Although other projects are involved, most > of the work has to be done in Nova, Neutron and Horizon. Hopefully all > the involved projects will help StarlingX achieve this important goal. > > It has to be noted that work is still on-going to refine this gap > analysis, so there might be some updates to it in the near future. Thanks for sending this out. I'm going to reply about what I know of the status of nova-related planned upstream features. On NUMA-aware live migration, it was identified as the top priority issue in the NFV/HPC pain points forum session [1]. The spec has been approved before in the past for the Rocky cycle, so it's a matter of re-proposing it for re-approval in Stein. We need to confirm with artom and/or stephenfin whether one of them can pick it up this cycle. I don't know as much about the shared/dedicated vCPUs on a single host or the shared vCPU extension, but the cited spec [2] has one +2 already. If we can find a second approver, we can work on this too in Stein. The vTPM support spec was merged about two weeks ago and we are awaiting implementation patches from cfriesen. The HPET support spec was merged about two weeks ago and the implementation patch is under active review in a runway with one +2 now. For vCPU model, I'm not aware of any new proposed spec for Stein from the STX community as of today. Let us know if/when the spec is proposed. For disk performance fixes, the specless blueprint patch is currently under active review in a runway. The extra spec validation spec [3] is under active review now. For the bits that will be addressed using upstream features that are already available, I assume the STX community will take care of this. 
Please reach out to us in #openstack-nova or on the ML if there are questions/issues. For the bugs, again feel free to reach out to us for reviews/help. Cheers, -melanie [1] https://etherpad.openstack.org/p/BER-nfv-hpc-pain-points [2] https://review.openstack.org/555081 [3] https://review.openstack.org/618542 From tpb at dyncloud.net Wed Nov 21 20:30:41 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 21 Nov 2018 15:30:41 -0500 Subject: [openstack-dev] [k8s] [manila] Manila CSI driver plan Message-ID: <20181121203041.a7dsoo5twdcr2h4f@barron.net> [Robert Vasek, Red Hat colleagues working in this space: please correct any mis-understandings or omissions below] At the Berlin Summit SIG-K8s Working session [1] [2], we agreed to follow up with a note to the larger community summarizing our plan to enable Manila as RWX storage provider for k8s and other container orchestrators. Here it is :) Today there are kubernetes external service providers [3] [4] for Manila with NFS protocol back ends, as well as a way to use an external service provider on the master host in combination with a CSI CephFS node-host plugin [5]. We propose to target an end-to-end, multi-protocol Manila CSI plugin -- aiming at CSI 1.0, which should get support in container orchestrators early in 2019. Converging on CSI will: * provide support for multiple Container Orchestrators, not just k8s * de-couple storage plugin development from k8s life-cycle going forwards * unify development efforts and distributions and set clear expectations for operators and deployers Since manila needs to support multiple file system protocols such as CephFS native and NFS, we propose that work align to the multiplexing CSI architecture outlined here [6]. High level work plan: * Write master host multiplexing controller and node proxy plugins. * Use CephFS node-only plugin from [5] * Write NFS node-only plugin NFS and CephFS are immediate priorities - other file system protocols supported by manila can be added over time if there is interest. -- Tom Barron (irc: tbarron) [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22752/sig-k8s-working-session [2] https://etherpad.openstack.org/p/sig-k8s-2018-berlin-summit [3] https://github.com/kubernetes-incubator/external-storage [4] https://github.com/kubernetes/cloud-provider-openstack [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21997/dynamic-storage-provisioning-of-manilacephfs-shares-on-kubernetes [6] https://github.com/container-storage-interface/spec/issues/263#issuecomment-411471611 From melwittt at gmail.com Wed Nov 21 22:06:20 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 21 Nov 2018 23:06:20 +0100 Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master In-Reply-To: <7566c269-c7e9-c803-fe23-21840004a58e@gmail.com> References: <7566c269-c7e9-c803-fe23-21840004a58e@gmail.com> Message-ID: <3b126d54-e7d7-188f-461c-09de196924dd@gmail.com> On Wed, 21 Nov 2018 21:23:51 +0100, Melanie Witt wrote: > On Wed, 21 Nov 2018 12:11:50 -0600, Miguel Lavalle wrote: >> One of the key goals of StarlingX during the current cycle is to >> converge with the OpenStack projects master branches. In order to >> accomplish this goal, the Technical Steering Committee put together a >> gap analysis that shows the specs and patches that need to merge in the >> different OpenStack projects by the end of Stein. The attached PDF >> document shows this analysis. 
Although other projects are involved, most >> of the work has to be done in Nova, Neutron and Horizon. Hopefully all >> the involved projects will help StarlingX achieve this important goal. >> >> It has to be noted that work is still on-going to refine this gap >> analysis, so there might be some updates to it in the near future. > > Thanks for sending this out. I'm going to reply about what I know of the > status of nova-related planned upstream features. > > On NUMA-aware live migration, it was identified as the top priority > issue in the NFV/HPC pain points forum session [1]. The spec has been > approved before in the past for the Rocky cycle, so it's a matter of > re-proposing it for re-approval in Stein. We need to confirm with artom > and/or stephenfin whether one of them can pick it up this cycle. Turns out this spec has already been re-proposed for Stein as of Sep 4: https://review.openstack.org/599587 and is under active review now. Apologies for missing this in my previous reply. > I don't know as much about the shared/dedicated vCPUs on a single host > or the shared vCPU extension, but the cited spec [2] has one +2 already. > If we can find a second approver, we can work on this too in Stein. > > The vTPM support spec was merged about two weeks ago and we are awaiting > implementation patches from cfriesen. > > The HPET support spec was merged about two weeks ago and the > implementation patch is under active review in a runway with one +2 now. > > For vCPU model, I'm not aware of any new proposed spec for Stein from > the STX community as of today. Let us know if/when the spec is proposed. > > For disk performance fixes, the specless blueprint patch is currently > under active review in a runway. > > The extra spec validation spec [3] is under active review now. > > For the bits that will be addressed using upstream features that are > already available, I assume the STX community will take care of this. > Please reach out to us in #openstack-nova or on the ML if there are > questions/issues. > > For the bugs, again feel free to reach out to us for reviews/help. > > Cheers, > -melanie > > [1] https://etherpad.openstack.org/p/BER-nfv-hpc-pain-points > [2] https://review.openstack.org/555081 > [3] https://review.openstack.org/618542 > > > From raniaadouni at gmail.com Wed Nov 21 23:16:10 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Thu, 22 Nov 2018 00:16:10 +0100 Subject: [openstack-dev] [ZUN] [kuryr-libnetwork] In-Reply-To: References: Message-ID: Hi hongbin, Finally , I had to move and using queens because after all I think when I set up ZUN , it has effect on my nova-compute service was down .and for more secure work I move to queens . and now my kuryr work fine , but when I try container I had this error : *Docker internal error: 400 Client Error: Bad Request ("OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown").* some suggestion to fix it please and thanks . Best Regards, Rania Adouni [image: Mailtrack] Sender notified by Mailtrack 11/22/18, 12:15:06 AM Le mer. 21 nov. 
2018 à 16:47, Hongbin Lu a écrit : > Hi Rania, > > See if this helps: > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136547.html > > Best regards, > Hongbin > > On Wed, Nov 21, 2018 at 7:27 AM Rania Adouni > wrote: > >> Hi everyone, >> >> I was trying to create container but I just ended up by this error : >> *Docker internal error: 500 Server Error: Internal Server Error ("failed >> to update store for object type *libnetwork.endpointCnt: dial tcp >> 192.168.1.20:2379 : connect: connection >> refused").* >> >> Then I back to Verify operation of the kuryr-libnetwork, if it work fine >> or not with command : >> *docker network create --driver kuryr --ipam-driver kuryr --subnet >> 10.10.0.0/16 --gateway=10.10.0.1 test_net * >> >> But i get this error : >> *Error response from daemon: failed to update store for object type >> *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 >> : connect: connection refused * >> *Ps: 192.168.1.20 is the address of my controller Node !!* >> >> some help will be nice and thanks . >> >> Best Regards, >> Rania Adouni >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.j.ivey at gmail.com Wed Nov 21 23:55:53 2018 From: david.j.ivey at gmail.com (David Ivey) Date: Wed, 21 Nov 2018 18:55:53 -0500 Subject: [openstack-dev] [ZUN] [kuryr-libnetwork] In-Reply-To: References: Message-ID: Hi Rania, I am assuming you are spinning up a cirros container still. Iirc the cirros container does not have /bin/bash. Try just executing /bin/sh. David On Wed, Nov 21, 2018, 6:26 PM Rania Adouni Hi hongbin, > > Finally , I had to move and using queens because after all I think when I > set up ZUN , it has effect on my nova-compute service was down .and for > more secure work I move to queens . and now my kuryr work fine , but when I > try container I had this error : > *Docker internal error: 400 Client Error: Bad Request ("OCI runtime create > failed: container_linux.go:348: starting container process caused "exec: > \"/bin/bash\": stat /bin/bash: no such file or directory": unknown").* > some suggestion to fix it please and thanks . > > Best Regards, > Rania Adouni > > > [image: Mailtrack] > Sender > notified by > Mailtrack > 11/22/18, > 12:15:06 AM > > Le mer. 21 nov. 
2018 à 16:47, Hongbin Lu a écrit : > >> Hi Rania, >> >> See if this helps: >> http://lists.openstack.org/pipermail/openstack-dev/2018-November/136547.html >> >> Best regards, >> Hongbin >> >> On Wed, Nov 21, 2018 at 7:27 AM Rania Adouni >> wrote: >> >>> Hi everyone, >>> >>> I was trying to create container but I just ended up by this error : >>> *Docker internal error: 500 Server Error: Internal Server Error ("failed >>> to update store for object type *libnetwork.endpointCnt: dial tcp >>> 192.168.1.20:2379 : connect: connection >>> refused").* >>> >>> Then I back to Verify operation of the kuryr-libnetwork, if it work fine >>> or not with command : >>> *docker network create --driver kuryr --ipam-driver kuryr --subnet >>> 10.10.0.0/16 --gateway=10.10.0.1 test_net * >>> >>> But i get this error : >>> *Error response from daemon: failed to update store for object type >>> *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 >>> : connect: connection refused * >>> *Ps: 192.168.1.20 is the address of my controller Node !!* >>> >>> some help will be nice and thanks . >>> >>> Best Regards, >>> Rania Adouni >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raniaadouni at gmail.com Thu Nov 22 00:13:39 2018 From: raniaadouni at gmail.com (Rania Adouni) Date: Thu, 22 Nov 2018 01:13:39 +0100 Subject: [openstack-dev] [ZUN] [kuryr-libnetwork] In-Reply-To: References: Message-ID: Yes david thanks for reply, it was the problem with the command /bin/bash when i just modified with sh my container was up and running. Thanks for all the team openstack dev for the help . Best regards, Rania adouni Le jeu. 22 nov. 2018 à 00:57, David Ivey a écrit : > Hi Rania, > > I am assuming you are spinning up a cirros container still. Iirc the > cirros container does not have /bin/bash. Try just executing /bin/sh. > > David > > > On Wed, Nov 21, 2018, 6:26 PM Rania Adouni >> Hi hongbin, >> >> Finally , I had to move and using queens because after all I think when I >> set up ZUN , it has effect on my nova-compute service was down .and for >> more secure work I move to queens . and now my kuryr work fine , but when I >> try container I had this error : >> *Docker internal error: 400 Client Error: Bad Request ("OCI runtime >> create failed: container_linux.go:348: starting container process caused >> "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": >> unknown").* >> some suggestion to fix it please and thanks . >> >> Best Regards, >> Rania Adouni >> >> >> [image: Mailtrack] >> Sender >> notified by >> Mailtrack >> 11/22/18, >> 12:15:06 AM >> >> Le mer. 21 nov. 
2018 à 16:47, Hongbin Lu a écrit : >> >>> Hi Rania, >>> >>> See if this helps: >>> http://lists.openstack.org/pipermail/openstack-dev/2018-November/136547.html >>> >>> Best regards, >>> Hongbin >>> >>> On Wed, Nov 21, 2018 at 7:27 AM Rania Adouni >>> wrote: >>> >>>> Hi everyone, >>>> >>>> I was trying to create container but I just ended up by this error : >>>> *Docker internal error: 500 Server Error: Internal Server Error >>>> ("failed to update store for object type *libnetwork.endpointCnt: dial tcp >>>> 192.168.1.20:2379 : connect: connection >>>> refused").* >>>> >>>> Then I back to Verify operation of the kuryr-libnetwork, if it work >>>> fine or not with command : >>>> *docker network create --driver kuryr --ipam-driver kuryr --subnet >>>> 10.10.0.0/16 --gateway=10.10.0.1 test_net * >>>> >>>> But i get this error : >>>> *Error response from daemon: failed to update store for object type >>>> *libnetwork.endpointCnt: dial tcp 192.168.1.20:2379 >>>> : connect: connection refused * >>>> *Ps: 192.168.1.20 is the address of my controller Node !!* >>>> >>>> some help will be nice and thanks . >>>> >>>> Best Regards, >>>> Rania Adouni >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Thu Nov 22 03:20:53 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Thu, 22 Nov 2018 03:20:53 +0000 Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master In-Reply-To: References: Message-ID: Hi Miguel, There is another RFE “Add l2pop support for floating IP resources” proposed to Launchpad. This RFE also comes from StarlingX and the link is below: https://bugs.launchpad.net/neutron/+bug/1803494 Could you please help review this RFE? I think this RFE can be added to the gap analysis. What’s more, there is a bug and a RFE relating to l2pop and neutron-dynamic-routing is being written and is expected to be released next week. 
Best Regards, Xu, Chenjie From: Miguel Lavalle [mailto:miguel at mlavalle.com] Sent: Thursday, November 22, 2018 2:12 AM To: openstack-discuss at lists.openstack.org; OpenStack Development Mailing List > Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master Hi Stackers, One of the key goals of StarlingX during the current cycle is to converge with the OpenStack projects master branches. In order to accomplish this goal, the Technical Steering Committee put together a gap analysis that shows the specs and patches that need to merge in the different OpenStack projects by the end of Stein. The attached PDF document shows this analysis. Although other projects are involved, most of the work has to be done in Nova, Neutron and Horizon. Hopefully all the involved projects will help StarlingX achieve this important goal. It has to be noted that work is still on-going to refine this gap analysis, so there might be some updates to it in the near future. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Thu Nov 22 12:30:20 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 22 Nov 2018 14:30:20 +0200 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, A deleted instance should be removed from Vitrage in one of two ways: 1. By reacting to a notification from Nova 2. If no notification is received, then after a while the instance vertex in Vitrage is considered "outdated" and is deleted Regarding #1, it is clear from your logs that you don't get notifications from Nova on the second compute. Do you have on one of your nodes, in addition to nova.conf, also a nova-cpu.conf? if so, please make the same change in this file: notification_topics = notifications,vitrage_notifications notification_driver = messagingv2 And please make sure to restart nova compute service on that node. Regarding #2, as a second-best solution, the instances should be deleted from the graph after not being updated for a while. I realized that we have a bug in this area and I will push a fix to gerrit later today. In the meantime, you can add to InstanceDriver class the following function: @staticmethod def should_delete_outdated_entities(): return True Let me know if it solved your problem, Ifat On Wed, Nov 21, 2018 at 1:50 PM Won wrote: > I attached four log files. > I collected the logs from about 17:14 to 17:42. I created an instance of > 'deltesting3' at 17:17. 7minutes later, at 17:24, the entity graph showed > the dentesting3 and vitrage colletor and graph logs are appeared. > When creating an instance in ubuntu server, it appears immediately in the > entity graph and logs, but when creating an instance in computer1 (multi > node), it appears about 5~10 minutes later. > I deleted an instance of 'deltesting3' around 17:26. > > >> After ~20minutes, there was only Apigateway. Does it make sense? did you >> delete the instances on ubuntu, in addition to deltesting? >> > > I only deleted 'deltesting'. After that, only the logs from 'apigateway' > and 'kube-master' were collected. But other instances were working well. I > don't know why only two instances are collected in the log. > NOV 19 In this log, 'agigateway' and 'kube-master' were continuously > collected in a short period of time, but other instances were sometimes > collected in long periods. 
> In any case, I would expect to see the instances deleted from the graph at
>> this stage, since they were not returned by get_all.
>> Can you please send me the log of vitrage-graph at the same time (Nov 15,
>> 16:35-17:10)?
>>
>
> Information 'deldtesting3' that has already been deleted continues to be
> collected in vitrage-graph.service.
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From smooney at redhat.com  Thu Nov 22 21:01:18 2018
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 22 Nov 2018 21:01:18 +0000
Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master
In-Reply-To: References: Message-ID: 

hi, just following up from yesterday. I just finished testing the PCI NUMA
policy feature and I can confirm that the preferred policy, which allows use
of non-local PCI devices, does not work.

The test case was relatively simple:
- 1 host with 2 NUMA nodes
- 1 PCI device attached to NUMA node 0
- vcpu_pin_set configured to allow only NUMA node 1

In this case booting a VM with a passthrough alias fails.

[stack at cloud-3 devstack]$ openstack flavor show 42
+----------------------------+-----------------------------------------------------+
| Field                      | Value                                               |
+----------------------------+-----------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                               |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                   |
| access_project_ids         | None                                                |
| disk                       | 0                                                   |
| id                         | 42                                                  |
| name                       | m1.nano                                             |
| os-flavor-access:is_public | True                                                |
| properties                 | hw:numa_nodes='1', pci_passthrough:alias='nic-pf:1' |
| ram                        | 64                                                  |
| rxtx_factor                | 1.0                                                 |
| swap                       |                                                     |
| vcpus                      | 1                                                   |
+----------------------------+-----------------------------------------------------+

passthrough_whitelist = { "address": "0000:01:00.1" }
alias = { "vendor_id":"8086", "product_id":"10c9", "device_type":"type-PF", "name":"nic-pf", "numa_policy": "preferred"}

Removing the hw:numa_nodes='1' extra spec allows the VM to boot as it no
longer has a NUMA topology. The VM passes scheduling in both cases, but when
the VM has a virtual NUMA topology of 1 node we fail in the resource tracker
on the compute node when claiming resources for the instance.

I will submit a bug for this and re-propose the spec next week to track
closing this gap.

On Thu, 2018-11-22 at 03:20 +0000, Xu, Chenjie wrote:
> Hi Miguel,
> There is another RFE “Add l2pop support for floating IP resources” proposed to Launchpad. This RFE also comes from
> StarlingX and the link is below:
> https://bugs.launchpad.net/neutron/+bug/1803494
> Could you please help review this RFE? I think this RFE can be added to the gap analysis. What’s more, there is a bug
> and a RFE relating to l2pop and neutron-dynamic-routing is being written and is expected to be released next week.
>
> Best Regards,
> Xu, Chenjie
>
> From: Miguel Lavalle [mailto:miguel at mlavalle.com]
> Sent: Thursday, November 22, 2018 2:12 AM
> To: openstack-discuss at lists.openstack.org; OpenStack Development Mailing List
> Subject: [openstack-dev] StarlingX gap analysis to converge with OpenStack master
>
> Hi Stackers,
>
> One of the key goals of StarlingX during the current cycle is to converge with the OpenStack projects master branches.
> In order to accomplish this goal, the Technical Steering Committee put together a gap analysis that shows the specs
> and patches that need to merge in the different OpenStack projects by the end of Stein. The attached PDF document
> shows this analysis.
Although other projects are involved, most of the work has to be done in Nova, Neutron and > Horizon. Hopefully all the involved projects will help StarlingX achieve this important goal. > > It has to be noted that work is still on-going to refine this gap analysis, so there might be some updates to it in > the near future. > > Best regards > > Miguel From zhu.bingbing at 99cloud.net Sun Nov 25 04:06:38 2018 From: zhu.bingbing at 99cloud.net (zhubingbing) Date: Sun, 25 Nov 2018 12:06:38 +0800 (GMT+08:00) Subject: [openstack-dev] =?utf-8?q?=5Bkolla=5D_Berlin_summit_resume?= In-Reply-To: Message-ID: thanks Eduardo Gonzalez From: Eduardo Gonzalez Date: 2018-11-22 01:07:40 To: "OpenStack Development Mailing List (not for usage questions)" ,OpenStack Operators ,openstack at lists.openstack.org Subject: [openstack-dev] [kolla] Berlin summit resume Hi kollagues, During the Berlin Summit kolla team had a few talks and forum discussions, as well as other cross-project related topics [0] First session was ``Kolla project onboarding``, the room was full of people interested in contribute to kolla, many of them already using kolla in production environments whiling to make upstream some work they've done downstream. I can say this talk was a total success and we hope to see many new faces during this release putting features and bug fixes into kolla . Slides of the session at [1] Second session was ``Kolla project update``, was a brief resume of what work has been done during rocky release and some items will be implemented in the future. Number of attendees to this session was massive, no more people could enter the room. Slides at [2] Then forum sessions.. First one was ``Kolla user feedback``, many users came over the room. We've notice a big increase in production deployments and some PoC migrating to production soon, many of those environments are huge. Overall the impressions was that kolla is great and don't have any big issue or requirement, ``it works great`` became a common phrase to listen. Here's a resume of the user feedback needs [3] - Improve operational usage for add, remove , change and stop/start nodes and services. - Database backup and recovery - Lack of documentation is the bigger request, users need to read the code to know how to configure other than core/default services - Multi cells_v2 - New services request, cyborg, masakari and tricircle were the most requested - SElinux enabled - More SDN services such as Contrail and calico - Possibility to include user's ansible tasks during deploy as well as support custom config.json - HTTPS for internal networks Second one was about ``kolla for the edge``, we've meet with Edge computing group and others interested in edge deployments to identify what's missing in kolla and where we can help. Things we've identified are: - Kolla seems good at how the service split can be done, tweaking inventory file and config values can deploy independent environments easily. - Missing keystone federation - Glance cache support is not a hard requirement but improves efficiency (already merged) - Multi cells v2 - Multi storage per edge/far-edge - A documentation or architecture reference would be nice to have. Last one was ``kolla for NFV` `, few people came over to discuss about NUMA, GPU, SRIOV. Nothing noticiable from this session, mainly was support DPDK for CentOS/RHEL,OracleLinux and few service addition covered by previous discussions. 
[0] https://etherpad.openstack .org/p/kolla-stein-summit [1] https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 [2] https://es.slideshare.net /EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge [5] https ://etherpad.openstack.org/p/berlin-2018-kolla-nfv -------------- next part -------------- An HTML attachment was scrubbed... URL: From naichuan.sun at citrix.com Mon Nov 26 10:45:57 2018 From: naichuan.sun at citrix.com (Naichuan Sun) Date: Mon, 26 Nov 2018 10:45:57 +0000 Subject: [openstack-dev] [nova][placement] Please help to review XenServer vgpu related patches Message-ID: Hi, Sylvain, Jay, Eric and Matt, I saw the n-rp and reshaper patches in upstream have almost finished. Could you help to review XenServer vGPU related patches when you have the time? https://review.openstack.org/#/c/520313/ https://review.openstack.org/#/c/521041/ https://review.openstack.org/#/c/521717/ Thank you very much. BR. Naichuan Sun From akamyshnikova at mirantis.com Mon Nov 26 13:41:28 2018 From: akamyshnikova at mirantis.com (Anna Taraday) Date: Mon, 26 Nov 2018 17:41:28 +0400 Subject: [openstack-dev] [Octavia] Multinode setup Message-ID: Hello everyone! I'm looking into how to run Octavia services (controller worker, housekeeper, health manager) on several network nodes and get confused with setup guide [1]. Is there any special config option for such case? (controller_ip_port_list probably) What manuals/docs/examples do we have except [2] ? [1] - https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html [2] - https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf -- Regards, Ann Taraday -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Mon Nov 26 13:56:16 2018 From: saphi070 at gmail.com (Sa Pham) Date: Mon, 26 Nov 2018 20:56:16 +0700 Subject: [openstack-dev] [Octavia] Multinode setup In-Reply-To: References: Message-ID: <7D8C6984-1733-4AAD-B8F0-8A3FEF122E32@gmail.com> Hi, The controller_ip_port_list option is list of all IP of node which is deployed octavia-health-manager. > On Nov 26, 2018, at 8:41 PM, Anna Taraday wrote: > > Hello everyone! > > I'm looking into how to run Octavia services (controller worker, housekeeper, health manager) on several network nodes and get confused with setup guide [1]. > Is there any special config option for such case? (controller_ip_port_list probably) > What manuals/docs/examples do we have except [2] ? > > [1] - https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html > [2] -https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf -- > Regards, > Ann Taraday > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Sa Pham Dang Cloud RnD Team - VCCloud / VCCorp Phone: 0986.849.582 Skype: great_bn -------------- next part -------------- An HTML attachment was scrubbed... 
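As a rough illustration of that option (the addresses below are placeholder
example values, not ones from this thread, and 5555 is the default heartbeat
port), each controller node would carry the full list of health manager
endpoints in octavia.conf:

    [health_manager]
    # One entry per health manager, reachable from the amphorae over the
    # lb-mgmt-net; the same list is configured on every controller node.
    controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555
    # Local listener for this particular node's health manager process.
    bind_ip = 192.0.2.10
    bind_port = 5555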
URL: From bdobreli at redhat.com Mon Nov 26 14:12:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 26 Nov 2018 15:12:55 +0100 Subject: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage Message-ID: <118afcca-b6a4-4529-3286-11dfefda68b4@redhat.com> Here is a related bug [1] and implementation [1] for that. PTAL folks! [0] https://bugs.launchpad.net/tripleo/+bug/1804822 [1] https://review.openstack.org/#/q/topic:base-container-reduction > Let's also think of removing puppet-tripleo from the base container. > It really brings the world-in (and yum updates in CI!) each job and each > container! > So if we did so, we should then either install puppet-tripleo and co on > the host and bind-mount it for the docker-puppet deployment task steps > (bad idea IMO), OR use the magical --volumes-from > option to mount volumes from some "puppet-config" sidecar container > inside each of the containers being launched by docker-puppet tooling. On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås wrote: > We add this to all images: > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python > socat sudo which openstack-tripleo-common-container-base rsync cronie > crudini openstack-selinux ansible python-shade puppet-tripleo python2- > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > Is the additional 276 MB reasonable here? > openstack-selinux <- This package run relabling, does that kind of > touching the filesystem impact the size due to docker layers? > > Also: python2-kubernetes is a fairly large package (18007990) do we use > that in every image? I don't see any tripleo related repos importing > from that when searching on Hound? The original commit message[1] > adding it states it is for future convenience. > > On my undercloud we have 101 images, if we are downloading every 18 MB > per image thats almost 1.8 GB for a package we don't use? (I hope it's > not like this? With docker layers, we only download that 276 MB > transaction once? Or?) > > > [1] https://review.openstack.org/527927 -- Best regards, Bogdan Dobrelya, Irc #bogdando From rosmaita.fossdev at gmail.com Mon Nov 26 14:21:37 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Nov 2018 09:21:37 -0500 Subject: [openstack-dev] [glance] about use shared image with each other In-Reply-To: References: <6d53c9e9-98f6-d594-bbfb-837c74d5fbae@gmail.com> Message-ID: On 11/21/18 7:16 AM, Rambo wrote: > yes, but I also have a question, Do we have the quota limit for requests > to share the image to each other? For example, someone shares the image > with me without stop, how do we deal with it? Given that the producer-consumer notifications are not handled by Glance, this is not a problem. (Or, to be more precise, not a problem for Glance.) A producer can share an image with you multiple times, but since the producer cannot change your member-status, it will remain in 'pending' (or 'rejected' if you've already rejected it). So there is no quota necessary for this operation. 
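For illustration, a minimal consumer-side sketch of what "accepting" a shared
image looks like against the Images API v2 described in the api-ref; the
endpoint, token and IDs below are placeholders, not values from this thread:

    import requests

    # Placeholders - these are deployment-specific.
    GLANCE = "http://controller:9292"
    TOKEN = "<keystone-token>"
    IMAGE_ID = "<uuid-of-the-shared-image>"
    MY_PROJECT_ID = "<consumer-project-id>"

    # The consumer updates its own member record; the producer cannot do
    # this on the consumer's behalf, so repeated sharing leaves the status
    # in 'pending' (or 'rejected') until the consumer acts.
    resp = requests.put(
        "%s/v2/images/%s/members/%s" % (GLANCE, IMAGE_ID, MY_PROJECT_ID),
        headers={"X-Auth-Token": TOKEN,
                 "Content-Type": "application/json"},
        json={"status": "accepted"})  # or "rejected"
    print(resp.status_code, resp.json())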
>   > ------------------ Original ------------------ > *From: * "Brian Rosmaita"; > *Date: * Mon, Nov 19, 2018 10:26 PM > *To: * "OpenStack Developmen"; > *Subject: * Re: [openstack-dev] [glance] about use shared image with > each other >   > On 11/19/18 7:58 AM, Rambo wrote: >> Hi,all >> >>      Recently, I want to use the shared image with each other.I find it >> isn't convenient that the producer notifies the consumer via email which >> the image has been shared and what its UUID is. In other words, why the >> image api v2 is no provision for producer-consumer communication? > > The design goal for Image API v2 image sharing was to provide an > infrastructure for an "image marketplace" in an OpenStack cloud by (a) > making it easy for cloud end users to share images, and (b) making it > easy for end users not to be spammed by other end users taking advantage > of (a).  When v2 image sharing was introduced in the Grizzly release, we > did not want to dictate how producer-consumer communication would work > (because we had no idea how it would develop), so we left it up to > operators and end users to figure this out. > > The advantage of email communication is that client side message > filtering is available for whatever client a particular cloud end-user > employs, and presumably that end-user knows how to manipulate the > filters without learning some new scheme (or, if the end-user doesn't > know, learning how to filter messages will apply beyond just image > sharing, which is a plus). > > Also, email communication is just one way to handle producer-consumer > communication.  Some operators have adjusted their web interfaces so > that when an end-user looks at the list of images available, a > notification pops up if the end-user has any images that have been > shared with them and are still in "pending" status.  There are various > other creative things you can do using the normal API calls with regular > user credentials. > > In brief, we figured that if an image marketplace evolved in a > particular cloud, producers and consumers would forge their own > relationships in whatever way made the most sense for their particular > use cases.  So we left producer-consumer communication out-of-band. > >>       To make it is more convenient,  if we can add a task to change the >> member_status from "pending" to "accepted" when we share the image with >> each other. It is similar to the resize_confirm in Nova, we can control >> the time interval in config. > > You could do this, but that would defeat the entire purpose of the > member statuses implementation, and hence I do not recommend it.  See > OSSN-0005 [1] for more about this issue. > > Additionally, since the Ocata release, "community" images have been > available.  These do not have to be accepted by an end user (but they > also don't show up in the default image-list response).  Who can > "communitize" an image is governed by policy. > > See [2] for a discussion of the various types of image sharing currently > available in the Image API v2.  The Image Service API v2 api-ref [3] > contains a brief discussion of image visibility and image sharing that > may also be useful.  Finally, the Glance Ocata release notes [4] have an > extensive discussion of image visibility. > >>        Can you tell me more about this?Thank you very much! > > The original design page on the wiki [5] has a list of 14 use cases we > wanted to address; looking through those will give you a better idea of > why we made the design choices we did. 
> > Hope this helps! > > cheers, > brian > > [0] > http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html > [1] https://wiki.openstack.org/wiki/OSSN/1226078 > [2] > http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html > [3] https://developer.openstack.org/api-ref/image/v2/ > [4] https://docs.openstack.org/releasenotes/glance/ocata.html > [5] https://wiki.openstack.org/wiki/Glance-api-v2-image-sharing > > >> >> Best Regards >> Rambo >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Mon Nov 26 18:28:34 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 26 Nov 2018 10:28:34 -0800 Subject: [openstack-dev] [Octavia] Multinode setup In-Reply-To: <7D8C6984-1733-4AAD-B8F0-8A3FEF122E32@gmail.com> References: <7D8C6984-1733-4AAD-B8F0-8A3FEF122E32@gmail.com> Message-ID: At the moment that is all we have for a setup guide. That said, all of the Octavia controller processes are fully HA capable. The one setting I can think of is the controller_ip_port_list setting mentioned above. It will need to contain an entry for each health manager IP/port as Sa Pham mentioned. You will also want to load balance connections across your API instances. Load balancing for the other processes is built into the design and does not require any additional load balancing. Michael On Mon, Nov 26, 2018 at 5:59 AM Sa Pham wrote: > > Hi, > > The controller_ip_port_list option is list of all IP of node which is deployed octavia-health-manager. > > > > On Nov 26, 2018, at 8:41 PM, Anna Taraday wrote: > > Hello everyone! > > I'm looking into how to run Octavia services (controller worker, housekeeper, health manager) on several network nodes and get confused with setup guide [1]. > Is there any special config option for such case? (controller_ip_port_list probably) > What manuals/docs/examples do we have except [2] ? 
> > [1] - https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html > [2] -https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf > -- > Regards, > Ann Taraday > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Sa Pham Dang > Cloud RnD Team - VCCloud / VCCorp > Phone: 0986.849.582 > Skype: great_bn > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sundar.nadathur at intel.com Mon Nov 26 23:08:19 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 26 Nov 2018 15:08:19 -0800 Subject: [openstack-dev] [cyborg] New time for Cyborg weekly IRC meetings Message-ID: <80cd0921-8472-d4b7-252c-9cdc85a703ae@intel.com> Hi,      The current time for the weekly Cyborg IRC meeting is 1400 UTC, which is 6 am Pacific and 10pm China time. That is a bad time for most people in the call. Please vote in this doodle for what time you prefer. If you need more options, please respond in this thread. [1] https://doodle.com/poll/eqy3hp8hfqtf2qyn Thanks & Regards, Sundar From bdobreli at redhat.com Tue Nov 27 15:24:44 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 27 Nov 2018 16:24:44 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes Message-ID: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> Changing the topic to follow the subject. [tl;dr] it's time to rearchitect container images to stop incluiding config-time only (puppet et al) bits, which are not needed runtime and pose security issues, like CVEs, to maintain daily. Background: 1) For the Distributed Compute Node edge case, there is potentially tens of thousands of a single-compute-node remote edge sites connected over WAN to a single control plane, which is having high latency, like a 100ms or so, and limited bandwith. Reducing the base layer size becomes a decent goal there. See the security background below. 2) For a generic security (Day 2, maintenance) case, when puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to be updated and all layers on top - to be rebuild, and all of those layers, to be re-fetched for cloud hosts and all containers to be restarted... And all of that because of some fixes that have nothing to OpenStack. By the remote edge sites as well, remember of "tens of thousands", high latency and limited bandwith?.. 3) TripleO CI updates (including puppet*) packages in containers, not in a common base layer of those. So each a CI job has to update puppet* and its dependencies - ruby/systemd as well. Reducing numbers of packages to update for each container makes sense for CI as well. Implementation related: WIP patches [0],[1] for early review, uses a config "pod" approach, does not require to maintain a two sets of config vs runtime images. Future work: a) cronie requires systemd, we'd want to fix that also off the base layer. 
b) rework to podman pods for docker-puppet.py instead of --volumes-from a side car container (can't be backported for Queens then, which is still nice to have a support for the Edge DCN case, at least downstream only perhaps). Some questions raised on IRC: Q: is having a service be able to configure itself really need to involve a separate pod? A: Highly likely yes, removing not-runtime things is a good idea and pods is an established PaaS paradigm already. That will require some changes in the architecture though (see the topic with WIP patches). Q: that's (fetching a config container) actually more data that about to download otherwise A: It's not, if thinking of Day 2, when have to re-fetch the base layer and top layers, when some unrelated to openstack CVEs got fixed there for ruby/puppet/systemd. Avoid the need to restart service containers because of those minor updates puched is also a nice thing. Q: the best solution here would be using packages on the host, generating the config files on the host. And then having an all-in-one container for all the services which lets them run in an isolated mannner. A: I think for Edge cases, that's a no go as we might want to consider tiny low footprint OS distros like former known Container Linux or Atomic. Also, an all-in-one container looks like an anti-pattern from the world of VMs. [0] https://review.openstack.org/#/q/topic:base-container-reduction [1] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > Here is a related bug [1] and implementation [1] for that. PTAL folks! > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > [1] https://review.openstack.org/#/q/topic:base-container-reduction > >> Let's also think of removing puppet-tripleo from the base container. >> It really brings the world-in (and yum updates in CI!) each job and each >> container! >> So if we did so, we should then either install puppet-tripleo and co on >> the host and bind-mount it for the docker-puppet deployment task steps >> (bad idea IMO), OR use the magical --volumes-from >> option to mount volumes from some "puppet-config" sidecar container >> inside each of the containers being launched by docker-puppet tooling. > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > wrote: >> We add this to all images: >> >> https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 >> >> /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python >> socat sudo which openstack-tripleo-common-container-base rsync cronie >> crudini openstack-selinux ansible python-shade puppet-tripleo python2- >> kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB >> >> Is the additional 276 MB reasonable here? >> openstack-selinux <- This package run relabling, does that kind of >> touching the filesystem impact the size due to docker layers? >> >> Also: python2-kubernetes is a fairly large package (18007990) do we use >> that in every image? I don't see any tripleo related repos importing >> from that when searching on Hound? The original commit message[1] >> adding it states it is for future convenience. >> >> On my undercloud we have 101 images, if we are downloading every 18 MB >> per image thats almost 1.8 GB for a package we don't use? (I hope it's >> not like this? With docker layers, we only download that 276 MB >> transaction once? Or?) 
>> >> >> [1] https://review.openstack.org/527927 > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando -- Best regards, Bogdan Dobrelya, Irc #bogdando From marios at redhat.com Tue Nov 27 15:44:57 2018 From: marios at redhat.com (Marios Andreou) Date: Tue, 27 Nov 2018 17:44:57 +0200 Subject: [openstack-dev] [tripleo] Let's improve upstream docs Message-ID: Hi folks, as just mentioned in the tripleo weekly irc meeting [1] some of us are trying to make small weekly improvements to the tripleo docs [2]. We are using this bug [3] for tracking and this effort is a result of some feedback during the recent Berlin summit. The general idea is 1 per week (or more if you can and want) - improvement/removal of stale content/identifying missing sections, or anything else you might care to propose. Please join us if you can, just add "Related-Bug: #1804642" to your commit message thanks [1] https://wiki.openstack.org/wiki/Meetings/TripleO [2] https://docs.openstack.org/tripleo-docs/latest/ [3] https://bugs.launchpad.net/tripleo/+bug/1804642 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Tue Nov 27 18:10:50 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 27 Nov 2018 13:10:50 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> References: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> Message-ID: <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote: > Changing the topic to follow the subject. > > [tl;dr] it's time to rearchitect container images to stop incluiding > config-time only (puppet et al) bits, which are not needed runtime > and > pose security issues, like CVEs, to maintain daily. I think your assertion that we need to rearchitect the config images to container the puppet bits is incorrect here. After reviewing the patches you linked to below it appears that you are proposing we use --volumes-from to bind mount application binaries from one container into another. I don't believe this is a good pattern for containers. On baremetal if we followed the same pattern it would be like using an /nfs share to obtain access to binaries across the network to optimize local storage. Now... some people do this (like maybe high performance computing would launch an MPI job like this) but I don't think we should consider it best practice for our containers in TripleO. Each container should container its own binaries and libraries as much as possible. And while I do think we should be using --volumes-from more often in TripleO it would be for sharing *data* between containers, not binaries. > > Background: > 1) For the Distributed Compute Node edge case, there is potentially > tens > of thousands of a single-compute-node remote edge sites connected > over > WAN to a single control plane, which is having high latency, like a > 100ms or so, and limited bandwith. Reducing the base layer size > becomes > a decent goal there. See the security background below. The reason we put Puppet into the base layer was in fact to prevent it from being downloaded multiple times. If we were to re-architect the image layers such that the child layers all contained their own copies of Puppet for example there would actually be a net increase in bandwidth and disk usage. 
So I would argue we are already addressing the goal of optimizing network and disk space.

Moving it out of the base layer so that you can patch it more often without disrupting other services is a valid concern. But addressing this concern while also preserving our definition of a container (see above, a container should contain all of its binaries) is going to cost you something, namely disk and network space, because Puppet would need to be duplicated in each child container.

As Puppet is used to configure a majority of the services in TripleO having it in the base container makes most sense. And yes, if there are security patches for Puppet/Ruby those might result in a bunch of containers getting pushed. But let Docker layers take care of this I think... Don't try to solve things by constructing your own custom mounts and volumes to work around the issue.

> 2) For a generic security (Day 2, maintenance) case, when
> puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to be
> updated and all layers on top - to be rebuild, and all of those layers,
> to be re-fetched for cloud hosts and all containers to be restarted...
> And all of that because of some fixes that have nothing to OpenStack. By
> the remote edge sites as well, remember of "tens of thousands", high
> latency and limited bandwith?..
> 3) TripleO CI updates (including puppet*) packages in containers, not in
> a common base layer of those. So each a CI job has to update puppet* and
> its dependencies - ruby/systemd as well. Reducing numbers of packages to
> update for each container makes sense for CI as well.
>
> Implementation related:
>
> WIP patches [0],[1] for early review, uses a config "pod" approach, does
> not require to maintain a two sets of config vs runtime images. Future
> work: a) cronie requires systemd, we'd want to fix that also off the
> base layer. b) rework to podman pods for docker-puppet.py instead of
> --volumes-from a side car container (can't be backported for Queens
> then, which is still nice to have a support for the Edge DCN case, at
> least downstream only perhaps).
>
> Some questions raised on IRC:
>
> Q: is having a service be able to configure itself really need to
> involve a separate pod?
> A: Highly likely yes, removing not-runtime things is a good idea and
> pods is an established PaaS paradigm already. That will require some
> changes in the architecture though (see the topic with WIP patches).

I'm a little confused on this one. Are you suggesting that we have 2 containers for each service? One with Puppet and one without?

That is certainly possible, but to pull it off would likely require you to have things built like this:

|base container| --> |service container| --> |service container w/ Puppet installed|

The end result would be Puppet being duplicated in a layer for each service's "config image". Very inefficient.

Again, I'm answering this assuming we aren't violating our container constraints and best practices where each container has the binaries it needs to do its own configuration.

> Q: that's (fetching a config container) actually more data that about to
> download otherwise
> A: It's not, if thinking of Day 2, when have to re-fetch the base layer
> and top layers, when some unrelated to openstack CVEs got fixed there
> for ruby/puppet/systemd. Avoid the need to restart service containers
> because of those minor updates puched is also a nice thing.

Puppet is used only for configuration in TripleO.
While security issues do need to be addressed at any layer I'm not sure there would be an urgency to re-deploy your cluster simply for a Puppet security fix alone. Smart change management would help eliminate blindly deploying new containers in the case where they provide very little security benefit. I think the focus on Puppet, and Ruby here is perhaps a bad example as they are config time only. Rather than just think about them we should also consider the rest of the things in our base container images as well. This is always going to be a "balancing act". There are pros and cons of having things in the base layer vs. the child/leaf layers. > > Q: the best solution here would be using packages on the host, > generating the config files on the host. And then having an all-in- > one > container for all the services which lets them run in an isolated > mannner. > A: I think for Edge cases, that's a no go as we might want to > consider > tiny low footprint OS distros like former known Container Linux or > Atomic. Also, an all-in-one container looks like an anti-pattern > from > the world of VMs. This was suggested on IRC because it likely gives you the smallest network/storage footprint for each edge node. The container would get used for everything: running all the services, and configuring all the services. Sort of a golden image approach. It may be an anti-pattern but initially I thought you were looking to optimize these things. I think a better solution might be to have container registries, or container mirrors (reverse proxies or whatever) that allow you to cache things as you deploy to the edge and thus optimize the network traffic. > > [0] https://review.openstack.org/#/q/topic:base-container-reduction > [1] > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > Here is a related bug [1] and implementation [1] for that. PTAL > > folks! > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > [1] https://review.openstack.org/#/q/topic:base-container-reduction > > > > > Let's also think of removing puppet-tripleo from the base > > > container. > > > It really brings the world-in (and yum updates in CI!) each job > > > and each > > > container! > > > So if we did so, we should then either install puppet-tripleo and > > > co on > > > the host and bind-mount it for the docker-puppet deployment task > > > steps > > > (bad idea IMO), OR use the magical --volumes-from > > container> > > > option to mount volumes from some "puppet-config" sidecar > > > container > > > inside each of the containers being launched by docker-puppet > > > tooling. > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > redhat.com> > > wrote: > > > We add this to all images: > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > python > > > socat sudo which openstack-tripleo-common-container-base rsync > > > cronie > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > python2- > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > Is the additional 276 MB reasonable here? > > > openstack-selinux <- This package run relabling, does that kind > > > of > > > touching the filesystem impact the size due to docker layers? > > > > > > Also: python2-kubernetes is a fairly large package (18007990) do > > > we use > > > that in every image? 
I don't see any tripleo related repos > > > importing > > > from that when searching on Hound? The original commit message[1] > > > adding it states it is for future convenience. > > > > > > On my undercloud we have 101 images, if we are downloading every > > > 18 MB > > > per image thats almost 1.8 GB for a package we don't use? (I hope > > > it's > > > not like this? With docker layers, we only download that 276 MB > > > transaction once? Or?) > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > From fungi at yuggoth.org Tue Nov 27 18:23:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Nov 2018 18:23:22 +0000 Subject: [openstack-dev] IMPORTANT: We're combining the lists! In-Reply-To: <20181119000352.lgrg57kyjylbrmx6@yuggoth.org> References: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> <20181119000352.lgrg57kyjylbrmx6@yuggoth.org> Message-ID: <20181127182322.jznby55ktuafcwet@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this was sent) are being replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list[0] has been open for posts from subscribers since Monday November 19, and the old lists will be configured to no longer accept posts starting on Monday December 3. In the interim, posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them now and not miss any messages. See my previous notice[1] for details. As of the time of this announcement, we have 403 subscribers on openstack-discuss with six days to go before the old lists are closed down for good). I have updated the old list descriptions to indicate the openstack-discuss list is preferred, and added a custom "welcome message" with the same for anyone who subscribes to them over the next week. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From murali.annamneni at oracle.com Tue Nov 27 18:39:10 2018 From: murali.annamneni at oracle.com (Murali Annamneni) Date: Tue, 27 Nov 2018 18:39:10 +0000 Subject: [openstack-dev] [pacemaker] Live migration with VirtualDomain resource-agent Message-ID: <8237a933-eb9b-d193-948f-eb28fcd332b1@oracle.com> Hi, I'm trying to setup pacemaker for live-migration when a compute node hosting the VM goes down. So, I created cluster resource for the compute-instance (demo1) with 'VirtualDomain' resource agent.      >>> pcs resource create demo1 VirtualDomain migrateuri="qemu+tcp://:16509/system" config="/etc/pacemaker/demo1.xml" migration_transport="tcp"  meta allow-migrate="true" op start timeout="120s" op stop timeout="180s" And I manually shutdown the compute node hosting the vm, but, I don't see the migration is happening. Can someone please help me what I am missing here ? Thanks & Regards, Murali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Kevin.Fox at pnnl.gov Wed Nov 28 00:31:24 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 28 Nov 2018 00:31:24 +0000 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> References: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com>, <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C248BF7@EX10MBOX03.pnnl.gov> The pod concept allows you to have one tool per container do one thing and do it well. You can have a container for generating config, and another container for consuming it. In a Kubernetes pod, if you still wanted to do puppet, you could have a pod that: 1. had an init container that ran puppet and dumped the resulting config to an emptyDir volume. 2. had your main container pull its config from the emptyDir volume. Then each container would have no dependency on each other. In full blown Kubernetes cluster you might have puppet generate a configmap though and ship it to your main container directly. Thats another matter though. I think the example pod example above is still usable without k8s? Thanks, Kevin ________________________________________ From: Dan Prince [dprince at redhat.com] Sent: Tuesday, November 27, 2018 10:10 AM To: OpenStack Development Mailing List (not for usage questions); openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote: > Changing the topic to follow the subject. > > [tl;dr] it's time to rearchitect container images to stop incluiding > config-time only (puppet et al) bits, which are not needed runtime > and > pose security issues, like CVEs, to maintain daily. I think your assertion that we need to rearchitect the config images to container the puppet bits is incorrect here. After reviewing the patches you linked to below it appears that you are proposing we use --volumes-from to bind mount application binaries from one container into another. I don't believe this is a good pattern for containers. On baremetal if we followed the same pattern it would be like using an /nfs share to obtain access to binaries across the network to optimize local storage. Now... some people do this (like maybe high performance computing would launch an MPI job like this) but I don't think we should consider it best practice for our containers in TripleO. Each container should container its own binaries and libraries as much as possible. And while I do think we should be using --volumes-from more often in TripleO it would be for sharing *data* between containers, not binaries. > > Background: > 1) For the Distributed Compute Node edge case, there is potentially > tens > of thousands of a single-compute-node remote edge sites connected > over > WAN to a single control plane, which is having high latency, like a > 100ms or so, and limited bandwith. Reducing the base layer size > becomes > a decent goal there. See the security background below. The reason we put Puppet into the base layer was in fact to prevent it from being downloaded multiple times. If we were to re-architect the image layers such that the child layers all contained their own copies of Puppet for example there would actually be a net increase in bandwidth and disk usage. 
So I would argue we are already addressing the goal of optimizing network and disk space. Moving it out of the base layer so that you can patch it more often without disrupting other services is a valid concern. But addressing this concern while also preserving our definiation of a container (see above, a container should contain all of its binaries) is going to cost you something, namely disk and network space because Puppet would need to be duplicated in each child container. As Puppet is used to configure a majority of the services in TripleO having it in the base container makes most sense. And yes, if there are security patches for Puppet/Ruby those might result in a bunch of containers getting pushed. But let Docker layers take care of this I think... Don't try to solve things by constructing your own custom mounts and volumes to work around the issue. > 2) For a generic security (Day 2, maintenance) case, when > puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to > be > updated and all layers on top - to be rebuild, and all of those > layers, > to be re-fetched for cloud hosts and all containers to be > restarted... > And all of that because of some fixes that have nothing to OpenStack. > By > the remote edge sites as well, remember of "tens of thousands", high > latency and limited bandwith?.. > 3) TripleO CI updates (including puppet*) packages in containers, not > in > a common base layer of those. So each a CI job has to update puppet* > and > its dependencies - ruby/systemd as well. Reducing numbers of packages > to > update for each container makes sense for CI as well. > > Implementation related: > > WIP patches [0],[1] for early review, uses a config "pod" approach, > does > not require to maintain a two sets of config vs runtime images. > Future > work: a) cronie requires systemd, we'd want to fix that also off the > base layer. b) rework to podman pods for docker-puppet.py instead of > --volumes-from a side car container (can't be backported for Queens > then, which is still nice to have a support for the Edge DCN case, > at > least downstream only perhaps). > > Some questions raised on IRC: > > Q: is having a service be able to configure itself really need to > involve a separate pod? > A: Highly likely yes, removing not-runtime things is a good idea and > pods is an established PaaS paradigm already. That will require some > changes in the architecture though (see the topic with WIP patches). I'm a little confused on this one. Are you suggesting that we have 2 containers for each service? One with Puppet and one without? That is certainly possible, but to pull it off would likely require you to have things built like this: |base container| --> |service container| --> |service container w/ Puppet installed| The end result would be Puppet being duplicated in a layer for each services "config image". Very inefficient. Again, I'm ansering this assumping we aren't violating our container constraints and best practices where each container has the binaries its needs to do its own configuration. > > Q: that's (fetching a config container) actually more data that > about to > download otherwise > A: It's not, if thinking of Day 2, when have to re-fetch the base > layer > and top layers, when some unrelated to openstack CVEs got fixed > there > for ruby/puppet/systemd. Avoid the need to restart service > containers > because of those minor updates puched is also a nice thing. Puppet is used only for configuration in TripleO. 
While security issues do need to be addressed at any layer I'm not sure there would be an urgency to re-deploy your cluster simply for a Puppet security fix alone. Smart change management would help eliminate blindly deploying new containers in the case where they provide very little security benefit. I think the focus on Puppet, and Ruby here is perhaps a bad example as they are config time only. Rather than just think about them we should also consider the rest of the things in our base container images as well. This is always going to be a "balancing act". There are pros and cons of having things in the base layer vs. the child/leaf layers. > > Q: the best solution here would be using packages on the host, > generating the config files on the host. And then having an all-in- > one > container for all the services which lets them run in an isolated > mannner. > A: I think for Edge cases, that's a no go as we might want to > consider > tiny low footprint OS distros like former known Container Linux or > Atomic. Also, an all-in-one container looks like an anti-pattern > from > the world of VMs. This was suggested on IRC because it likely gives you the smallest network/storage footprint for each edge node. The container would get used for everything: running all the services, and configuring all the services. Sort of a golden image approach. It may be an anti-pattern but initially I thought you were looking to optimize these things. I think a better solution might be to have container registries, or container mirrors (reverse proxies or whatever) that allow you to cache things as you deploy to the edge and thus optimize the network traffic. > > [0] https://review.openstack.org/#/q/topic:base-container-reduction > [1] > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > Here is a related bug [1] and implementation [1] for that. PTAL > > folks! > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > [1] https://review.openstack.org/#/q/topic:base-container-reduction > > > > > Let's also think of removing puppet-tripleo from the base > > > container. > > > It really brings the world-in (and yum updates in CI!) each job > > > and each > > > container! > > > So if we did so, we should then either install puppet-tripleo and > > > co on > > > the host and bind-mount it for the docker-puppet deployment task > > > steps > > > (bad idea IMO), OR use the magical --volumes-from > > container> > > > option to mount volumes from some "puppet-config" sidecar > > > container > > > inside each of the containers being launched by docker-puppet > > > tooling. > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > redhat.com> > > wrote: > > > We add this to all images: > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > python > > > socat sudo which openstack-tripleo-common-container-base rsync > > > cronie > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > python2- > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > Is the additional 276 MB reasonable here? > > > openstack-selinux <- This package run relabling, does that kind > > > of > > > touching the filesystem impact the size due to docker layers? > > > > > > Also: python2-kubernetes is a fairly large package (18007990) do > > > we use > > > that in every image? 
I don't see any tripleo related repos > > > importing > > > from that when searching on Hound? The original commit message[1] > > > adding it states it is for future convenience. > > > > > > On my undercloud we have 101 images, if we are downloading every > > > 18 MB > > > per image thats almost 1.8 GB for a package we don't use? (I hope > > > it's > > > not like this? With docker layers, we only download that 276 MB > > > transaction once? Or?) > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From liliueecg at gmail.com Wed Nov 28 05:04:32 2018 From: liliueecg at gmail.com (Li Liu) Date: Tue, 27 Nov 2018 21:04:32 -0800 Subject: [openstack-dev] [Cyborg] IRC meeting still happens at usual time this Wednesday Message-ID: Hi Team, I know there is a polling on the new IRC meeting time, but before we gather more inputs, let's keep using the old time slot for this week (10PM Beijing Time/9AM EST/6AM PST) Agenda for this week's meeting: 1. Status update on NOVA interaction spec 2. Status update on DB scheme spec 3. Status tracking on all the open patches -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtomasek at redhat.com Wed Nov 28 10:11:12 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Wed, 28 Nov 2018 11:11:12 +0100 Subject: [openstack-dev] [tripleo] Workflows Squad changes Message-ID: Hi all, Recently, the workflows squad has been reorganized and people from the squad are joining different squads. I would like to discuss how we are going to adjust to this situation to make sure that tripleo-common development is not going to be blocked in terms of feature work and reviews. With this change, most of the tripleo-common maintenance work goes naturally to UI & Validations squad as CLI and GUI are the consumers of the API provided by tripleo-common. Adriano Petrich from workflows squad has joined UI squad to take on this work. As a possible solution, I would like to propose Adriano as a core reviewer to tripleo-common and adding tripleo-ui cores right to +2 tripleo-common patches. It would be great to hear opinions especially former members of Workflows squad and regular contributors to tripleo-common on these changes and in general on how to establish regular reviews and maintenance to ensure that tripleo-common codebase is moved towards converging the CLI and GUI deployment workflow. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Wed Nov 28 11:45:54 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 28 Nov 2018 12:45:54 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: Message-ID: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> To follow up and explain the patches for code review: The "header" patch https://review.openstack.org/620310 -> (requires) https://review.rdoproject.org/r/#/c/17534/, and also https://review.openstack.org/620061 -> (which in turn requires) https://review.openstack.org/619744 -> (Kolla change, the 1st to go) https://review.openstack.org/619736 Please also read the commit messages, I tried to explain all "Whys" very carefully. Just to sum up it here as well: The current self-containing (config and runtime bits) architecture of containers badly affects: * the size of the base layer and all containers images as an additional 300MB (adds an extra 30% of size). * Edge cases, where we have containers images to be distributed, at least once to hit local registries, over high-latency and limited bandwith, highly unreliable WAN connections. * numbers of packages to update in CI for all containers for all services (CI jobs do not rebuild containers so each container gets updated for those 300MB of extra size). * security and the surface of attacks, by introducing systemd et al as additional subjects for CVE fixes to maintain for all containers. * services uptime, by additional restarts of services related to security maintanence of irrelevant to openstack components sitting as a dead weight in containers images for ever. On 11/27/18 4:08 PM, Bogdan Dobrelya wrote: > Changing the topic to follow the subject. > > [tl;dr] it's time to rearchitect container images to stop incluiding > config-time only (puppet et al) bits, which are not needed runtime and > pose security issues, like CVEs, to maintain daily. > > Background: 1) For the Distributed Compute Node edge case, there is > potentially tens of thousands of a single-compute-node remote edge sites > connected over WAN to a single control plane, which is having high > latency, like a 100ms or so, and limited bandwith. > 2) For a generic security case, > 3) TripleO CI updates all > > Challenge: > >> Here is a related bug [1] and implementation [1] for that. PTAL folks! >> >> [0] https://bugs.launchpad.net/tripleo/+bug/1804822 >> [1] https://review.openstack.org/#/q/topic:base-container-reduction >> >>> Let's also think of removing puppet-tripleo from the base container. >>> It really brings the world-in (and yum updates in CI!) each job and >>> each container! >>> So if we did so, we should then either install puppet-tripleo and co >>> on the host and bind-mount it for the docker-puppet deployment task >>> steps (bad idea IMO), OR use the magical --volumes-from >>> option to mount volumes from some >>> "puppet-config" sidecar container inside each of the containers being >>> launched by docker-puppet tooling. 
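(For illustration, the --volumes-from sidecar idea quoted above boils down to something like the following sketch — image names and paths are hypothetical, the real changes are in the review topic linked above:

   # a config-only image that carries puppet-tripleo and friends, declaring
   # the module path as a volume (e.g. VOLUME /usr/share/openstack-puppet/modules
   # in its Dockerfile), created once per host:
   $ docker create --name puppet-config-sidecar <registry>/puppet-config:latest /bin/true
   # each config-generation container launched by the docker-puppet tooling
   # then borrows those volumes instead of shipping the bits in its own layers:
   $ docker run --rm --volumes-from puppet-config-sidecar \
         <registry>/centos-binary-nova-api:latest \
         puppet apply --modulepath /usr/share/openstack-puppet/modules ...

Whether sharing bits between containers this way is a good idea at all is exactly what the rest of the thread argues about.)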
>> >> On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås >> wrote: >>> We add this to all images: >>> >>> https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 >>> >>> >>> /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python >>> socat sudo which openstack-tripleo-common-container-base rsync cronie >>> crudini openstack-selinux ansible python-shade puppet-tripleo python2- >>> kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB >>> Is the additional 276 MB reasonable here? >>> openstack-selinux <- This package run relabling, does that kind of >>> touching the filesystem impact the size due to docker layers? >>> >>> Also: python2-kubernetes is a fairly large package (18007990) do we use >>> that in every image? I don't see any tripleo related repos importing >>> from that when searching on Hound? The original commit message[1] >>> adding it states it is for future convenience. >>> >>> On my undercloud we have 101 images, if we are downloading every 18 MB >>> per image thats almost 1.8 GB for a package we don't use? (I hope it's >>> not like this? With docker layers, we only download that 276 MB >>> transaction once? Or?) >>> >>> >>> [1] https://review.openstack.org/527927 >> >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From dprince at redhat.com Wed Nov 28 13:31:52 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 28 Nov 2018 08:31:52 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C248BF7@EX10MBOX03.pnnl.gov> References: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> , <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C248BF7@EX10MBOX03.pnnl.gov> Message-ID: <7364ab27a3f50e40a44c2af4305bde97816944fd.camel@redhat.com> On Wed, 2018-11-28 at 00:31 +0000, Fox, Kevin M wrote: > The pod concept allows you to have one tool per container do one > thing and do it well. > > You can have a container for generating config, and another container > for consuming it. > > In a Kubernetes pod, if you still wanted to do puppet, > you could have a pod that: > 1. had an init container that ran puppet and dumped the resulting > config to an emptyDir volume. > 2. had your main container pull its config from the emptyDir volume. We have basically implemented the same workflow in TripleO today. First we execute Puppet in an "init container" (really just an ephemeral container that generates the config files and then goes away). Then we bind mount those configs into the service container. One improvement we could make (which we aren't doing yet) is to use a data container/volume to store the config files instead of using the host. Sharing *data* within a 'pod' (set of containers, etc.) is certainly a valid use of container volumes. None of this is what we are really talking about in this thread though. Most of the suggestions and patches are about making our base container(s) smaller in size. And the means by which the patches do that is to share binaries/applications across containers with custom mounts/volumes. I don't think it is a good idea at all as it violates encapsulation of the containers in general, regardless of whether we use pods or not. Dan > > Then each container would have no dependency on each other. 
> > In full blown Kubernetes cluster you might have puppet generate a > configmap though and ship it to your main container directly. Thats > another matter though. I think the example pod example above is still > usable without k8s? > > Thanks, > Kevin > ________________________________________ > From: Dan Prince [dprince at redhat.com] > Sent: Tuesday, November 27, 2018 10:10 AM > To: OpenStack Development Mailing List (not for usage questions); > openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of > containers for security and size of images (maintenance) sakes > > On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote: > > Changing the topic to follow the subject. > > > > [tl;dr] it's time to rearchitect container images to stop > > incluiding > > config-time only (puppet et al) bits, which are not needed runtime > > and > > pose security issues, like CVEs, to maintain daily. > > I think your assertion that we need to rearchitect the config images > to > container the puppet bits is incorrect here. > > After reviewing the patches you linked to below it appears that you > are > proposing we use --volumes-from to bind mount application binaries > from > one container into another. I don't believe this is a good pattern > for > containers. On baremetal if we followed the same pattern it would be > like using an /nfs share to obtain access to binaries across the > network to optimize local storage. Now... some people do this (like > maybe high performance computing would launch an MPI job like this) > but > I don't think we should consider it best practice for our containers > in > TripleO. > > Each container should container its own binaries and libraries as > much > as possible. And while I do think we should be using --volumes-from > more often in TripleO it would be for sharing *data* between > containers, not binaries. > > > > Background: > > 1) For the Distributed Compute Node edge case, there is potentially > > tens > > of thousands of a single-compute-node remote edge sites connected > > over > > WAN to a single control plane, which is having high latency, like a > > 100ms or so, and limited bandwith. Reducing the base layer size > > becomes > > a decent goal there. See the security background below. > > The reason we put Puppet into the base layer was in fact to prevent > it > from being downloaded multiple times. If we were to re-architect the > image layers such that the child layers all contained their own > copies > of Puppet for example there would actually be a net increase in > bandwidth and disk usage. So I would argue we are already addressing > the goal of optimizing network and disk space. > > Moving it out of the base layer so that you can patch it more often > without disrupting other services is a valid concern. But addressing > this concern while also preserving our definiation of a container > (see > above, a container should contain all of its binaries) is going to > cost > you something, namely disk and network space because Puppet would > need > to be duplicated in each child container. > > As Puppet is used to configure a majority of the services in TripleO > having it in the base container makes most sense. And yes, if there > are > security patches for Puppet/Ruby those might result in a bunch of > containers getting pushed. But let Docker layers take care of this I > think... Don't try to solve things by constructing your own custom > mounts and volumes to work around the issue. 
> > > > 2) For a generic security (Day 2, maintenance) case, when > > puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to > > be > > updated and all layers on top - to be rebuild, and all of those > > layers, > > to be re-fetched for cloud hosts and all containers to be > > restarted... > > And all of that because of some fixes that have nothing to > > OpenStack. > > By > > the remote edge sites as well, remember of "tens of thousands", > > high > > latency and limited bandwith?.. > > 3) TripleO CI updates (including puppet*) packages in containers, > > not > > in > > a common base layer of those. So each a CI job has to update > > puppet* > > and > > its dependencies - ruby/systemd as well. Reducing numbers of > > packages > > to > > update for each container makes sense for CI as well. > > > > Implementation related: > > > > WIP patches [0],[1] for early review, uses a config "pod" approach, > > does > > not require to maintain a two sets of config vs runtime images. > > Future > > work: a) cronie requires systemd, we'd want to fix that also off > > the > > base layer. b) rework to podman pods for docker-puppet.py instead > > of > > --volumes-from a side car container (can't be backported for Queens > > then, which is still nice to have a support for the Edge DCN case, > > at > > least downstream only perhaps). > > > > Some questions raised on IRC: > > > > Q: is having a service be able to configure itself really need to > > involve a separate pod? > > A: Highly likely yes, removing not-runtime things is a good idea > > and > > pods is an established PaaS paradigm already. That will require > > some > > changes in the architecture though (see the topic with WIP > > patches). > > I'm a little confused on this one. Are you suggesting that we have 2 > containers for each service? One with Puppet and one without? > > That is certainly possible, but to pull it off would likely require > you > to have things built like this: > > |base container| --> |service container| --> |service container w/ > Puppet installed| > > The end result would be Puppet being duplicated in a layer for each > services "config image". Very inefficient. > > Again, I'm ansering this assumping we aren't violating our container > constraints and best practices where each container has the binaries > its needs to do its own configuration. > > > Q: that's (fetching a config container) actually more data that > > about to > > download otherwise > > A: It's not, if thinking of Day 2, when have to re-fetch the base > > layer > > and top layers, when some unrelated to openstack CVEs got fixed > > there > > for ruby/puppet/systemd. Avoid the need to restart service > > containers > > because of those minor updates puched is also a nice thing. > > Puppet is used only for configuration in TripleO. While security > issues > do need to be addressed at any layer I'm not sure there would be an > urgency to re-deploy your cluster simply for a Puppet security fix > alone. Smart change management would help eliminate blindly deploying > new containers in the case where they provide very little security > benefit. > > I think the focus on Puppet, and Ruby here is perhaps a bad example > as > they are config time only. Rather than just think about them we > should > also consider the rest of the things in our base container images as > well. This is always going to be a "balancing act". There are pros > and > cons of having things in the base layer vs. the child/leaf layers. 
> > > > Q: the best solution here would be using packages on the host, > > generating the config files on the host. And then having an all-in- > > one > > container for all the services which lets them run in an isolated > > mannner. > > A: I think for Edge cases, that's a no go as we might want to > > consider > > tiny low footprint OS distros like former known Container Linux or > > Atomic. Also, an all-in-one container looks like an anti-pattern > > from > > the world of VMs. > > This was suggested on IRC because it likely gives you the smallest > network/storage footprint for each edge node. The container would get > used for everything: running all the services, and configuring all > the > services. Sort of a golden image approach. It may be an anti-pattern > but initially I thought you were looking to optimize these things. > > I think a better solution might be to have container registries, or > container mirrors (reverse proxies or whatever) that allow you to > cache > things as you deploy to the edge and thus optimize the network > traffic. > > > > [0] https://review.openstack.org/#/q/topic:base-container-reduction > > [1] > > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > > > Here is a related bug [1] and implementation [1] for that. PTAL > > > folks! > > > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > > [1] > > > https://review.openstack.org/#/q/topic:base-container-reduction > > > > > > > Let's also think of removing puppet-tripleo from the base > > > > container. > > > > It really brings the world-in (and yum updates in CI!) each job > > > > and each > > > > container! > > > > So if we did so, we should then either install puppet-tripleo > > > > and > > > > co on > > > > the host and bind-mount it for the docker-puppet deployment > > > > task > > > > steps > > > > (bad idea IMO), OR use the magical --volumes-from > > > container> > > > > option to mount volumes from some "puppet-config" sidecar > > > > container > > > > inside each of the containers being launched by docker-puppet > > > > tooling. > > > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > > redhat.com> > > > wrote: > > > > We add this to all images: > > > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > > python > > > > socat sudo which openstack-tripleo-common-container-base rsync > > > > cronie > > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > > python2- > > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > > > Is the additional 276 MB reasonable here? > > > > openstack-selinux <- This package run relabling, does that kind > > > > of > > > > touching the filesystem impact the size due to docker layers? > > > > > > > > Also: python2-kubernetes is a fairly large package (18007990) > > > > do > > > > we use > > > > that in every image? I don't see any tripleo related repos > > > > importing > > > > from that when searching on Hound? The original commit > > > > message[1] > > > > adding it states it is for future convenience. > > > > > > > > On my undercloud we have 101 images, if we are downloading > > > > every > > > > 18 MB > > > > per image thats almost 1.8 GB for a package we don't use? (I > > > > hope > > > > it's > > > > not like this? With docker layers, we only download that 276 MB > > > > transaction once? Or?) 
> > > > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > -- > > > Best regards, > > > Bogdan Dobrelya, > > > Irc #bogdando > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Wed Nov 28 13:56:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 28 Nov 2018 14:56:55 +0100 Subject: [openstack-dev] [TripleO][Edge][Kolla] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> Message-ID: <9b2836a5-18e3-f94d-88e4-16e9f4762f71@redhat.com> Added Kolla tag as we all together might want to do something to that systemd included in containers via *multiple* package dependencies, like [0]. Ideally, that might be properly packaging all/some (like those names listed in [1]) of the places having it as a dependency, to stop doing that as of now it's Containers Time?.. As a temporary security band-aiding I was thinking of removing systemd via footers [1] as an extra layer added on top, but not sure that buys something good long-term. [0] https://pastebin.com/RSaRsYgZ [1] https://review.openstack.org/#/c/620310/2/container-images/tripleo_kolla_template_overrides.j2 at 680 On 11/28/18 12:45 PM, Bogdan Dobrelya wrote: > To follow up and explain the patches for code review: > > The "header" patch https://review.openstack.org/620310 -> (requires) > https://review.rdoproject.org/r/#/c/17534/, and also > https://review.openstack.org/620061 -> (which in turn requires) > https://review.openstack.org/619744 -> (Kolla change, the 1st to go) > https://review.openstack.org/619736 > > Please also read the commit messages, I tried to explain all "Whys" very > carefully. Just to sum up it here as well: > > The current self-containing (config and runtime bits) architecture of > containers badly affects: > > * the size of the base layer and all containers images as an >   additional 300MB (adds an extra 30% of size). > * Edge cases, where we have containers images to be distributed, at >   least once to hit local registries, over high-latency and limited >   bandwith, highly unreliable WAN connections. > * numbers of packages to update in CI for all containers for all >   services (CI jobs do not rebuild containers so each container gets >   updated for those 300MB of extra size). > * security and the surface of attacks, by introducing systemd et al as >   additional subjects for CVE fixes to maintain for all containers. > * services uptime, by additional restarts of services related to >   security maintanence of irrelevant to openstack components sitting >   as a dead weight in containers images for ever. > > On 11/27/18 4:08 PM, Bogdan Dobrelya wrote: >> Changing the topic to follow the subject. 
>> >> [tl;dr] it's time to rearchitect container images to stop incluiding >> config-time only (puppet et al) bits, which are not needed runtime and >> pose security issues, like CVEs, to maintain daily. >> >> Background: 1) For the Distributed Compute Node edge case, there is >> potentially tens of thousands of a single-compute-node remote edge >> sites connected over WAN to a single control plane, which is having >> high latency, like a 100ms or so, and limited bandwith. >> 2) For a generic security case, >> 3) TripleO CI updates all >> >> Challenge: >> >>> Here is a related bug [1] and implementation [1] for that. PTAL folks! >>> >>> [0] https://bugs.launchpad.net/tripleo/+bug/1804822 >>> [1] https://review.openstack.org/#/q/topic:base-container-reduction >>> >>>> Let's also think of removing puppet-tripleo from the base container. >>>> It really brings the world-in (and yum updates in CI!) each job and >>>> each container! >>>> So if we did so, we should then either install puppet-tripleo and co >>>> on the host and bind-mount it for the docker-puppet deployment task >>>> steps (bad idea IMO), OR use the magical --volumes-from >>>> option to mount volumes from some >>>> "puppet-config" sidecar container inside each of the containers >>>> being launched by docker-puppet tooling. >>> >>> On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås >> redhat.com> wrote: >>>> We add this to all images: >>>> >>>> https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 >>>> >>>> >>>> /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python >>>> socat sudo which openstack-tripleo-common-container-base rsync cronie >>>> crudini openstack-selinux ansible python-shade puppet-tripleo python2- >>>> kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB >>>> Is the additional 276 MB reasonable here? >>>> openstack-selinux <- This package run relabling, does that kind of >>>> touching the filesystem impact the size due to docker layers? >>>> >>>> Also: python2-kubernetes is a fairly large package (18007990) do we use >>>> that in every image? I don't see any tripleo related repos importing >>>> from that when searching on Hound? The original commit message[1] >>>> adding it states it is for future convenience. >>>> >>>> On my undercloud we have 101 images, if we are downloading every 18 MB >>>> per image thats almost 1.8 GB for a package we don't use? (I hope it's >>>> not like this? With docker layers, we only download that 276 MB >>>> transaction once? Or?) 
>>>> >>>> >>>> [1] https://review.openstack.org/527927 >>> >>> >>> >>> -- >>> Best regards, >>> Bogdan Dobrelya, >>> Irc #bogdando >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From dprince at redhat.com Wed Nov 28 13:58:45 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 28 Nov 2018 08:58:45 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> Message-ID: <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> On Wed, 2018-11-28 at 12:45 +0100, Bogdan Dobrelya wrote: > To follow up and explain the patches for code review: > > The "header" patch https://review.openstack.org/620310 -> (requires) > https://review.rdoproject.org/r/#/c/17534/, and also > https://review.openstack.org/620061 -> (which in turn requires) > https://review.openstack.org/619744 -> (Kolla change, the 1st to go) > https://review.openstack.org/619736 This email was cross-posted to multiple lists and I think we may have lost some of the context in the process as the subject was changed. Most of the suggestions and patches are about making our base container(s) smaller in size. And the means by which the patches do that is to share binaries/applications across containers with custom mounts/volumes. I've -2'd most of them. What concerns me however is that some of the TripleO cores seemed open to this idea yesterday on IRC. Perhaps I've misread things but what you appear to be doing here is quite drastic I think we need to consider any of this carefully before proceeding with any of it. > > Please also read the commit messages, I tried to explain all "Whys" > very > carefully. Just to sum up it here as well: > > The current self-containing (config and runtime bits) architecture > of > containers badly affects: > > * the size of the base layer and all containers images as an > additional 300MB (adds an extra 30% of size). You are accomplishing this by removing Puppet from the base container, but you are also creating another container in the process. This would still be required on all nodes as Puppet is our config tool. So you would still be downloading some of this data anyways. Understood your reasons for doing this are that it avoids rebuilding all containers when there is a change to any of these packages in the base container. What you are missing however is how often is it the case that Puppet is updated that something else in the base container isn't? I would wager that it is more rare than you'd think. Perhaps looking at the history of an OpenStack distribution would be a valid way to assess this more critically. Without this data to backup the numbers I'm afraid what you are doing here falls into "pre-optimization" territory for me and I don't think the means used in the patches warrent the benefits you mention here. > * Edge cases, where we have containers images to be distributed, at > least once to hit local registries, over high-latency and limited > bandwith, highly unreliable WAN connections. > * numbers of packages to update in CI for all containers for all > services (CI jobs do not rebuild containers so each container gets > updated for those 300MB of extra size). It would seem to me there are other ways to solve the CI containers update problems. Rebuilding the base layer more often would solve this right? 
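(Coming back to the systemd point at the top of this mail: a quick way to see which installed packages drag it into an image — the output will vary with the base image, and the thread above names cronie as one such offender:

   $ rpm -q --whatrequires systemd
   # lists installed packages that declare a dependency on systemd
   $ repoquery --requires --resolve cronie
   # shows what a single package pulls in, i.e. why it cannot simply be
   # dropped without a packaging change or an rpm --nodeps hack

Presumably that is the kind of dependency chain [0] shows.)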
If we always build our service containers off of a base layer that is recent there should be no updates to the system/puppet packages there in our CI pipelines. > * security and the surface of attacks, by introducing systemd et al > as > additional subjects for CVE fixes to maintain for all containers. We aren't actually using systemd within our containers. I think those packages are getting pulled in by an RPM dependency elsewhere. So rather than using 'rpm -ev --nodeps' to remove it we could create a sub-package for containers in those cases and install it instead. In short rather than hack this to remove them why not pursue a proper packaging fix? In general I am a fan of getting things out of the base container we don't need... so yeah lets do this. But lets do it properly. > * services uptime, by additional restarts of services related to > security maintanence of irrelevant to openstack components sitting > as a dead weight in containers images for ever. Like I said above how often is it that these packages actually change where something else in the base container doesn't? Perhaps we should get more data here before blindly implementing a solution we aren't sure really helps out in the real world. > > On 11/27/18 4:08 PM, Bogdan Dobrelya wrote: > > Changing the topic to follow the subject. > > > > [tl;dr] it's time to rearchitect container images to stop > > incluiding > > config-time only (puppet et al) bits, which are not needed runtime > > and > > pose security issues, like CVEs, to maintain daily. > > > > Background: 1) For the Distributed Compute Node edge case, there > > is > > potentially tens of thousands of a single-compute-node remote edge > > sites > > connected over WAN to a single control plane, which is having high > > latency, like a 100ms or so, and limited bandwith. > > 2) For a generic security case, > > 3) TripleO CI updates all > > > > Challenge: > > > > > Here is a related bug [1] and implementation [1] for that. PTAL > > > folks! > > > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > > [1] > > > https://review.openstack.org/#/q/topic:base-container-reduction > > > > > > > Let's also think of removing puppet-tripleo from the base > > > > container. > > > > It really brings the world-in (and yum updates in CI!) each job > > > > and > > > > each container! > > > > So if we did so, we should then either install puppet-tripleo > > > > and co > > > > on the host and bind-mount it for the docker-puppet deployment > > > > task > > > > steps (bad idea IMO), OR use the magical --volumes-from > > > > option to mount volumes from some > > > > "puppet-config" sidecar container inside each of the containers > > > > being > > > > launched by docker-puppet tooling. > > > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > > redhat.com> > > > wrote: > > > > We add this to all images: > > > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > > python > > > > socat sudo which openstack-tripleo-common-container-base rsync > > > > cronie > > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > > python2- > > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > Is the additional 276 MB reasonable here? 
> > > > openstack-selinux <- This package run relabling, does that kind > > > > of > > > > touching the filesystem impact the size due to docker layers? > > > > > > > > Also: python2-kubernetes is a fairly large package (18007990) > > > > do we use > > > > that in every image? I don't see any tripleo related repos > > > > importing > > > > from that when searching on Hound? The original commit > > > > message[1] > > > > adding it states it is for future convenience. > > > > > > > > On my undercloud we have 101 images, if we are downloading > > > > every 18 MB > > > > per image thats almost 1.8 GB for a package we don't use? (I > > > > hope it's > > > > not like this? With docker layers, we only download that 276 MB > > > > transaction once? Or?) > > > > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > > > > -- > > > Best regards, > > > Bogdan Dobrelya, > > > Irc #bogdando > > From bdobreli at redhat.com Wed Nov 28 14:12:08 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 28 Nov 2018 15:12:08 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> Message-ID: <4121346a-7341-184f-2dcb-32092409196b@redhat.com> On 11/28/18 2:58 PM, Dan Prince wrote: > On Wed, 2018-11-28 at 12:45 +0100, Bogdan Dobrelya wrote: >> To follow up and explain the patches for code review: >> >> The "header" patch https://review.openstack.org/620310 -> (requires) >> https://review.rdoproject.org/r/#/c/17534/, and also >> https://review.openstack.org/620061 -> (which in turn requires) >> https://review.openstack.org/619744 -> (Kolla change, the 1st to go) >> https://review.openstack.org/619736 > > This email was cross-posted to multiple lists and I think we may have > lost some of the context in the process as the subject was changed. > > Most of the suggestions and patches are about making our base > container(s) smaller in size. And the means by which the patches do > that is to share binaries/applications across containers with custom > mounts/volumes. I've -2'd most of them. What concerns me however is > that some of the TripleO cores seemed open to this idea yesterday on > IRC. Perhaps I've misread things but what you appear to be doing here > is quite drastic I think we need to consider any of this carefully > before proceeding with any of it. > > >> >> Please also read the commit messages, I tried to explain all "Whys" >> very >> carefully. Just to sum up it here as well: >> >> The current self-containing (config and runtime bits) architecture >> of >> containers badly affects: >> >> * the size of the base layer and all containers images as an >> additional 300MB (adds an extra 30% of size). > > You are accomplishing this by removing Puppet from the base container, > but you are also creating another container in the process. This would > still be required on all nodes as Puppet is our config tool. So you > would still be downloading some of this data anyways. Understood your > reasons for doing this are that it avoids rebuilding all containers > when there is a change to any of these packages in the base container. > What you are missing however is how often is it the case that Puppet is > updated that something else in the base container isn't? 
For CI jobs updating all containers, its quite an often to have changes in openstack/tripleo puppet modules to pull in. IIUC, that automatically picks up any updates for all of its dependencies and for the dependencies of dependencies, and all that multiplied by a hundred of total containers to get it updated. That is a *pain* we're used to have these day for quite often timing out CI jobs... Ofc, the main cause is delayed promotions though. For real deployments, I have no data for the cadence of minor updates in puppet and tripleo & openstack modules for it, let's ask operators (as we're happened to be in the merged openstack-discuss list)? For its dependencies though, like systemd and ruby, I'm pretty sure it's quite often to have CVEs fixed there. So I expect what "in the fields" security fixes delivering for those might bring some unwanted hassle for long-term maintenance of LTS releases. As Tengu noted on IRC: "well, between systemd, puppet and ruby, there are many security concernes, almost every month... and also, what's the point keeping them in runtime containers when they are useless?" > > I would wager that it is more rare than you'd think. Perhaps looking at > the history of an OpenStack distribution would be a valid way to assess > this more critically. Without this data to backup the numbers I'm > afraid what you are doing here falls into "pre-optimization" territory > for me and I don't think the means used in the patches warrent the > benefits you mention here. > > >> * Edge cases, where we have containers images to be distributed, at >> least once to hit local registries, over high-latency and limited >> bandwith, highly unreliable WAN connections. >> * numbers of packages to update in CI for all containers for all >> services (CI jobs do not rebuild containers so each container gets >> updated for those 300MB of extra size). > > It would seem to me there are other ways to solve the CI containers > update problems. Rebuilding the base layer more often would solve this > right? If we always build our service containers off of a base layer > that is recent there should be no updates to the system/puppet packages > there in our CI pipelines. > >> * security and the surface of attacks, by introducing systemd et al >> as >> additional subjects for CVE fixes to maintain for all containers. > > We aren't actually using systemd within our containers. I think those > packages are getting pulled in by an RPM dependency elsewhere. So > rather than using 'rpm -ev --nodeps' to remove it we could create a > sub-package for containers in those cases and install it instead. In > short rather than hack this to remove them why not pursue a proper > packaging fix? > > In general I am a fan of getting things out of the base container we > don't need... so yeah lets do this. But lets do it properly. > >> * services uptime, by additional restarts of services related to >> security maintanence of irrelevant to openstack components sitting >> as a dead weight in containers images for ever. > > Like I said above how often is it that these packages actually change > where something else in the base container doesn't? Perhaps we should > get more data here before blindly implementing a solution we aren't > sure really helps out in the real world. > >> >> On 11/27/18 4:08 PM, Bogdan Dobrelya wrote: >>> Changing the topic to follow the subject. 
>>> >>> [tl;dr] it's time to rearchitect container images to stop >>> incluiding >>> config-time only (puppet et al) bits, which are not needed runtime >>> and >>> pose security issues, like CVEs, to maintain daily. >>> >>> Background: 1) For the Distributed Compute Node edge case, there >>> is >>> potentially tens of thousands of a single-compute-node remote edge >>> sites >>> connected over WAN to a single control plane, which is having high >>> latency, like a 100ms or so, and limited bandwith. >>> 2) For a generic security case, >>> 3) TripleO CI updates all >>> >>> Challenge: >>> >>>> Here is a related bug [1] and implementation [1] for that. PTAL >>>> folks! >>>> >>>> [0] https://bugs.launchpad.net/tripleo/+bug/1804822 >>>> [1] >>>> https://review.openstack.org/#/q/topic:base-container-reduction >>>> >>>>> Let's also think of removing puppet-tripleo from the base >>>>> container. >>>>> It really brings the world-in (and yum updates in CI!) each job >>>>> and >>>>> each container! >>>>> So if we did so, we should then either install puppet-tripleo >>>>> and co >>>>> on the host and bind-mount it for the docker-puppet deployment >>>>> task >>>>> steps (bad idea IMO), OR use the magical --volumes-from >>>>> option to mount volumes from some >>>>> "puppet-config" sidecar container inside each of the containers >>>>> being >>>>> launched by docker-puppet tooling. >>>> >>>> On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås >>> redhat.com> >>>> wrote: >>>>> We add this to all images: >>>>> >>>>> https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 >>>>> >>>>> >>>>> /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 >>>>> python >>>>> socat sudo which openstack-tripleo-common-container-base rsync >>>>> cronie >>>>> crudini openstack-selinux ansible python-shade puppet-tripleo >>>>> python2- >>>>> kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB >>>>> Is the additional 276 MB reasonable here? >>>>> openstack-selinux <- This package run relabling, does that kind >>>>> of >>>>> touching the filesystem impact the size due to docker layers? >>>>> >>>>> Also: python2-kubernetes is a fairly large package (18007990) >>>>> do we use >>>>> that in every image? I don't see any tripleo related repos >>>>> importing >>>>> from that when searching on Hound? The original commit >>>>> message[1] >>>>> adding it states it is for future convenience. >>>>> >>>>> On my undercloud we have 101 images, if we are downloading >>>>> every 18 MB >>>>> per image thats almost 1.8 GB for a package we don't use? (I >>>>> hope it's >>>>> not like this? With docker layers, we only download that 276 MB >>>>> transaction once? Or?) 
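For what it's worth, the layer question can be checked locally: compare the layer digests of two service images, and any digest that appears in both lists is one shared layer, stored once on disk and pulled once (the image names below are only examples):

# Shared base layers show up as identical digests in both outputs.
docker image inspect --format '{{json .RootFS.Layers}}' \
    tripleomaster/centos-binary-keystone:current-tripleo
docker image inspect --format '{{json .RootFS.Layers}}' \
    tripleomaster/centos-binary-nova-api:current-tripleo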
>>>>> >>>>> >>>>> [1] https://review.openstack.org/527927 >>>> >>>> >>>> -- >>>> Best regards, >>>> Bogdan Dobrelya, >>>> Irc #bogdando >> >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From dprince at redhat.com Wed Nov 28 14:25:13 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 28 Nov 2018 09:25:13 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <4121346a-7341-184f-2dcb-32092409196b@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> Message-ID: <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> On Wed, 2018-11-28 at 15:12 +0100, Bogdan Dobrelya wrote: > On 11/28/18 2:58 PM, Dan Prince wrote: > > On Wed, 2018-11-28 at 12:45 +0100, Bogdan Dobrelya wrote: > > > To follow up and explain the patches for code review: > > > > > > The "header" patch https://review.openstack.org/620310 -> > > > (requires) > > > https://review.rdoproject.org/r/#/c/17534/, and also > > > https://review.openstack.org/620061 -> (which in turn requires) > > > https://review.openstack.org/619744 -> (Kolla change, the 1st to > > > go) > > > https://review.openstack.org/619736 > > > > This email was cross-posted to multiple lists and I think we may > > have > > lost some of the context in the process as the subject was changed. > > > > Most of the suggestions and patches are about making our base > > container(s) smaller in size. And the means by which the patches do > > that is to share binaries/applications across containers with > > custom > > mounts/volumes. I've -2'd most of them. What concerns me however is > > that some of the TripleO cores seemed open to this idea yesterday > > on > > IRC. Perhaps I've misread things but what you appear to be doing > > here > > is quite drastic I think we need to consider any of this carefully > > before proceeding with any of it. > > > > > > > Please also read the commit messages, I tried to explain all > > > "Whys" > > > very > > > carefully. Just to sum up it here as well: > > > > > > The current self-containing (config and runtime bits) > > > architecture > > > of > > > containers badly affects: > > > > > > * the size of the base layer and all containers images as an > > > additional 300MB (adds an extra 30% of size). > > > > You are accomplishing this by removing Puppet from the base > > container, > > but you are also creating another container in the process. This > > would > > still be required on all nodes as Puppet is our config tool. So you > > would still be downloading some of this data anyways. Understood > > your > > reasons for doing this are that it avoids rebuilding all containers > > when there is a change to any of these packages in the base > > container. > > What you are missing however is how often is it the case that > > Puppet is > > updated that something else in the base container isn't? > > For CI jobs updating all containers, its quite an often to have > changes > in openstack/tripleo puppet modules to pull in. IIUC, that > automatically > picks up any updates for all of its dependencies and for the > dependencies of dependencies, and all that multiplied by a hundred > of > total containers to get it updated. That is a *pain* we're used to > have > these day for quite often timing out CI jobs... Ofc, the main cause > is > delayed promotions though. 
Regarding CI I made a separate suggestion on that below in that rebuilding the base layer more often could be a good solution here. I don't think the puppet-tripleo package is that large however so we could just live with it. > > For real deployments, I have no data for the cadence of minor updates > in > puppet and tripleo & openstack modules for it, let's ask operators > (as > we're happened to be in the merged openstack-discuss list)? For its > dependencies though, like systemd and ruby, I'm pretty sure it's > quite > often to have CVEs fixed there. So I expect what "in the fields" > security fixes delivering for those might bring some unwanted hassle > for > long-term maintenance of LTS releases. As Tengu noted on IRC: > "well, between systemd, puppet and ruby, there are many security > concernes, almost every month... and also, what's the point keeping > them > in runtime containers when they are useless?" Reiterating again on previous points: -I'd be fine removing systemd. But lets do it properly and not via 'rpm -ev --nodeps'. -Puppet and Ruby *are* required for configuration. We can certainly put them in a separate container outside of the runtime service containers but doing so would actually cost you much more space/bandwidth for each service container. As both of these have to get downloaded to each node anyway in order to generate config files with our current mechanisms I'm not sure this buys you anything. We are going in circles here I think.... Dan > > > I would wager that it is more rare than you'd think. Perhaps > > looking at > > the history of an OpenStack distribution would be a valid way to > > assess > > this more critically. Without this data to backup the numbers I'm > > afraid what you are doing here falls into "pre-optimization" > > territory > > for me and I don't think the means used in the patches warrent the > > benefits you mention here. > > > > > > > * Edge cases, where we have containers images to be distributed, > > > at > > > least once to hit local registries, over high-latency and > > > limited > > > bandwith, highly unreliable WAN connections. > > > * numbers of packages to update in CI for all containers for all > > > services (CI jobs do not rebuild containers so each container > > > gets > > > updated for those 300MB of extra size). > > > > It would seem to me there are other ways to solve the CI containers > > update problems. Rebuilding the base layer more often would solve > > this > > right? If we always build our service containers off of a base > > layer > > that is recent there should be no updates to the system/puppet > > packages > > there in our CI pipelines. > > > > > * security and the surface of attacks, by introducing systemd et > > > al > > > as > > > additional subjects for CVE fixes to maintain for all > > > containers. > > > > We aren't actually using systemd within our containers. I think > > those > > packages are getting pulled in by an RPM dependency elsewhere. So > > rather than using 'rpm -ev --nodeps' to remove it we could create a > > sub-package for containers in those cases and install it instead. > > In > > short rather than hack this to remove them why not pursue a proper > > packaging fix? > > > > In general I am a fan of getting things out of the base container > > we > > don't need... so yeah lets do this. But lets do it properly. 
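As a first step towards doing it properly, it is cheap to gather data on what actually drags systemd into the base image before deciding where a sub-package split has to happen (the image name here is just a placeholder):

# List installed packages that require systemd, then dry-run its removal
# to see every dependant a real packaging fix would need to take care of.
docker run --rm tripleo-base rpm -q --whatrequires systemd
docker run --rm tripleo-base rpm -e --test systemd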
> > > > > * services uptime, by additional restarts of services related to > > > security maintanence of irrelevant to openstack components > > > sitting > > > as a dead weight in containers images for ever. > > > > Like I said above how often is it that these packages actually > > change > > where something else in the base container doesn't? Perhaps we > > should > > get more data here before blindly implementing a solution we aren't > > sure really helps out in the real world. > > > > > On 11/27/18 4:08 PM, Bogdan Dobrelya wrote: > > > > Changing the topic to follow the subject. > > > > > > > > [tl;dr] it's time to rearchitect container images to stop > > > > incluiding > > > > config-time only (puppet et al) bits, which are not needed > > > > runtime > > > > and > > > > pose security issues, like CVEs, to maintain daily. > > > > > > > > Background: 1) For the Distributed Compute Node edge case, > > > > there > > > > is > > > > potentially tens of thousands of a single-compute-node remote > > > > edge > > > > sites > > > > connected over WAN to a single control plane, which is having > > > > high > > > > latency, like a 100ms or so, and limited bandwith. > > > > 2) For a generic security case, > > > > 3) TripleO CI updates all > > > > > > > > Challenge: > > > > > > > > > Here is a related bug [1] and implementation [1] for that. > > > > > PTAL > > > > > folks! > > > > > > > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > > > > [1] > > > > > https://review.openstack.org/#/q/topic:base-container-reduction > > > > > > > > > > > Let's also think of removing puppet-tripleo from the base > > > > > > container. > > > > > > It really brings the world-in (and yum updates in CI!) each > > > > > > job > > > > > > and > > > > > > each container! > > > > > > So if we did so, we should then either install puppet- > > > > > > tripleo > > > > > > and co > > > > > > on the host and bind-mount it for the docker-puppet > > > > > > deployment > > > > > > task > > > > > > steps (bad idea IMO), OR use the magical --volumes-from > > > > > > option to mount volumes from some > > > > > > "puppet-config" sidecar container inside each of the > > > > > > containers > > > > > > being > > > > > > launched by docker-puppet tooling. > > > > > > > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > > > > redhat.com> > > > > > wrote: > > > > > > We add this to all images: > > > > > > > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > > > > > > > > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils > > > > > > lvm2 > > > > > > python > > > > > > socat sudo which openstack-tripleo-common-container-base > > > > > > rsync > > > > > > cronie > > > > > > crudini openstack-selinux ansible python-shade puppet- > > > > > > tripleo > > > > > > python2- > > > > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > Is the additional 276 MB reasonable here? > > > > > > openstack-selinux <- This package run relabling, does that > > > > > > kind > > > > > > of > > > > > > touching the filesystem impact the size due to docker > > > > > > layers? > > > > > > > > > > > > Also: python2-kubernetes is a fairly large package > > > > > > (18007990) > > > > > > do we use > > > > > > that in every image? I don't see any tripleo related repos > > > > > > importing > > > > > > from that when searching on Hound? 
The original commit > > > > > > message[1] > > > > > > adding it states it is for future convenience. > > > > > > > > > > > > On my undercloud we have 101 images, if we are downloading > > > > > > every 18 MB > > > > > > per image thats almost 1.8 GB for a package we don't use? > > > > > > (I > > > > > > hope it's > > > > > > not like this? With docker layers, we only download that > > > > > > 276 MB > > > > > > transaction once? Or?) > > > > > > > > > > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > > > > > -- > > > > > Best regards, > > > > > Bogdan Dobrelya, > > > > > Irc #bogdando > > From natal at redhat.com Wed Nov 28 14:33:02 2018 From: natal at redhat.com (=?UTF-8?Q?Natal_Ng=C3=A9tal?=) Date: Wed, 28 Nov 2018 15:33:02 +0100 Subject: [openstack-dev] [tripleo] Let's improve upstream docs In-Reply-To: References: Message-ID: On Tue, Nov 27, 2018 at 4:50 PM Marios Andreou wrote: > as just mentioned in the tripleo weekly irc meeting [1] some of us are trying to make small weekly improvements to the tripleo docs [2]. We are using this bug [3] for tracking and this effort is a result of some feedback during the recent Berlin summit. It's a good idea. The documentation of a project it's very important. > The general idea is 1 per week (or more if you can and want) - improvement/removal of stale content/identifying missing sections, or anything else you might care to propose. Please join us if you can, just add "Related-Bug: #1804642" to your commit message I'm going to try to help you on this ticket. I started to make code review. I would to know if you have more details about that. I mean, do you have examples of part able improved or something like that. From marios at redhat.com Wed Nov 28 15:15:02 2018 From: marios at redhat.com (Marios Andreou) Date: Wed, 28 Nov 2018 17:15:02 +0200 Subject: [openstack-dev] [tripleo] Let's improve upstream docs In-Reply-To: References: Message-ID: On Wed, Nov 28, 2018 at 4:33 PM Natal Ngétal wrote: > On Tue, Nov 27, 2018 at 4:50 PM Marios Andreou wrote: > > as just mentioned in the tripleo weekly irc meeting [1] some of us are > trying to make small weekly improvements to the tripleo docs [2]. We are > using this bug [3] for tracking and this effort is a result of some > feedback during the recent Berlin summit. > It's a good idea. The documentation of a project it's very important. > > > The general idea is 1 per week (or more if you can and want) - > improvement/removal of stale content/identifying missing sections, or > anything else you might care to propose. Please join us if you can, just > add "Related-Bug: #1804642" to your commit message > I'm going to try to help you on this ticket. I started to make code > great you are very welcome ! > review. I would to know if you have more details about that. I mean, > do you have examples of part able improved or something like that. > > not really, I mean "anything goes" as long as it's an improvement ( and the usual review process will determine if it is or not :) ). Could be as small as typos or broken links/images, through to reorganising sections or even bigger contributions like completely new sections if you can and want. 
Take a look at the existing patches that are on the bug for ideas thanks > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From natal at redhat.com Wed Nov 28 15:28:23 2018 From: natal at redhat.com (=?UTF-8?Q?Natal_Ng=C3=A9tal?=) Date: Wed, 28 Nov 2018 16:28:23 +0100 Subject: [openstack-dev] [tripleo] Let's improve upstream docs In-Reply-To: References: Message-ID: On Wed, Nov 28, 2018 at 4:19 PM Marios Andreou wrote: > great you are very welcome ! Thanks. > not really, I mean "anything goes" as long as it's an improvement ( and the usual review process will determine if it is or not :) ). Could be as small as typos or broken links/images, through to reorganising sections or even bigger contributions like completely new sections if you can and want. Take a look at the existing patches that are on the bug for ideas I see. I have made a first patch and I'm going to find what I can do and continue to make code review. From sgolovat at redhat.com Wed Nov 28 16:36:50 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Wed, 28 Nov 2018 17:36:50 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> References: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> Message-ID: Hi, On Tue, Nov 27, 2018 at 7:13 PM Dan Prince wrote: > On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote: > > Changing the topic to follow the subject. > > > > [tl;dr] it's time to rearchitect container images to stop incluiding > > config-time only (puppet et al) bits, which are not needed runtime > > and > > pose security issues, like CVEs, to maintain daily. > > I think your assertion that we need to rearchitect the config images to > container the puppet bits is incorrect here. > > After reviewing the patches you linked to below it appears that you are > proposing we use --volumes-from to bind mount application binaries from > one container into another. I don't believe this is a good pattern for > containers. On baremetal if we followed the same pattern it would be > like using an /nfs share to obtain access to binaries across the > network to optimize local storage. Now... some people do this (like > maybe high performance computing would launch an MPI job like this) but > I don't think we should consider it best practice for our containers in > TripleO. > > Each container should container its own binaries and libraries as much > as possible. And while I do think we should be using --volumes-from > more often in TripleO it would be for sharing *data* between > containers, not binaries. > > > > > > Background: > > 1) For the Distributed Compute Node edge case, there is potentially > > tens > > of thousands of a single-compute-node remote edge sites connected > > over > > WAN to a single control plane, which is having high latency, like a > > 100ms or so, and limited bandwith. Reducing the base layer size > > becomes > > a decent goal there. See the security background below. > > The reason we put Puppet into the base layer was in fact to prevent it > from being downloaded multiple times. 
If we were to re-architect the > image layers such that the child layers all contained their own copies > of Puppet for example there would actually be a net increase in > bandwidth and disk usage. So I would argue we are already addressing > the goal of optimizing network and disk space. > > Moving it out of the base layer so that you can patch it more often > without disrupting other services is a valid concern. But addressing > this concern while also preserving our definiation of a container (see > above, a container should contain all of its binaries) is going to cost > you something, namely disk and network space because Puppet would need > to be duplicated in each child container. > > As Puppet is used to configure a majority of the services in TripleO > having it in the base container makes most sense. And yes, if there are > security patches for Puppet/Ruby those might result in a bunch of > containers getting pushed. But let Docker layers take care of this I > think... Don't try to solve things by constructing your own custom > mounts and volumes to work around the issue. > > > > 2) For a generic security (Day 2, maintenance) case, when > > puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to > > be > > updated and all layers on top - to be rebuild, and all of those > > layers, > > to be re-fetched for cloud hosts and all containers to be > > restarted... > > And all of that because of some fixes that have nothing to OpenStack. > > By > > the remote edge sites as well, remember of "tens of thousands", high > > latency and limited bandwith?.. > > 3) TripleO CI updates (including puppet*) packages in containers, not > > in > > a common base layer of those. So each a CI job has to update puppet* > > and > > its dependencies - ruby/systemd as well. Reducing numbers of packages > > to > > update for each container makes sense for CI as well. > > > > Implementation related: > > > > WIP patches [0],[1] for early review, uses a config "pod" approach, > > does > > not require to maintain a two sets of config vs runtime images. > > Future > > work: a) cronie requires systemd, we'd want to fix that also off the > > base layer. b) rework to podman pods for docker-puppet.py instead of > > --volumes-from a side car container (can't be backported for Queens > > then, which is still nice to have a support for the Edge DCN case, > > at > > least downstream only perhaps). > > > > Some questions raised on IRC: > > > > Q: is having a service be able to configure itself really need to > > involve a separate pod? > > A: Highly likely yes, removing not-runtime things is a good idea and > > pods is an established PaaS paradigm already. That will require some > > changes in the architecture though (see the topic with WIP patches). > > I'm a little confused on this one. Are you suggesting that we have 2 > containers for each service? One with Puppet and one without? > > That is certainly possible, but to pull it off would likely require you > to have things built like this: > > |base container| --> |service container| --> |service container w/ > Puppet installed| > > The end result would be Puppet being duplicated in a layer for each > services "config image". Very inefficient. > > Again, I'm ansering this assumping we aren't violating our container > constraints and best practices where each container has the binaries > its needs to do its own configuration. 
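To make that chain concrete, the rightmost image would be built roughly like this (image and package names are made up), and every such per-service config image repeats the same Puppet layer:

# Hypothetical "config image" layered on top of a runtime service image,
# re-adding Puppet and friends for configuration time only.
cat > Dockerfile.keystone-config <<'EOF'
FROM tripleomaster/centos-binary-keystone:current-tripleo
RUN yum -y install puppet puppet-tripleo && yum clean all && rm -rf /var/cache/yum
EOF
docker build -t tripleomaster/centos-binary-keystone-config:current-tripleo \
    -f Dockerfile.keystone-config .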
> > > > > Q: that's (fetching a config container) actually more data that > > about to > > download otherwise > > A: It's not, if thinking of Day 2, when have to re-fetch the base > > layer > > and top layers, when some unrelated to openstack CVEs got fixed > > there > > for ruby/puppet/systemd. Avoid the need to restart service > > containers > > because of those minor updates puched is also a nice thing. > > Puppet is used only for configuration in TripleO. While security issues > do need to be addressed at any layer I'm not sure there would be an > urgency to re-deploy your cluster simply for a Puppet security fix > alone. Smart change management would help eliminate blindly deploying > new containers in the case where they provide very little security > benefit. > > I think the focus on Puppet, and Ruby here is perhaps a bad example as > they are config time only. Rather than just think about them we should > also consider the rest of the things in our base container images as > well. This is always going to be a "balancing act". There are pros and > cons of having things in the base layer vs. the child/leaf layers. > It's interesting as puppet is required for config time only, but it is kept in every image whole its life. There is a pattern of side cars in Kubernetes where side container configures what's needed for main container and dies. > > > > > > Q: the best solution here would be using packages on the host, > > generating the config files on the host. And then having an all-in- > > one > > container for all the services which lets them run in an isolated > > mannner. > > A: I think for Edge cases, that's a no go as we might want to > > consider > > tiny low footprint OS distros like former known Container Linux or > > Atomic. Also, an all-in-one container looks like an anti-pattern > > from > > the world of VMs. > > This was suggested on IRC because it likely gives you the smallest > network/storage footprint for each edge node. The container would get > used for everything: running all the services, and configuring all the > services. Sort of a golden image approach. It may be an anti-pattern > but initially I thought you were looking to optimize these things. > It is antipattern indeed. The smaller container is the better. Less chance of security issues, less data to transfer over network, less storage. In programming there are a lot of patterns to reuse code (OOP is a sample). So the same pattern should be applied to containers rather than blindly copy data to every container. > > I think a better solution might be to have container registries, or > container mirrors (reverse proxies or whatever) that allow you to cache > things as you deploy to the edge and thus optimize the network traffic. > This solution is good addition but containers should be tiny and not fat. > > > > > > [0] https://review.openstack.org/#/q/topic:base-container-reduction > > [1] > > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > > > Here is a related bug [1] and implementation [1] for that. PTAL > > > folks! > > > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > > [1] https://review.openstack.org/#/q/topic:base-container-reduction > > > > > > > Let's also think of removing puppet-tripleo from the base > > > > container. > > > > It really brings the world-in (and yum updates in CI!) each job > > > > and each > > > > container! 
> > > > So if we did so, we should then either install puppet-tripleo and > > > > co on > > > > the host and bind-mount it for the docker-puppet deployment task > > > > steps > > > > (bad idea IMO), OR use the magical --volumes-from > > > container> > > > > option to mount volumes from some "puppet-config" sidecar > > > > container > > > > inside each of the containers being launched by docker-puppet > > > > tooling. > > > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > > redhat.com> > > > wrote: > > > > We add this to all images: > > > > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > > python > > > > socat sudo which openstack-tripleo-common-container-base rsync > > > > cronie > > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > > python2- > > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > > > Is the additional 276 MB reasonable here? > > > > openstack-selinux <- This package run relabling, does that kind > > > > of > > > > touching the filesystem impact the size due to docker layers? > > > > > > > > Also: python2-kubernetes is a fairly large package (18007990) do > > > > we use > > > > that in every image? I don't see any tripleo related repos > > > > importing > > > > from that when searching on Hound? The original commit message[1] > > > > adding it states it is for future convenience. > > > > > > > > On my undercloud we have 101 images, if we are downloading every > > > > 18 MB > > > > per image thats almost 1.8 GB for a package we don't use? (I hope > > > > it's > > > > not like this? With docker layers, we only download that 276 MB > > > > transaction once? Or?) > > > > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > > > > -- > > > Best regards, > > > Bogdan Dobrelya, > > > Irc #bogdando > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Sergii Golovatiuk -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Nov 28 16:39:18 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 28 Nov 2018 11:39:18 -0500 Subject: [openstack-dev] [tripleo] Workflows Squad changes In-Reply-To: References: Message-ID: On Wed, Nov 28, 2018 at 5:13 AM Jiri Tomasek wrote: [...] > As a possible solution, I would like to propose Adriano as a core reviewer > to tripleo-common and adding tripleo-ui cores right to +2 tripleo-common > patches. > [...] Not a member of the squad but +2 to the idea Thanks for proposing, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Kevin.Fox at pnnl.gov Wed Nov 28 16:46:16 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 28 Nov 2018 16:46:16 +0000 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <7364ab27a3f50e40a44c2af4305bde97816944fd.camel@redhat.com> References: <9dee4044-2744-3ab6-3a15-bd79702ff35a@redhat.com> , <089336b7a431cab76e1350a959b7bb566175134b.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C248BF7@EX10MBOX03.pnnl.gov>, <7364ab27a3f50e40a44c2af4305bde97816944fd.camel@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C24924B@EX10MBOX03.pnnl.gov> Ok, so you have the workflow in place, but it sounds like the containers are not laid out to best use that workflow. Puppet is in the base layer. That means whenever puppet gets updated, all the other containers must be too. And other such update coupling issues. I'm with you, that binaries should not be copied from one container to another though. Thanks, Kevin ________________________________________ From: Dan Prince [dprince at redhat.com] Sent: Wednesday, November 28, 2018 5:31 AM To: Former OpenStack Development Mailing List, use openstack-discuss now; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes On Wed, 2018-11-28 at 00:31 +0000, Fox, Kevin M wrote: > The pod concept allows you to have one tool per container do one > thing and do it well. > > You can have a container for generating config, and another container > for consuming it. > > In a Kubernetes pod, if you still wanted to do puppet, > you could have a pod that: > 1. had an init container that ran puppet and dumped the resulting > config to an emptyDir volume. > 2. had your main container pull its config from the emptyDir volume. We have basically implemented the same workflow in TripleO today. First we execute Puppet in an "init container" (really just an ephemeral container that generates the config files and then goes away). Then we bind mount those configs into the service container. One improvement we could make (which we aren't doing yet) is to use a data container/volume to store the config files instead of using the host. Sharing *data* within a 'pod' (set of containers, etc.) is certainly a valid use of container volumes. None of this is what we are really talking about in this thread though. Most of the suggestions and patches are about making our base container(s) smaller in size. And the means by which the patches do that is to share binaries/applications across containers with custom mounts/volumes. I don't think it is a good idea at all as it violates encapsulation of the containers in general, regardless of whether we use pods or not. Dan > > Then each container would have no dependency on each other. > > In full blown Kubernetes cluster you might have puppet generate a > configmap though and ship it to your main container directly. Thats > another matter though. I think the example pod example above is still > usable without k8s? 
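For readers following along, that config workflow can be sketched with plain docker commands; every image name and path below is invented for illustration:

# A named volume plays the emptyDir role: an ephemeral config step fills
# it, then the long-running service container consumes it read-only.
docker volume create keystone-config-data
docker run --rm -v keystone-config-data:/var/lib/config-data \
    keystone-config-image puppet apply /etc/puppet/manifests/keystone.pp
docker run -d --name keystone \
    -v keystone-config-data:/var/lib/config-data:ro keystone-runtime-image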
> > Thanks, > Kevin > ________________________________________ > From: Dan Prince [dprince at redhat.com] > Sent: Tuesday, November 27, 2018 10:10 AM > To: OpenStack Development Mailing List (not for usage questions); > openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of > containers for security and size of images (maintenance) sakes > > On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote: > > Changing the topic to follow the subject. > > > > [tl;dr] it's time to rearchitect container images to stop > > incluiding > > config-time only (puppet et al) bits, which are not needed runtime > > and > > pose security issues, like CVEs, to maintain daily. > > I think your assertion that we need to rearchitect the config images > to > container the puppet bits is incorrect here. > > After reviewing the patches you linked to below it appears that you > are > proposing we use --volumes-from to bind mount application binaries > from > one container into another. I don't believe this is a good pattern > for > containers. On baremetal if we followed the same pattern it would be > like using an /nfs share to obtain access to binaries across the > network to optimize local storage. Now... some people do this (like > maybe high performance computing would launch an MPI job like this) > but > I don't think we should consider it best practice for our containers > in > TripleO. > > Each container should container its own binaries and libraries as > much > as possible. And while I do think we should be using --volumes-from > more often in TripleO it would be for sharing *data* between > containers, not binaries. > > > > Background: > > 1) For the Distributed Compute Node edge case, there is potentially > > tens > > of thousands of a single-compute-node remote edge sites connected > > over > > WAN to a single control plane, which is having high latency, like a > > 100ms or so, and limited bandwith. Reducing the base layer size > > becomes > > a decent goal there. See the security background below. > > The reason we put Puppet into the base layer was in fact to prevent > it > from being downloaded multiple times. If we were to re-architect the > image layers such that the child layers all contained their own > copies > of Puppet for example there would actually be a net increase in > bandwidth and disk usage. So I would argue we are already addressing > the goal of optimizing network and disk space. > > Moving it out of the base layer so that you can patch it more often > without disrupting other services is a valid concern. But addressing > this concern while also preserving our definiation of a container > (see > above, a container should contain all of its binaries) is going to > cost > you something, namely disk and network space because Puppet would > need > to be duplicated in each child container. > > As Puppet is used to configure a majority of the services in TripleO > having it in the base container makes most sense. And yes, if there > are > security patches for Puppet/Ruby those might result in a bunch of > containers getting pushed. But let Docker layers take care of this I > think... Don't try to solve things by constructing your own custom > mounts and volumes to work around the issue. 
> > > > 2) For a generic security (Day 2, maintenance) case, when > > puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to > > be > > updated and all layers on top - to be rebuild, and all of those > > layers, > > to be re-fetched for cloud hosts and all containers to be > > restarted... > > And all of that because of some fixes that have nothing to > > OpenStack. > > By > > the remote edge sites as well, remember of "tens of thousands", > > high > > latency and limited bandwith?.. > > 3) TripleO CI updates (including puppet*) packages in containers, > > not > > in > > a common base layer of those. So each a CI job has to update > > puppet* > > and > > its dependencies - ruby/systemd as well. Reducing numbers of > > packages > > to > > update for each container makes sense for CI as well. > > > > Implementation related: > > > > WIP patches [0],[1] for early review, uses a config "pod" approach, > > does > > not require to maintain a two sets of config vs runtime images. > > Future > > work: a) cronie requires systemd, we'd want to fix that also off > > the > > base layer. b) rework to podman pods for docker-puppet.py instead > > of > > --volumes-from a side car container (can't be backported for Queens > > then, which is still nice to have a support for the Edge DCN case, > > at > > least downstream only perhaps). > > > > Some questions raised on IRC: > > > > Q: is having a service be able to configure itself really need to > > involve a separate pod? > > A: Highly likely yes, removing not-runtime things is a good idea > > and > > pods is an established PaaS paradigm already. That will require > > some > > changes in the architecture though (see the topic with WIP > > patches). > > I'm a little confused on this one. Are you suggesting that we have 2 > containers for each service? One with Puppet and one without? > > That is certainly possible, but to pull it off would likely require > you > to have things built like this: > > |base container| --> |service container| --> |service container w/ > Puppet installed| > > The end result would be Puppet being duplicated in a layer for each > services "config image". Very inefficient. > > Again, I'm ansering this assumping we aren't violating our container > constraints and best practices where each container has the binaries > its needs to do its own configuration. > > > Q: that's (fetching a config container) actually more data that > > about to > > download otherwise > > A: It's not, if thinking of Day 2, when have to re-fetch the base > > layer > > and top layers, when some unrelated to openstack CVEs got fixed > > there > > for ruby/puppet/systemd. Avoid the need to restart service > > containers > > because of those minor updates puched is also a nice thing. > > Puppet is used only for configuration in TripleO. While security > issues > do need to be addressed at any layer I'm not sure there would be an > urgency to re-deploy your cluster simply for a Puppet security fix > alone. Smart change management would help eliminate blindly deploying > new containers in the case where they provide very little security > benefit. > > I think the focus on Puppet, and Ruby here is perhaps a bad example > as > they are config time only. Rather than just think about them we > should > also consider the rest of the things in our base container images as > well. This is always going to be a "balancing act". There are pros > and > cons of having things in the base layer vs. the child/leaf layers. 
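One ingredient of the smart change management mentioned above is refusing to re-pull or restart anything whose image has not actually changed; a rough sketch, assuming skopeo and jq are available (registry and image names are invented):

# Compare the digest published in the registry with the one deployed
# locally, and only act when they differ.
want=$(skopeo inspect docker://registry.example.com/tripleo/keystone:latest | jq -r .Digest)
have=$(docker image inspect registry.example.com/tripleo/keystone:latest \
    --format '{{index .RepoDigests 0}}' | cut -d@ -f2)
[ "$want" = "$have" ] || echo "keystone image changed, schedule an update"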
> > > > Q: the best solution here would be using packages on the host, > > generating the config files on the host. And then having an all-in- > > one > > container for all the services which lets them run in an isolated > > mannner. > > A: I think for Edge cases, that's a no go as we might want to > > consider > > tiny low footprint OS distros like former known Container Linux or > > Atomic. Also, an all-in-one container looks like an anti-pattern > > from > > the world of VMs. > > This was suggested on IRC because it likely gives you the smallest > network/storage footprint for each edge node. The container would get > used for everything: running all the services, and configuring all > the > services. Sort of a golden image approach. It may be an anti-pattern > but initially I thought you were looking to optimize these things. > > I think a better solution might be to have container registries, or > container mirrors (reverse proxies or whatever) that allow you to > cache > things as you deploy to the edge and thus optimize the network > traffic. > > > > [0] https://review.openstack.org/#/q/topic:base-container-reduction > > [1] > > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > > > Here is a related bug [1] and implementation [1] for that. PTAL > > > folks! > > > > > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822 > > > [1] > > > https://review.openstack.org/#/q/topic:base-container-reduction > > > > > > > Let's also think of removing puppet-tripleo from the base > > > > container. > > > > It really brings the world-in (and yum updates in CI!) each job > > > > and each > > > > container! > > > > So if we did so, we should then either install puppet-tripleo > > > > and > > > > co on > > > > the host and bind-mount it for the docker-puppet deployment > > > > task > > > > steps > > > > (bad idea IMO), OR use the magical --volumes-from > > > container> > > > > option to mount volumes from some "puppet-config" sidecar > > > > container > > > > inside each of the containers being launched by docker-puppet > > > > tooling. > > > > > > On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås > > redhat.com> > > > wrote: > > > > We add this to all images: > > > > > > > > https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35 > > > > > > > > /bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 > > > > python > > > > socat sudo which openstack-tripleo-common-container-base rsync > > > > cronie > > > > crudini openstack-selinux ansible python-shade puppet-tripleo > > > > python2- > > > > kubernetes && yum clean all && rm -rf /var/cache/yum 276 MB > > > > > > > > Is the additional 276 MB reasonable here? > > > > openstack-selinux <- This package run relabling, does that kind > > > > of > > > > touching the filesystem impact the size due to docker layers? > > > > > > > > Also: python2-kubernetes is a fairly large package (18007990) > > > > do > > > > we use > > > > that in every image? I don't see any tripleo related repos > > > > importing > > > > from that when searching on Hound? The original commit > > > > message[1] > > > > adding it states it is for future convenience. > > > > > > > > On my undercloud we have 101 images, if we are downloading > > > > every > > > > 18 MB > > > > per image thats almost 1.8 GB for a package we don't use? (I > > > > hope > > > > it's > > > > not like this? With docker layers, we only download that 276 MB > > > > transaction once? Or?) 
> > > > > > > > > > > > [1] https://review.openstack.org/527927 > > > > > > -- > > > Best regards, > > > Bogdan Dobrelya, > > > Irc #bogdando > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rbrady at redhat.com Wed Nov 28 16:51:32 2018 From: rbrady at redhat.com (Ryan Brady) Date: Wed, 28 Nov 2018 11:51:32 -0500 Subject: [openstack-dev] [tripleo] Workflows Squad changes In-Reply-To: References: Message-ID: On Wed, Nov 28, 2018 at 5:13 AM Jiri Tomasek wrote: > Hi all, > > Recently, the workflows squad has been reorganized and people from the > squad are joining different squads. I would like to discuss how we are > going to adjust to this situation to make sure that tripleo-common > development is not going to be blocked in terms of feature work and reviews. > > With this change, most of the tripleo-common maintenance work goes > naturally to UI & Validations squad as CLI and GUI are the consumers of the > API provided by tripleo-common. Adriano Petrich from workflows squad has > joined UI squad to take on this work. > > As a possible solution, I would like to propose Adriano as a core reviewer > to tripleo-common and adding tripleo-ui cores right to +2 tripleo-common > patches. > > It would be great to hear opinions especially former members of Workflows > squad and regular contributors to tripleo-common on these changes and in > general on how to establish regular reviews and maintenance to ensure that > tripleo-common codebase is moved towards converging the CLI and GUI > deployment workflow. > Well, I'm not really going that far and plan to continue working in tripleo-common for the time being. If that isn't sustainable during the next cycle, I'll make sure to shout. > Thanks > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- RYAN BRADY SENIOR SOFTWARE ENGINEER Red Hat Inc rbrady at redhat.com T: (919)-890-8925 IM: rbrady @redhatway @redhatinc @redhatsnaps -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Wed Nov 28 17:02:47 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 28 Nov 2018 18:02:47 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> Message-ID: <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> > > Reiterating again on previous points: > > -I'd be fine removing systemd. But lets do it properly and not via 'rpm > -ev --nodeps'. 
> -Puppet and Ruby *are* required for configuration. We can certainly put > them in a separate container outside of the runtime service containers > but doing so would actually cost you much more space/bandwidth for each > service container. As both of these have to get downloaded to each node > anyway in order to generate config files with our current mechanisms > I'm not sure this buys you anything. +1. I was actually under the impression that we concluded yesterday on IRC that this is the only thing that makes sense to seriously consider. But even then it's not a win-win -- we'd gain some security by leaner production images, but pay for it with space+bandwidth by duplicating image content (IOW we can help achieve one of the goals we had in mind by worsening the situation w/r/t the other goal we had in mind.) Personally i'm not sold yet but it's something that i'd consider if we got measurements of how much more space/bandwidth usage this would consume, and if we got some further details/examples about how serious are the security concerns if we leave config mgmt tools in runtime images. IIRC the other options (that were brought forward so far) were already dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind mounting being too hacky and fragile, and nsenter not really solving the problem (because it allows us to switch to having different bins/libs available, but it does not allow merging the availability of bins/libs from two containers into a single context). > > We are going in circles here I think.... +1. I think too much of the discussion focuses on "why it's bad to have config tools in runtime images", but IMO we all sorta agree that it would be better not to have them there, if it came at no cost. I think to move forward, it would be interesting to know: if we do this (i'll borrow Dan's drawing): |base container| --> |service container| --> |service container w/ Puppet installed| How much more space and bandwidth would this consume per node (e.g. separately per controller, per compute). This could help with decision making. > > Dan > Thanks Jirka From bdobreli at redhat.com Wed Nov 28 17:29:41 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 28 Nov 2018 18:29:41 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: On 11/28/18 6:02 PM, Jiří Stránský wrote: > > >> >> Reiterating again on previous points: >> >> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >> -ev --nodeps'. >> -Puppet and Ruby *are* required for configuration. We can certainly put >> them in a separate container outside of the runtime service containers >> but doing so would actually cost you much more space/bandwidth for each >> service container. As both of these have to get downloaded to each node >> anyway in order to generate config files with our current mechanisms >> I'm not sure this buys you anything. > > +1. I was actually under the impression that we concluded yesterday on > IRC that this is the only thing that makes sense to seriously consider. 
> But even then it's not a win-win -- we'd gain some security by leaner > production images, but pay for it with space+bandwidth by duplicating > image content (IOW we can help achieve one of the goals we had in mind > by worsening the situation w/r/t the other goal we had in mind.) > > Personally i'm not sold yet but it's something that i'd consider if we > got measurements of how much more space/bandwidth usage this would > consume, and if we got some further details/examples about how serious > are the security concerns if we leave config mgmt tools in runtime images. > > IIRC the other options (that were brought forward so far) were already > dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind > mounting being too hacky and fragile, and nsenter not really solving the > problem (because it allows us to switch to having different bins/libs > available, but it does not allow merging the availability of bins/libs > from two containers into a single context). > >> >> We are going in circles here I think.... > > +1. I think too much of the discussion focuses on "why it's bad to have > config tools in runtime images", but IMO we all sorta agree that it > would be better not to have them there, if it came at no cost. > > I think to move forward, it would be interesting to know: if we do this > (i'll borrow Dan's drawing): > > |base container| --> |service container| --> |service container w/ > Puppet installed| > > How much more space and bandwidth would this consume per node (e.g. > separately per controller, per compute). This could help with decision > making. As I've already evaluated in the related bug, that is: puppet-* modules and manifests ~ 16MB puppet with dependencies ~61MB dependencies of the seemingly largest a dependency, systemd ~190MB that would be an extra layer size for each of the container images to be downloaded/fetched into registries. Given that we should decouple systemd from all/some of the dependencies (an example topic for RDO [0]), that could save a 190MB. But it seems we cannot break the love of puppet and systemd as it heavily relies on the latter and changing packaging like that would higly likely affect baremetal deployments with puppet and systemd co-operating. Long story short, we cannot shoot both rabbits with a single shot, not with puppet :) May be we could with ansible replacing puppet fully... So splitting config and runtime images is the only choice yet to address the raised security concerns. And let's forget about edge cases for now. Tossing around a pair of extra bytes over 40,000 WAN-distributed computes ain't gonna be our the biggest problem for sure. 
[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > >> >> Dan >> > > Thanks > > Jirka > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Bogdan Dobrelya, Irc #bogdando From james.slagle at gmail.com Wed Nov 28 18:28:59 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 28 Nov 2018 13:28:59 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: On Wed, Nov 28, 2018 at 12:31 PM Bogdan Dobrelya wrote: > Long story short, we cannot shoot both rabbits with a single shot, not > with puppet :) May be we could with ansible replacing puppet fully... > So splitting config and runtime images is the only choice yet to address > the raised security concerns. And let's forget about edge cases for now. > Tossing around a pair of extra bytes over 40,000 WAN-distributed > computes ain't gonna be our the biggest problem for sure. I think it's this last point that is the crux of this discussion. We can agree to disagree about the merits of this proposal and whether it's a pre-optimzation or micro-optimization, which I admit are somewhat subjective terms. Ultimately, it seems to be about the "why" do we need to do this as to the reason why the conversation seems to be going in circles a bit. I'm all for reducing container image size, but the reality is that this proposal doesn't necessarily help us with the Edge use cases we are talking about trying to solve. Why would we even run the exact same puppet binary + manifest individually 40,000 times so that we can produce the exact same set of configuration files that differ only by things such as IP address, hostnames, and passwords? Maybe we should instead be thinking about how we can do that *1* time centrally, and produce a configuration that can be reused across 40,000 nodes with little effort. The opportunity for a significant impact in terms of how we can scale TripleO is much larger if we consider approaching these problems with a wider net of what we could do. There's opportunity for a lot of better reuse in TripleO, configuration is just one area. The plan and Heat stack (within the ResourceGroup) are some other areas. At the same time, if some folks want to work on smaller optimizations (such as container image size), with an approach that can be agreed upon, then they should do so. We just ought to be careful about how we justify those changes so that we can carefully weigh the effort vs the payoff. In this specific case, I don't personally see this proposal helping us with Edge use cases in a meaningful way given the scope of the changes. That's not to say there aren't other use cases that could justify it though (such as the security points brought up earlier). 
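To picture the "do that *1* time centrally" idea in miniature (file names and variables are invented): the expensive templating runs once on the control plane, and the only per-node work left is a trivial substitution of the values that genuinely differ:

# Rendered once centrally:
cat > nova.conf.tmpl <<'EOF'
[DEFAULT]
my_ip = ${NODE_IP}
host = ${NODE_HOSTNAME}
transport_url = rabbit://nova:${RABBIT_PASS}@ctrl-plane:5672/
EOF
# Repeated per edge node, with no puppet/ruby runtime needed there:
NODE_IP=192.0.2.17 NODE_HOSTNAME=edge-0017 RABBIT_PASS=secret \
    envsubst < nova.conf.tmpl > /etc/nova/nova.conf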
-- -- James Slagle -- From dprince at redhat.com Wed Nov 28 22:22:56 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 28 Nov 2018 17:22:56 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: <5cbe32fb549b2e107a57aa3c13d4451a6fc6a35e.camel@redhat.com> On Wed, 2018-11-28 at 13:28 -0500, James Slagle wrote: > On Wed, Nov 28, 2018 at 12:31 PM Bogdan Dobrelya > wrote: > > Long story short, we cannot shoot both rabbits with a single shot, > > not > > with puppet :) May be we could with ansible replacing puppet > > fully... > > So splitting config and runtime images is the only choice yet to > > address > > the raised security concerns. And let's forget about edge cases for > > now. > > Tossing around a pair of extra bytes over 40,000 WAN-distributed > > computes ain't gonna be our the biggest problem for sure. > > I think it's this last point that is the crux of this discussion. We > can agree to disagree about the merits of this proposal and whether > it's a pre-optimzation or micro-optimization, which I admit are > somewhat subjective terms. Ultimately, it seems to be about the "why" > do we need to do this as to the reason why the conversation seems to > be going in circles a bit. > > I'm all for reducing container image size, but the reality is that > this proposal doesn't necessarily help us with the Edge use cases we > are talking about trying to solve. > > Why would we even run the exact same puppet binary + manifest > individually 40,000 times so that we can produce the exact same set > of > configuration files that differ only by things such as IP address, > hostnames, and passwords? Maybe we should instead be thinking about > how we can do that *1* time centrally, and produce a configuration > that can be reused across 40,000 nodes with little effort. The > opportunity for a significant impact in terms of how we can scale > TripleO is much larger if we consider approaching these problems with > a wider net of what we could do. There's opportunity for a lot of > better reuse in TripleO, configuration is just one area. The plan and > Heat stack (within the ResourceGroup) are some other areas. We run Puppet for configuration because that is what we did on baremetal and we didn't break backwards compatability for our configuration options for upgrades. Our Puppet model relies on being executed on each local host in order to splice in the correct IP address and hostname. It executes in a distributed fashion, and works fairly well considering the history of the project. It is robust, guarantees no duplicate configs are being set, and is backwards compatible with all the options TripleO supported on baremetal. Puppet is arguably better for configuration than Ansible (which is what I hear people most often suggest we replace it with). It suits our needs fine, but it is perhaps a bit overkill considering we are only generating config files. I think the answer here is moving to something like Etcd. Perhaps skipping over Ansible entirely as a config management tool (it is arguably less capable than Puppet in this category anyway). 
Or we could use Ansible for "legacy" services only, switch to Etcd for a majority of the OpenStack services, and drop Puppet entirely (my favorite option). Consolidating our technology stack would be wise. We've already put some work and analysis into the Etcd effort. Just need to push on it some more. Looking at the previous Kubernetes prototypes for TripleO would be the place to start. Config management migration is going to be tedious. Its technical debt that needs to be handled at some point anyway. I think it is a general TripleO improvement that could benefit all clouds, not just Edge. Dan > > At the same time, if some folks want to work on smaller optimizations > (such as container image size), with an approach that can be agreed > upon, then they should do so. We just ought to be careful about how > we > justify those changes so that we can carefully weigh the effort vs > the > payoff. In this specific case, I don't personally see this proposal > helping us with Edge use cases in a meaningful way given the scope of > the changes. That's not to say there aren't other use cases that > could > justify it though (such as the security points brought up earlier). > From renat.akhmerov at gmail.com Thu Nov 29 09:23:43 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 29 Nov 2018 16:23:43 +0700 Subject: [openstack-dev] [mistral] Renat is on vacation until Dec 17th Message-ID: <9e3e6449-5162-42f7-a851-f754deb0dcc0@Spark> Hi, I’ll be on vacation till Dec 17th. I’ll be replying emails but with delays. Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Nov 29 09:28:27 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 29 Nov 2018 10:28:27 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <5cbe32fb549b2e107a57aa3c13d4451a6fc6a35e.camel@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <5cbe32fb549b2e107a57aa3c13d4451a6fc6a35e.camel@redhat.com> Message-ID: On 11/28/18 8:55 PM, Doug Hellmann wrote: > I thought the preferred solution for more complex settings was config maps. Did that approach not work out? > > Regardless, now that the driver work is done if someone wants to take another stab at etcd integration it’ll be more straightforward today. > > Doug > While sharing configs is a feasible option to consider for large scale configuration management, Etcd only provides a strong consistency, which is also known as "Unavailable" [0]. For edge scenarios, to configure 40,000 remote computes over WAN connections, we'd rather want instead weaker consistency models, like "Sticky Available" [0]. That would allow services to fetch their configuration either from a central "uplink" or locally as well, when the latter is not accessible from remote edge sites. Etcd cannot provide 40,000 local endpoints to fit that case I'm afraid, even if those would be read only replicas. That is also something I'm highlighting in the paper [1] drafted for ICFC-2019. But had we such a sticky available key value storage solution, we would indeed have solved the problem of multiple configuration management system execution for thousands of nodes as James describes it. 
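A rough sketch of what "sticky available" config fetching could look like on an edge node (the endpoint and paths below are made up for illustration): prefer the central uplink, but keep serving the last known good copy whenever that uplink is unreachable.

    # refresh the local cache only when the central "uplink" answers;
    # otherwise keep whatever was fetched last time (sticky behaviour)
    if curl -sf --max-time 10 https://config.central.example/nova.conf \
            -o /var/lib/config-cache/nova.conf.new; then
        mv /var/lib/config-cache/nova.conf.new /var/lib/config-cache/nova.conf
    fi
    cp /var/lib/config-cache/nova.conf /etc/nova/nova.conf

The point is that a WAN outage then degrades freshness of the configuration rather than availability of the service.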
[0] https://jepsen.io/consistency [1] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf On 11/28/18 11:22 PM, Dan Prince wrote: > On Wed, 2018-11-28 at 13:28 -0500, James Slagle wrote: >> On Wed, Nov 28, 2018 at 12:31 PM Bogdan Dobrelya >> wrote: >>> Long story short, we cannot shoot both rabbits with a single shot, >>> not >>> with puppet :) May be we could with ansible replacing puppet >>> fully... >>> So splitting config and runtime images is the only choice yet to >>> address >>> the raised security concerns. And let's forget about edge cases for >>> now. >>> Tossing around a pair of extra bytes over 40,000 WAN-distributed >>> computes ain't gonna be our the biggest problem for sure. >> >> I think it's this last point that is the crux of this discussion. We >> can agree to disagree about the merits of this proposal and whether >> it's a pre-optimzation or micro-optimization, which I admit are >> somewhat subjective terms. Ultimately, it seems to be about the "why" >> do we need to do this as to the reason why the conversation seems to >> be going in circles a bit. >> >> I'm all for reducing container image size, but the reality is that >> this proposal doesn't necessarily help us with the Edge use cases we >> are talking about trying to solve. >> >> Why would we even run the exact same puppet binary + manifest >> individually 40,000 times so that we can produce the exact same set >> of >> configuration files that differ only by things such as IP address, >> hostnames, and passwords? Maybe we should instead be thinking about >> how we can do that *1* time centrally, and produce a configuration >> that can be reused across 40,000 nodes with little effort. The >> opportunity for a significant impact in terms of how we can scale >> TripleO is much larger if we consider approaching these problems with >> a wider net of what we could do. There's opportunity for a lot of >> better reuse in TripleO, configuration is just one area. The plan and >> Heat stack (within the ResourceGroup) are some other areas. > > We run Puppet for configuration because that is what we did on > baremetal and we didn't break backwards compatability for our > configuration options for upgrades. Our Puppet model relies on being > executed on each local host in order to splice in the correct IP > address and hostname. It executes in a distributed fashion, and works > fairly well considering the history of the project. It is robust, > guarantees no duplicate configs are being set, and is backwards > compatible with all the options TripleO supported on baremetal. Puppet > is arguably better for configuration than Ansible (which is what I hear > people most often suggest we replace it with). It suits our needs fine, > but it is perhaps a bit overkill considering we are only generating > config files. > > I think the answer here is moving to something like Etcd. Perhaps Not Etcd I think, see my comment above. But you're absolutely right Dan. > skipping over Ansible entirely as a config management tool (it is > arguably less capable than Puppet in this category anyway). Or we could > use Ansible for "legacy" services only, switch to Etcd for a majority > of the OpenStack services, and drop Puppet entirely (my favorite > option). Consolidating our technology stack would be wise. > > We've already put some work and analysis into the Etcd effort. Just > need to push on it some more. Looking at the previous Kubernetes > prototypes for TripleO would be the place to start. 
> > Config management migration is going to be tedious. Its technical debt > that needs to be handled at some point anyway. I think it is a general > TripleO improvement that could benefit all clouds, not just Edge. > > Dan > >> >> At the same time, if some folks want to work on smaller optimizations >> (such as container image size), with an approach that can be agreed >> upon, then they should do so. We just ought to be careful about how >> we >> justify those changes so that we can carefully weigh the effort vs >> the >> payoff. In this specific case, I don't personally see this proposal >> helping us with Edge use cases in a meaningful way given the scope of >> the changes. That's not to say there aren't other use cases that >> could >> justify it though (such as the security points brought up earlier). >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tobias.urdin at binero.se Thu Nov 29 10:38:45 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 29 Nov 2018 11:38:45 +0100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: Hello, This got lost way down in my mailbox. I think we have a consensus about getting rid of the newton branches. Does anybody in Stable release team have time to deprecate the stable/newton branches? Best regards Tobias On 11/19/2018 06:21 PM, Alex Schultz wrote: > On Mon, Nov 19, 2018 at 1:18 AM Tobias Urdin wrote: >> Hello, >> >> We've been talking for a while about the deprecation and removal of the >> stable/newton branches. >> I think it's time now that we get rid of them, we have no open patches >> and Newton is considered EOL. >> >> Could cores get back with a quick feedback and then the stable team can >> get rid of those whenever they have time. >> > yes please. let's EOL them > >> Best regards >> Tobias >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ervikrant06 at gmail.com Thu Nov 29 11:12:05 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Thu, 29 Nov 2018 16:42:05 +0530 Subject: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed Message-ID: Hello Team, Trying to deploy on K8 on fedora atomic. Here is the output of cluster template: ~~~ [root at packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 WARNING: The magnum client is deprecated and will be removed in a future release. Use the OpenStack client to avoid seeing this message. 
+-----------------------+--------------------------------------+ | Property | Value | +-----------------------+--------------------------------------+ | insecure_registry | - | | labels | {} | | updated_at | - | | floating_ip_enabled | True | | fixed_subnet | - | | master_flavor_id | - | | user_id | 203617849df9490084dde1897b28eb53 | | uuid | 16eb91f7-18fe-4ce3-98db-c732603f2e57 | | no_proxy | - | | https_proxy | - | | tls_disabled | False | | keypair_id | kubernetes | | project_id | 45a6706c831c42d5bf2da928573382b1 | | public | False | | http_proxy | - | | docker_volume_size | 10 | | server_type | vm | | external_network_id | external1 | | cluster_distro | fedora-atomic | | image_id | f5954340-f042-4de3-819e-a3b359591770 | | volume_driver | - | | registry_enabled | False | | docker_storage_driver | devicemapper | | apiserver_port | - | | name | coe-k8s-template | | created_at | 2018-11-28T12:58:21+00:00 | | network_driver | flannel | | fixed_network | - | | coe | kubernetes | | flavor_id | m1.small | | master_lb_enabled | False | | dns_nameserver | 8.8.8.8 | +-----------------------+--------------------------------------+ ~~~ Found couple of issues in the logs of VM started by magnum. - etcd was not getting started because of incorrect permission on file "/etc/etcd/certs/server.key". This file is owned by root by default have 0440 as permission. Changed the permission to 0444 so that etcd can read the file. After that etcd started successfully. - etcd DB doesn't contain anything: [root at kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r [root at kube-cluster1-qobaagdob75g-master-0 ~]# - Flanneld is stuck in activating status. ~~~ [root at kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld ● flanneld.service - Flanneld overlay address etcd agent Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled) Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago Main PID: 6491 (flanneld) Tasks: 6 (limit: 4915) Memory: 4.7M CPU: 53ms CGroup: /system.slice/flanneld.service └─6491 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:50.729548 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:51 
kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:51.749425 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:52.776612 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:53.790418 6491 network.go:102] failed to retrieve network config: 100: Key not found (/atomic.io) [3] ~~~ - Continuously in the jouralctl logs following messages are printed. ~~~ Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-apiserver[6888]: F1129 11:06:39.338416 6888 server.go:269] Invalid Authorization Config: Unknown authorization mode Node specified Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.408272 2540 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.444737 2540 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.445793 2540 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: Failed to start Kubernetes API Server. Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Unit entered failed state. Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.611699 2540 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused ~~~ Any help on above issue is highly appreciated. Thanks & Regards, Vikrant Aggarwal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jistr at redhat.com Thu Nov 29 17:42:08 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 29 Nov 2018 18:42:08 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: On 28. 11. 18 18:29, Bogdan Dobrelya wrote: > On 11/28/18 6:02 PM, Jiří Stránský wrote: >> >> >>> >>> Reiterating again on previous points: >>> >>> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >>> -ev --nodeps'. >>> -Puppet and Ruby *are* required for configuration. We can certainly put >>> them in a separate container outside of the runtime service containers >>> but doing so would actually cost you much more space/bandwidth for each >>> service container. As both of these have to get downloaded to each node >>> anyway in order to generate config files with our current mechanisms >>> I'm not sure this buys you anything. >> >> +1. I was actually under the impression that we concluded yesterday on >> IRC that this is the only thing that makes sense to seriously consider. >> But even then it's not a win-win -- we'd gain some security by leaner >> production images, but pay for it with space+bandwidth by duplicating >> image content (IOW we can help achieve one of the goals we had in mind >> by worsening the situation w/r/t the other goal we had in mind.) >> >> Personally i'm not sold yet but it's something that i'd consider if we >> got measurements of how much more space/bandwidth usage this would >> consume, and if we got some further details/examples about how serious >> are the security concerns if we leave config mgmt tools in runtime images. >> >> IIRC the other options (that were brought forward so far) were already >> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind >> mounting being too hacky and fragile, and nsenter not really solving the >> problem (because it allows us to switch to having different bins/libs >> available, but it does not allow merging the availability of bins/libs >> from two containers into a single context). >> >>> >>> We are going in circles here I think.... >> >> +1. I think too much of the discussion focuses on "why it's bad to have >> config tools in runtime images", but IMO we all sorta agree that it >> would be better not to have them there, if it came at no cost. >> >> I think to move forward, it would be interesting to know: if we do this >> (i'll borrow Dan's drawing): >> >> |base container| --> |service container| --> |service container w/ >> Puppet installed| >> >> How much more space and bandwidth would this consume per node (e.g. >> separately per controller, per compute). This could help with decision >> making. > > As I've already evaluated in the related bug, that is: > > puppet-* modules and manifests ~ 16MB > puppet with dependencies ~61MB > dependencies of the seemingly largest a dependency, systemd ~190MB > > that would be an extra layer size for each of the container images to be > downloaded/fetched into registries. Thanks, i tried to do the math of the reduction vs. inflation in sizes as follows. I think the crucial point here is the layering. 
If we do this image layering: |base| --> |+ service| --> |+ Puppet| we'd drop ~267 MB from base image, but we'd be installing that to the topmost level, per-component, right? In my basic deployment, undercloud seems to have 17 "components" (49 containers), overcloud controller 15 components (48 containers), and overcloud compute 4 components (7 containers). Accounting for overlaps, the total number of "components" used seems to be 19. (By "components" here i mean whatever uses a different ConfigImage than other services. I just eyeballed it but i think i'm not too far off the correct number.) So we'd subtract 267 MB from base image and add that to 19 leaf images used in this deployment. That means difference of +4.8 GB to the current image sizes. My /var/lib/registry dir on undercloud with all the images currently has 5.1 GB. We'd almost double that to 9.9 GB. Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs (both external and e.g. internal within OpenStack Infra CI clouds). And for internal traffic between local registry and overcloud nodes, it gives +3.7 GB per controller and +800 MB per compute. That may not be so critical but still feels like a considerable downside. Another gut feeling is that this way of image layering would take longer time to build and to run the modify-image Ansible role which we use in CI, so that could endanger how our CI jobs fit into the time limit. We could also probably measure this but i'm not sure if it's worth spending the time. All in all i'd argue we should be looking at different options still. > > Given that we should decouple systemd from all/some of the dependencies > (an example topic for RDO [0]), that could save a 190MB. But it seems we > cannot break the love of puppet and systemd as it heavily relies on the > latter and changing packaging like that would higly likely affect > baremetal deployments with puppet and systemd co-operating. Ack :/ > > Long story short, we cannot shoot both rabbits with a single shot, not > with puppet :) May be we could with ansible replacing puppet fully... > So splitting config and runtime images is the only choice yet to address > the raised security concerns. And let's forget about edge cases for now. > Tossing around a pair of extra bytes over 40,000 WAN-distributed > computes ain't gonna be our the biggest problem for sure. 
> > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > >> >>> >>> Dan >>> >> >> Thanks >> >> Jirka >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From Kevin.Fox at pnnl.gov Thu Nov 29 19:20:14 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 29 Nov 2018 19:20:14 +0000 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> , Message-ID: <1A3C52DFCD06494D8528644858247BF01C24AC91@EX10MBOX03.pnnl.gov> If the base layers are shared, you won't pay extra for the separate puppet container unless you have another container also installing ruby in an upper layer. With OpenStack, thats unlikely. the apparent size of a container is not equal to its actual size. Thanks, Kevin ________________________________________ From: Jiří Stránský [jistr at redhat.com] Sent: Thursday, November 29, 2018 9:42 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes On 28. 11. 18 18:29, Bogdan Dobrelya wrote: > On 11/28/18 6:02 PM, Jiří Stránský wrote: >> >> >>> >>> Reiterating again on previous points: >>> >>> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >>> -ev --nodeps'. >>> -Puppet and Ruby *are* required for configuration. We can certainly put >>> them in a separate container outside of the runtime service containers >>> but doing so would actually cost you much more space/bandwidth for each >>> service container. As both of these have to get downloaded to each node >>> anyway in order to generate config files with our current mechanisms >>> I'm not sure this buys you anything. >> >> +1. I was actually under the impression that we concluded yesterday on >> IRC that this is the only thing that makes sense to seriously consider. >> But even then it's not a win-win -- we'd gain some security by leaner >> production images, but pay for it with space+bandwidth by duplicating >> image content (IOW we can help achieve one of the goals we had in mind >> by worsening the situation w/r/t the other goal we had in mind.) >> >> Personally i'm not sold yet but it's something that i'd consider if we >> got measurements of how much more space/bandwidth usage this would >> consume, and if we got some further details/examples about how serious >> are the security concerns if we leave config mgmt tools in runtime images. >> >> IIRC the other options (that were brought forward so far) were already >> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind >> mounting being too hacky and fragile, and nsenter not really solving the >> problem (because it allows us to switch to having different bins/libs >> available, but it does not allow merging the availability of bins/libs >> from two containers into a single context). >> >>> >>> We are going in circles here I think.... >> >> +1. 
I think too much of the discussion focuses on "why it's bad to have >> config tools in runtime images", but IMO we all sorta agree that it >> would be better not to have them there, if it came at no cost. >> >> I think to move forward, it would be interesting to know: if we do this >> (i'll borrow Dan's drawing): >> >> |base container| --> |service container| --> |service container w/ >> Puppet installed| >> >> How much more space and bandwidth would this consume per node (e.g. >> separately per controller, per compute). This could help with decision >> making. > > As I've already evaluated in the related bug, that is: > > puppet-* modules and manifests ~ 16MB > puppet with dependencies ~61MB > dependencies of the seemingly largest a dependency, systemd ~190MB > > that would be an extra layer size for each of the container images to be > downloaded/fetched into registries. Thanks, i tried to do the math of the reduction vs. inflation in sizes as follows. I think the crucial point here is the layering. If we do this image layering: |base| --> |+ service| --> |+ Puppet| we'd drop ~267 MB from base image, but we'd be installing that to the topmost level, per-component, right? In my basic deployment, undercloud seems to have 17 "components" (49 containers), overcloud controller 15 components (48 containers), and overcloud compute 4 components (7 containers). Accounting for overlaps, the total number of "components" used seems to be 19. (By "components" here i mean whatever uses a different ConfigImage than other services. I just eyeballed it but i think i'm not too far off the correct number.) So we'd subtract 267 MB from base image and add that to 19 leaf images used in this deployment. That means difference of +4.8 GB to the current image sizes. My /var/lib/registry dir on undercloud with all the images currently has 5.1 GB. We'd almost double that to 9.9 GB. Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs (both external and e.g. internal within OpenStack Infra CI clouds). And for internal traffic between local registry and overcloud nodes, it gives +3.7 GB per controller and +800 MB per compute. That may not be so critical but still feels like a considerable downside. Another gut feeling is that this way of image layering would take longer time to build and to run the modify-image Ansible role which we use in CI, so that could endanger how our CI jobs fit into the time limit. We could also probably measure this but i'm not sure if it's worth spending the time. All in all i'd argue we should be looking at different options still. > > Given that we should decouple systemd from all/some of the dependencies > (an example topic for RDO [0]), that could save a 190MB. But it seems we > cannot break the love of puppet and systemd as it heavily relies on the > latter and changing packaging like that would higly likely affect > baremetal deployments with puppet and systemd co-operating. Ack :/ > > Long story short, we cannot shoot both rabbits with a single shot, not > with puppet :) May be we could with ansible replacing puppet fully... > So splitting config and runtime images is the only choice yet to address > the raised security concerns. And let's forget about edge cases for now. > Tossing around a pair of extra bytes over 40,000 WAN-distributed > computes ain't gonna be our the biggest problem for sure. 
> > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > >> >>> >>> Dan >>> >> >> Thanks >> >> Jirka >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Thu Nov 29 19:38:17 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 29 Nov 2018 19:38:17 +0000 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24AC91@EX10MBOX03.pnnl.gov> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> , , <1A3C52DFCD06494D8528644858247BF01C24AC91@EX10MBOX03.pnnl.gov> Message-ID: <1A3C52DFCD06494D8528644858247BF01C24ACE9@EX10MBOX03.pnnl.gov> Oh, rereading the conversation again, the concern is having shared deps move up layers? so more systemd related then ruby? The conversation about --nodeps makes it sound like its not actually used. Just an artifact of how the rpms are built... What about creating a dummy package that provides(systemd)? That avoids using --nodeps. Thanks, Kevin ________________________________________ From: Fox, Kevin M [Kevin.Fox at pnnl.gov] Sent: Thursday, November 29, 2018 11:20 AM To: Former OpenStack Development Mailing List, use openstack-discuss now Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes If the base layers are shared, you won't pay extra for the separate puppet container unless you have another container also installing ruby in an upper layer. With OpenStack, thats unlikely. the apparent size of a container is not equal to its actual size. Thanks, Kevin ________________________________________ From: Jiří Stránský [jistr at redhat.com] Sent: Thursday, November 29, 2018 9:42 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes On 28. 11. 18 18:29, Bogdan Dobrelya wrote: > On 11/28/18 6:02 PM, Jiří Stránský wrote: >> >> >>> >>> Reiterating again on previous points: >>> >>> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >>> -ev --nodeps'. >>> -Puppet and Ruby *are* required for configuration. We can certainly put >>> them in a separate container outside of the runtime service containers >>> but doing so would actually cost you much more space/bandwidth for each >>> service container. As both of these have to get downloaded to each node >>> anyway in order to generate config files with our current mechanisms >>> I'm not sure this buys you anything. >> >> +1. I was actually under the impression that we concluded yesterday on >> IRC that this is the only thing that makes sense to seriously consider. 
>> But even then it's not a win-win -- we'd gain some security by leaner >> production images, but pay for it with space+bandwidth by duplicating >> image content (IOW we can help achieve one of the goals we had in mind >> by worsening the situation w/r/t the other goal we had in mind.) >> >> Personally i'm not sold yet but it's something that i'd consider if we >> got measurements of how much more space/bandwidth usage this would >> consume, and if we got some further details/examples about how serious >> are the security concerns if we leave config mgmt tools in runtime images. >> >> IIRC the other options (that were brought forward so far) were already >> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind >> mounting being too hacky and fragile, and nsenter not really solving the >> problem (because it allows us to switch to having different bins/libs >> available, but it does not allow merging the availability of bins/libs >> from two containers into a single context). >> >>> >>> We are going in circles here I think.... >> >> +1. I think too much of the discussion focuses on "why it's bad to have >> config tools in runtime images", but IMO we all sorta agree that it >> would be better not to have them there, if it came at no cost. >> >> I think to move forward, it would be interesting to know: if we do this >> (i'll borrow Dan's drawing): >> >> |base container| --> |service container| --> |service container w/ >> Puppet installed| >> >> How much more space and bandwidth would this consume per node (e.g. >> separately per controller, per compute). This could help with decision >> making. > > As I've already evaluated in the related bug, that is: > > puppet-* modules and manifests ~ 16MB > puppet with dependencies ~61MB > dependencies of the seemingly largest a dependency, systemd ~190MB > > that would be an extra layer size for each of the container images to be > downloaded/fetched into registries. Thanks, i tried to do the math of the reduction vs. inflation in sizes as follows. I think the crucial point here is the layering. If we do this image layering: |base| --> |+ service| --> |+ Puppet| we'd drop ~267 MB from base image, but we'd be installing that to the topmost level, per-component, right? In my basic deployment, undercloud seems to have 17 "components" (49 containers), overcloud controller 15 components (48 containers), and overcloud compute 4 components (7 containers). Accounting for overlaps, the total number of "components" used seems to be 19. (By "components" here i mean whatever uses a different ConfigImage than other services. I just eyeballed it but i think i'm not too far off the correct number.) So we'd subtract 267 MB from base image and add that to 19 leaf images used in this deployment. That means difference of +4.8 GB to the current image sizes. My /var/lib/registry dir on undercloud with all the images currently has 5.1 GB. We'd almost double that to 9.9 GB. Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs (both external and e.g. internal within OpenStack Infra CI clouds). And for internal traffic between local registry and overcloud nodes, it gives +3.7 GB per controller and +800 MB per compute. That may not be so critical but still feels like a considerable downside. Another gut feeling is that this way of image layering would take longer time to build and to run the modify-image Ansible role which we use in CI, so that could endanger how our CI jobs fit into the time limit. 
We could also probably measure this but i'm not sure if it's worth spending the time. All in all i'd argue we should be looking at different options still. > > Given that we should decouple systemd from all/some of the dependencies > (an example topic for RDO [0]), that could save a 190MB. But it seems we > cannot break the love of puppet and systemd as it heavily relies on the > latter and changing packaging like that would higly likely affect > baremetal deployments with puppet and systemd co-operating. Ack :/ > > Long story short, we cannot shoot both rabbits with a single shot, not > with puppet :) May be we could with ansible replacing puppet fully... > So splitting config and runtime images is the only choice yet to address > the raised security concerns. And let's forget about edge cases for now. > Tossing around a pair of extra bytes over 40,000 WAN-distributed > computes ain't gonna be our the biggest problem for sure. > > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > >> >>> >>> Dan >>> >> >> Thanks >> >> Jirka >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jistr at redhat.com Thu Nov 29 19:44:46 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 29 Nov 2018 20:44:46 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24AC91@EX10MBOX03.pnnl.gov> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <1A3C52DFCD06494D8528644858247BF01C24AC91@EX10MBOX03.pnnl.gov> Message-ID: <96f1bbd8-0519-cddb-19f3-ea3c61fe09f6@redhat.com> On 29. 11. 18 20:20, Fox, Kevin M wrote: > If the base layers are shared, you won't pay extra for the separate puppet container Yes, and that's the state we're in right now. >unless you have another container also installing ruby in an upper layer. Not just Ruby but also Puppet and Systemd. I think that's what the proposal we're discussing here suggests -- removing this content from the base layer (so that we can get service runtime images without this content present) and putting this content *on top* of individual service images. Unless i'm missing some trick to start sharing *top* layers rather than *base* layers, i think that effectively disables the space sharing for the Ruby+Puppet+Systemd content. > With OpenStack, thats unlikely. > > the apparent size of a container is not equal to its actual size. Yes. 
:) Thanks Jirka > > Thanks, > Kevin > ________________________________________ > From: Jiří Stránský [jistr at redhat.com] > Sent: Thursday, November 29, 2018 9:42 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes > > On 28. 11. 18 18:29, Bogdan Dobrelya wrote: >> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>> >>> >>>> >>>> Reiterating again on previous points: >>>> >>>> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >>>> -ev --nodeps'. >>>> -Puppet and Ruby *are* required for configuration. We can certainly put >>>> them in a separate container outside of the runtime service containers >>>> but doing so would actually cost you much more space/bandwidth for each >>>> service container. As both of these have to get downloaded to each node >>>> anyway in order to generate config files with our current mechanisms >>>> I'm not sure this buys you anything. >>> >>> +1. I was actually under the impression that we concluded yesterday on >>> IRC that this is the only thing that makes sense to seriously consider. >>> But even then it's not a win-win -- we'd gain some security by leaner >>> production images, but pay for it with space+bandwidth by duplicating >>> image content (IOW we can help achieve one of the goals we had in mind >>> by worsening the situation w/r/t the other goal we had in mind.) >>> >>> Personally i'm not sold yet but it's something that i'd consider if we >>> got measurements of how much more space/bandwidth usage this would >>> consume, and if we got some further details/examples about how serious >>> are the security concerns if we leave config mgmt tools in runtime images. >>> >>> IIRC the other options (that were brought forward so far) were already >>> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind >>> mounting being too hacky and fragile, and nsenter not really solving the >>> problem (because it allows us to switch to having different bins/libs >>> available, but it does not allow merging the availability of bins/libs >>> from two containers into a single context). >>> >>>> >>>> We are going in circles here I think.... >>> >>> +1. I think too much of the discussion focuses on "why it's bad to have >>> config tools in runtime images", but IMO we all sorta agree that it >>> would be better not to have them there, if it came at no cost. >>> >>> I think to move forward, it would be interesting to know: if we do this >>> (i'll borrow Dan's drawing): >>> >>> |base container| --> |service container| --> |service container w/ >>> Puppet installed| >>> >>> How much more space and bandwidth would this consume per node (e.g. >>> separately per controller, per compute). This could help with decision >>> making. >> >> As I've already evaluated in the related bug, that is: >> >> puppet-* modules and manifests ~ 16MB >> puppet with dependencies ~61MB >> dependencies of the seemingly largest a dependency, systemd ~190MB >> >> that would be an extra layer size for each of the container images to be >> downloaded/fetched into registries. > > Thanks, i tried to do the math of the reduction vs. inflation in sizes > as follows. I think the crucial point here is the layering. If we do > this image layering: > > |base| --> |+ service| --> |+ Puppet| > > we'd drop ~267 MB from base image, but we'd be installing that to the > topmost level, per-component, right? 
> > In my basic deployment, undercloud seems to have 17 "components" (49 > containers), overcloud controller 15 components (48 containers), and > overcloud compute 4 components (7 containers). Accounting for overlaps, > the total number of "components" used seems to be 19. (By "components" > here i mean whatever uses a different ConfigImage than other services. I > just eyeballed it but i think i'm not too far off the correct number.) > > So we'd subtract 267 MB from base image and add that to 19 leaf images > used in this deployment. That means difference of +4.8 GB to the current > image sizes. My /var/lib/registry dir on undercloud with all the images > currently has 5.1 GB. We'd almost double that to 9.9 GB. > > Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs > (both external and e.g. internal within OpenStack Infra CI clouds). > > And for internal traffic between local registry and overcloud nodes, it > gives +3.7 GB per controller and +800 MB per compute. That may not be so > critical but still feels like a considerable downside. > > Another gut feeling is that this way of image layering would take longer > time to build and to run the modify-image Ansible role which we use in > CI, so that could endanger how our CI jobs fit into the time limit. We > could also probably measure this but i'm not sure if it's worth spending > the time. > > All in all i'd argue we should be looking at different options still. > >> >> Given that we should decouple systemd from all/some of the dependencies >> (an example topic for RDO [0]), that could save a 190MB. But it seems we >> cannot break the love of puppet and systemd as it heavily relies on the >> latter and changing packaging like that would higly likely affect >> baremetal deployments with puppet and systemd co-operating. > > Ack :/ > >> >> Long story short, we cannot shoot both rabbits with a single shot, not >> with puppet :) May be we could with ansible replacing puppet fully... >> So splitting config and runtime images is the only choice yet to address >> the raised security concerns. And let's forget about edge cases for now. >> Tossing around a pair of extra bytes over 40,000 WAN-distributed >> computes ain't gonna be our the biggest problem for sure. 
>> >> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction >> >>> >>>> >>>> Dan >>>> >>> >>> Thanks >>> >>> Jirka >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From feilong at catalyst.net.nz Thu Nov 29 21:06:52 2018 From: feilong at catalyst.net.nz (Feilong Wang) Date: Fri, 30 Nov 2018 10:06:52 +1300 Subject: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed In-Reply-To: References: Message-ID: Hi Vikrant, Before we dig more, it would be nice if you can let us know the version of your Magnum and Heat. Cheers. On 30/11/18 12:12 AM, Vikrant Aggarwal wrote: > Hello Team, > > Trying to deploy on K8 on fedora atomic. > > Here is the output of cluster template: > ~~~ > [root at packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum > cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 > WARNING: The magnum client is deprecated and will be removed in a > future release. > Use the OpenStack client to avoid seeing this message. 
> +-----------------------+--------------------------------------+ > | Property              | Value                                | > +-----------------------+--------------------------------------+ > | insecure_registry     | -                                    | > | labels                | {}                                   | > | updated_at            | -                                    | > | floating_ip_enabled   | True                                 | > | fixed_subnet          | -                                    | > | master_flavor_id      | -                                    | > | user_id               | 203617849df9490084dde1897b28eb53     | > | uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 | > | no_proxy              | -                                    | > | https_proxy           | -                                    | > | tls_disabled          | False                                | > | keypair_id            | kubernetes                           | > | project_id            | 45a6706c831c42d5bf2da928573382b1     | > | public                | False                                | > | http_proxy            | -                                    | > | docker_volume_size    | 10                                   | > | server_type           | vm                                   | > | external_network_id   | external1                            | > | cluster_distro        | fedora-atomic                        | > | image_id              | f5954340-f042-4de3-819e-a3b359591770 | > | volume_driver         | -                                    | > | registry_enabled      | False                                | > | docker_storage_driver | devicemapper                         | > | apiserver_port        | -                                    | > | name                  | coe-k8s-template                     | > | created_at            | 2018-11-28T12:58:21+00:00            | > | network_driver        | flannel                              | > | fixed_network         | -                                    | > | coe                   | kubernetes                           | > | flavor_id             | m1.small                             | > | master_lb_enabled     | False                                | > | dns_nameserver        | 8.8.8.8                              | > +-----------------------+--------------------------------------+ > ~~~ > Found couple of issues in the logs of VM started by magnum. > > - etcd was not getting started because of incorrect permission on file > "/etc/etcd/certs/server.key". This file is owned by root by default > have 0440 as permission. Changed the permission to 0444 so that etcd > can read the file. After that etcd started successfully. > > - etcd DB doesn't contain anything: > > [root at kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r > [root at kube-cluster1-qobaagdob75g-master-0 ~]# > > - Flanneld is stuck in activating status. 
> ~~~ > [root at kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld > ● flanneld.service - Flanneld overlay address etcd agent >    Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; > vendor preset: disabled) >    Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago >  Main PID: 6491 (flanneld) >     Tasks: 6 (limit: 4915) >    Memory: 4.7M >       CPU: 53ms >    CGroup: /system.slice/flanneld.service >            └─6491 /usr/bin/flanneld > -etcd-endpoints=http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network > > > Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:44.569376    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:45.584532    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:46.646255    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:47.673062    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:48.686919    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:49.709136    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:50.729548    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:51 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:51.749425    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:52.776612    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:53.790418    6491 network.go:102] failed > to retrieve network config: 100: Key not found (/atomic.io > ) [3] > ~~~ > > - Continuously in the jouralctl logs following messages are printed. 
> > ~~~ > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-apiserver[6888]: F1129 11:06:39.338416    6888 server.go:269] > Invalid Authorization Config: Unknown authorization mode Node specified > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > systemd[1]: kube-apiserver.service: Main process exited, code=exited, > status=255/n/a > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.408272    2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463 > : > Failed to list *api.Node: Get > http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp > 127.0.0.1:8080 : getsockopt: connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.444737    2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460 > : > Failed to list *api.Pod: Get > http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: > dial tcp 127.0.0.1:8080 : getsockopt: > connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.445793    2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466 > : > Failed to list *api.PersistentVolume: Get > http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial > tcp 127.0.0.1:8080 : getsockopt: connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 > subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver > comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? > terminal=? res=failed' > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > systemd[1]: Failed to start Kubernetes API Server. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > systemd[1]: kube-apiserver.service: Unit entered failed state. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.611699    2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481 > : > Failed to list *extensions.ReplicaSet: Get > http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: > dial tcp 127.0.0.1:8080 : getsockopt: > connection refused > ~~~ > > Any help on above issue is highly appreciated. > > Thanks & Regards, > Vikrant Aggarwal > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ervikrant06 at gmail.com Fri Nov 30 03:43:30 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Fri, 30 Nov 2018 09:13:30 +0530 Subject: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed In-Reply-To: References: Message-ID: Hi Feilong, Thanks for your reply. Kindly find the below outputs. [root at packstack1 ~]# rpm -qa | grep -i magnum python-magnum-7.0.1-1.el7.noarch openstack-magnum-conductor-7.0.1-1.el7.noarch openstack-magnum-ui-5.0.1-1.el7.noarch openstack-magnum-api-7.0.1-1.el7.noarch puppet-magnum-13.3.1-1.el7.noarch python2-magnumclient-2.10.0-1.el7.noarch openstack-magnum-common-7.0.1-1.el7.noarch [root at packstack1 ~]# rpm -qa | grep -i heat openstack-heat-ui-1.4.0-1.el7.noarch openstack-heat-api-cfn-11.0.0-1.el7.noarch openstack-heat-engine-11.0.0-1.el7.noarch puppet-heat-13.3.1-1.el7.noarch python2-heatclient-1.16.1-1.el7.noarch openstack-heat-api-11.0.0-1.el7.noarch openstack-heat-common-11.0.0-1.el7.noarch Thanks & Regards, Vikrant Aggarwal On Fri, Nov 30, 2018 at 2:44 AM Feilong Wang wrote: > Hi Vikrant, > > Before we dig more, it would be nice if you can let us know the version of > your Magnum and Heat. Cheers. > > > On 30/11/18 12:12 AM, Vikrant Aggarwal wrote: > > Hello Team, > > Trying to deploy on K8 on fedora atomic. > > Here is the output of cluster template: > ~~~ > [root at packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum > cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 > WARNING: The magnum client is deprecated and will be removed in a future > release. > Use the OpenStack client to avoid seeing this message. > +-----------------------+--------------------------------------+ > | Property | Value | > +-----------------------+--------------------------------------+ > | insecure_registry | - | > | labels | {} | > | updated_at | - | > | floating_ip_enabled | True | > | fixed_subnet | - | > | master_flavor_id | - | > | user_id | 203617849df9490084dde1897b28eb53 | > | uuid | 16eb91f7-18fe-4ce3-98db-c732603f2e57 | > | no_proxy | - | > | https_proxy | - | > | tls_disabled | False | > | keypair_id | kubernetes | > | project_id | 45a6706c831c42d5bf2da928573382b1 | > | public | False | > | http_proxy | - | > | docker_volume_size | 10 | > | server_type | vm | > | external_network_id | external1 | > | cluster_distro | fedora-atomic | > | image_id | f5954340-f042-4de3-819e-a3b359591770 | > | volume_driver | - | > | registry_enabled | False | > | docker_storage_driver | devicemapper | > | apiserver_port | - | > | name | coe-k8s-template | > | created_at | 2018-11-28T12:58:21+00:00 | > | network_driver | flannel | > | fixed_network | - | > | coe | kubernetes | > | flavor_id | m1.small | > | master_lb_enabled | False | > | dns_nameserver | 8.8.8.8 | > +-----------------------+--------------------------------------+ > ~~~ > Found couple of issues in the logs of VM started by magnum. > > - etcd was not getting started because of incorrect permission on file > "/etc/etcd/certs/server.key". This file is owned by root by default have > 0440 as permission. Changed the permission to 0444 so that etcd can read > the file. After that etcd started successfully. > > - etcd DB doesn't contain anything: > > [root at kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r > [root at kube-cluster1-qobaagdob75g-master-0 ~]# > > - Flanneld is stuck in activating status. 
> ~~~ > [root at kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld > ● flanneld.service - Flanneld overlay address etcd agent > Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; > vendor preset: disabled) > Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago > Main PID: 6491 (flanneld) > Tasks: 6 (limit: 4915) > Memory: 4.7M > CPU: 53ms > CGroup: /system.slice/flanneld.service > └─6491 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379 > -etcd-prefix=/atomic.io/network > > Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:50.729548 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:51 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:51.749425 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:52.776612 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal > flanneld[6491]: E1129 11:05:53.790418 6491 network.go:102] failed to > retrieve network config: 100: Key not found (/atomic.io) [3] > ~~~ > > - Continuously in the jouralctl logs following messages are printed. 
> > ~~~ > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-apiserver[6888]: F1129 11:06:39.338416 6888 server.go:269] Invalid > Authorization Config: Unknown authorization mode Node specified > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: > kube-apiserver.service: Main process exited, code=exited, status=255/n/a > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.408272 2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to > list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: > dial tcp 127.0.0.1:8080: getsockopt: connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.444737 2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to > list *api.Pod: Get > http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: > dial tcp 127.0.0.1:8080: getsockopt: connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.445793 2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to > list *api.PersistentVolume: Get > http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial > tcp 127.0.0.1:8080: getsockopt: connection refused > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: > SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 > subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" > exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: > Failed to start Kubernetes API Server. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: > kube-apiserver.service: Unit entered failed state. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: > kube-apiserver.service: Failed with result 'exit-code'. > Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal > kube-scheduler[2540]: E1129 11:06:39.611699 2540 reflector.go:199] > k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to > list *extensions.ReplicaSet: Get > http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: > dial tcp 127.0.0.1:8080: getsockopt: connection refused > ~~~ > > Any help on above issue is highly appreciated. 
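A note on the two failures above, for anyone hitting the same thing. The flanneld loop of "failed to retrieve network config: 100: Key not found (/atomic.io)" normally just means that no network configuration was ever written under flannel's etcd prefix; since etcd itself was initially failing to start, the cluster bring-up step that writes it would have been skipped. A rough manual check/workaround, assuming the /atomic.io/network prefix from the unit file and an illustrative subnet (adjust both to whatever the cluster template actually expects), would be:

~~~
# check etcd health and whether the flannel prefix has any keys at all
etcdctl cluster-health
etcdctl ls /atomic.io/network

# write a network config by hand so flanneld can come up
# (the 10.100.0.0/16 range and vxlan backend are assumptions for illustration only)
etcdctl set /atomic.io/network/config '{ "Network": "10.100.0.0/16", "Backend": { "Type": "vxlan" } }'
systemctl restart flanneld
~~~

The kube-apiserver crash looks like a separate problem: "Unknown authorization mode Node specified" usually indicates that the kubernetes build in the image predates the Node authorizer, so trying a newer fedora-atomic image, or dropping Node from --authorization-mode in /etc/kubernetes/apiserver, may be needed; treat both as guesses to verify rather than confirmed fixes.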
> > Thanks & Regards, > Vikrant Aggarwal > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ervikrant06 at gmail.com Fri Nov 30 04:04:43 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Fri, 30 Nov 2018 09:34:43 +0530 Subject: [openstack-dev] [kuryr] kuryr libnetwork installation inside the vm is failing Message-ID: Started one Fedora 29 VM on OpenStack using the official qcow2 image. I followed this doc for the kuryr installation inside the VM: https://docs.openstack.org/kuryr-libnetwork/latest/readme.html My objective is to run nested docker, i.e. inside the VM. While installing kuryr-libnetwork from source, it fails with the following error: ~~~ [root at vm0 kuryr-libnetwork]# pip install . WARNING: Running pip install with root privileges is generally not a good idea. Try `pip install --user` instead.
Processing /root/kuryr-libnetwork Requirement already satisfied: Babel!=2.4.0,>=2.3.4 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (2.6.0) Requirement already satisfied: Flask!=0.11,>=0.10 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (1.0.2) Requirement already satisfied: ipaddress>=1.0.17 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (1.0.22) Requirement already satisfied: jsonschema<3.0.0,>=2.6.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (2.6.0) Collecting kuryr-lib>=0.5.0 (from kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/19/f0/8a425cb4a34ce641e32c5bba3cddc19abf1cd7034537ec7acb9c96ec305d/kuryr_lib-0.8.0-py2.py3-none-any.whl Requirement already satisfied: neutron-lib>=1.13.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (1.20.0) Collecting os-client-config>=1.28.0 (from kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/e5/af/2f3bf859a25c98993f85ff1eb17762aa49b2f894630ebe99c389545b0502/os_client_config-1.31.2-py2.py3-none-any.whl Requirement already satisfied: oslo.concurrency>=3.25.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (3.29.0) Requirement already satisfied: oslo.config>=5.2.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (6.7.0) Requirement already satisfied: oslo.log>=3.36.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (3.41.0) Requirement already satisfied: oslo.utils>=3.33.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (3.37.1) Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (5.1.1) Collecting python-neutronclient>=6.7.0 (from kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/cd/40/2fee7f7f2b562f987c59dfa499ed084af44b57e3c7e342ed2fa5b41d73aa/python_neutronclient-6.11.0-py2.py3-none-any.whl Requirement already satisfied: six>=1.10.0 in /usr/lib/python2.7/site-packages (from kuryr-libnetwork==2.0.1.dev20) (1.11.0) Requirement already satisfied: pytz>=0a in /usr/lib/python2.7/site-packages (from Babel!=2.4.0,>=2.3.4->kuryr-libnetwork==2.0.1.dev20) (2018.7) Requirement already satisfied: Werkzeug>=0.14 in /usr/lib/python2.7/site-packages (from Flask!=0.11,>=0.10->kuryr-libnetwork==2.0.1.dev20) (0.14.1) Requirement already satisfied: click>=5.1 in /usr/lib64/python2.7/site-packages (from Flask!=0.11,>=0.10->kuryr-libnetwork==2.0.1.dev20) (7.0) Requirement already satisfied: itsdangerous>=0.24 in /usr/lib/python2.7/site-packages (from Flask!=0.11,>=0.10->kuryr-libnetwork==2.0.1.dev20) (1.1.0) Requirement already satisfied: Jinja2>=2.10 in /usr/lib/python2.7/site-packages (from Flask!=0.11,>=0.10->kuryr-libnetwork==2.0.1.dev20) (2.10) Requirement already satisfied: functools32; python_version == "2.7" in /usr/lib/python2.7/site-packages (from jsonschema<3.0.0,>=2.6.0->kuryr-libnetwork==2.0.1.dev20) (3.2.3.post2) Requirement already satisfied: keystoneauth1>=3.4.0 in /usr/lib/python2.7/site-packages (from kuryr-lib>=0.5.0->kuryr-libnetwork==2.0.1.dev20) (3.11.1) Requirement already satisfied: oslo.i18n>=3.15.3 in /usr/lib/python2.7/site-packages (from kuryr-lib>=0.5.0->kuryr-libnetwork==2.0.1.dev20) (3.23.0) Requirement already satisfied: pyroute2>=0.4.21; sys_platform != "win32" in /usr/lib/python2.7/site-packages (from 
kuryr-lib>=0.5.0->kuryr-libnetwork==2.0.1.dev20) (0.5.3) Requirement already satisfied: SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.2.14) Requirement already satisfied: osprofiler>=1.4.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.5.1) Requirement already satisfied: oslo.versionedobjects>=1.31.2 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.34.1) Requirement already satisfied: oslo.context>=2.19.2 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.21.0) Requirement already satisfied: oslo.policy>=1.30.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.41.1) Requirement already satisfied: oslo.db>=4.27.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (4.42.0) Requirement already satisfied: os-traits>=0.9.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.10.0) Requirement already satisfied: oslo.serialization!=2.19.1,>=2.18.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.28.1) Requirement already satisfied: oslo.service!=1.28.1,>=1.24.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.33.0) Requirement already satisfied: WebOb>=1.7.1 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.8.4) Requirement already satisfied: oslo.messaging>=5.29.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (9.2.1) Requirement already satisfied: stevedore>=1.20.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.30.0) Requirement already satisfied: weakrefmethod>=1.0.2; python_version == "2.7" in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.3) Requirement already satisfied: pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 in /usr/lib/python2.7/site-packages (from neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.3.2) Collecting openstacksdk>=0.13.0 (from os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/05/e1/38b3fe8b3fa8b78c8917bc78e2aa3f1a13312bc810ee83d5d8f285514245/openstacksdk-0.19.0-py2.py3-none-any.whl Requirement already satisfied: enum34>=1.0.4; python_version == "2.7" or python_version == "2.6" or python_version == "3.3" in /usr/lib/python2.7/site-packages (from oslo.concurrency>=3.25.0->kuryr-libnetwork==2.0.1.dev20) (1.1.6) Requirement already satisfied: fasteners>=0.7.0 in /usr/lib/python2.7/site-packages (from oslo.concurrency>=3.25.0->kuryr-libnetwork==2.0.1.dev20) (0.14.1) Requirement already satisfied: rfc3986>=0.3.1 in /usr/lib/python2.7/site-packages (from oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (1.1.0) Requirement already satisfied: netaddr>=0.7.18 in /usr/lib/python2.7/site-packages (from oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (0.7.19) Requirement already satisfied: PyYAML>=3.12 in /usr/lib64/python2.7/site-packages (from oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (3.13) Requirement already satisfied: requests>=2.18.0 in /usr/lib/python2.7/site-packages (from oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (2.20.1) 
Requirement already satisfied: debtcollector>=1.2.0 in /usr/lib/python2.7/site-packages (from oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (1.20.0) Requirement already satisfied: monotonic>=1.4 in /usr/lib/python2.7/site-packages (from oslo.log>=3.36.0->kuryr-libnetwork==2.0.1.dev20) (1.5) Requirement already satisfied: python-dateutil>=2.7.0 in /usr/lib/python2.7/site-packages (from oslo.log>=3.36.0->kuryr-libnetwork==2.0.1.dev20) (2.7.5) Requirement already satisfied: pyinotify>=0.9.6; sys_platform != "win32" and sys_platform != "darwin" and sys_platform != "sunos5" in /usr/lib/python2.7/site-packages (from oslo.log>=3.36.0->kuryr-libnetwork==2.0.1.dev20) (0.9.6) Requirement already satisfied: netifaces>=0.10.4 in /usr/lib64/python2.7/site-packages (from oslo.utils>=3.33.0->kuryr-libnetwork==2.0.1.dev20) (0.10.7) Requirement already satisfied: pyparsing>=2.1.0 in /usr/lib/python2.7/site-packages (from oslo.utils>=3.33.0->kuryr-libnetwork==2.0.1.dev20) (2.3.0) Requirement already satisfied: funcsigs>=1.0.0; python_version == "2.7" or python_version == "2.6" in /usr/lib/python2.7/site-packages (from oslo.utils>=3.33.0->kuryr-libnetwork==2.0.1.dev20) (1.0.2) Requirement already satisfied: iso8601>=0.1.11 in /usr/lib/python2.7/site-packages (from oslo.utils>=3.33.0->kuryr-libnetwork==2.0.1.dev20) (0.1.12) Collecting osc-lib>=1.8.0 (from python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/8d/a8/b63a53baed828a3e46ff2d1ee5cb63f794e9e071f05e99229642edfc46a5/osc-lib-1.11.1.tar.gz Collecting python-keystoneclient>=3.8.0 (from python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/51/48/c25bb60d7b95294cc5e49a406ae0b9e10f3e0ff533d0df149501abf70300/python_keystoneclient-3.18.0-py2.py3-none-any.whl Collecting cliff!=2.9.0,>=2.8.0 (from python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/8e/1a/5404afee3d83a2e5f27e0d20ac7012c9f07bd8e9b03d0ae1fd9bb3e63037/cliff-2.14.0-py2.py3-none-any.whl Collecting simplejson>=3.5.1 (from python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/e3/24/c35fb1c1c315fc0fffe61ea00d3f88e85469004713dab488dee4f35b0aff/simplejson-3.16.0.tar.gz Requirement already satisfied: MarkupSafe>=0.23 in /usr/lib64/python2.7/site-packages (from Jinja2>=2.10->Flask!=0.11,>=0.10->kuryr-libnetwork==2.0.1.dev20) (1.1.0) Requirement already satisfied: os-service-types>=1.2.0 in /usr/lib/python2.7/site-packages (from keystoneauth1>=3.4.0->kuryr-lib>=0.5.0->kuryr-libnetwork==2.0.1.dev20) (1.3.0) Requirement already satisfied: PrettyTable<0.8,>=0.7.2 in /usr/lib/python2.7/site-packages (from osprofiler>=1.4.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.7.2) Requirement already satisfied: testscenarios>=0.4 in /usr/lib/python2.7/site-packages (from oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.5.0) Requirement already satisfied: alembic>=0.9.6 in /usr/lib/python2.7/site-packages (from oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.5) Requirement already satisfied: testresources>=2.0.0 in /usr/lib/python2.7/site-packages (from oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.0.1) Requirement already satisfied: sqlalchemy-migrate>=0.11.0 in /usr/lib/python2.7/site-packages (from oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.11.0) Requirement already 
satisfied: msgpack>=0.5.2 in /usr/lib64/python2.7/site-packages (from oslo.serialization!=2.19.1,>=2.18.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.5.6) Requirement already satisfied: Paste>=2.0.2 in /usr/lib/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.0.4) Requirement already satisfied: PasteDeploy>=1.5.0 in /usr/lib/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.5.2) Requirement already satisfied: greenlet>=0.4.10 in /usr/lib64/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.4.15) Requirement already satisfied: Routes>=2.3.1 in /usr/lib/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.4.1) Requirement already satisfied: fixtures>=3.0.0 in /usr/lib/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.0.0) Requirement already satisfied: eventlet!=0.18.3,!=0.20.1,>=0.18.2 in /usr/lib/python2.7/site-packages (from oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.24.1) Requirement already satisfied: kombu!=4.0.2,>=4.0.0 in /usr/lib/python2.7/site-packages (from oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (4.2.1) Requirement already satisfied: oslo.middleware>=3.31.0 in /usr/lib/python2.7/site-packages (from oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.36.0) Requirement already satisfied: cachetools>=2.0.0 in /usr/lib/python2.7/site-packages (from oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.0.0) Requirement already satisfied: futurist>=1.2.0 in /usr/lib/python2.7/site-packages (from oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.8.0) Requirement already satisfied: amqp>=2.3.0 in /usr/lib/python2.7/site-packages (from oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.3.2) Requirement already satisfied: logutils>=0.3 in /usr/lib/python2.7/site-packages (from pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.3.5) Requirement already satisfied: singledispatch in /usr/lib/python2.7/site-packages (from pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.4.0.3) Requirement already satisfied: Mako>=0.4.0 in /usr/lib/python2.7/site-packages (from pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.7) Requirement already satisfied: WebTest>=1.3.1 in /usr/lib/python2.7/site-packages (from pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.0.32) Collecting dogpile.cache>=0.6.2 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/73/bf/0cbed594e4f0f9360bfb98e7276131bf32e1af1d15e6c11a9dd8bd93a12f/dogpile.cache-0.6.8.tar.gz Requirement already satisfied: decorator>=3.4.0 in /usr/lib/python2.7/site-packages (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) (4.3.0) Requirement already satisfied: futures>=3.0.0; python_version == "2.7" or python_version == "2.6" in /usr/lib/python2.7/site-packages (from 
openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) (3.2.0) Collecting requestsexceptions>=1.2.0 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/01/8c/49ca60ea8c907260da4662582c434bec98716177674e88df3fd340acf06d/requestsexceptions-1.4.0-py2.py3-none-any.whl Collecting cryptography>=2.1 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/7f/ba/383b51cc26e3141c689ce988814385c7659f5ba01c4b5f2de38233010b5f/cryptography-2.4.2-cp27-cp27mu-manylinux1_x86_64.whl Collecting munch>=2.1.0 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/68/f4/260ec98ea840757a0da09e0ed8135333d59b8dfebe9752a365b04857660a/munch-2.3.2.tar.gz Collecting appdirs>=1.3.0 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/56/eb/810e700ed1349edde4cbdc1b2a21e28cdf115f9faf263f6bbf8447c1abf3/appdirs-1.4.3-py2.py3-none-any.whl Collecting jsonpatch!=1.20,>=1.16 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/a0/e6/d50d526ae2218b765ddbdb2dda14d65e19f501ce07410b375bc43ad20b7a/jsonpatch-1.23-py2.py3-none-any.whl Collecting jmespath>=0.9.0 (from openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/b7/31/05c8d001f7f87f0f07289a5fc0fc3832e9a57f2dbd4d3b0fee70e0d51365/jmespath-0.9.3-py2.py3-none-any.whl Requirement already satisfied: idna<2.8,>=2.5 in /usr/lib/python2.7/site-packages (from requests>=2.18.0->oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (2.7) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python2.7/site-packages (from requests>=2.18.0->oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (3.0.4) Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/lib/python2.7/site-packages (from requests>=2.18.0->oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (1.24.1) Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python2.7/site-packages (from requests>=2.18.0->oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (2018.10.15) Requirement already satisfied: wrapt>=1.7.0 in /usr/lib/python2.7/site-packages (from debtcollector>=1.2.0->oslo.config>=5.2.0->kuryr-libnetwork==2.0.1.dev20) (1.10.11) Collecting cmd2!=0.8.3,<0.9.0; python_version < "3.0" (from cliff!=2.9.0,>=2.8.0->python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/e9/40/a71caa2aaff10c73612a7106e2d35f693e85b8cf6e37ab0774274bca3cf9/cmd2-0.8.9-py2.py3-none-any.whl Collecting unicodecsv>=0.8.0; python_version < "3.0" (from cliff!=2.9.0,>=2.8.0->python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/6f/a4/691ab63b17505a26096608cc309960b5a6bdf39e4ba1a793d5f9b1a53270/unicodecsv-0.14.1.tar.gz Requirement already satisfied: testtools in /usr/lib/python2.7/site-packages (from testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (2.3.0) Requirement already satisfied: python-editor>=0.3 in /usr/lib/python2.7/site-packages (from alembic>=0.9.6->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.3) Requirement already satisfied: sqlparse 
in /usr/lib/python2.7/site-packages (from sqlalchemy-migrate>=0.11.0->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.2.4) Requirement already satisfied: Tempita>=0.4 in /usr/lib/python2.7/site-packages (from sqlalchemy-migrate>=0.11.0->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.5.2) Requirement already satisfied: repoze.lru>=0.3 in /usr/lib/python2.7/site-packages (from Routes>=2.3.1->oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.7) Requirement already satisfied: dnspython>=1.15.0 in /usr/lib/python2.7/site-packages (from eventlet!=0.18.3,!=0.20.1,>=0.18.2->oslo.service!=1.28.1,>=1.24.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.15.0) Requirement already satisfied: statsd>=3.2.1 in /usr/lib/python2.7/site-packages (from oslo.middleware>=3.31.0->oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (3.3.0) Requirement already satisfied: contextlib2>=0.4.0; python_version < "3.0" in /usr/lib/python2.7/site-packages (from futurist>=1.2.0->oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (0.5.5) Requirement already satisfied: vine>=1.1.3 in /usr/lib/python2.7/site-packages (from amqp>=2.3.0->oslo.messaging>=5.29.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.1.4) Requirement already satisfied: beautifulsoup4 in /usr/lib/python2.7/site-packages (from WebTest>=1.3.1->pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (4.6.3) Requirement already satisfied: waitress>=0.8.5 in /usr/lib/python2.7/site-packages (from WebTest>=1.3.1->pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.1.0) Collecting cffi!=1.11.3,>=1.7 (from cryptography>=2.1->openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/14/dd/3e7a1e1280e7d767bd3fa15791759c91ec19058ebe31217fe66f3e9a8c49/cffi-1.11.5-cp27-cp27mu-manylinux1_x86_64.whl Collecting asn1crypto>=0.21.0 (from cryptography>=2.1->openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl Collecting jsonpointer>=1.9 (from jsonpatch!=1.20,>=1.16->openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/18/b0/a80d29577c08eea401659254dfaed87f1af45272899e1812d7e01b679bc5/jsonpointer-2.0-py2.py3-none-any.whl Requirement already satisfied: pyperclip in /usr/lib/python2.7/site-packages (from cmd2!=0.8.3,<0.9.0; python_version < "3.0"->cliff!=2.9.0,>=2.8.0->python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) (1.7.0) Requirement already satisfied: wcwidth; sys_platform != "win32" in /usr/lib/python2.7/site-packages (from cmd2!=0.8.3,<0.9.0; python_version < "3.0"->cliff!=2.9.0,>=2.8.0->python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) (0.1.7) Collecting subprocess32; python_version < "3.0" (from cmd2!=0.8.3,<0.9.0; python_version < "3.0"->cliff!=2.9.0,>=2.8.0->python-neutronclient>=6.7.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/be/2b/beeba583e9877e64db10b52a96915afc0feabf7144dcbf2a0d0ea68bf73d/subprocess32-3.5.3.tar.gz Requirement already satisfied: extras>=1.0.0 in /usr/lib/python2.7/site-packages (from 
testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.0) Requirement already satisfied: unittest2>=1.0.0 in /usr/lib/python2.7/site-packages (from testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.1.0) Requirement already satisfied: traceback2 in /usr/lib/python2.7/site-packages (from testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.4.0) Requirement already satisfied: python-mimeparse in /usr/lib/python2.7/site-packages (from testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.6.0) Collecting pycparser (from cffi!=1.11.3,>=1.7->cryptography>=2.1->openstacksdk>=0.13.0->os-client-config>=1.28.0->kuryr-libnetwork==2.0.1.dev20) Using cached https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz Requirement already satisfied: argparse in /usr/lib/python2.7/site-packages (from unittest2>=1.0.0->testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.4.0) Requirement already satisfied: linecache2 in /usr/lib/python2.7/site-packages (from traceback2->testtools->testscenarios>=0.4->oslo.db>=4.27.0->neutron-lib>=1.13.0->kuryr-libnetwork==2.0.1.dev20) (1.0.0) Installing collected packages: subprocess32, cmd2, unicodecsv, cliff, dogpile.cache, requestsexceptions, pycparser, cffi, asn1crypto, cryptography, munch, appdirs, jsonpointer, jsonpatch, jmespath, openstacksdk, simplejson, osc-lib, python-keystoneclient, os-client-config, python-neutronclient, kuryr-lib, kuryr-libnetwork Running setup.py install for subprocess32 ... error Complete output from command /usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-tVH19b/subprocess32/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-_SN4DI/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-x86_64-2.7 copying subprocess32.py -> build/lib.linux-x86_64-2.7 running build_ext running build_configure checking for gcc... no checking for cc... no checking for cl.exe... 
no configure: error: in `/tmp/pip-install-tVH19b/subprocess32': configure: error: no acceptable C compiler found in $PATH See `config.log' for more details Traceback (most recent call last): File "", line 1, in File "/tmp/pip-install-tVH19b/subprocess32/setup.py", line 120, in main() File "/tmp/pip-install-tVH19b/subprocess32/setup.py", line 114, in main 'Programming Language :: Python :: Implementation :: CPython', File "/usr/lib/python2.7/site-packages/setuptools/__init__.py", line 140, in setup return distutils.core.setup(**attrs) File "/usr/lib64/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/lib/python2.7/site-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File "/usr/lib64/python2.7/distutils/command/install.py", line 563, in run self.run_command('build') File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/lib64/python2.7/distutils/command/build.py", line 127, in run self.run_command(cmd_name) File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/pip-install-tVH19b/subprocess32/setup.py", line 41, in run self.run_command(command) File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/pip-install-tVH19b/subprocess32/setup.py", line 26, in run raise RuntimeError(configure_command + ' failed.') RuntimeError: sh ./configure failed. ---------------------------------------- Command "/usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-tVH19b/subprocess32/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-_SN4DI/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-tVH19b/subprocess32/ ~~~ python-dev and docker is already installed on the machine. Any help is highly appreciated. Thanks & Regards, Vikrant Aggarwal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ervikrant06 at gmail.com Fri Nov 30 04:08:13 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Fri, 30 Nov 2018 09:38:13 +0530 Subject: [openstack-dev] [kuryr] can we start kuryr libnetwork in container inside the nova VM. Message-ID: Hello Team, I have seen the steps of starting the kuryr libnetwork container on compute node. But If I need to run the same container inside the VM running on compute node, is't possible to do that? I am not sure how can I map the /var/run/openvswitch inside the nested VM because this is present on compute node. https://docs.openstack.org/kuryr-libnetwork/latest/readme.html Thanks & Regards, Vikrant Aggarwal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wjstk16 at gmail.com Fri Nov 30 07:40:18 2018 From: wjstk16 at gmail.com (Won) Date: Fri, 30 Nov 2018 16:40:18 +0900 Subject: [openstack-dev] [vitrage] I have some problems with Prometheus alarms in vitrage. In-Reply-To: References: Message-ID: Hi, I checked that both of the methods you proposed work well. After I added the 'should_delete_outdated_entities' function to InstanceDriver, it took about 10 minutes to clear the old instance. I also added the two lines you mentioned to nova-cpu.conf, so the vitrage collector now gets notifications correctly. Thank you for your help. Best regards, Won On Thu, Nov 22, 2018 at 9:35 PM Ifat Afek wrote: > Hi, > > A deleted instance should be removed from Vitrage in one of two ways: > 1. By reacting to a notification from Nova > 2. If no notification is received, then after a while the instance vertex > in Vitrage is considered "outdated" and is deleted > > Regarding #1, it is clear from your logs that you don't get notifications > from Nova on the second compute. > Do you have on one of your nodes, in addition to nova.conf, also a > nova-cpu.conf? if so, please make the same change in this file: > > notification_topics = notifications,vitrage_notifications > > notification_driver = messagingv2 > > And please make sure to restart nova compute service on that node. > > Regarding #2, as a second-best solution, the instances should be deleted > from the graph after not being updated for a while. > I realized that we have a bug in this area and I will push a fix to gerrit > later today. In the meantime, you can add to > InstanceDriver class the following function: > > @staticmethod > def should_delete_outdated_entities(): > return True > > Let me know if it solved your problem, > Ifat > > > On Wed, Nov 21, 2018 at 1:50 PM Won wrote: > >> I attached four log files. >> I collected the logs from about 17:14 to 17:42. I created an instance of >> 'deltesting3' at 17:17. 7minutes later, at 17:24, the entity graph showed >> the dentesting3 and vitrage colletor and graph logs are appeared. >> When creating an instance in ubuntu server, it appears immediately in the >> entity graph and logs, but when creating an instance in computer1 (multi >> node), it appears about 5~10 minutes later. >> I deleted an instance of 'deltesting3' around 17:26. >> >> >>> After ~20minutes, there was only Apigateway. Does it make sense? did you >>> delete the instances on ubuntu, in addition to deltesting? >>> >> >> I only deleted 'deltesting'. After that, only the logs from 'apigateway' >> and 'kube-master' were collected. But other instances were working well. I >> don't know why only two instances are collected in the log. >> NOV 19 In this log, 'agigateway' and 'kube-master' were continuously >> collected in a short period of time, but other instances were sometimes >> collected in long periods. >> >> In any case, I would expect to see the instances deleted from the graph >>> at this stage, since they were not returned by get_all. >>> Can you please send me the log of vitrage-graph at the same time (Nov >>> 15, 16:35-17:10)? >>> >> >> Information 'deldtesting3' that has already been deleted continues to be >> collected in vitrage-graph.service. 
>> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Fri Nov 30 09:31:15 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 30 Nov 2018 10:31:15 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: On 11/29/18 6:42 PM, Jiří Stránský wrote: > On 28. 11. 18 18:29, Bogdan Dobrelya wrote: >> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>> >>> >>>> >>>> Reiterating again on previous points: >>>> >>>> -I'd be fine removing systemd. But lets do it properly and not via 'rpm >>>> -ev --nodeps'. >>>> -Puppet and Ruby *are* required for configuration. We can certainly put >>>> them in a separate container outside of the runtime service containers >>>> but doing so would actually cost you much more space/bandwidth for each >>>> service container. As both of these have to get downloaded to each node >>>> anyway in order to generate config files with our current mechanisms >>>> I'm not sure this buys you anything. >>> >>> +1. I was actually under the impression that we concluded yesterday on >>> IRC that this is the only thing that makes sense to seriously consider. >>> But even then it's not a win-win -- we'd gain some security by leaner >>> production images, but pay for it with space+bandwidth by duplicating >>> image content (IOW we can help achieve one of the goals we had in mind >>> by worsening the situation w/r/t the other goal we had in mind.) >>> >>> Personally i'm not sold yet but it's something that i'd consider if we >>> got measurements of how much more space/bandwidth usage this would >>> consume, and if we got some further details/examples about how serious >>> are the security concerns if we leave config mgmt tools in runtime >>> images. >>> >>> IIRC the other options (that were brought forward so far) were already >>> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind >>> mounting being too hacky and fragile, and nsenter not really solving the >>> problem (because it allows us to switch to having different bins/libs >>> available, but it does not allow merging the availability of bins/libs >>> from two containers into a single context). >>> >>>> >>>> We are going in circles here I think.... >>> >>> +1. I think too much of the discussion focuses on "why it's bad to have >>> config tools in runtime images", but IMO we all sorta agree that it >>> would be better not to have them there, if it came at no cost. >>> >>> I think to move forward, it would be interesting to know: if we do this >>> (i'll borrow Dan's drawing): >>> >>> |base container| --> |service container| --> |service container w/ >>> Puppet installed| >>> >>> How much more space and bandwidth would this consume per node (e.g. >>> separately per controller, per compute). This could help with decision >>> making. 
>> >> As I've already evaluated in the related bug, that is: >> >> puppet-* modules and manifests ~ 16MB >> puppet with dependencies ~61MB >> dependencies of the seemingly largest a dependency, systemd ~190MB >> >> that would be an extra layer size for each of the container images to be >> downloaded/fetched into registries. > > Thanks, i tried to do the math of the reduction vs. inflation in sizes > as follows. I think the crucial point here is the layering. If we do > this image layering: > > |base| --> |+ service| --> |+ Puppet| > > we'd drop ~267 MB from base image, but we'd be installing that to the > topmost level, per-component, right? Given we detached systemd from puppet, cronie et al, that would be 267-190MB, so the math below would be looking much better > > In my basic deployment, undercloud seems to have 17 "components" (49 > containers), overcloud controller 15 components (48 containers), and > overcloud compute 4 components (7 containers). Accounting for overlaps, > the total number of "components" used seems to be 19. (By "components" > here i mean whatever uses a different ConfigImage than other services. I > just eyeballed it but i think i'm not too far off the correct number.) > > So we'd subtract 267 MB from base image and add that to 19 leaf images > used in this deployment. That means difference of +4.8 GB to the current > image sizes. My /var/lib/registry dir on undercloud with all the images > currently has 5.1 GB. We'd almost double that to 9.9 GB. > > Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs > (both external and e.g. internal within OpenStack Infra CI clouds). > > And for internal traffic between local registry and overcloud nodes, it > gives +3.7 GB per controller and +800 MB per compute. That may not be so > critical but still feels like a considerable downside. > > Another gut feeling is that this way of image layering would take longer > time to build and to run the modify-image Ansible role which we use in > CI, so that could endanger how our CI jobs fit into the time limit. We > could also probably measure this but i'm not sure if it's worth spending > the time. > > All in all i'd argue we should be looking at different options still. > >> >> Given that we should decouple systemd from all/some of the dependencies >> (an example topic for RDO [0]), that could save a 190MB. But it seems we >> cannot break the love of puppet and systemd as it heavily relies on the >> latter and changing packaging like that would higly likely affect >> baremetal deployments with puppet and systemd co-operating. > > Ack :/ > >> >> Long story short, we cannot shoot both rabbits with a single shot, not >> with puppet :) May be we could with ansible replacing puppet fully... >> So splitting config and runtime images is the only choice yet to address >> the raised security concerns. And let's forget about edge cases for now. >> Tossing around a pair of extra bytes over 40,000 WAN-distributed >> computes ain't gonna be our the biggest problem for sure. 
>> >> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction >> >>> >>>> >>>> Dan >>>> >>> >>> Thanks >>> >>> Jirka >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Bogdan Dobrelya, Irc #bogdando From mbooth at redhat.com Fri Nov 30 12:06:56 2018 From: mbooth at redhat.com (Matthew Booth) Date: Fri, 30 Nov 2018 12:06:56 +0000 Subject: [openstack-dev] [nova] Would an api option to create an instance without powering on be useful? Message-ID: I have a request to do $SUBJECT in relation to a V2V workflow. The use case here is conversion of a VM/Physical which was previously powered off. We want to move its data, but we don't want to be powering on stuff which wasn't previously on. This would involve an api change, and a hopefully very small change in drivers to support it. Technically I don't see it as an issue. However, is it a change we'd be willing to accept? Is there any good reason not to do this? Are there any less esoteric workflows which might use this feature? Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From dprince at redhat.com Fri Nov 30 12:52:08 2018 From: dprince at redhat.com (Dan Prince) Date: Fri, 30 Nov 2018 07:52:08 -0500 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> Message-ID: <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote: > On 11/29/18 6:42 PM, Jiří Stránský wrote: > > On 28. 11. 18 18:29, Bogdan Dobrelya wrote: > > > On 11/28/18 6:02 PM, Jiří Stránský wrote: > > > > > > > > > > > > > Reiterating again on previous points: > > > > > > > > > > -I'd be fine removing systemd. But lets do it properly and > > > > > not via 'rpm > > > > > -ev --nodeps'. > > > > > -Puppet and Ruby *are* required for configuration. We can > > > > > certainly put > > > > > them in a separate container outside of the runtime service > > > > > containers > > > > > but doing so would actually cost you much more > > > > > space/bandwidth for each > > > > > service container. As both of these have to get downloaded to > > > > > each node > > > > > anyway in order to generate config files with our current > > > > > mechanisms > > > > > I'm not sure this buys you anything. > > > > > > > > +1. I was actually under the impression that we concluded > > > > yesterday on > > > > IRC that this is the only thing that makes sense to seriously > > > > consider. 
> > > > But even then it's not a win-win -- we'd gain some security by > > > > leaner > > > > production images, but pay for it with space+bandwidth by > > > > duplicating > > > > image content (IOW we can help achieve one of the goals we had > > > > in mind > > > > by worsening the situation w/r/t the other goal we had in > > > > mind.) > > > > > > > > Personally i'm not sold yet but it's something that i'd > > > > consider if we > > > > got measurements of how much more space/bandwidth usage this > > > > would > > > > consume, and if we got some further details/examples about how > > > > serious > > > > are the security concerns if we leave config mgmt tools in > > > > runtime > > > > images. > > > > > > > > IIRC the other options (that were brought forward so far) were > > > > already > > > > dismissed in yesterday's IRC discussion and on the reviews. > > > > Bin/lib bind > > > > mounting being too hacky and fragile, and nsenter not really > > > > solving the > > > > problem (because it allows us to switch to having different > > > > bins/libs > > > > available, but it does not allow merging the availability of > > > > bins/libs > > > > from two containers into a single context). > > > > > > > > > We are going in circles here I think.... > > > > > > > > +1. I think too much of the discussion focuses on "why it's bad > > > > to have > > > > config tools in runtime images", but IMO we all sorta agree > > > > that it > > > > would be better not to have them there, if it came at no cost. > > > > > > > > I think to move forward, it would be interesting to know: if we > > > > do this > > > > (i'll borrow Dan's drawing): > > > > > > > > > base container| --> |service container| --> |service > > > > > container w/ > > > > Puppet installed| > > > > > > > > How much more space and bandwidth would this consume per node > > > > (e.g. > > > > separately per controller, per compute). This could help with > > > > decision > > > > making. > > > > > > As I've already evaluated in the related bug, that is: > > > > > > puppet-* modules and manifests ~ 16MB > > > puppet with dependencies ~61MB > > > dependencies of the seemingly largest a dependency, systemd > > > ~190MB > > > > > > that would be an extra layer size for each of the container > > > images to be > > > downloaded/fetched into registries. > > > > Thanks, i tried to do the math of the reduction vs. inflation in > > sizes > > as follows. I think the crucial point here is the layering. If we > > do > > this image layering: > > > > > base| --> |+ service| --> |+ Puppet| > > > > we'd drop ~267 MB from base image, but we'd be installing that to > > the > > topmost level, per-component, right? > > Given we detached systemd from puppet, cronie et al, that would be > 267-190MB, so the math below would be looking much better Would it be worth writing a spec that summarizes what action items are being taken to optimize our base image with regard to systemd? It seems like the general consensus is that cleaning up some of the RPM dependencies so that we don't install systemd is the biggest win. What confuses me is why there are still patches posted to move Puppet out of the base layer when we agree that moving it out of the base layer would actually cause our resulting container image set to be larger in size. Dan > > > In my basic deployment, undercloud seems to have 17 "components" > > (49 > > containers), overcloud controller 15 components (48 containers), > > and > > overcloud compute 4 components (7 containers). 
Accounting for > > overlaps, > > the total number of "components" used seems to be 19. (By > > "components" > > here i mean whatever uses a different ConfigImage than other > > services. I > > just eyeballed it but i think i'm not too far off the correct > > number.) > > > > So we'd subtract 267 MB from base image and add that to 19 leaf > > images > > used in this deployment. That means difference of +4.8 GB to the > > current > > image sizes. My /var/lib/registry dir on undercloud with all the > > images > > currently has 5.1 GB. We'd almost double that to 9.9 GB. > > > > Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the > > CDNs > > (both external and e.g. internal within OpenStack Infra CI clouds). > > > > And for internal traffic between local registry and overcloud > > nodes, it > > gives +3.7 GB per controller and +800 MB per compute. That may not > > be so > > critical but still feels like a considerable downside. > > > > Another gut feeling is that this way of image layering would take > > longer > > time to build and to run the modify-image Ansible role which we use > > in > > CI, so that could endanger how our CI jobs fit into the time limit. > > We > > could also probably measure this but i'm not sure if it's worth > > spending > > the time. > > > > All in all i'd argue we should be looking at different options > > still. > > > > > Given that we should decouple systemd from all/some of the > > > dependencies > > > (an example topic for RDO [0]), that could save a 190MB. But it > > > seems we > > > cannot break the love of puppet and systemd as it heavily relies > > > on the > > > latter and changing packaging like that would higly likely affect > > > baremetal deployments with puppet and systemd co-operating. > > > > Ack :/ > > > > > Long story short, we cannot shoot both rabbits with a single > > > shot, not > > > with puppet :) May be we could with ansible replacing puppet > > > fully... > > > So splitting config and runtime images is the only choice yet to > > > address > > > the raised security concerns. And let's forget about edge cases > > > for now. > > > Tossing around a pair of extra bytes over 40,000 WAN-distributed > > > computes ain't gonna be our the biggest problem for sure. 
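For reference, the size estimate quoted above can be reproduced from the figures given in the thread. A minimal sketch of the arithmetic in Python, assuming (as the quoted reasoning suggests) that the ~267 MB config-tooling layer would be added once to every leaf image and dropped once from the shared base image:

    # Figures taken from the thread; how they were combined is an assumption.
    extra_layer_mb = 267          # puppet modules ~16 MB + puppet deps ~61 MB + systemd ~190 MB
    leaf_images_total = 19        # distinct "components" in the basic deployment
    leaf_images_controller = 15
    leaf_images_compute = 4

    def delta_mb(leaf_images):
        # Each leaf image gains the layer once; the base image sheds it once.
        return leaf_images * extra_layer_mb - extra_layer_mb

    print("registry total: +%.1f GB" % (delta_mb(leaf_images_total) / 1000.0))       # +4.8 GB
    print("per controller: +%.1f GB" % (delta_mb(leaf_images_controller) / 1000.0))  # +3.7 GB
    print("per compute:    +%d MB" % delta_mb(leaf_images_compute))                  # ~800 MB

Those are the +4.8 GB, +3.7 GB and ~800 MB figures discussed above.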
> > > > > > [0] > > > https://review.rdoproject.org/r/#/q/topic:base-container-reduction > > > > > > > > Dan > > > > > > > > > > > > > Thanks > > > > > > > > Jirka > > > > > > > > _______________________________________________________________ > > > > ___________ > > > > > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From bdobreli at redhat.com Fri Nov 30 13:31:13 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 30 Nov 2018 14:31:13 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> Message-ID: On 11/30/18 1:52 PM, Dan Prince wrote: > On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote: >> On 11/29/18 6:42 PM, Jiří Stránský wrote: >>> On 28. 11. 18 18:29, Bogdan Dobrelya wrote: >>>> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>>>> >>>>> >>>>>> Reiterating again on previous points: >>>>>> >>>>>> -I'd be fine removing systemd. But lets do it properly and >>>>>> not via 'rpm >>>>>> -ev --nodeps'. >>>>>> -Puppet and Ruby *are* required for configuration. We can >>>>>> certainly put >>>>>> them in a separate container outside of the runtime service >>>>>> containers >>>>>> but doing so would actually cost you much more >>>>>> space/bandwidth for each >>>>>> service container. As both of these have to get downloaded to >>>>>> each node >>>>>> anyway in order to generate config files with our current >>>>>> mechanisms >>>>>> I'm not sure this buys you anything. >>>>> >>>>> +1. I was actually under the impression that we concluded >>>>> yesterday on >>>>> IRC that this is the only thing that makes sense to seriously >>>>> consider. >>>>> But even then it's not a win-win -- we'd gain some security by >>>>> leaner >>>>> production images, but pay for it with space+bandwidth by >>>>> duplicating >>>>> image content (IOW we can help achieve one of the goals we had >>>>> in mind >>>>> by worsening the situation w/r/t the other goal we had in >>>>> mind.) >>>>> >>>>> Personally i'm not sold yet but it's something that i'd >>>>> consider if we >>>>> got measurements of how much more space/bandwidth usage this >>>>> would >>>>> consume, and if we got some further details/examples about how >>>>> serious >>>>> are the security concerns if we leave config mgmt tools in >>>>> runtime >>>>> images. >>>>> >>>>> IIRC the other options (that were brought forward so far) were >>>>> already >>>>> dismissed in yesterday's IRC discussion and on the reviews. 
>>>>> Bin/lib bind >>>>> mounting being too hacky and fragile, and nsenter not really >>>>> solving the >>>>> problem (because it allows us to switch to having different >>>>> bins/libs >>>>> available, but it does not allow merging the availability of >>>>> bins/libs >>>>> from two containers into a single context). >>>>> >>>>>> We are going in circles here I think.... >>>>> >>>>> +1. I think too much of the discussion focuses on "why it's bad >>>>> to have >>>>> config tools in runtime images", but IMO we all sorta agree >>>>> that it >>>>> would be better not to have them there, if it came at no cost. >>>>> >>>>> I think to move forward, it would be interesting to know: if we >>>>> do this >>>>> (i'll borrow Dan's drawing): >>>>> >>>>>> base container| --> |service container| --> |service >>>>>> container w/ >>>>> Puppet installed| >>>>> >>>>> How much more space and bandwidth would this consume per node >>>>> (e.g. >>>>> separately per controller, per compute). This could help with >>>>> decision >>>>> making. >>>> >>>> As I've already evaluated in the related bug, that is: >>>> >>>> puppet-* modules and manifests ~ 16MB >>>> puppet with dependencies ~61MB >>>> dependencies of the seemingly largest a dependency, systemd >>>> ~190MB >>>> >>>> that would be an extra layer size for each of the container >>>> images to be >>>> downloaded/fetched into registries. >>> >>> Thanks, i tried to do the math of the reduction vs. inflation in >>> sizes >>> as follows. I think the crucial point here is the layering. If we >>> do >>> this image layering: >>> >>>> base| --> |+ service| --> |+ Puppet| >>> >>> we'd drop ~267 MB from base image, but we'd be installing that to >>> the >>> topmost level, per-component, right? >> >> Given we detached systemd from puppet, cronie et al, that would be >> 267-190MB, so the math below would be looking much better > > Would it be worth writing a spec that summarizes what action items are > bing taken to optimize our base image with regards to the systemd? Perhaps it would be. But honestly, I see nothing biggie to require a full blown spec. Just changing RPM deps and layers for containers images. I'm tracking systemd changes here [0],[1],[2], btw (if accepted, it should be working as of fedora28(or 29) I hope) [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > > It seems like the general consenses is that cleaning up some of the RPM > dependencies so that we don't install Systemd is the biggest win. > > What confuses me is why are there still patches posted to move Puppet > out of the base layer when we agree moving it out of the base layer > would actually cause our resulting container image set to be larger in > size. > > Dan > > >> >>> In my basic deployment, undercloud seems to have 17 "components" >>> (49 >>> containers), overcloud controller 15 components (48 containers), >>> and >>> overcloud compute 4 components (7 containers). Accounting for >>> overlaps, >>> the total number of "components" used seems to be 19. (By >>> "components" >>> here i mean whatever uses a different ConfigImage than other >>> services. I >>> just eyeballed it but i think i'm not too far off the correct >>> number.) >>> >>> So we'd subtract 267 MB from base image and add that to 19 leaf >>> images >>> used in this deployment. That means difference of +4.8 GB to the >>> current >>> image sizes. 
My /var/lib/registry dir on undercloud with all the >>> images >>> currently has 5.1 GB. We'd almost double that to 9.9 GB. >>> >>> Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the >>> CDNs >>> (both external and e.g. internal within OpenStack Infra CI clouds). >>> >>> And for internal traffic between local registry and overcloud >>> nodes, it >>> gives +3.7 GB per controller and +800 MB per compute. That may not >>> be so >>> critical but still feels like a considerable downside. >>> >>> Another gut feeling is that this way of image layering would take >>> longer >>> time to build and to run the modify-image Ansible role which we use >>> in >>> CI, so that could endanger how our CI jobs fit into the time limit. >>> We >>> could also probably measure this but i'm not sure if it's worth >>> spending >>> the time. >>> >>> All in all i'd argue we should be looking at different options >>> still. >>> >>>> Given that we should decouple systemd from all/some of the >>>> dependencies >>>> (an example topic for RDO [0]), that could save a 190MB. But it >>>> seems we >>>> cannot break the love of puppet and systemd as it heavily relies >>>> on the >>>> latter and changing packaging like that would higly likely affect >>>> baremetal deployments with puppet and systemd co-operating. >>> >>> Ack :/ >>> >>>> Long story short, we cannot shoot both rabbits with a single >>>> shot, not >>>> with puppet :) May be we could with ansible replacing puppet >>>> fully... >>>> So splitting config and runtime images is the only choice yet to >>>> address >>>> the raised security concerns. And let's forget about edge cases >>>> for now. >>>> Tossing around a pair of extra bytes over 40,000 WAN-distributed >>>> computes ain't gonna be our the biggest problem for sure. >>>> >>>> [0] >>>> https://review.rdoproject.org/r/#/q/topic:base-container-reduction >>>> >>>>>> Dan >>>>>> >>>>> >>>>> Thanks >>>>> >>>>> Jirka >>>>> >>>>> _______________________________________________________________ >>>>> ___________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ___________________________________________________________________ >>> _______ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu >>> bscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From mnaser at vexxhost.com Fri Nov 30 14:40:00 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 30 Nov 2018 09:40:00 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful? In-Reply-To: References: Message-ID: On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote: > I have a request to do $SUBJECT in relation to a V2V workflow. The use > case here is conversion of a VM/Physical which was previously powered > off. We want to move its data, but we don't want to be powering on > stuff which wasn't previously on. > > This would involve an api change, and a hopefully very small change in > drivers to support it. Technically I don't see it as an issue. > > However, is it a change we'd be willing to accept? Is there any good > reason not to do this? Are there any less esoteric workflows which > might use this feature? 
> If you upload an image of said VM which you don't boot, you'd really be accomplishing the same thing, no? Unless you want to be in a state where you want the VM to be there but sitting in SHUTOFF state > Matt > -- > Matthew Booth > Red Hat OpenStack Engineer, Compute DFG > > Phone: +442070094448 (UK) > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Nov 30 16:34:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 30 Nov 2018 10:34:47 -0600 Subject: [openstack-dev] [nova] Would an api option to create an instance without powering on be useful? In-Reply-To: References: Message-ID: <81627e78-4f92-2909-f209-284b4847ae09@nemebean.com> On 11/30/18 6:06 AM, Matthew Booth wrote: > I have a request to do $SUBJECT in relation to a V2V workflow. The use > case here is conversion of a VM/Physical which was previously powered > off. We want to move its data, but we don't want to be powering on > stuff which wasn't previously on. > > This would involve an api change, and a hopefully very small change in > drivers to support it. Technically I don't see it as an issue. > > However, is it a change we'd be willing to accept? Is there any good > reason not to do this? Are there any less esoteric workflows which > might use this feature? I don't know if it qualifies as less esoteric, but I would use this for OVB[1]. When we create the "baremetal" VMs there's no need to actually power them on since the first thing we do with them is shut them down again. Their initial footprint is pretty small so it's not a huge deal, but it is another potential use case for this feature. 1: https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html From Kevin.Fox at pnnl.gov Fri Nov 30 17:48:42 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Fri, 30 Nov 2018 17:48:42 +0000 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov> Still confused by: [base] -> [service] -> [+ puppet] not: [base] -> [puppet] and [base] -> [service] ? Thanks, Kevin ________________________________________ From: Bogdan Dobrelya [bdobreli at redhat.com] Sent: Friday, November 30, 2018 5:31 AM To: Dan Prince; openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes On 11/30/18 1:52 PM, Dan Prince wrote: > On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote: >> On 11/29/18 6:42 PM, Jiří Stránský wrote: >>> On 28. 11. 
18 18:29, Bogdan Dobrelya wrote: >>>> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>>>> >>>>> >>>>>> Reiterating again on previous points: >>>>>> >>>>>> -I'd be fine removing systemd. But lets do it properly and >>>>>> not via 'rpm >>>>>> -ev --nodeps'. >>>>>> -Puppet and Ruby *are* required for configuration. We can >>>>>> certainly put >>>>>> them in a separate container outside of the runtime service >>>>>> containers >>>>>> but doing so would actually cost you much more >>>>>> space/bandwidth for each >>>>>> service container. As both of these have to get downloaded to >>>>>> each node >>>>>> anyway in order to generate config files with our current >>>>>> mechanisms >>>>>> I'm not sure this buys you anything. >>>>> >>>>> +1. I was actually under the impression that we concluded >>>>> yesterday on >>>>> IRC that this is the only thing that makes sense to seriously >>>>> consider. >>>>> But even then it's not a win-win -- we'd gain some security by >>>>> leaner >>>>> production images, but pay for it with space+bandwidth by >>>>> duplicating >>>>> image content (IOW we can help achieve one of the goals we had >>>>> in mind >>>>> by worsening the situation w/r/t the other goal we had in >>>>> mind.) >>>>> >>>>> Personally i'm not sold yet but it's something that i'd >>>>> consider if we >>>>> got measurements of how much more space/bandwidth usage this >>>>> would >>>>> consume, and if we got some further details/examples about how >>>>> serious >>>>> are the security concerns if we leave config mgmt tools in >>>>> runtime >>>>> images. >>>>> >>>>> IIRC the other options (that were brought forward so far) were >>>>> already >>>>> dismissed in yesterday's IRC discussion and on the reviews. >>>>> Bin/lib bind >>>>> mounting being too hacky and fragile, and nsenter not really >>>>> solving the >>>>> problem (because it allows us to switch to having different >>>>> bins/libs >>>>> available, but it does not allow merging the availability of >>>>> bins/libs >>>>> from two containers into a single context). >>>>> >>>>>> We are going in circles here I think.... >>>>> >>>>> +1. I think too much of the discussion focuses on "why it's bad >>>>> to have >>>>> config tools in runtime images", but IMO we all sorta agree >>>>> that it >>>>> would be better not to have them there, if it came at no cost. >>>>> >>>>> I think to move forward, it would be interesting to know: if we >>>>> do this >>>>> (i'll borrow Dan's drawing): >>>>> >>>>>> base container| --> |service container| --> |service >>>>>> container w/ >>>>> Puppet installed| >>>>> >>>>> How much more space and bandwidth would this consume per node >>>>> (e.g. >>>>> separately per controller, per compute). This could help with >>>>> decision >>>>> making. >>>> >>>> As I've already evaluated in the related bug, that is: >>>> >>>> puppet-* modules and manifests ~ 16MB >>>> puppet with dependencies ~61MB >>>> dependencies of the seemingly largest a dependency, systemd >>>> ~190MB >>>> >>>> that would be an extra layer size for each of the container >>>> images to be >>>> downloaded/fetched into registries. >>> >>> Thanks, i tried to do the math of the reduction vs. inflation in >>> sizes >>> as follows. I think the crucial point here is the layering. If we >>> do >>> this image layering: >>> >>>> base| --> |+ service| --> |+ Puppet| >>> >>> we'd drop ~267 MB from base image, but we'd be installing that to >>> the >>> topmost level, per-component, right? 
>> >> Given we detached systemd from puppet, cronie et al, that would be >> 267-190MB, so the math below would be looking much better > > Would it be worth writing a spec that summarizes what action items are > bing taken to optimize our base image with regards to the systemd? Perhaps it would be. But honestly, I see nothing biggie to require a full blown spec. Just changing RPM deps and layers for containers images. I'm tracking systemd changes here [0],[1],[2], btw (if accepted, it should be working as of fedora28(or 29) I hope) [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > > It seems like the general consenses is that cleaning up some of the RPM > dependencies so that we don't install Systemd is the biggest win. > > What confuses me is why are there still patches posted to move Puppet > out of the base layer when we agree moving it out of the base layer > would actually cause our resulting container image set to be larger in > size. > > Dan > > >> >>> In my basic deployment, undercloud seems to have 17 "components" >>> (49 >>> containers), overcloud controller 15 components (48 containers), >>> and >>> overcloud compute 4 components (7 containers). Accounting for >>> overlaps, >>> the total number of "components" used seems to be 19. (By >>> "components" >>> here i mean whatever uses a different ConfigImage than other >>> services. I >>> just eyeballed it but i think i'm not too far off the correct >>> number.) >>> >>> So we'd subtract 267 MB from base image and add that to 19 leaf >>> images >>> used in this deployment. That means difference of +4.8 GB to the >>> current >>> image sizes. My /var/lib/registry dir on undercloud with all the >>> images >>> currently has 5.1 GB. We'd almost double that to 9.9 GB. >>> >>> Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the >>> CDNs >>> (both external and e.g. internal within OpenStack Infra CI clouds). >>> >>> And for internal traffic between local registry and overcloud >>> nodes, it >>> gives +3.7 GB per controller and +800 MB per compute. That may not >>> be so >>> critical but still feels like a considerable downside. >>> >>> Another gut feeling is that this way of image layering would take >>> longer >>> time to build and to run the modify-image Ansible role which we use >>> in >>> CI, so that could endanger how our CI jobs fit into the time limit. >>> We >>> could also probably measure this but i'm not sure if it's worth >>> spending >>> the time. >>> >>> All in all i'd argue we should be looking at different options >>> still. >>> >>>> Given that we should decouple systemd from all/some of the >>>> dependencies >>>> (an example topic for RDO [0]), that could save a 190MB. But it >>>> seems we >>>> cannot break the love of puppet and systemd as it heavily relies >>>> on the >>>> latter and changing packaging like that would higly likely affect >>>> baremetal deployments with puppet and systemd co-operating. >>> >>> Ack :/ >>> >>>> Long story short, we cannot shoot both rabbits with a single >>>> shot, not >>>> with puppet :) May be we could with ansible replacing puppet >>>> fully... >>>> So splitting config and runtime images is the only choice yet to >>>> address >>>> the raised security concerns. And let's forget about edge cases >>>> for now. >>>> Tossing around a pair of extra bytes over 40,000 WAN-distributed >>>> computes ain't gonna be our the biggest problem for sure. 
>>>> >>>> [0] >>>> https://review.rdoproject.org/r/#/q/topic:base-container-reduction >>>> >>>>>> Dan >>>>>> >>>>> >>>>> Thanks >>>>> >>>>> Jirka >>>>> >>>>> _______________________________________________________________ >>>>> ___________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ___________________________________________________________________ >>> _______ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu >>> bscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From smooney at redhat.com Fri Nov 30 21:28:39 2018 From: smooney at redhat.com (Sean Mooney) Date: Fri, 30 Nov 2018 21:28:39 +0000 Subject: Re: [openstack-dev] [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful? In-Reply-To: References: Message-ID: <58ed5a65770e77128b2b14a819ec37a4dc8401d8.camel@redhat.com> On Fri, 2018-11-30 at 09:40 -0500, Mohammed Naser wrote: > > > On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote: > > I have a request to do $SUBJECT in relation to a V2V workflow. The use > > case here is conversion of a VM/Physical which was previously powered > > off. We want to move its data, but we don't want to be powering on > > stuff which wasn't previously on. > > > > This would involve an api change, and a hopefully very small change in > > drivers to support it. Technically I don't see it as an issue. > > > > However, is it a change we'd be willing to accept? Is there any good > > reason not to do this? Are there any less esoteric workflows which > > might use this feature? > > If you upload an image of said VM which you don't boot, you'd really be > accomplishing the same thing, no? > > Unless you want to be in a state where you want the VM to be there but > sitting in SHUTOFF state I think the intent was to have a VM ready to go with IPs/ports, volumes, etc. all created so you can quickly start it when needed. If that is the case, another alternative which might be more public-cloud friendly from a wallet perspective would be the ability to create a shelved instance. That way all the ports etc. would be logically created, but it would not be consuming any compute resources. > > > Matt > > -- > > Matthew Booth > > Red Hat OpenStack Engineer, Compute DFG > > > > Phone: +442070094448 (UK) > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
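A rough sketch of the create-then-shelve workaround described above, using openstacksdk. The cloud, image, flavor and network names are placeholders (not values from this thread), and the shelve_server call is assumed to be present in the SDK release in use -- this is illustrative, not a definitive recipe:

    import openstack

    # 'mycloud' refers to an entry in clouds.yaml; placeholder name.
    conn = openstack.connect(cloud='mycloud')

    # There is currently no "create without powering on", so the instance
    # does go ACTIVE briefly before it is shelved.
    server = conn.compute.create_server(
        name='v2v-imported-vm',
        image_id=conn.image.find_image('imported-disk-image').id,
        flavor_id=conn.compute.find_flavor('m1.small').id,
        networks=[{'uuid': conn.network.find_network('private').id}],
    )
    server = conn.compute.wait_for_server(server)

    # Shelving keeps the instance record, ports and volumes logically in
    # place while releasing hypervisor resources, as suggested above.
    conn.compute.shelve_server(server)

If only the disk data needs to move and no instance record is wanted yet, simply uploading the image to the Image service and never booting it, as suggested earlier in the thread, avoids the power-on entirely.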