From sorrison at gmail.com Thu Nov 1 00:04:19 2018
From: sorrison at gmail.com (Sam Morrison)
Date: Thu, 1 Nov 2018 11:04:19 +1100
Subject: [Openstack-operators] RabbitMQ and SSL
Message-ID: <8EEDA593-4A37-4A1C-823A-FCA61299B2DE@gmail.com>

Hi all,
We’ve been battling an issue after an upgrade to Pike which essentially makes using rabbit with SSL impossible: https://bugs.launchpad.net/oslo.messaging/+bug/1800957

We use Ubuntu cloud archives so it might not exactly be oslo but a dependent library. Anyone else seen similar issues?

Cheers,
Sam

-------------- next part --------------
An HTML attachment was scrubbed...

From mrhillsman at gmail.com Thu Nov 1 11:25:31 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Thu, 1 Nov 2018 06:25:31 -0500
Subject: [Openstack-operators] [openlab] October Report
Message-ID:

Hi everyone,

Here are some highlights from OpenLab for the month of October:

CI additions
- cluster-api-provider-openstack
- AdoptOpenJDK - very important open source project - many Java developers - strategic for open source ecosystem

Website redesign completed
- fielding resource and support requests via GitHub
- ML sign up via website
- Community page
- CI Infrastructure and High level request pipeline still manual but driven by Google Sheets - closer to being fully automated; easier to manage via spreadsheet instead of website backend

Promotion
- OSN Day Dallas, November 6th, 2018 https://events.linuxfoundation.org/events/osn_days_2018/north-america/dallas/
- Twitter account is live - @askopenlab

Mailing List
- https://lists.openlabtesting.org - running latest mailman - Postorius frontend
- net new members - 7

OpenLab Tests
(October)
- total number of tests run - 3504
- SUCCESS - 2421
- FAILURE - 871
- POST_FAILURE - 72
- RETRY_LIMIT - 131
- TIMED_OUT - 9
- NODE_FAILURE - 0
- SKIPPED - 0
- 69.0925% : 30.9075% (success to fail/other job ratio)
(September)
- total number of tests run - 4350
- SUCCESS - 2611
- FAILURE - 1326
- POST_FAILURE - 336
- RETRY_LIMIT - 66
- TIMED_OUT - 11
- NODE_FAILURE - 0
- SKIPPED - 0
- 60.0230% : 39.9770% (success to fail/other job ratio)
Delta
- 9.0695% increase in success to fail/other job ratio - a testament to great support by Chenrui and Liusheng "keeping the lights on".

Additional Infrastructure
- Packet - 80 vCPUs, 80G RAM, 1000G Disk
- ARM - ARM-based OpenStack Cloud - managed by codethink.co.uk - single compute node - 96 vCPUs, 128G RAM, 800G Disk
- Massachusetts Open Cloud - in progress - small project for now - academia partner

Build Status Legend:
SUCCESS - job executed correctly and exited without failure
FAILURE - job executed correctly, but exited with a failure
RETRY_LIMIT - pre-build tasks/plays failed more than the maximum number of retry attempts
POST_FAILURE - post-build tasks/plays failed
SKIPPED - one of the build dependencies failed and this job was not executed
NODE_FAILURE - no device available to run the build
TIMED_OUT - build got stuck at some point and hit the timeout limit

Thank you to everyone who has read through this month’s update. If you have any questions/concerns please feel free to start a thread on the mailing list, or if it is something not to be shared publicly right now you can email info at openlabtesting.org

Kind regards,
OpenLab Governance Team

--
Kind regards,
Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mriedemos at gmail.com Thu Nov 1 14:57:56 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 1 Nov 2018 09:57:56 -0500
Subject: [Openstack-operators] [nova] Is anyone running their own script to purge old instance_faults table entries?
Message-ID: <856536e5-631c-c975-7a6f-91a2167e9baf@gmail.com>

I came across this bug [1] in triage today and I thought this was fixed already [2], but either something regressed or there is more to do here. I'm mostly just wondering: are operators already running any kind of script which purges old instance_faults table records before an instance is deleted and archived/purged? Because if so, that might be something we want to add as a nova-manage command.

[1] https://bugs.launchpad.net/nova/+bug/1800755
[2] https://review.openstack.org/#/c/409943/

--
Thanks,
Matt

From doug at doughellmann.com Thu Nov 1 17:56:27 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 01 Nov 2018 13:56:27 -0400
Subject: [Openstack-operators] [goals] selecting community-wide goals for T series
Message-ID:

I have started a thread on the -dev list [1] to discuss the community-wide goals for the T series, prior to the Forum session in Berlin in a couple of weeks. Please join the conversation in that thread if you have any input. Thanks!

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136228.html

From amy at demarco.com Fri Nov 2 15:14:24 2018
From: amy at demarco.com (Amy Marrich)
Date: Fri, 2 Nov 2018 10:14:24 -0500
Subject: [Openstack-operators] OpenStack Diversity and Inclusion Survey
Message-ID:

The Diversity and Inclusion WG is still looking for your assistance in reaching as many members of our community as possible and including their data. We revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and reached out in August with our new survey. We are looking to update our view of the OpenStack community and its diversity.

We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways:
1) Assistance in analyzing the results
2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects

Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity

Thank you for assisting us in this important task! Please feel free to reach out to me via email, in Berlin, or to myself or any WG member in #openstack-diversity!

Amy Marrich (spotz)
Diversity and Inclusion Working Group Chair

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openstack at fried.cc Fri Nov 2 19:22:53 2018
From: openstack at fried.cc (Eric Fried)
Date: Fri, 2 Nov 2018 14:22:53 -0500
Subject: [Openstack-operators] [nova][placement] Placement requests and caching in the resource tracker
Message-ID: <8a461cc3-50cc-4ea9-d4c8-460c61ce7efc@fried.cc>

All-

Based on a (long) discussion yesterday [1] I have put up a patch [2] whereby you can set [compute]resource_provider_association_refresh to zero and the resource tracker will never* refresh the report client's provider cache. Philosophically, we're removing the "healing" aspect of the resource tracker's periodic and trusting that placement won't diverge from whatever's in our cache. (If it does, it's because the op hit the CLI, in which case they should SIGHUP - see below.)

*except:
- When we initially create the compute node record and bootstrap its resource provider.
- When the virt driver's update_provider_tree makes changes, update_from_provider_tree reflects them in the cache as well as pushing them back to placement.
- If update_from_provider_tree fails, the cache is cleared and gets rebuilt on the next periodic.
- If you send SIGHUP to the compute process, the cache is cleared.

This should dramatically reduce the number of calls to placement from the compute service. Like, to nearly zero, unless something is actually changing.

Can I get some initial feedback as to whether this is worth polishing up into something real? (It will probably need a bp/spec if so.)

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
[2] https://review.openstack.org/#/c/614886/

==========
Background
==========
In the Queens release, our friends at CERN noticed a serious spike in the number of requests to placement from compute nodes, even in a stable-state cloud. Given that we were in the process of adding a ton of infrastructure to support sharing and nested providers, this was not unexpected. Roughly, what was previously:

@periodic_task:
    GET /resource_providers/$compute_uuid
    GET /resource_providers/$compute_uuid/inventories

became more like:

@periodic_task:
    # In Queens/Rocky, this would still just return the compute RP
    GET /resource_providers?in_tree=$compute_uuid
    # In Queens/Rocky, this would return nothing
    GET /resource_providers?member_of=...&required=MISC_SHARES...
    for each provider returned above:  # i.e. just one in Q/R
        GET /resource_providers/$compute_uuid/inventories
        GET /resource_providers/$compute_uuid/traits
        GET /resource_providers/$compute_uuid/aggregates

In a cloud the size of CERN's, the load wasn't acceptable. But at the time, CERN worked around the problem by disabling refreshing entirely. (The fact that this seems to have worked for them is an encouraging sign for the proposed code change.)

We're not actually making use of most of that information, but it sets the stage for things that we're working on in Stein and beyond, like multiple VGPU types, bandwidth resource providers, accelerators, NUMA, etc., so removing/reducing the amount of information we look at isn't really an option strategically.
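For those who don't want to read the review, the gist of the patch is that the periodic refresh is simply skipped when the interval is zero. A rough sketch of the control flow, where the helper names are illustrative rather than the actual nova code:

    def refresh_associations(report_client, context, rp_uuid, refresh_interval):
        """Illustrative sketch only; helper names below are made up."""
        if not refresh_interval:
            # [compute]resource_provider_association_refresh = 0:
            # trust the local provider cache indefinitely. Only SIGHUP
            # (or a failed update_from_provider_tree) clears it.
            return
        if report_client.association_refresh_due(rp_uuid, refresh_interval):
            # Otherwise this is where the per-provider GETs for
            # aggregates, traits and shared inventories happen.
            report_client.refresh_aggregates(context, rp_uuid)
            report_client.refresh_traits(context, rp_uuid)
            report_client.refresh_shared_inventories(context, rp_uuid)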
From mriedemos at gmail.com Mon Nov 5 17:51:57 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 5 Nov 2018 11:51:57 -0600
Subject: [Openstack-operators] Dropping lazy translation support
Message-ID:

This is a follow up to a dev ML email [1] where I noticed that some implementations of the upgrade-checkers goal were failing because some projects still use the oslo_i18n.enable_lazy() hook for lazy log message translation (and maybe API responses?).

The very old blueprints related to this can be found here [2][3][4].

If memory serves me correctly from my time working at IBM on this, this was needed to:

1. Generate logs translated in other languages.

2. Return REST API responses if the "Accept-Language" header was used and a suitable translation existed for that language.

#1 is a dead horse since at least the Ocata summit, I think, when we agreed to no longer translate logs since no one used them.

#2 is probably something no one knows about. I can't find end-user documentation about it anywhere. It's not tested and therefore I have no idea if it actually works anymore.
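For reference, the hook in question is wired up roughly like this. This is a minimal sketch against the public oslo.i18n API; how a given service maps Accept-Language onto desired_locale is an assumption here:

    import oslo_i18n

    # Must run before any translatable messages are created, which is
    # why services call it at import/startup time.
    oslo_i18n.enable_lazy()

    _translators = oslo_i18n.TranslatorFactory(domain='myservice')
    _ = _translators.primary

    # With lazy mode on, this is a Message object, not a str; rendering
    # is deferred until someone asks for a concrete locale.
    msg = _('Resource %(id)s could not be found.') % {'id': 'abc123'}

    # At the REST layer, the best match from Accept-Language would be
    # fed in as the desired locale:
    body = oslo_i18n.translate(msg, desired_locale='de')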
I would like to (1) deprecate the oslo_i18n.enable_lazy() function so new projects don't use it and (2) start removing the enable_lazy() usage from existing projects like keystone, glance and cinder. Are there any users, deployments or vendor distributions that still rely on this feature? If so, please speak up now. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html [2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages [3] https://blueprints.launchpad.net/nova/+spec/i18n-messages [4] https://blueprints.launchpad.net/nova/+spec/user-locale-api -- Thanks, Matt From doug at doughellmann.com Mon Nov 5 19:36:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 05 Nov 2018 14:36:56 -0500 Subject: [Openstack-operators] [Openstack-sigs] Dropping lazy translation support In-Reply-To: References: Message-ID: Matt Riedemann writes: > This is a follow up to a dev ML email [1] where I noticed that some > implementations of the upgrade-checkers goal were failing because some > projects still use the oslo_i18n.enable_lazy() hook for lazy log message > translation (and maybe API responses?). > > The very old blueprints related to this can be found here [2][3][4]. > > If memory serves me correctly from my time working at IBM on this, this > was needed to: > > 1. Generate logs translated in other languages. > > 2. Return REST API responses if the "Accept-Language" header was used > and a suitable translation existed for that language. > > #1 is a dead horse since I think at least the Ocata summit when we > agreed to no longer translate logs since no one used them. > > #2 is probably something no one knows about. I can't find end-user > documentation about it anywhere. It's not tested and therefore I have no > idea if it actually works anymore. > > I would like to (1) deprecate the oslo_i18n.enable_lazy() function so > new projects don't use it and (2) start removing the enable_lazy() usage > from existing projects like keystone, glance and cinder. > > Are there any users, deployments or vendor distributions that still rely > on this feature? If so, please speak up now. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136285.html > [2] https://blueprints.launchpad.net/oslo-incubator/+spec/i18n-messages > [3] https://blueprints.launchpad.net/nova/+spec/i18n-messages > [4] https://blueprints.launchpad.net/nova/+spec/user-locale-api > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs I think the lazy stuff was all about the API responses. The log translations worked a completely different way. Doug From mriedemos at gmail.com Mon Nov 5 21:13:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Nov 2018 15:13:56 -0600 Subject: [Openstack-operators] [Openstack-sigs] Dropping lazy translation support In-Reply-To: References: Message-ID: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> On 11/5/2018 1:36 PM, Doug Hellmann wrote: > I think the lazy stuff was all about the API responses. The log > translations worked a completely different way. Yeah maybe. 
And if so, I came across this in one of the blueprints:

https://etherpad.openstack.org/p/disable-lazy-translation

Which says that because of a critical bug, lazy translation was disabled in Havana, to be fixed in Icehouse, but I don't think that ever happened before IBM developers dropped it upstream, which is further justification for nuking this code from the various projects.

--
Thanks,
Matt

From doug at doughellmann.com Mon Nov 5 21:36:40 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 05 Nov 2018 16:36:40 -0500
Subject: [Openstack-operators] [Openstack-sigs] Dropping lazy translation support
In-Reply-To: <2ae55c40-297d-3590-84de-665eae797522@gmail.com>
References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com>
Message-ID:

Matt Riedemann writes:
> On 11/5/2018 1:36 PM, Doug Hellmann wrote:
>> I think the lazy stuff was all about the API responses. The log
>> translations worked a completely different way.
>
> Yeah maybe. And if so, I came across this in one of the blueprints:
>
> https://etherpad.openstack.org/p/disable-lazy-translation
>
> Which says that because of a critical bug, lazy translation was
> disabled in Havana, to be fixed in Icehouse, but I don't think that ever
> happened before IBM developers dropped it upstream, which is further
> justification for nuking this code from the various projects.

I agree.

Doug

From openstack at nemebean.com Mon Nov 5 21:39:58 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Mon, 5 Nov 2018 15:39:58 -0600
Subject: [Openstack-operators] [Openstack-sigs] Dropping lazy translation support
In-Reply-To: <2ae55c40-297d-3590-84de-665eae797522@gmail.com>
References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com>
Message-ID: <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com>

On 11/5/18 3:13 PM, Matt Riedemann wrote:
> On 11/5/2018 1:36 PM, Doug Hellmann wrote:
>> I think the lazy stuff was all about the API responses. The log
>> translations worked a completely different way.
>
> Yeah maybe. And if so, I came across this in one of the blueprints:
>
> https://etherpad.openstack.org/p/disable-lazy-translation
>
> Which says that because of a critical bug, lazy translation was
> disabled in Havana, to be fixed in Icehouse, but I don't think that ever
> happened before IBM developers dropped it upstream, which is further
> justification for nuking this code from the various projects.
>
It was disabled last-minute, but I'm pretty sure it was turned back on (hence why we're hitting issues today). I still see coercion code in oslo.log that was added to fix the problem[1] (I think). I could be wrong about that since this code has undergone significant changes over the years, but it looks to me like we're still forcing things to be unicode.[2]

1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py
2: https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38d72106ea0b8b8f/oslo_log/formatters.py#L414

From sean.mcginnis at gmx.com Tue Nov 6 01:54:52 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 5 Nov 2018 19:54:52 -0600
Subject: [Openstack-operators] FIPS Compliance
Message-ID: <20181106015452.GA14203@sm-workstation>

I'm interested in some feedback from the community, particularly those running OpenStack deployments, as to whether FIPS compliance [0][1] is something folks are looking for.

I've been seeing small changes starting to be proposed here and there for things like MD5 usage related to its incompatibility with FIPS mode. But looking across a wider stripe of our repos, it appears it would be a wider effort to get all OpenStack services compatible with FIPS mode.

This should be a fairly easy thing to test, but before we put much effort into updating code and figuring out testing, I'd like to see some input on whether something like this is needed.

Thanks for any input on this.

Sean

[0] https://en.wikipedia.org/wiki/FIPS_140-2
[1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf
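One way MD5-for-non-security usage gets made FIPS-safe in Python is the usedforsecurity flag, which exists on Python 3.9+ builds of hashlib; treating the in-flight OpenStack patches as using this exact mechanism would be an assumption:

    import hashlib
    import sys

    def md5_for_etag(data: bytes):
        # On a FIPS-enabled build, a bare hashlib.md5() raises ValueError.
        # Python 3.9+ lets the caller declare that the digest is not used
        # for security (e.g. Swift/Glance-style ETag checksums).
        if sys.version_info >= (3, 9):
            return hashlib.md5(data, usedforsecurity=False)
        return hashlib.md5(data)

    print(md5_for_etag(b'object chunk').hexdigest())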
From tony at bakeyournoodle.com Tue Nov 6 02:19:20 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 6 Nov 2018 13:19:20 +1100
Subject: [Openstack-operators] [openstack-dev] [all] Naming the T release of OpenStack -- Poll open
In-Reply-To: <20181030054024.GC2343@thor.bakeyournoodle.com>
References: <20181030054024.GC2343@thor.bakeyournoodle.com>
Message-ID: <20181106021919.GB20576@thor.bakeyournoodle.com>

Hi all,
Time is running out for you to have your say in the T release name poll. We have just under 3 days left. If you haven't voted please do!

On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote:
> Hi folks,
>
> It is time again to cast your vote for the naming of the T Release. As with last time we'll use a public polling option over per-user private URLs for voting. This means everybody should proceed to use the following URL to cast their vote:
>
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e
>
> We've selected a public poll to ensure that the whole community, not just gerrit change owners, get a vote. Also, the size of our community has grown such that we can overwhelm CIVS if using private URLs. A public poll can mean that users behind NAT, proxy servers or firewalls may receive a message saying that your vote has already been lodged; if this happens please try another IP.
>
> Because this is a public poll, results will currently be only viewable by myself until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running.
>
> The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be posted shortly after.
>
> [1] https://governance.openstack.org/tc/reference/release-naming.html
> ---
>
> According to the Release Naming Process, this poll is to determine the community preferences for the name of the T release of OpenStack. It is possible that the top choice is not viable for legal reasons, so the second or later community preference could wind up being the name.
>
> Release Name Criteria
> ---------------------
>
> Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again.
>
> The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.
>
> The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.
>
> The name must be a single word with a maximum of 10 characters.
> Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo".
>
> Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens.
>
> Exact Geographic Region
> -----------------------
>
> The Geographic Region from which names for the T release will come is Colorado
>
> Proposed Names
> --------------
>
> * Tarryall
> * Teakettle
> * Teller
> * Telluride
> * Thomas : the Tank Engine
> * Thornton
> * Tiger
> * Tincup
> * Timnath
> * Timber
> * Tiny Town
> * Torreys
> * Trail
> * Trinidad
> * Treasure
> * Troublesome
> * Trussville
> * Turret
> * Tyrone
>
> Proposed Names that do not meet the criteria (accepted by the TC)
> -----------------------------------------------------------------
>
> * Train🚂 : Many attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name.
>
> Yours Tony.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From doug at doughellmann.com Tue Nov 6 13:06:58 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 06 Nov 2018 08:06:58 -0500
Subject: [Openstack-operators] FIPS Compliance
In-Reply-To: <20181106015452.GA14203@sm-workstation>
References: <20181106015452.GA14203@sm-workstation>
Message-ID:

Sean McGinnis writes:

> I'm interested in some feedback from the community, particularly those running
> OpenStack deployments, as to whether FIPS compliance [0][1] is something folks
> are looking for.
>
> I've been seeing small changes starting to be proposed here and there for
> things like MD5 usage related to its incompatibility with FIPS mode. But looking
> across a wider stripe of our repos, it appears it would be a wider effort
> to get all OpenStack services compatible with FIPS mode.
>
> This should be a fairly easy thing to test, but before we put much effort
> into updating code and figuring out testing, I'd like to see some input on
> whether something like this is needed.
>
> Thanks for any input on this.
>
> Sean
>
> [0] https://en.wikipedia.org/wiki/FIPS_140-2
> [1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

I know we've had some interest in it at different times. I think some of the changes will end up being backwards-incompatible, so we may need a "FIPS-mode" configuration flag for those, but in other places we could just switch hashing algorithms and be fine.

I'm not sure if anyone has put together the details of what would be needed to update each project, but this feels like it could be a candidate for a goal for a future cycle once we have that information and can assess the level of effort.

Doug
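If such a flag ever materializes, it would presumably be a plain oslo.config option along these lines. The option name and placement are hypothetical, purely to make the idea concrete:

    from oslo_config import cfg
    import hashlib

    CONF = cfg.CONF

    # Hypothetical option; nothing named this exists in any project today.
    CONF.register_opts([
        cfg.BoolOpt('fips_mode',
                    default=False,
                    help='Restrict hashing to FIPS 140-2 approved '
                         'algorithms. Enabling this is backwards-'
                         'incompatible wherever an on-the-wire or '
                         'on-disk format embeds an MD5 digest.'),
    ])

    def checksum_func():
        # Where the algorithm is not baked into a format, projects
        # could simply switch on the flag.
        return hashlib.sha256 if CONF.fips_mode else hashlib.md5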
From mihalis68 at gmail.com Tue Nov 6 14:54:46 2018
From: mihalis68 at gmail.com (Chris Morgan)
Date: Tue, 6 Nov 2018 09:54:46 -0500
Subject: [Openstack-operators] no formal ops meetups team meeting today
Message-ID:

Hello Ops,
It appears there will not be enough attendance on IRC today for a useful ops meetups team meeting. I think everyone is getting ready for Berlin next week, which at this stage is likely a better use of the time. We'll try to find a good venue for a social get-together on the Tuesday, which will be communicated nearer the time on the IRC channel and via email. Otherwise we will see you at the Forum!

Chris
--
Chris Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rochelle.grober at huawei.com Tue Nov 6 23:24:19 2018
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Tue, 6 Nov 2018 23:24:19 +0000
Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Dropping lazy translation support
In-Reply-To: <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com>
References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com>
Message-ID:

I seem to recall list discussion on this quite a ways back. I think most of it happened on the Docs ML, though. Maybe Juno/Kilo timeframe? If possible, it would be good to search over the code bases for places it was called to see its current footprint. I'm pretty sure it was the docs folks working with the oslo folks to make it work. But then the question was put to the ops folks about translations of logs (maybe the New York midcycle) and ops don't use translation. The ops input was broadcast to dev and docs and most efforts stopped at that point. But I believe some projects had already done some work on lazy translation. I suspect the amount done, though, was pretty low.

Maybe the fastest way to get info would be to turn it off and see where the code barfs in a long run (to catch as many projects as possible)?

--rocky

> From: Ben Nemec
> Sent: Monday, November 05, 2018 1:40 PM
>
> On 11/5/18 3:13 PM, Matt Riedemann wrote:
> > On 11/5/2018 1:36 PM, Doug Hellmann wrote:
> >> I think the lazy stuff was all about the API responses. The log
> >> translations worked a completely different way.
> >
> > Yeah maybe. And if so, I came across this in one of the blueprints:
> >
> > https://etherpad.openstack.org/p/disable-lazy-translation
> >
> > Which says that because of a critical bug, lazy translation was
> > disabled in Havana, to be fixed in Icehouse, but I don't think that ever
> > happened before IBM developers dropped it upstream, which is further
> > justification for nuking this code from the various projects.
> >
> It was disabled last-minute, but I'm pretty sure it was turned back on (hence
> why we're hitting issues today). I still see coercion code in oslo.log that was
> added to fix the problem[1] (I think).
I could be wrong about that since this > code has undergone significant changes over the years, but it looks to me like > we're still forcing things to be unicode.[2] > > 1: https://review.openstack.org/#/c/49230/3/openstack/common/log.py > 2: > https://github.com/openstack/oslo.log/blob/a9ba6c544cbbd4bd804dcd5e38 > d72106ea0b8b8f/oslo_log/formatters.py#L414 > From mriedemos at gmail.com Wed Nov 7 00:28:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Nov 2018 18:28:14 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Dropping lazy translation support In-Reply-To: References: <2ae55c40-297d-3590-84de-665eae797522@gmail.com> <8febca8b-d3d9-0f17-44dd-1a1fe8eddff0@nemebean.com> Message-ID: <5894b2cd-310c-04b4-1c90-859812bd47c9@gmail.com> On 11/6/2018 5:24 PM, Rochelle Grober wrote: > Maybe the fastest way to get info would be to turn it off and see where the code barfs in a long run (to catch as many projects as possible)? There is zero integration testing for lazy translation, so "turning it off and seeing what breaks" wouldn't result in anything breaking. -- Thanks, Matt From mriedemos at gmail.com Wed Nov 7 00:35:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Nov 2018 18:35:04 -0600 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <53c663d2-7e0f-9a50-4d4b-7ff11ebb02c6@gmail.com> After hacking on the PoC for awhile [1] I have finally pushed up a spec [2]. Behold it in all its dark glory! [1] https://review.openstack.org/#/c/603930/ [2] https://review.openstack.org/#/c/616037/ On 8/22/2018 8:23 PM, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The > main issue in there right now is dealing with cross-cell cold migration > in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would > like to move users off the old flavors/hardware (old cell) to new > flavors in a new cell. > > * There is network isolation between compute hosts in different cells, > so no ssh'ing the disk around like we do today. But the image service is > global to all cells. > > Based on this, for the initial support for cross-cell cold migration, I > am proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and > unshelve in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help > with right now are what happens with volumes and ports attached to the > server. Today we detach from the source and attach at the target, but > that's assuming the storage backend and network are available to both > hosts involved in the move of the server. Will that be the case across > cells? I am assuming that depends on the network topology (are routed > networks being used?) and storage backend (routed storage?). If the > network and/or storage backend are not available across cells, how do we > migrate volumes and ports? Cinder has a volume migrate API for admins > but I do not know how nova would know the proper affinity per-cell to > migrate the volume to the proper host (cinder does not have a routed > storage concept like routed provider networks in neutron, correct?). 
> And as far as I know, there is no such thing as port migration in Neutron.
>
> Could Placement help with the volume/port migration stuff? Neutron routed provider networks rely on placement aggregates to schedule the VM to a compute host in the same network segment as the port used to create the VM; however, if that segment does not span cells we are kind of stuck, correct?
>
> To summarize the issues as I see them (today):
>
> * How to deal with the targeted cell during scheduling? This is so we can even get out of the source cell in nova.
>
> * How does the API deal with the same instance being in two DBs at the same time during the move?
>
> * How to handle revert resize?
>
> * How are volumes and ports handled?
>
> I can get feedback from my company's operators based on what their deployment will look like for this, but that does not mean it will work for others, so I need as much feedback from operators, especially those running with multiple cells today, as possible. Thanks in advance.
>
> [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells

--
Thanks,
Matt

From michael.stang at dhbw-mannheim.de Wed Nov 7 09:47:15 2018
From: michael.stang at dhbw-mannheim.de (Michael Stang)
Date: Wed, 7 Nov 2018 10:47:15 +0100 (CET)
Subject: [Openstack-operators] Question about path in disk file
Message-ID: <1981218741.24166.1541584035312@ox.dhbw-mannheim.de>

Hi,

I have a question about the "disk" file from the instances; I searched for it but didn't find any info. At the beginning of the file there is a path to the base image which was used to create the instance, and I want to know what this is for.

The problem behind this: we changed the path to the image cache, and some instances then refused to start because they could not find the base image anymore. But the other instances didn't have this problem at all. I also noticed that the info in the disk file is not updated when the instance is migrated to another host.

So my question: is this normal? What happens when I migrate the server to another host with different storage and a different path to the image cache? We still use Mitaka at the moment; maybe this has already changed in a newer version?

Thank you :)

Kind regards,
Michael

Michael Stang
Laboringenieur, Dipl. Inf. (FH)
Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim
Tel.: +49 (0)621 4105 - 1367
michael.stang at dhbw-mannheim.de
http://www.dhbw-mannheim.de/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/jpeg
Size: 28323 bytes
Desc: not available
URL:
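(That path is the qcow2 backing-file field: Nova creates the instance disk as a copy-on-write overlay on top of the cached base image, and the overlay's header records the base's absolute path, which is why moving the cache breaks instance startup. A sketch of how to inspect and, carefully, repoint it; the paths are placeholders, and the rebase is only safe if the new path holds a byte-identical copy of the base image:)

    import json
    import subprocess

    DISK = '/var/lib/nova/instances/<instance-uuid>/disk'   # placeholder
    NEW_BASE = '/new/cache/path/<base-image-hash>'          # placeholder

    # Show the embedded backing-file path (the path the question is about).
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--output=json', DISK]))
    print(info.get('backing-filename'))

    # Rewrite only the header ("unsafe" mode skips any data rewriting),
    # so NEW_BASE must be an identical copy of the original base image.
    subprocess.check_call(['qemu-img', 'rebase', '-u', '-b', NEW_BASE, DISK])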
From amy at demarco.com Wed Nov 7 15:18:47 2018
From: amy at demarco.com (Amy Marrich)
Date: Wed, 7 Nov 2018 09:18:47 -0600
Subject: [Openstack-operators] Diversity and Inclusion at OpenStack Summit
Message-ID:

I just wanted to pass on a few things we have going on during Summit that might be of interest!

*Diversity and Inclusion GroupMe* - Is it your first summit and you don't know anyone else? Maybe you just don't want to travel to and from the venue alone? In the tradition of WoO, I have created a GroupMe so people can communicate with each other. If you would like to be added to the group please let me know and I'll get you added!

*Night Watch Tour* - On Wednesday night at 10PM, members of the community will be meeting up to go on a private Night Watch Tour[0]! This is a non-alcoholic activity for those wanting to get together with other Stackers but who don't want to partake in the Pub Crawl! We've been doing these since Boston and they're a lot of fun. The cost is 15 euros cash and I do need you to RSVP to me as we will need to get a second guide if we grow too large!

Summit sessions you may wish to attend:

Tuesday -
*Speed Mentoring Lunch* [1] 12:30 - 1:40 - We are still looking for both Mentors and Mentees for the session so please RSVP! This is another great way to meet people in the community, learn more and give back!!!
*Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in charge of the Cohort Mentoring program and see how you can get involved as a Mentor or Mentee!
*D&I WG Update* [3] 5:10 - 5:50 - Learn what we've been up to, how you can get involved, and what's next.

Wednesday -
*Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some exciting stuff this week but don't know how to get set up to start contributing? This session is for you: we'll walk you through getting your logins, your system configured, and if time allows even how to submit a bug and patch!

Thursday -
*Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance of mentoring, the changes in the OpenStack mentoring programs and how you can get involved.

Hope to see everyone in Berlin next week! Please feel free to contact me or grab me in the hall next week with any questions or to join in the fun!

Amy Marrich (spotz)
Diversity and Inclusion WG Chair

[0] - http://baerentouren.de/nachtwache_en.html
[1] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22873/speed-mentoring-lunch
[2] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22892/long-term-mentoring-keeping-the-party-going
[3] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22893/diversity-and-inclusion-wg-update
[4] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21943/git-and-gerrit-hands-on-workshop
[5] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22443/mentoring-program-reboot
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tony at bakeyournoodle.com Thu Nov 8 00:48:20 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 8 Nov 2018 11:48:20 +1100
Subject: [Openstack-operators] [all] Results of the T release naming poll
Message-ID: <20181108004819.GF20576@thor.bakeyournoodle.com>

Hello all!
The results of the naming poll are in! **PLEASE REMEMBER** that these now have to go through legal vetting. So it is too soon to say 'OpenStack Train' is our next release, given that previous polls have had some issues with the top choice. In any case, the names will be sent off to legal for vetting. As soon as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_aac97f1cbb6c61df&rkey=7c8b5588574494c1

Result
1. Train (Condorcet winner: wins contests with all other choices)
2. Tiger loses to Train by 142–70
3. Timber loses to Train by 142–72, loses to Tiger by 100–76
4. Trail loses to Train by 150–55, loses to Timber by 93–62
5. Telluride loses to Train by 155–56, loses to Trail by 81–69
6. Teller loses to Train by 158–46, loses to Telluride by 70–67
7. Treasure loses to Train by 151–52, loses to Teller by 68–67
8. Teakettle loses to Train by 158–49, loses to Treasure by 75–67
9. Tincup loses to Train by 157–47, loses to Teakettle by 67–60
10. Turret loses to Train by 158–48, loses to Tincup by 75–56
11. Thomas loses to Train by 159–42, loses to Turret by 66–63
12. Trinidad loses to Train by 153–44, loses to Thomas by 70–56
13. Troublesome loses to Train by 165–41, loses to Trinidad by 69–62
14. Thornton loses to Train by 163–35, loses to Troublesome by 62–59
15. Tyrone loses to Train by 163–35, loses to Thornton by 58–38
16. Tarryall loses to Train by 170–31, loses to Tyrone by 54–50
17. Timnath loses to Train by 170–23, loses to Tarryall by 60–32
18. Tiny Town loses to Train by 168–29, loses to Timnath by 45–43
19. Torreys loses to Train by 167–29, loses to Tiny Town by 48–40
20. Trussville loses to Train by 169–25, loses to Torreys by 43–34
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From lijie at unitedstack.com Thu Nov 8 11:30:03 2018
From: lijie at unitedstack.com (Rambo)
Date: Thu, 8 Nov 2018 19:30:03 +0800
Subject: [Openstack-operators] [openstack-dev] [nova] about resize the instance
In-Reply-To:
References:
Message-ID:

When I resize the instance, the compute node reports "libvirtError: internal error: qemu unexpectedly closed the monitor: 2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory". Has anyone seen this situation? The ram_allocation_ratio is set to 3 in nova.conf and the total memory is 125G. When I use the "nova hypervisor-show server" command, it shows the compute node's free_ram_mb is -45G. Could this be the result of excessive memory overcommit? Can you give me some suggestions about this? Thank you very much.

------------------ Original ------------------
From: "Rambo";
Date: Thu, Nov 8, 2018 05:45 PM
To: "OpenStack Development";
Subject: [openstack-dev] [nova] about resize the instance

Hi, all

When we resize/migrate an instance and an error occurs on the source compute node, the instance state can currently roll back to active. But if an error occurs in the "finish_resize" function on the destination compute node, the instance state does not roll back to active. Is this a bug, or does anyone plan to change this? Can you tell me more about this? Thank you very much.

Best Regards
Rambo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From j at ocado.com Thu Nov 8 14:55:55 2018
From: j at ocado.com (Justin Cattle)
Date: Thu, 8 Nov 2018 14:55:55 +0000
Subject: [Openstack-operators] [puppet] openstack providers - endpoint not configurable
Message-ID:

Hi,

I've recently been working on separating management and client traffic onto different endpoints. We have different endpoint URLs configured for "public", "admin" and "internal". For openstack itself, this is working well.

However, puppet providers don't seem to cater for this. Particularly, right now, I'm looking at the neutron providers, but the others may be mostly the same. The provider always uses the public endpoint, and this doesn't seem configurable [unless I'm missing something]. All the client config for auth etc. is sourced from neutron.conf, and I can't see a way of specifying the endpoint type via that mechanism.
I can change the provider like this, and it all works:

diff --git a/lib/puppet/provider/neutron.rb b/lib/puppet/provider/neutron.rb
index a55fa0b..786e64d 100644
--- a/lib/puppet/provider/neutron.rb
+++ b/lib/puppet/provider/neutron.rb
@@ -75,14 +75,16 @@ correctly configured.")
         :OS_AUTH_URL      => q['identity_uri'],
         :OS_USERNAME      => q['admin_user'],
         :OS_TENANT_NAME   => q['admin_tenant_name'],
-        :OS_PASSWORD      => q['admin_password']
+        :OS_PASSWORD      => q['admin_password'],
+        :OS_ENDPOINT_TYPE => 'internal',
       }
     else
       authenv = {
         :OS_AUTH_URL      => q['auth_url'],
         :OS_USERNAME      => q['username'],
         :OS_TENANT_NAME   => q['tenant_name'],
-        :OS_PASSWORD      => q['password']
+        :OS_PASSWORD      => q['password'],
+        :OS_ENDPOINT_TYPE => 'internal',
       }
     end
     if q.key?('nova_region_name')

Notice, I'm adding OS_ENDPOINT_TYPE to control the endpoint that is selected from the catalogue. I want to keep the "public" endpoints for external clients only on the external network, the "internal" endpoints for inter-service API comms on the management network, and the "admin" endpoints for admin operations on the management network. In particular, I want to be able to stop advertising the public endpoints during maintenance windows, and still be able to run puppet!

Can anyone think of a way of overcoming this? If it's not possible through config, is there some way I can drop in my own provider version with the same name safely? Anything else I'm missing?

Thanks!

Cheers,
Just
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
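For context on what that hack toggles: the provider exports environment variables for the neutron CLI, which resolves the catalogue endpoint by interface the same way the Python clients do. The equivalent knob in Python, shown purely as a sketch with placeholder credentials and URL, is the endpoint_type argument:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client

    # Placeholder credentials/URL, just to show the knob being discussed.
    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='neutron', password='secret',
                       project_name='services',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # 'internalURL' selects the internal endpoint from the service
    # catalogue, which is what exporting OS_ENDPOINT_TYPE=internal
    # does for the CLI.
    neutron = client.Client(session=sess, endpoint_type='internalURL')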
From amy at demarco.com Thu Nov 8 15:06:19 2018
From: amy at demarco.com (Amy Marrich)
Date: Thu, 8 Nov 2018 09:06:19 -0600
Subject: [Openstack-operators] Diversity and Inclusion at OpenStack Summit
In-Reply-To:
References:
Message-ID:

Forgot one important one on Wednesday, 12:30 - 1:40!!! We are very pleased to have the very first *Diversity Networking Lunch*, which is being sponsored by Intel[0]. In the past there was feedback that allies and others didn't wish to intrude on the WoO lunch, and Intel was all for this change to a more open Diversity lunch! So please come and join us on Wednesday for lunch!!

See you all soon,

Amy Marrich (spotz)
Diversity and Inclusion WG Chair

[0] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22850/diversity-networking-lunch-sponsored-by-intel

On Wed, Nov 7, 2018 at 9:18 AM, Amy Marrich wrote:
> I just wanted to pass on a few things we have going on during Summit that might be of interest!
>
> *Diversity and Inclusion GroupMe* - Is it your first summit and you don't know anyone else? Maybe you just don't want to travel to and from the venue alone? In the tradition of WoO, I have created a GroupMe so people can communicate with each other. If you would like to be added to the group please let me know and I'll get you added!
>
> *Night Watch Tour* - On Wednesday night at 10PM, members of the community will be meeting up to go on a private Night Watch Tour[0]! This is a non-alcoholic activity for those wanting to get together with other Stackers but who don't want to partake in the Pub Crawl! We've been doing these since Boston and they're a lot of fun. The cost is 15 euros cash and I do need you to RSVP to me as we will need to get a second guide if we grow too large!
>
> Summit sessions you may wish to attend:
> Tuesday -
> *Speed Mentoring Lunch* [1] 12:30 - 1:40 - We are still looking for both Mentors and Mentees for the session so please RSVP! This is another great way to meet people in the community, learn more and give back!!!
> *Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in charge of the Cohort Mentoring program and see how you can get involved as a Mentor or Mentee!
> *D&I WG Update* [3] 5:10 - 5:50 - Learn what we've been up to, how you can get involved, and what's next.
>
> Wednesday -
> *Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some exciting stuff this week but don't know how to get set up to start contributing? This session is for you: we'll walk you through getting your logins, your system configured, and if time allows even how to submit a bug and patch!
>
> Thursday -
> *Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance of mentoring, the changes in the OpenStack mentoring programs and how you can get involved.
>
> Hope to see everyone in Berlin next week! Please feel free to contact me or grab me in the hall next week with any questions or to join in the fun!
>
> Amy Marrich (spotz)
> Diversity and Inclusion WG Chair
>
> [0] - http://baerentouren.de/nachtwache_en.html
> [1] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22873/speed-mentoring-lunch
> [2] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22892/long-term-mentoring-keeping-the-party-going
> [3] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22893/diversity-and-inclusion-wg-update
> [4] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/21943/git-and-gerrit-hands-on-workshop
> [5] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22443/mentoring-program-reboot
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com Fri Nov 9 02:02:54 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 09 Nov 2018 11:02:54 +0900
Subject: [Openstack-operators] [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit
Message-ID: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com>

Hello everyone,

Along with project updates & onboarding sessions, the QA team will host QA feedback sessions at the Berlin summit. Feel free to catch us next week with any QA-related questions or if you need help contributing to QA (we are really looking forward to onboarding new contributors in QA).

Below are the QA-related sessions; feel free to append the list if I missed anything. I am working on the onboarding/forum session etherpads and will send the link tomorrow.

Tuesday:
1. OpenStack QA - Project Update.
[1] 2. OpenStack QA - Project Onboarding. [2] 3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3] Wednesday: 4. Forum: Users / Operators adoption of QA tools / plugins. [4] Thursday: 5. Using Rally/Tempest for change validation (OPS session) [5] [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session -gmann From fungi at yuggoth.org Fri Nov 9 18:14:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Nov 2018 18:14:47 +0000 Subject: [Openstack-operators] [all] We're combining the lists! In-Reply-To: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> Message-ID: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. See my previous notice[1] for details. For those wondering, we have 207 subscribers so far on openstack-discuss with a little over a week to go before it will be put into use (and less than a month before the old lists are closed down for good). [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Sat Nov 10 10:02:15 2018 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 10 Nov 2018 11:02:15 +0100 Subject: [Openstack-operators] [openstack-dev] [all] We're combining the lists! In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: Robert Collins wrote: > There don't seem to be any topics defined for -discuss yet, I hope > there will be, as I'm certainly not in a position of enough bandwidth > to handle everything *stack related. > > I'd suggest one for each previously list, at minimum. As we are ultimately planning to move lists to mailman3 (which decided to drop the "topics" concept altogether), I don't think we planned to add serverside mailman topics to the new list. We'll still have standardized subject line topics. 
The current list lives at:

https://etherpad.openstack.org/p/common-openstack-ml-topics

--
Thierry Carrez (ttx)

From gmann at ghanshyammann.com Sat Nov 10 15:39:51 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 11 Nov 2018 00:39:51 +0900
Subject: [Openstack-operators] [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit
In-Reply-To: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com>
References: <166f635357e.bf214dd5138768.7752784958860017838@ghanshyammann.com>
Message-ID: <166fe478287.eab83f3e160854.6859983257172890856@ghanshyammann.com>

Hello Everyone,

I have created the below etherpads to use during the QA Forum sessions:

- Users / Operators adoption of QA tools: https://etherpad.openstack.org/p/BER-qa-ops-user-feedback
- QA Onboarding: https://etherpad.openstack.org/p/BER-qa-onboarding-vancouver

-gmann

---- On Fri, 09 Nov 2018 11:02:54 +0900 Ghanshyam Mann wrote ----
> Hello everyone,
>
> Along with project updates & onboarding sessions, the QA team will host QA feedback sessions at the Berlin summit. Feel free to catch us next week with any QA-related questions or if you need help contributing to QA (we are really looking forward to onboarding new contributors in QA).
>
> Below are the QA-related sessions; feel free to append the list if I missed anything. I am working on the onboarding/forum session etherpads and will send the link tomorrow.
>
> Tuesday:
> 1. OpenStack QA - Project Update. [1]
> 2. OpenStack QA - Project Onboarding. [2]
> 3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3]
>
> Wednesday:
> 4. Forum: Users / Operators adoption of QA tools / plugins. [4]
>
> Thursday:
> 5. Using Rally/Tempest for change validation (OPS session) [5]
>
> [1] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update
> [2] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding
> [3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment
> [4] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins
> [5] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session
>
> -gmann

From fungi at yuggoth.org Sat Nov 10 15:53:22 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sat, 10 Nov 2018 15:53:22 +0000
Subject: [Openstack-operators] [openstack-dev] [all] We're combining the lists!
In-Reply-To:
References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org>
Message-ID: <20181110155322.nedu4jiammpi537x@yuggoth.org>

On 2018-11-10 11:02:15 +0100 (+0100), Thierry Carrez wrote: [...]
> As we are ultimately planning to move lists to mailman3 (which decided
> to drop the "topics" concept altogether), I don't think we planned to
> add serverside mailman topics to the new list.
Correct, that was covered in more detail in the longer original announcement linked from my past couple of reminders:

http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html

In short, we're recommending client-side filtering because server-side topic selection/management was not retained in Mailman 3, as Thierry indicates, and we hope we might move our lists to an MM3 instance sometime in the not-too-distant future.

> We'll still have standardized subject line topics. The current list
> lives at:
>
> https://etherpad.openstack.org/p/common-openstack-ml-topics

Which is its initial location for crowd-sourcing/brainstorming, but it will get published to a more durable location like lists.openstack.org itself or perhaps the Project-Team Guide once the list is in use.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From jose.castro.leon at cern.ch Mon Nov 12 08:24:08 2018
From: jose.castro.leon at cern.ch (Jose Castro Leon)
Date: Mon, 12 Nov 2018 08:24:08 +0000
Subject: [Openstack-operators] [openstack-operators] [radosgw] Adding user type attribute in radosGW admin operations API to simplify quota operations
Message-ID: <0a792575a57a769fd5d9623e3b5adb51ccbe3adb.camel@cern.ch>

Dear all,

At CERN, we are currently adding the radosgw component to our private cloud OpenStack-based offering. To ease integration with lifecycle management, we are proposing to enable the possibility of adding users with the keystone type through the radosgw Admin Ops API.

During the integration process, we observed that users are created upon the first user request to the radosgw. The quota configuration is taken from the configured default values, and once the user has been created it can be modified later.

For the lifecycle management of resources, we are using OpenStack Mistral, which orchestrates the steps needed to configure a project from creation until it is ready to be offered to the user. In this workflow, we configure the services the project has access to and the quotas associated with them. For the radosgw component we need to consider two different events: provisioning and decommissioning of resources in it.

On the cleanup/decommissioning side, every bit is covered through the Admin Ops API. Here comes our problem. On the provisioning side, we could not apply quotas to users that have not yet been created by radosgw (as it waits for the first user request). Once they are created, they have a type attribute with the value keystone.

So we would like to be able to create the users on radosgw with the keystone type, well before the first user request, by adding the possibility to specify the type on user creation.

We think this addition has added value for other OpenStack operators that are using radosgw for their S3/Swift offering and gives them flexibility for lifecycle management of the resources contained in radosgw. We have submitted a feature request for this particular addition to the Ceph tracker and we would like to know if you are interested in this feature as well.

Cheers,
Jose Castro Leon
CERN Cloud Infrastructure Team
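To make the request flow concrete, here is a rough sketch of what the Mistral-driven provisioning could do once the proposal lands. The endpoint paths and quota keys follow the Ceph admin-ops documentation; the 'type' parameter is the proposed addition and does not exist today, and signing with requests-aws4auth is an assumption about the client side:

    import requests
    from requests_aws4auth import AWS4Auth

    RGW = 'https://rgw.example.org'  # placeholder endpoint
    # The Admin Ops API authenticates with the S3-style keys of a user
    # holding admin caps; AWS4Auth is one way to sign such requests.
    auth = AWS4Auth('<access-key>', '<secret-key>', 'default', 's3')

    # Proposed: pre-create the user with type=keystone before any traffic.
    requests.put(f'{RGW}/admin/user',
                 params={'uid': '<project-uuid>',
                         'display-name': 'My Project',
                         'type': 'keystone'},  # the proposed new parameter
                 auth=auth)

    # Existing API: the quota can then be set during provisioning,
    # instead of waiting for the first user request to create the user.
    requests.put(f'{RGW}/admin/user?quota',
                 params={'uid': '<project-uuid>', 'quota-type': 'user'},
                 json={'enabled': True, 'max_size_kb': 10485760},
                 auth=auth)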
Cheers,
Jose Castro Leon
CERN Cloud Infrastructure Team

From Arkady.Kanevsky at dell.com  Mon Nov 12 15:46:38 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Mon, 12 Nov 2018 15:46:38 +0000
Subject: [Openstack-operators] new SIGs to cover use cases
Message-ID: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM>

Team,
At today's Board and joint TC and UC meetings, 2 questions came up:

1. Do we have or want to create a user community around Hybrid cloud.
This is one of the major pushes of OpenStack for the communities, with 70+% of questionnaire respondents reporting that they deploy and use hybrid cloud. We do have Public and Private cloud SIGs, but not hybrid. That raises the question of where we capture and drive hybrid cloud requirements.

2. As we target AI/ML as 2019 target application domain do we want to create a SIG for it? Or do we extend scientific community SIG to cover it?

I want to start a dialog on it.
Thanks,
Arkady

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ignaziocassano at gmail.com  Mon Nov 12 18:49:15 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 12 Nov 2018 19:49:15 +0100
Subject: [Openstack-operators] Queens metadata agent error 500
Message-ID:

Hi All,
I upgraded manually my centos 7 openstack ocata to pike.
All worked fine.
Then I upgraded from pike to Queens and instances stopped to reach metadata on 169.254.169.254 with error 500.
I am using isolated metadata true in my dhcp conf and in dhcp namespace the port 80 is in listen.
Please, anyone can help me?
Regards
Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Mon Nov 12 20:37:52 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Mon, 12 Nov 2018 21:37:52 +0100
Subject: [Openstack-operators] Queens metadata agent error 500
In-Reply-To:
References:
Message-ID:

Hi,

Can You share logs from Your haproxy-metadata-proxy service which is running in qdhcp namespace? There should be some info about reason of those errors 500.

> Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 19:49:
>
> Hi All,
> I upgraded manually my centos 7 openstack ocata to pike.
> All worked fine.
> Then I upgraded from pike to Queens and instances stopped to reach metadata on 169.254.169.254 with error 500.
> I am using isolated metadata true in my dhcp conf and in dhcp namespace the port 80 is in listen.
> Please, anyone can help me?
> Regards
> Ignazio
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

—
Slawek Kaplonski
Senior software engineer
Red Hat
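The checks being asked for above can be approached roughly along these lines; a sketch, where the qdhcp namespace suffix is your own network UUID:

    # find the dhcp namespace for the affected network
    ip netns | grep qdhcp

    # exercise the metadata path from inside the namespace
    ip netns exec qdhcp-<network-id> \
        curl -sv http://169.254.169.254/2009-04-04/meta-data/instance-id

    # then look for the 500s in the metadata agent/proxy logs
    grep -i error /var/log/neutron/neutron-metadata-agent.log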
From ignaziocassano at gmail.com  Mon Nov 12 21:15:30 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 12 Nov 2018 22:15:30 +0100
Subject: [Openstack-operators] Queens metadata agent error 500
In-Reply-To:
References:
Message-ID:

Hello,
attached here there is the log file.

Connecting to an instance created before upgrade I also tried:
wget http://169.254.169.254/2009-04-04/meta-data/instance-id

The following is the output

--2018-11-12 22:14:45--  http://169.254.169.254/2009-04-04/meta-data/instance-id
Connecting to 169.254.169.254:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2018-11-12 22:14:45 ERROR 500: Internal Server Error

Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < skaplons at redhat.com> ha scritto:

> Hi,
>
> Can You share logs from Your haproxy-metadata-proxy service which is
> running in qdhcp namespace? There should be some info about reason of those
> errors 500.
>
> > Wiadomość napisana przez Ignazio Cassano w
> dniu 12.11.2018, o godz. 19:49:
> >
> > Hi All,
> > I upgraded manually my centos 7 openstack ocata to pike.
> > All worked fine.
> > Then I upgraded from pike to Queens and instances stopped to reach
> metadata on 169.254.169.254 with error 500.
> > I am using isolated metadata true in my dhcp conf and in dhcp namespace
> the port 80 is in listen.
> > Please, anyone can help me?
> > Regards
> > Ignazio
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: metadata-agent.log
Type: application/octet-stream
Size: 155092 bytes
Desc: not available
URL:

From ignaziocassano at gmail.com  Mon Nov 12 21:16:01 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 12 Nov 2018 22:16:01 +0100
Subject: [Openstack-operators] Queens metadata agent error 500
In-Reply-To:
References:
Message-ID:

PS
Thanks for your help

Il giorno lun 12 nov 2018 alle ore 22:15 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto:

> Hello,
> attached here there is the log file.
>
> Connecting to an instance created before upgrade I also tried:
> wget http://169.254.169.254/2009-04-04/meta-data/instance-id
>
> The following is the output
>
> --2018-11-12 22:14:45--
> http://169.254.169.254/2009-04-04/meta-data/instance-id
> Connecting to 169.254.169.254:80... connected.
> HTTP request sent, awaiting response... 500 Internal Server Error
> 2018-11-12 22:14:45 ERROR 500: Internal Server Error
>
>
> Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski <
> skaplons at redhat.com> ha scritto:
>
>> Hi,
>>
>> Can You share logs from Your haproxy-metadata-proxy service which is
>> running in qdhcp namespace? There should be some info about reason of those
>> errors 500.
>>
>> > Wiadomość napisana przez Ignazio Cassano w
>> dniu 12.11.2018, o godz. 19:49:
>> >
>> > Hi All,
>> > I upgraded manually my centos 7 openstack ocata to pike.
>> > All worked fine.
>> > Then I upgraded from pike to Queens and instances stopped to reach
>> metadata on 169.254.169.254 with error 500.
>> > I am using isolated metadata true in my dhcp conf and in dhcp
>> namespace the port 80 is in listen.
>> > Please, anyone can help me?
>> > Regards
>> > Ignazio
>> >
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ignaziocassano at gmail.com Mon Nov 12 21:34:52 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 12 Nov 2018 22:34:52 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: References: Message-ID: Hello again, I have another installation of ocata . On ocata the metadata for a network id is displayed by ps -afe like this: /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 --metadata_proxy_group=993 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log --log-dir=/var/log/neutron On queens like this: haproxy -f /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf Is it the correct behaviour ? Regards Ignazio Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 19:49: > > > > Hi All, > > I upgraded manually my centos 7 openstack ocata to pike. > > All worked fine. > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. > > I am using isolated metadata true in my dhcp conf and in dhcp namespace > the port 80 is in listen. > > Please, anyone can help me? > > Regards > > Ignazio > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Nov 12 21:40:19 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 12 Nov 2018 22:40:19 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: References: Message-ID: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Hi, From logs which You attached it looks that Your neutron-metadata-agent can’t connect to nova-api service. Please check if nova-metadata-api is reachable from node where Your neutron-metadata-agent is running. > Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 22:34: > > Hello again, > I have another installation of ocata . > On ocata the metadata for a network id is displayed by ps -afe like this: > /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 --metadata_proxy_group=993 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log --log-dir=/var/log/neutron > > On queens like this: > haproxy -f /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > Is it the correct behaviour ? Yes, that is correct. 
It was changed some time ago, see https://bugs.launchpad.net/neutron/+bug/1524916 > > Regards > Ignazio > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski ha scritto: > Hi, > > Can You share logs from Your haproxy-metadata-proxy service which is running in qdhcp namespace? There should be some info about reason of those errors 500. > > > Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 19:49: > > > > Hi All, > > I upgraded manually my centos 7 openstack ocata to pike. > > All worked fine. > > Then I upgraded from pike to Queens and instances stopped to reach metadata on 169.254.169.254 with error 500. > > I am using isolated metadata true in my dhcp conf and in dhcp namespace the port 80 is in listen. > > Please, anyone can help me? > > Regards > > Ignazio > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From ignaziocassano at gmail.com Mon Nov 12 21:55:30 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 12 Nov 2018 22:55:30 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Message-ID: Hello, the nova api in on the same controller on port 8774 and it can be reached from the metadata agent No firewall is present Regards Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:34: > > > > Hello again, > > I have another installation of ocata . > > On ocata the metadata for a network id is displayed by ps -afe like this: > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid > --metadata_proxy_socket=/var/lib/neutron/metadata_proxy > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 > --metadata_proxy_group=993 > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log > --log-dir=/var/log/neutron > > > > On queens like this: > > haproxy -f > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > Is it the correct behaviour ? > > Yes, that is correct. It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > Regards > > Ignazio > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > Hi, > > > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 19:49: > > > > > > Hi All, > > > I upgraded manually my centos 7 openstack ocata to pike. > > > All worked fine. > > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. 
> > > I am using isolated metadata true in my dhcp > namespace the port 80 is in listen. > > > Please, anyone can help me? > > > Regards > > > Ignazio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ignaziocassano at gmail.com  Mon Nov 12 22:05:50 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 12 Nov 2018 23:05:50 +0100
Subject: [Openstack-operators] Queens metadata agent error 500
In-Reply-To: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com>
References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com>
Message-ID:

Hello again, at the same time I tried to create an instance, and the nova-api log reports:
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines [req-e4799f40-eeab-482d-9717-cb41be8ffde2 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last): 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines connection.scalar(select([1])) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return self.execute(object, *multiparams, **params).scalar() 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return meth(self, multiparams, params) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return connection._execute_clauseelement(self, multiparams, params) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines compiled_sql, distilled_params 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines util.raise_from_cause(newraise, exc_info) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines reraise(type(exception), exception, tb=exc_tb, cause=cause) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines cursor.execute(statement, parameters) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines result = self._query(query) 2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _quer I never lost connections to de db before upgrading :-( Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:34: > > > > Hello again, > > I have another installation of ocata . > > On ocata the metadata for a network id is displayed by ps -afe like this: > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid > --metadata_proxy_socket=/var/lib/neutron/metadata_proxy > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 > --metadata_proxy_group=993 > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log > --log-dir=/var/log/neutron > > > > On queens like this: > > haproxy -f > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > Is it the correct behaviour ? > > Yes, that is correct. It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > Regards > > Ignazio > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > Hi, > > > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 19:49: > > > > > > Hi All, > > > I upgraded manually my centos 7 openstack ocata to pike. > > > All worked fine. > > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. > > > I am using isolated metadata true in my dhcp conf and in dhcp > namespace the port 80 is in listen. > > > Please, anyone can help me? 
> > > Regards > > > Ignazio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Nov 12 22:08:43 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 12 Nov 2018 23:08:43 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Message-ID: <6E95227A-8785-464A-9532-A35C543AE420@redhat.com> Hi, > Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 22:55: > > Hello, > the nova api in on the same controller on port 8774 and it can be reached from the metadata agent Nova-metadata-api is running on port 8775 IIRC. > No firewall is present > Regards > > Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent can’t connect to nova-api service. Please check if nova-metadata-api is reachable from node where Your neutron-metadata-agent is running. > > > Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 22:34: > > > > Hello again, > > I have another installation of ocata . > > On ocata the metadata for a network id is displayed by ps -afe like this: > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 --metadata_proxy_group=993 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log --log-dir=/var/log/neutron > > > > On queens like this: > > haproxy -f /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > Is it the correct behaviour ? > > Yes, that is correct. It was changed some time ago, see https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > Regards > > Ignazio > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski ha scritto: > > Hi, > > > > Can You share logs from Your haproxy-metadata-proxy service which is running in qdhcp namespace? There should be some info about reason of those errors 500. > > > > > Wiadomość napisana przez Ignazio Cassano w dniu 12.11.2018, o godz. 19:49: > > > > > > Hi All, > > > I upgraded manually my centos 7 openstack ocata to pike. > > > All worked fine. > > > Then I upgraded from pike to Queens and instances stopped to reach metadata on 169.254.169.254 with error 500. > > > I am using isolated metadata true in my dhcp conf and in dhcp namespace the port 80 is in listen. > > > Please, anyone can help me? 
> > > Regards > > > Ignazio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From ignaziocassano at gmail.com Mon Nov 12 22:11:52 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 12 Nov 2018 23:11:52 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Message-ID: I tried 1 minute ago to create another instance. Nova api reports the following: RROR oslo_db.sqlalchemy.engines [req-cac96dee-d91b-48cb-831b-31f95cffa2f4 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last): 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines connection.scalar(select([1])) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return self.execute(object, *multiparams, **params).scalar() 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return meth(self, multiparams, params) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return connection._execute_clauseelement(self, multiparams, params) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines compiled_sql, distilled_params 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines util.raise_from_cause(newraise, exc_info) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines reraise(type(exception), exception, tb=exc_tb, cause=cause) 2018-11-12 23:07:28.813 4224 ERROR 
oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines cursor.execute(statement, parameters) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines result = self._query(query) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines conn.query(q) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines result.read() 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines first_packet = self.connection._read_packet() 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in _read_packet 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines packet_header = self._read_bytes(4) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in _read_bytes 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines CR.CR_SERVER_LOST, "Lost connection to MySQL server during query") 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8) 2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines [req-cac96dee-d91b-48cb-831b-31f95cffa2f4 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last): 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines connection.scalar(select([1])) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines return self.execute(object, *multiparams, 
**params).scalar() 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines return meth(self, multiparams, params) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines return connection._execute_clauseelement(self, multiparams, params) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines compiled_sql, distilled_params 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines util.raise_from_cause(newraise, exc_info) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines reraise(type(exception), exception, tb=exc_tb, cause=cause) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines context) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines cursor.execute(statement, parameters) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines result = self._query(query) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines conn.query(q) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines result.read() 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines first_packet = self.connection._read_packet() 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in _read_packet 2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines packet_header 
= self._read_bytes(4)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in _read_bytes
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)

Did anything go wrong in the nova database during the upgrade to queens?

# nova-manage db online_data_migrations
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Running batches of 50 until complete
1 rows matched query migrate_instances_add_request_spec, 0 migrated
+---------------------------------------------+--------------+-----------+
| Migration                                   | Total Needed | Completed |
+---------------------------------------------+--------------+-----------+
| delete_build_requests_with_no_instance_uuid | 0            | 0         |
| migrate_aggregate_reset_autoincrement       | 0            | 0         |
| migrate_aggregates                          | 0            | 0         |
| migrate_instance_groups_to_api_db           | 0            | 0         |
| migrate_instances_add_request_spec          | 1            | 0         |
| migrate_keypairs_to_api_db                  | 0            | 0         |
| migrate_quota_classes_to_api_db             | 0            | 0         |
| migrate_quota_limits_to_api_db              | 0            | 0         |
| migration_migrate_to_uuid                   | 0            | 0         |
| populate_missing_availability_zones         | 0            | 0         |
| populate_uuids                              | 0            | 0         |
| service_uuids_online_data_migration         | 0            | 0         |
+---------------------------------------------+--------------+-----------+

Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:34: > > > > Hello again, > > I have another installation of ocata . > > On ocata the metadata for a network id is displayed by ps -afe like this: > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid > --metadata_proxy_socket=/var/lib/neutron/metadata_proxy > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 > --metadata_proxy_group=993 > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log > --log-dir=/var/log/neutron > > > > On queens like this: > > haproxy -f > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > Is it the correct behaviour ? > > Yes, that is correct. It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > Regards > > Ignazio > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > Hi, > > > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > > > Wiadomość napisana przez Ignazio Cassano > w dniu 12.11.2018, o godz. 19:49: > > > > > > Hi All, > > > I upgraded manually my centos 7 openstack ocata to pike. > > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. > > > I am using isolated metadata true in my dhcp conf and in dhcp > namespace the port 80 is in listen. > > > Please, anyone can help me? > > > Regards > > > Ignazio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Nov 12 22:17:03 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 12 Nov 2018 23:17:03 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <6E95227A-8785-464A-9532-A35C543AE420@redhat.com> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> <6E95227A-8785-464A-9532-A35C543AE420@redhat.com> Message-ID: Yes, sorry. Also the 8775 port is reachable from neutron metadata agent Regards Ignazio Il giorno lun 12 nov 2018 alle ore 23:08 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:55: > > > > Hello, > > the nova api in on the same controller on port 8774 and it can be > reached from the metadata agent > > Nova-metadata-api is running on port 8775 IIRC. > > > No firewall is present > > Regards > > > > Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > Hi, > > > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > > > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:34: > > > > > > Hello again, > > > I have another installation of ocata . > > > On ocata the metadata for a network id is displayed by ps -afe like > this: > > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid > --metadata_proxy_socket=/var/lib/neutron/metadata_proxy > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 > --metadata_proxy_group=993 > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log > --log-dir=/var/log/neutron > > > > > > On queens like this: > > > haproxy -f > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > > > Is it the correct behaviour ? > > > > Yes, that is correct. It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > > > > Regards > > > Ignazio > > > > > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > > Hi, > > > > > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > > > > > Wiadomość napisana przez Ignazio Cassano > w dniu 12.11.2018, o godz. 19:49: > > > > > > > > Hi All, > > > > I upgraded manually my centos 7 openstack ocata to pike. > > > > All worked fine. 
> > > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. > > > > I am using isolated metadata true in my dhcp conf and in dhcp > namespace the port 80 is in listen. > > > > Please, anyone can help me? > > > > Regards > > > > Ignazio > > > > > > > > _______________________________________________ > > > > OpenStack-operators mailing list > > > > OpenStack-operators at lists.openstack.org > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > — > > > Slawek Kaplonski > > > Senior software engineer > > > Red Hat > > > > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 12 22:45:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Nov 2018 22:45:35 +0000 Subject: [Openstack-operators] new SIGs to cover use cases In-Reply-To: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM> References: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM> Message-ID: <20181112224535.ofkgektadexifwzm@yuggoth.org> On 2018-11-12 15:46:38 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: [...] > 1. Do we have or want to create a user community around Hybrid cloud. [...] > 2. As we target AI/ML as 2019 target application domain do we > want to create a SIG for it? Or do we extend scientific > community SIG to cover it? [...] It may also be worthwhile to ask this on the openstack-sigs mailing list. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Mon Nov 12 23:58:07 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 13 Nov 2018 00:58:07 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Message-ID: Any other suggestion ? It does not work. Nova metatada is on port 8775 in listen but no way to solve this issue. Thanks Ignazio Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 22:34: > > > > Hello again, > > I have another installation of ocata . > > On ocata the metadata for a network id is displayed by ps -afe like this: > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid > --metadata_proxy_socket=/var/lib/neutron/metadata_proxy > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 > --metadata_proxy_group=993 > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log > --log-dir=/var/log/neutron > > > > On queens like this: > > haproxy -f > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf > > > > Is it the correct behaviour ? > > Yes, that is correct. 
It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > > > > > Regards > > Ignazio > > > > > > > > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < > skaplons at redhat.com> ha scritto: > > Hi, > > > > Can You share logs from Your haproxy-metadata-proxy service which is > running in qdhcp namespace? There should be some info about reason of those > errors 500. > > > > > Wiadomość napisana przez Ignazio Cassano w > dniu 12.11.2018, o godz. 19:49: > > > > > > Hi All, > > > I upgraded manually my centos 7 openstack ocata to pike. > > > All worked fine. > > > Then I upgraded from pike to Queens and instances stopped to reach > metadata on 169.254.169.254 with error 500. > > > I am using isolated metadata true in my dhcp conf and in dhcp > namespace the port 80 is in listen. > > > Please, anyone can help me? > > > Regards > > > Ignazio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitskrieg at bitskrieg.net Tue Nov 13 02:46:14 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Mon, 12 Nov 2018 21:46:14 -0500 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> Message-ID: <1670af650f0.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Did you change the nova_metadata_ip option to nova_metadata_host in metadata_agent.ini? The former value was deprecated several releases ago and now no longer functions as of pike. The metadata service will throw 500 errors if you don't change it. On November 12, 2018 19:00:46 Ignazio Cassano wrote: > Any other suggestion ? > It does not work. > Nova metatada is on port 8775 in listen but no way to solve this issue. > Thanks > Ignazio > > Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski > ha scritto: > Hi, > > From logs which You attached it looks that Your neutron-metadata-agent > can’t connect to nova-api service. Please check if nova-metadata-api is > reachable from node where Your neutron-metadata-agent is running. > >> Wiadomość napisana przez Ignazio Cassano w dniu >> 12.11.2018, o godz. 22:34: >> >> Hello again, >> I have another installation of ocata . >> On ocata the metadata for a network id is displayed by ps -afe like this: >> /usr/bin/python2 /bin/neutron-ns-metadata-proxy >> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid >> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy >> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 >> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 >> --metadata_proxy_group=993 >> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log >> --log-dir=/var/log/neutron >> >> On queens like this: >> haproxy -f >> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf >> >> Is it the correct behaviour ? > > Yes, that is correct. 
It was changed some time ago, see > https://bugs.launchpad.net/neutron/+bug/1524916 > >> >> Regards >> Ignazio >> >> >> >> Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski >> ha scritto: >> Hi, >> >> Can You share logs from Your haproxy-metadata-proxy service which is >> running in qdhcp namespace? There should be some info about reason of those >> errors 500. >> >> > Wiadomość napisana przez Ignazio Cassano w >> dniu 12.11.2018, o godz. 19:49: >> > >> > Hi All, >> > I upgraded manually my centos 7 openstack ocata to pike. >> > All worked fine. >> > Then I upgraded from pike to Queens and instances stopped to reach >> metadata on 169.254.169.254 with error 500. >> > I am using isolated metadata true in my dhcp conf and in dhcp namespace >> the port 80 is in listen. >> > Please, anyone can help me? >> > Regards >> > Ignazio >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Tue Nov 13 04:12:55 2018 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 13 Nov 2018 15:12:55 +1100 Subject: [Openstack-operators] RabbitMQ and SSL In-Reply-To: <8EEDA593-4A37-4A1C-823A-FCA61299B2DE@gmail.com> References: <8EEDA593-4A37-4A1C-823A-FCA61299B2DE@gmail.com> Message-ID: On the off chance that others see this or there is talk about this in Berlin I have tracked this down to versions of python-amqp and python-kombu More information at the bug report https://bugs.launchpad.net/oslo.messaging/+bug/1800957 Sam > On 1 Nov 2018, at 11:04 am, Sam Morrison wrote: > > Hi all, > > We’ve been battling an issue after an upgrade to pike which essentially makes using rabbit with ssl impossible > > https://bugs.launchpad.net/oslo.messaging/+bug/1800957 > > We use ubuntu cloud archives so it might no exactly be olso but a dependant library. > > Anyone else seen similar issues? > > Cheers, > Sam > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Nov 13 05:28:16 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 13 Nov 2018 06:28:16 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <1670af650f0.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> <1670af650f0.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Message-ID: Hello, I am going to check it. Thanks Ignazio Il giorno Mar 13 Nov 2018 03:46 Chris Apsey ha scritto: > Did you change the nova_metadata_ip option to nova_metadata_host in > metadata_agent.ini? The former value was deprecated several releases ago > and now no longer functions as of pike. The metadata service will throw > 500 errors if you don't change it. > > On November 12, 2018 19:00:46 Ignazio Cassano > wrote: > >> Any other suggestion ? >> It does not work. >> Nova metatada is on port 8775 in listen but no way to solve this issue. 
>> Thanks >> Ignazio >> >> Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < >> skaplons at redhat.com> ha scritto: >> >>> Hi, >>> >>> From logs which You attached it looks that Your neutron-metadata-agent >>> can’t connect to nova-api service. Please check if nova-metadata-api is >>> reachable from node where Your neutron-metadata-agent is running. >>> >>> > Wiadomość napisana przez Ignazio Cassano w >>> dniu 12.11.2018, o godz. 22:34: >>> > >>> > Hello again, >>> > I have another installation of ocata . >>> > On ocata the metadata for a network id is displayed by ps -afe like >>> this: >>> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy >>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid >>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy >>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 >>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 >>> --metadata_proxy_group=993 >>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log >>> --log-dir=/var/log/neutron >>> > >>> > On queens like this: >>> > haproxy -f >>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf >>> > >>> > Is it the correct behaviour ? >>> >>> Yes, that is correct. It was changed some time ago, see >>> https://bugs.launchpad.net/neutron/+bug/1524916 >>> >>> > >>> > Regards >>> > Ignazio >>> > >>> > >>> > >>> > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < >>> skaplons at redhat.com> ha scritto: >>> > Hi, >>> > >>> > Can You share logs from Your haproxy-metadata-proxy service which is >>> running in qdhcp namespace? There should be some info about reason of those >>> errors 500. >>> > >>> > > Wiadomość napisana przez Ignazio Cassano >>> w dniu 12.11.2018, o godz. 19:49: >>> > > >>> > > Hi All, >>> > > I upgraded manually my centos 7 openstack ocata to pike. >>> > > All worked fine. >>> > > Then I upgraded from pike to Queens and instances stopped to reach >>> metadata on 169.254.169.254 with error 500. >>> > > I am using isolated metadata true in my dhcp conf and in dhcp >>> namespace the port 80 is in listen. >>> > > Please, anyone can help me? >>> > > Regards >>> > > Ignazio >>> > > >>> > > _______________________________________________ >>> > > OpenStack-operators mailing list >>> > > OpenStack-operators at lists.openstack.org >>> > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> > >>> > — >>> > Slawek Kaplonski >>> > Senior software engineer >>> > Red Hat >>> > >>> >>> — >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Nov 13 07:35:32 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 13 Nov 2018 08:35:32 +0100 Subject: [Openstack-operators] Queens metadata agent error 500 In-Reply-To: <1670af650f0.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> References: <0E03B24F-05C2-4D3E-948D-F67B56AD9DBA@redhat.com> <1670af650f0.278c.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Message-ID: Hi Chris, many thanks for your answer. It solved the issue. 
Regards Ignazio Il giorno mar 13 nov 2018 alle ore 03:46 Chris Apsey < bitskrieg at bitskrieg.net> ha scritto: > Did you change the nova_metadata_ip option to nova_metadata_host in > metadata_agent.ini? The former value was deprecated several releases ago > and now no longer functions as of pike. The metadata service will throw > 500 errors if you don't change it. > > On November 12, 2018 19:00:46 Ignazio Cassano > wrote: > >> Any other suggestion ? >> It does not work. >> Nova metatada is on port 8775 in listen but no way to solve this issue. >> Thanks >> Ignazio >> >> Il giorno lun 12 nov 2018 alle ore 22:40 Slawomir Kaplonski < >> skaplons at redhat.com> ha scritto: >> >>> Hi, >>> >>> From logs which You attached it looks that Your neutron-metadata-agent >>> can’t connect to nova-api service. Please check if nova-metadata-api is >>> reachable from node where Your neutron-metadata-agent is running. >>> >>> > Wiadomość napisana przez Ignazio Cassano w >>> dniu 12.11.2018, o godz. 22:34: >>> > >>> > Hello again, >>> > I have another installation of ocata . >>> > On ocata the metadata for a network id is displayed by ps -afe like >>> this: >>> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy >>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid >>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy >>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 >>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 >>> --metadata_proxy_group=993 >>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log >>> --log-dir=/var/log/neutron >>> > >>> > On queens like this: >>> > haproxy -f >>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf >>> > >>> > Is it the correct behaviour ? >>> >>> Yes, that is correct. It was changed some time ago, see >>> https://bugs.launchpad.net/neutron/+bug/1524916 >>> >>> > >>> > Regards >>> > Ignazio >>> > >>> > >>> > >>> > Il giorno lun 12 nov 2018 alle ore 21:37 Slawomir Kaplonski < >>> skaplons at redhat.com> ha scritto: >>> > Hi, >>> > >>> > Can You share logs from Your haproxy-metadata-proxy service which is >>> running in qdhcp namespace? There should be some info about reason of those >>> errors 500. >>> > >>> > > Wiadomość napisana przez Ignazio Cassano >>> w dniu 12.11.2018, o godz. 19:49: >>> > > >>> > > Hi All, >>> > > I upgraded manually my centos 7 openstack ocata to pike. >>> > > All worked fine. >>> > > Then I upgraded from pike to Queens and instances stopped to reach >>> metadata on 169.254.169.254 with error 500. >>> > > I am using isolated metadata true in my dhcp conf and in dhcp >>> namespace the port 80 is in listen. >>> > > Please, anyone can help me? >>> > > Regards >>> > > Ignazio >>> > > >>> > > _______________________________________________ >>> > > OpenStack-operators mailing list >>> > > OpenStack-operators at lists.openstack.org >>> > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> > >>> > — >>> > Slawek Kaplonski >>> > Senior software engineer >>> > Red Hat >>> > >>> >>> — >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
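For reference, the fix that resolved this thread boils down to renaming one option in neutron's metadata_agent.ini and restarting the agent. A sketch, where the host value is an example rather than something taken from this environment:

    [DEFAULT]
    # deprecated name, ignored as of Pike:
    #nova_metadata_ip = controller
    # current name:
    nova_metadata_host = controller
    nova_metadata_port = 8775

    # then restart the agent, e.g. on CentOS/RDO:
    # systemctl restart neutron-metadata-agent
    # (or your distribution's equivalent service name)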
URL: 

From Arkady.Kanevsky at dell.com  Tue Nov 13 08:01:47 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Tue, 13 Nov 2018 08:01:47 +0000
Subject: [Openstack-operators] new SIGs to cover use cases
In-Reply-To: <20181112224535.ofkgektadexifwzm@yuggoth.org>
References: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM>
 <20181112224535.ofkgektadexifwzm@yuggoth.org>
Message-ID: <91fe2241790443128cd9b7dc959468fe@AUSX13MPS308.AMER.DELL.COM>

Good point.
Adding SIG list.

-----Original Message-----
From: Jeremy Stanley [mailto:fungi at yuggoth.org]
Sent: Monday, November 12, 2018 4:46 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] new SIGs to cover use cases


[EXTERNAL EMAIL]
Please report any suspicious attachments, links, or requests for sensitive information.


On 2018-11-12 15:46:38 +0000 (+0000), Arkady.Kanevsky at dell.com wrote:
[...]
> 1. Do we have or want to create a user community around Hybrid cloud.
[...]
> 2. As we target AI/ML as 2019 target application domain do we
> want to create a SIG for it? Or do we extend scientific
> community SIG to cover it?
[...]

It may also be worthwhile to ask this on the openstack-sigs mailing
list.
--
Jeremy Stanley

From ignaziocassano at gmail.com  Tue Nov 13 08:47:31 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 13 Nov 2018 09:47:31 +0100
Subject: [Openstack-operators] queens: vnc password console does not work
 anymore
Message-ID: 

Hi All,
before upgrading to Queens we used the vnc_password parameter in qemu.conf,
and the dashboard console asked for the password.
After upgrading to Queens it does not work anymore (unable to negotiate
security with server).
Removing vnc_password from qemu.conf and restarting libvirt and the
virtual machine, it works again.
What changed in Queens?
Thanks
Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com  Tue Nov 13 10:28:45 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 13 Nov 2018 11:28:45 +0100
Subject: [Openstack-operators] upgrade pike to queens get `xxx
 DBConnectionError SELECT 1`
Message-ID: 

Hi all,
upgrading from Pike to Queens I got a lot of `xxx DBConnectionError SELECT 1`
errors in nova-api.log.
My issue is the same reported at:
https://bugs.launchpad.net/oslo.db/+bug/1774544
The difference is that I am using centos 7.
I am also using haproxy to balance the Galera cluster.
Modifying nova.conf to bypass haproxy and point at a single Galera cluster
member, the problem disappears.
Please, any help?
Regards
Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mihalis68 at gmail.com  Tue Nov 13 11:57:02 2018
From: mihalis68 at gmail.com (Chris Morgan)
Date: Tue, 13 Nov 2018 12:57:02 +0100
Subject: [Openstack-operators] operators get-together today at the Berlin
 Summit
Message-ID: 

We never did come up with a good plan for a separate event for operators
this evening, so I think maybe we should just meet up at the marketplace
mixer, so may I propose meet at the front at 6pm?

Chris

-- 
Chris Morgan 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jon at csail.mit.edu  Tue Nov 13 14:22:11 2018
From: jon at csail.mit.edu (Jonathan D.
Proulx) Date: Tue, 13 Nov 2018 09:22:11 -0500 Subject: [Openstack-operators] operators get-together today at the Berlin Summit In-Reply-To: References: Message-ID: <20181113142211.2qhfirxf3moruw4d@csail.mit.edu> On Tue, Nov 13, 2018 at 12:57:02PM +0100, Chris Morgan wrote: : We never did come up with a good plan for a separate event for : operators this evening, so I think maybe we should just meet up at the : marketplace mixer, so may I propose meet at the front at 6pm? : Chris : -- : Chris Morgan <[1]mihalis68 at gmail.com> Sure sounds good to me. -Jon :References : : 1. mailto:mihalis68 at gmail.com :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From stig.openstack at telfer.org Tue Nov 13 15:39:36 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 13 Nov 2018 16:39:36 +0100 Subject: [Openstack-operators] new SIGs to cover use cases In-Reply-To: <91fe2241790443128cd9b7dc959468fe@AUSX13MPS308.AMER.DELL.COM> References: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM> <20181112224535.ofkgektadexifwzm@yuggoth.org> <91fe2241790443128cd9b7dc959468fe@AUSX13MPS308.AMER.DELL.COM> Message-ID: <4E26EC12-087C-42AC-A207-696764077787@telfer.org> You are right to make the connection - this is a subject that regularly comes up in the discussions of the Scientific SIG, though it’s just one of many use cases for hybrid cloud. If a new SIG was created around hybrid cloud, it would be useful to have it closely connected with the Scientific SIG. Cheers, Stig > On 13 Nov 2018, at 09:01, wrote: > > Good point. > Adding SIG list. > > -----Original Message----- > From: Jeremy Stanley [mailto:fungi at yuggoth.org] > Sent: Monday, November 12, 2018 4:46 PM > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] new SIGs to cover use cases > > > [EXTERNAL EMAIL] > Please report any suspicious attachments, links, or requests for sensitive information. > > > On 2018-11-12 15:46:38 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: > [...] >> 1. Do we have or want to create a user community around Hybrid cloud. > [...] >> 2. As we target AI/ML as 2019 target application domain do we >> want to create a SIG for it? Or do we extend scientific >> community SIG to cover it? > [...] > > It may also be worthwhile to ask this on the openstack-sigs mailing > list. > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From yihleong at gmail.com Wed Nov 14 04:11:21 2018 From: yihleong at gmail.com (Yih Leong, Sun.) Date: Tue, 13 Nov 2018 20:11:21 -0800 Subject: [Openstack-operators] [Openstack-sigs] new SIGs to cover use cases In-Reply-To: <838DF2C6-02B1-4A97-BC65-C41671A1C604@telfer.org> References: <2da0506287d64764bb4219b145518470@AUSX13MPS308.AMER.DELL.COM> <20181112224535.ofkgektadexifwzm@yuggoth.org> <91fe2241790443128cd9b7dc959468fe@AUSX13MPS308.AMER.DELL.COM> <838DF2C6-02B1-4A97-BC65-C41671A1C604@telfer.org> Message-ID: Wondering if we should *up level* to Multi-Cloud, where Hybrid Cloud can be a subset of Multi-Cloud. I think Scientific SIG can still focus on Scientific and HPC, whereas Multi/Hybrid Cloud will support broader use cases. 
On Tue, Nov 13, 2018 at 8:22 AM Stig Telfer wrote: > You are right to make the connection - this is a subject that regularly > comes up in the discussions of the Scientific SIG, though it’s just one of > many use cases for hybrid cloud. If a new SIG was created around hybrid > cloud, it would be useful to have it closely connected with the Scientific > SIG. > > Cheers, > Stig > > > > On 13 Nov 2018, at 09:01, < > Arkady.Kanevsky at dell.com> wrote: > > > > Good point. > > Adding SIG list. > > > > -----Original Message----- > > From: Jeremy Stanley [mailto:fungi at yuggoth.org] > > Sent: Monday, November 12, 2018 4:46 PM > > To: openstack-operators at lists.openstack.org > > Subject: Re: [Openstack-operators] new SIGs to cover use cases > > > > > > [EXTERNAL EMAIL] > > Please report any suspicious attachments, links, or requests for > sensitive information. > > > > > > On 2018-11-12 15:46:38 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: > > [...] > >> 1. Do we have or want to create a user community around Hybrid cloud. > > [...] > >> 2. As we target AI/ML as 2019 target application domain do we > >> want to create a SIG for it? Or do we extend scientific > >> community SIG to cover it? > > [...] > > > > It may also be worthwhile to ask this on the openstack-sigs mailing > > list. > > -- > > Jeremy Stanley > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Nov 14 04:42:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 14 Nov 2018 05:42:35 +0100 Subject: [Openstack-operators] [all] All Hail our Newest Release Name - OpenStack Train Message-ID: <20181114044233.GA10706@thor.bakeyournoodle.com> Hi everybody! As the subject reads, the "T" release of OpenStack is officially "Train". Unlike recent choices Train was the popular choice so congrats! Thanks to everybody who participated and help with the naming process. Lets make OpenStack Train the release so awesome that people can't help but choo-choo-choose to run it[1]! Yours Tony. [1] Too soon? Too much? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From skaplons at redhat.com Wed Nov 14 07:37:09 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 14 Nov 2018 08:37:09 +0100 Subject: [Openstack-operators] [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train In-Reply-To: References: <20181114044233.GA10706@thor.bakeyournoodle.com> Message-ID: <7064638E-4B0A-4572-A194-4FF6327DC6B1@redhat.com> Hi, I think it was published, see http://lists.openstack.org/pipermail/openstack/2018-November/047172.html > Wiadomość napisana przez Jeremy Freudberg w dniu 14.11.2018, o godz. 06:12: > > Hey Tony, > > What's the reason for the results of the poll not being public? > > Thanks, > Jeremy > On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds wrote: >> >> >> Hi everybody! >> >> As the subject reads, the "T" release of OpenStack is officially >> "Train". Unlike recent choices Train was the popular choice so >> congrats! 
>> >> Thanks to everybody who participated and help with the naming process. >> >> Lets make OpenStack Train the release so awesome that people can't help >> but choo-choo-choose to run it[1]! >> >> >> Yours Tony. >> [1] Too soon? Too much? >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From itachi.sama.amaterasu at gmail.com Wed Nov 14 10:28:04 2018 From: itachi.sama.amaterasu at gmail.com (Giovanni Santini) Date: Wed, 14 Nov 2018 11:28:04 +0100 Subject: [Openstack-operators] [charms] Issues with cinder and ceph Message-ID: Hi everyone, I am deploying OpenStack using Juju at my workplace. Everything works well, except the fact cinder is unable to connect to the ceph cluster; it is a bit hard to debug from my side (I don't have previous Ceph experience) but what I noticed is that: - *ceph.conf* is practically empty, except for a few logging options - cinder tries to access a pool using a user when both pool and user do not exist - cinder tries to contact the monitor node at the DNS name 'ceph-mon' (is it hardcoded?) I hope I could have some help debugging and solving the issue. Thanks in advance, --- Giovanni Santini My blog: http://giovannisantini.tk My code: https://git{hub,lab}.com/ItachiSan My GPG: 2FADEBF5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Nov 14 16:01:53 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 14 Nov 2018 17:01:53 +0100 Subject: [Openstack-operators] Openstack zun on centos??? Message-ID: Hi All, I'd like to know if openstack zun will be released for centos. Reading documentation at docs.openstack.org only ubuntu installation is reported. Many thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 14 17:38:43 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 14 Nov 2018 18:38:43 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: Message-ID: Hi Cassano, you can use zun in centos deployed by kolla-ansible. https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html Regards El mié., 14 nov. 2018 17:11, Ignazio Cassano escribió: > Hi All, > I'd like to know if openstack zun will be released for centos. > Reading documentation at docs.openstack.org only ubuntu installation is > reported. > Many thanks > Ignazio > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Nov 14 18:48:09 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 14 Nov 2018 19:48:09 +0100 Subject: [Openstack-operators] Openstack zun on centos??? 
In-Reply-To: References: Message-ID: Hi Edoardo, does it mean openstack kolla installs zun using pip ? I did not find any zun rpm package Regards Ignazio Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha scritto: > Hi Cassano, you can use zun in centos deployed by kolla-ansible. > > https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html > > Regards > > El mié., 14 nov. 2018 17:11, Ignazio Cassano > escribió: > >> Hi All, >> I'd like to know if openstack zun will be released for centos. >> Reading documentation at docs.openstack.org only ubuntu installation is >> reported. >> Many thanks >> Ignazio >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 14 18:49:43 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 14 Nov 2018 19:49:43 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: Message-ID: yes, only source code (pip) is supported, there is not any rpm available for it yet. Regards El mié., 14 nov. 2018 19:48, Ignazio Cassano escribió: > Hi Edoardo, > does it mean openstack kolla installs zun using pip ? > I did not find any zun rpm package > Regards > Ignazio > > Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha > scritto: > >> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >> >> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >> >> Regards >> >> El mié., 14 nov. 2018 17:11, Ignazio Cassano >> escribió: >> >>> Hi All, >>> I'd like to know if openstack zun will be released for centos. >>> Reading documentation at docs.openstack.org only ubuntu installation is >>> reported. >>> Many thanks >>> Ignazio >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Wed Nov 14 18:59:33 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 14 Nov 2018 19:59:33 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: Message-ID: You are right, zun is not included in RDO yet. I will be happy to help anyone interested in add and maintain it!! There is some info about how to add new packages in [1] [1] https://www.rdoproject.org/documentation/add-packages/ On Wed, Nov 14, 2018 at 7:53 PM Eduardo Gonzalez wrote: > yes, only source code (pip) is supported, there is not any rpm available > for it yet. > > Regards > > El mié., 14 nov. 2018 19:48, Ignazio Cassano > escribió: > >> Hi Edoardo, >> does it mean openstack kolla installs zun using pip ? >> I did not find any zun rpm package >> Regards >> Ignazio >> >> Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha >> scritto: >> >>> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >>> >>> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >>> >>> Regards >>> >>> El mié., 14 nov. 2018 17:11, Ignazio Cassano >>> escribió: >>> >>>> Hi All, >>>> I'd like to know if openstack zun will be released for centos. >>>> Reading documentation at docs.openstack.org only ubuntu installation >>>> is reported. 
>>>> Many thanks >>>> Ignazio >>>> _______________________________________________ >>>> OpenStack-operators mailing list >>>> OpenStack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>> _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Wed Nov 14 19:00:45 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 14 Nov 2018 19:00:45 +0000 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: , Message-ID: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> kolla installs it via containers. Thanks, Kevin ________________________________ From: Ignazio Cassano [ignaziocassano at gmail.com] Sent: Wednesday, November 14, 2018 10:48 AM To: Eduardo Gonzalez Cc: OpenStack Operators Subject: Re: [Openstack-operators] Openstack zun on centos??? Hi Edoardo, does it mean openstack kolla installs zun using pip ? I did not find any zun rpm package Regards Ignazio Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez > ha scritto: Hi Cassano, you can use zun in centos deployed by kolla-ansible. https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html Regards El mié., 14 nov. 2018 17:11, Ignazio Cassano > escribió: Hi All, I'd like to know if openstack zun will be released for centos. Reading documentation at docs.openstack.org only ubuntu installation is reported. Many thanks Ignazio _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Nov 14 19:14:49 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 14 Nov 2018 20:14:49 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> Message-ID: Thanks Kevin and Edoardo. unfortunately I installed openstack using ansible playbooks and some projects are already in production. PROBALY KOLLA MIGRATION MEANS OPENSTACK REINSTALLATION ;-( Ignazio Il giorno Mer 14 Nov 2018 20:00 Fox, Kevin M ha scritto: > kolla installs it via containers. > > Thanks, > Kevin > ------------------------------ > *From:* Ignazio Cassano [ignaziocassano at gmail.com] > *Sent:* Wednesday, November 14, 2018 10:48 AM > *To:* Eduardo Gonzalez > *Cc:* OpenStack Operators > *Subject:* Re: [Openstack-operators] Openstack zun on centos??? > > Hi Edoardo, > does it mean openstack kolla installs zun using pip ? > I did not find any zun rpm package > Regards > Ignazio > > Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha > scritto: > >> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >> >> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >> >> Regards >> >> El mié., 14 nov. 2018 17:11, Ignazio Cassano >> escribió: >> >>> Hi All, >>> I'd like to know if openstack zun will be released for centos. >>> Reading documentation at docs.openstack.org only ubuntu installation is >>> reported. 
>>> Many thanks >>> Ignazio >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Nov 14 19:28:38 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 14 Nov 2018 20:28:38 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> Message-ID: Hello again, I read zun documents very fastly. After zun installation (with or without kolla) , do containers run directly in compute nodes or in virtual machines ? Magnum can run them in both but Magnum seems very different. Ignazio Il giorno Mer 14 Nov 2018 20:00 Fox, Kevin M ha scritto: > kolla installs it via containers. > > Thanks, > Kevin > ------------------------------ > *From:* Ignazio Cassano [ignaziocassano at gmail.com] > *Sent:* Wednesday, November 14, 2018 10:48 AM > *To:* Eduardo Gonzalez > *Cc:* OpenStack Operators > *Subject:* Re: [Openstack-operators] Openstack zun on centos??? > > Hi Edoardo, > does it mean openstack kolla installs zun using pip ? > I did not find any zun rpm package > Regards > Ignazio > > Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha > scritto: > >> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >> >> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >> >> Regards >> >> El mié., 14 nov. 2018 17:11, Ignazio Cassano >> escribió: >> >>> Hi All, >>> I'd like to know if openstack zun will be released for centos. >>> Reading documentation at docs.openstack.org only ubuntu installation is >>> reported. >>> Many thanks >>> Ignazio >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 14 19:36:32 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 14 Nov 2018 20:36:32 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> Message-ID: Hello, zun will create the containers in the directly in the docker daemon associated as zun-compute node. About installation, you can install zun with pip and configure manually. It will work no matter if is ubuntu or centos, maybe the packages required doesn't match between distributions, is a matter of try what dependency it have. Regards El mié., 14 nov. 2018 20:29, Ignazio Cassano escribió: > Hello again, > I read zun documents very fastly. > After zun installation (with or without kolla) , do containers run > directly in compute nodes or in virtual machines ? > Magnum can run them in both but Magnum seems very different. > Ignazio > > > Il giorno Mer 14 Nov 2018 20:00 Fox, Kevin M ha > scritto: > >> kolla installs it via containers. >> >> Thanks, >> Kevin >> ------------------------------ >> *From:* Ignazio Cassano [ignaziocassano at gmail.com] >> *Sent:* Wednesday, November 14, 2018 10:48 AM >> *To:* Eduardo Gonzalez >> *Cc:* OpenStack Operators >> *Subject:* Re: [Openstack-operators] Openstack zun on centos??? 
>> >> Hi Edoardo, >> does it mean openstack kolla installs zun using pip ? >> I did not find any zun rpm package >> Regards >> Ignazio >> >> Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha >> scritto: >> >>> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >>> >>> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >>> >>> Regards >>> >>> El mié., 14 nov. 2018 17:11, Ignazio Cassano >>> escribió: >>> >>>> Hi All, >>>> I'd like to know if openstack zun will be released for centos. >>>> Reading documentation at docs.openstack.org only ubuntu installation >>>> is reported. >>>> Many thanks >>>> Ignazio >>>> _______________________________________________ >>>> OpenStack-operators mailing list >>>> OpenStack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 14 19:38:04 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 14 Nov 2018 20:38:04 +0100 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: <1A3C52DFCD06494D8528644858247BF01C235375@EX10MBOX03.pnnl.gov> Message-ID: Difference between magnum and zun is that magnum creates COE and zun at this moment runs directly in docker daemon without orchestration. El mié., 14 nov. 2018 20:36, Eduardo Gonzalez escribió: > Hello, zun will create the containers in the directly in the docker daemon > associated as zun-compute node. > > About installation, you can install zun with pip and configure manually. > It will work no matter if is ubuntu or centos, maybe the packages required > doesn't match between distributions, is a matter of try what dependency it > have. > > Regards > > > El mié., 14 nov. 2018 20:29, Ignazio Cassano > escribió: > >> Hello again, >> I read zun documents very fastly. >> After zun installation (with or without kolla) , do containers run >> directly in compute nodes or in virtual machines ? >> Magnum can run them in both but Magnum seems very different. >> Ignazio >> >> >> Il giorno Mer 14 Nov 2018 20:00 Fox, Kevin M ha >> scritto: >> >>> kolla installs it via containers. >>> >>> Thanks, >>> Kevin >>> ------------------------------ >>> *From:* Ignazio Cassano [ignaziocassano at gmail.com] >>> *Sent:* Wednesday, November 14, 2018 10:48 AM >>> *To:* Eduardo Gonzalez >>> *Cc:* OpenStack Operators >>> *Subject:* Re: [Openstack-operators] Openstack zun on centos??? >>> >>> Hi Edoardo, >>> does it mean openstack kolla installs zun using pip ? >>> I did not find any zun rpm package >>> Regards >>> Ignazio >>> >>> Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez >>> ha scritto: >>> >>>> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >>>> >>>> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >>>> >>>> Regards >>>> >>>> El mié., 14 nov. 2018 17:11, Ignazio Cassano >>>> escribió: >>>> >>>>> Hi All, >>>>> I'd like to know if openstack zun will be released for centos. >>>>> Reading documentation at docs.openstack.org only ubuntu installation >>>>> is reported. >>>>> Many thanks >>>>> Ignazio >>>>> _______________________________________________ >>>>> OpenStack-operators mailing list >>>>> OpenStack-operators at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mihalis68 at gmail.com Thu Nov 15 12:47:03 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Thu, 15 Nov 2018 13:47:03 +0100 Subject: [Openstack-operators] ops meetups team catch-up session Message-ID: Reminder : There is an ops meetups team catch up this afternoon here at the Berlin Summit Session summary : https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22800/ops-meetups-team-catch-up-session Etherpad here : https://etherpad.openstack.org/p/BER-Ops-Catch-Up time/location : Thursday, November 15, 3:20pm-4:00pm Hall 7 - Level 1 - 7.1b / London 1 See you there! Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 19 00:04:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Nov 2018 00:04:14 +0000 Subject: [Openstack-operators] IMPORTANT: We're combining the lists! In-Reply-To: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: <20181119000414.u675z4z3s7esymat@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this was sent) are being replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list[0] is open for posts from subscribers starting now, and the old lists will be configured to no longer accept posts starting on Monday December 3. In the interim, posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them now and not miss any messages. See my previous notice[1] for details. As of the time of this announcement, we have 280 subscribers on openstack-discuss with three weeks to go before the old lists are closed down for good). At the recommendation of David Medberry at the OpenStack Summit last week, this reminder is being sent individually to each of the old lists (not as a cross-post), and without any topic tag in case either might be resulting in subscribers missing it. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Tue Nov 20 22:07:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Nov 2018 16:07:14 -0600 Subject: [Openstack-operators] [openstack-dev] [nova] about filter the flavor In-Reply-To: References: Message-ID: On 11/19/2018 9:32 PM, Rambo wrote: >       I have an idea.Now we can't filter the special flavor according > to the property.Can we achieve it?If we achieved this,we can filter the > flavor according the property's key and value to filter the flavor. What > do you think of the idea?Can you tell me more about this ?Thank you very > much. To be clear, you want to filter flavors by extra spec key and/or value? So something like: GET /flavors?key=hw%3Acpu_policy would return all flavors with an extra spec with key "hw:cpu_policy". And: GET /flavors?key=hw%3Acpu_policy&value=dedicated would return all flavors with extra spec "hw:cpu_policy" with value "dedicated". The query parameter semantics are probably what gets messiest about this. 
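In the meantime the only way to get this behavior is client-side, e.g. an
untested sketch with python-novaclient, where 'sess' is assumed to be an
authenticated keystoneauth1 session:

  from novaclient import client

  nova = client.Client('2.1', session=sess)

  def flavors_with_spec(key, value=None):
      # one extra request per flavor: GET /flavors/{id}/os-extra_specs
      for flavor in nova.flavors.list():
          specs = flavor.get_keys()
          if key in specs and (value is None or specs[key] == value):
              yield flavor

  dedicated = list(flavors_with_spec('hw:cpu_policy', 'dedicated'))

which obviously doesn't scale well if you have a lot of flavors.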
Because I could see wanting to couple the key and value together, but I'm not sure how you do that, because I don't think you can do this: GET /flavors?spec=hw%3Acpu_policy=dedicated Maybe you'd do: GET /flavors?hw%3Acpu_policy=dedicated The problem with that is we wouldn't be able to perform any kind of request schema validation of it, especially since flavor extra specs are not standardized. -- Thanks, Matt From dabarren at gmail.com Wed Nov 21 17:07:40 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 21 Nov 2018 18:07:40 +0100 Subject: [Openstack-operators] [kolla] Berlin summit resume Message-ID: Hi kollagues, During the Berlin Summit kolla team had a few talks and forum discussions, as well as other cross-project related topics [0] First session was ``Kolla project onboarding``, the room was full of people interested in contribute to kolla, many of them already using kolla in production environments whiling to make upstream some work they've done downstream. I can say this talk was a total success and we hope to see many new faces during this release putting features and bug fixes into kolla. Slides of the session at [1] Second session was ``Kolla project update``, was a brief resume of what work has been done during rocky release and some items will be implemented in the future. Number of attendees to this session was massive, no more people could enter the room. Slides at [2] Then forum sessions.. First one was ``Kolla user feedback``, many users came over the room. We've notice a big increase in production deployments and some PoC migrating to production soon, many of those environments are huge. Overall the impressions was that kolla is great and don't have any big issue or requirement, ``it works great`` became a common phrase to listen. Here's a resume of the user feedback needs [3] - Improve operational usage for add, remove, change and stop/start nodes and services. - Database backup and recovery - Lack of documentation is the bigger request, users need to read the code to know how to configure other than core/default services - Multi cells_v2 - New services request, cyborg, masakari and tricircle were the most requested - SElinux enabled - More SDN services such as Contrail and calico - Possibility to include user's ansible tasks during deploy as well as support custom config.json - HTTPS for internal networks Second one was about ``kolla for the edge``, we've meet with Edge computing group and others interested in edge deployments to identify what's missing in kolla and where we can help. Things we've identified are: - Kolla seems good at how the service split can be done, tweaking inventory file and config values can deploy independent environments easily. - Missing keystone federation - Glance cache support is not a hard requirement but improves efficiency (already merged) - Multi cells v2 - Multi storage per edge/far-edge - A documentation or architecture reference would be nice to have. Last one was ``kolla for NFV``, few people came over to discuss about NUMA, GPU, SRIOV. Nothing noticiable from this session, mainly was support DPDK for CentOS/RHEL,OracleLinux and few service addition covered by previous discussions. 
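On the configuration asks above: most per-service tuning already goes
through kolla-ansible's node_custom_config tree (/etc/kolla/config by
default), which is merged into the generated service configs. A minimal
sketch, where the option values are only an example:

  # /etc/kolla/config/nova.conf, merged into nova.conf of every nova service
  [libvirt]
  cpu_mode = host-passthrough

What the feedback items ask for is extending this same kind of hook to
arbitrary ansible tasks and to custom config.json.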
[0] https://etherpad.openstack.org/p/kolla-stein-summit [1] https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 [2] https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Nov 21 18:44:56 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 21 Nov 2018 18:44:56 +0000 Subject: [Openstack-operators] [openstack-dev] [kolla] Berlin summit resume In-Reply-To: References: Message-ID: Thanks for the write up Eduardo. I thought you and Surya did a good job of presenting and moderating those sessions. Mark On Wed, 21 Nov 2018 at 17:08, Eduardo Gonzalez wrote: > Hi kollagues, > > During the Berlin Summit kolla team had a few talks and forum discussions, > as well as other cross-project related topics [0] > > First session was ``Kolla project onboarding``, the room was full of > people interested in contribute to kolla, many of them already using kolla > in production environments whiling to make upstream some work they've done > downstream. I can say this talk was a total success and we hope to see many > new faces during this release putting features and bug fixes into kolla. > Slides of the session at [1] > > Second session was ``Kolla project update``, was a brief resume of what > work has been done during rocky release and some items will be implemented > in the future. Number of attendees to this session was massive, no more > people could enter the room. Slides at [2] > > > Then forum sessions.. > > First one was ``Kolla user feedback``, many users came over the room. > We've notice a big increase in production deployments and some PoC > migrating to production soon, many of those environments are huge. > Overall the impressions was that kolla is great and don't have any big > issue or requirement, ``it works great`` became a common phrase to listen. > Here's a resume of the user feedback needs [3] > > - Improve operational usage for add, remove, change and stop/start nodes > and services. > - Database backup and recovery > - Lack of documentation is the bigger request, users need to read the code > to know how to configure other than core/default services > - Multi cells_v2 > - New services request, cyborg, masakari and tricircle were the most > requested > - SElinux enabled > - More SDN services such as Contrail and calico > - Possibility to include user's ansible tasks during deploy as well as > support custom config.json > - HTTPS for internal networks > > Second one was about ``kolla for the edge``, we've meet with Edge > computing group and others interested in edge deployments to identify > what's missing in kolla and where we can help. > Things we've identified are: > > - Kolla seems good at how the service split can be done, tweaking > inventory file and config values can deploy independent environments easily. > - Missing keystone federation > - Glance cache support is not a hard requirement but improves efficiency > (already merged) > - Multi cells v2 > - Multi storage per edge/far-edge > - A documentation or architecture reference would be nice to have. > > Last one was ``kolla for NFV``, few people came over to discuss about > NUMA, GPU, SRIOV. 
> Nothing noticiable from this session, mainly was support DPDK for > CentOS/RHEL,OracleLinux and few service addition covered by previous > discussions. > > [0] https://etherpad.openstack.org/p/kolla-stein-summit > [1] > https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 > [2] > https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release > [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback > [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge > [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahm.jawad118 at gmail.com Thu Nov 22 10:03:54 2018 From: ahm.jawad118 at gmail.com (Jawad Ahmed) Date: Thu, 22 Nov 2018 11:03:54 +0100 Subject: [Openstack-operators] openstack-annsible networking layout before running playbooks Message-ID: Hi all, I am deploying openstack-ansible in test environment where I need to use br-mgmt bridge for both storage and management traffic (same bridge for both) so that container interfaces eth1 and eth2 will connect to br-mgmt for mgmt and storage traffic at same time.Does it make sense if I ll setup provider networks openstack_user_config.yml as below? tunnel_bridge: "br-vxlan" //separate bridge for vxlan though management_bridge: "br-mgmt" provider_networks: - network: container_bridge: "br-mgmt" container_type: "veth" container_interface: "eth1" ip_from_q: "container" type: "raw" group_binds: - all_containers - hosts is_container_address: true is_ssh_address: true - network: container_bridge: "br-mgmt" container_type: "veth" container_interface: "eth2" ip_from_q: "storage" type: "raw" group_binds: - glance_api - cinder_api - cinder_volume - nova_compute Help would be appreciated. -- Greetings, Jawad Ahmed -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Nov 22 15:10:04 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 22 Nov 2018 10:10:04 -0500 Subject: [Openstack-operators] openstack-annsible networking layout before running playbooks In-Reply-To: References: Message-ID: Hey there, You can just have one br-mgmt and skip the second one and everything will go over br-mgmt :) Thanks, Mohammed On Thu, Nov 22, 2018 at 5:05 AM Jawad Ahmed wrote: > Hi all, > I am deploying openstack-ansible in test environment where I need to use > br-mgmt bridge for both storage and management traffic (same bridge for > both) so that container interfaces eth1 and eth2 will connect to br-mgmt > for mgmt and storage traffic at same time.Does it make sense if I ll setup > provider networks openstack_user_config.yml as below? 
> > tunnel_bridge: "br-vxlan" //separate bridge for vxlan though > management_bridge: "br-mgmt" > > provider_networks: > - network: > container_bridge: "br-mgmt" > container_type: "veth" > container_interface: "eth1" > ip_from_q: "container" > type: "raw" > group_binds: > - all_containers > - hosts > is_container_address: true > is_ssh_address: true > > > - network: > container_bridge: "br-mgmt" > container_type: "veth" > container_interface: "eth2" > ip_from_q: "storage" > type: "raw" > group_binds: > - glance_api > - cinder_api > - cinder_volume > - nova_compute > > Help would be appreciated. > > -- > Greetings, > Jawad Ahmed > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Fri Nov 23 16:10:25 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 23 Nov 2018 11:10:25 -0500 Subject: [Openstack-operators] Openstack zun on centos??? In-Reply-To: References: Message-ID: Hi Edoardo, It looks you gets your answer from others. I just want to add a few more comments. We (the Zun team) would like to have CentOS included in our installation guide and I have created a ticket for that: https://blueprints.launchpad.net/zun/+spec/installation-guide-for-centos . It will be picked up by contributors if someone interests to work on it. I expect the steps would be very similar as Ubuntu expect a few necessary tweaks. Right now, there is no Debian or RPM packages for Zun so we instruct users to install from source, which might be a bit painful. I would like to see Zun included in distro packages and I will see if we can recruit contributors to work on that, or I will work on that myself. Best regards, Hongbin On Wed, Nov 14, 2018 at 1:51 PM Ignazio Cassano wrote: > Hi Edoardo, > does it mean openstack kolla installs zun using pip ? > I did not find any zun rpm package > Regards > Ignazio > > Il giorno Mer 14 Nov 2018 18:38 Eduardo Gonzalez ha > scritto: > >> Hi Cassano, you can use zun in centos deployed by kolla-ansible. >> >> https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html >> >> Regards >> >> El mié., 14 nov. 2018 17:11, Ignazio Cassano >> escribió: >> >>> Hi All, >>> I'd like to know if openstack zun will be released for centos. >>> Reading documentation at docs.openstack.org only ubuntu installation is >>> reported. >>> Many thanks >>> Ignazio >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ahm.jawad118 at gmail.com  Fri Nov 23 23:28:48 2018
From: ahm.jawad118 at gmail.com (Jawad Ahmed)
Date: Sat, 24 Nov 2018 00:28:48 +0100
Subject: [Openstack-operators] urlopen error [Errno 113] No route to host
Message-ID: 

Hi all,
Has anyone come across this error with the utility container? I am running
CentOS 7 with Queens. After running setup-infrastructure.yml I get the error
below, while everything else succeeds. Any workaround?

internal_lb_vip_address: 172.25.30.101

fatal: [rmq-db_utility_container-8e503460]: FAILED! => {"changed": false,
"content": "", "msg": "Status code was -1 and not [200]: Request failed:
", "redirected": false, "status": -1, "url": "
http://172.25.30.101:8181/os-releases/18.0.0/centos-7.5-x86_64/requirements_absolute_requirements.txt
"}

Thank you.

-- 
Greetings,
Jawad Ahmed
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com  Tue Nov 27 17:32:48 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 27 Nov 2018 18:32:48 +0100
Subject: [Openstack-operators] Nova hypervisor uuid
Message-ID: 

Hi All,
Does anyone know where the hypervisor uuid is retrieved?
Sometimes, after updating KVM nodes with yum update, it changes, and in the
nova database 2 uuids are assigned to the same node.
regards
Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com  Tue Nov 27 18:02:25 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 27 Nov 2018 12:02:25 -0600
Subject: [Openstack-operators] Nova hypervisor uuid
In-Reply-To: 
References: 
Message-ID: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>

On 11/27/2018 11:32 AM, Ignazio Cassano wrote:
> Hi All,
> Does anyone know where the hypervisor uuid is retrieved?
> Sometimes, after updating KVM nodes with yum update, it changes, and in the
> nova database 2 uuids are assigned to the same node.
> regards
> Ignazio
> 
> 
> 
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

To be clear, do you mean the compute_nodes.uuid column value in the
cell database? Which is also used for the GET /os-hypervisors response
'id' value if using microversion >= 2.53. If so, that is generated
randomly* when the compute_nodes table record is created:

https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L588

https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/objects/compute_node.py#L312

When you hit this problem, are you sure the hostname on the compute host
is not changing? Because when nova-compute starts up, it should look for
the existing compute node record by host name and node name, which for
the libvirt driver should be the same. That lookup code is here:

https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L815

So the only way nova-compute should create a new compute_nodes table
record for the same host is if the host/node name changes during the
upgrade. Is the deleted value in the database the same (0) for both of
those records?
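Something like this against the cell database would show it (a rough query,
where the node name is a placeholder):

  select host, hypervisor_hostname, uuid, deleted
  from compute_nodes
  where hypervisor_hostname = '<your node name>';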
* The exception to this is for the ironic driver which re-uses the
ironic node uuid as of this change:
https://review.openstack.org/#/c/571535/

-- 

Thanks,

Matt

From fungi at yuggoth.org  Tue Nov 27 18:25:01 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 27 Nov 2018 18:25:01 +0000
Subject: [Openstack-operators] IMPORTANT: We're combining the lists!
In-Reply-To: <20181119000414.u675z4z3s7esymat@yuggoth.org>
References: <20181109181447.qhutsauxl4fuinnh@yuggoth.org>
 <20181119000414.u675z4z3s7esymat@yuggoth.org>
Message-ID: <20181127182501.vjdxgrg2ncmwcnl3@yuggoth.org>

REMINDER: The openstack, openstack-dev, openstack-sigs and
openstack-operators mailing lists (to which this was sent) are being
replaced by a new openstack-discuss at lists.openstack.org mailing
list. The new list[0] has been open for posts from subscribers since
Monday November 19, and the old lists will be configured to no longer
accept posts starting on Monday December 3. In the interim, posts to
the old lists will also get copied to the new list so it's safe to
unsubscribe from them now and not miss any messages. See my previous
notice[1] for details.

As of the time of this announcement, we have 403 subscribers on
openstack-discuss with six days to go before the old lists are closed
down for good. I have updated the old list descriptions to indicate
the openstack-discuss list is preferred, and added a custom "welcome
message" with the same for anyone who subscribes to them over the
next week.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From ignaziocassano at gmail.com  Wed Nov 28 10:19:53 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Wed, 28 Nov 2018 11:19:53 +0100
Subject: [Openstack-operators] Fwd: Nova hypervisor uuid
In-Reply-To: 
References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>
Message-ID: 

Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me.
I am sure kvm nodes names are not changed.
Tables where uuid are duplicated are:
resource_providers in nova_api db
compute_nodes in nova db
Regards
Ignazio

Il 28/Nov/2018 11:09 AM, "Gianpiero Ardissono" ha scritto:
>
> ---------- Forwarded message ---------
> From: Matt Riedemann 
> Date: mar 27 nov 2018, 19:03
> Subject: Re: [Openstack-operators] Nova hypervisor uuid
> To: 
>
>
> On 11/27/2018 11:32 AM, Ignazio Cassano wrote:
> > Hi All,
> > Does anyone know where the hypervisor uuid is retrieved?
> > Sometimes, after updating KVM nodes with yum update, it changes, and in
> > the nova database 2 uuids are assigned to the same node.
> > regards
> > Ignazio
> >
> >
> >
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> To be clear, do you mean the compute_nodes.uuid column value in the
> cell database? Which is also used for the GET /os-hypervisors response
> 'id' value if using microversion >= 2.53.
If so, that is generated > randomly* when the compute_nodes table record is created: > > > https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L588 > > > https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/objects/compute_node.py#L312 > > When you hit this problem, are you sure the hostname on the compute host > is not changing? Because when nova-compute starts up, it should look for > the existing compute node record by host name and node name, which for > the libvirt driver should be the same. That lookup code is here: > > > https://github.com/openstack/nova/blob/8545ba2af7476e0884b5e7fb90965bef92d605bc/nova/compute/resource_tracker.py#L815 > > So the only way nova-compute should create a new compute_nodes table > record for the same host is if the host/node name changes during the > upgrade. Is the deleted value in the database the same (0) for both of > those records? > > * The exception to this is for the ironic driver which re-uses the > ironic node uuid as of this change: > https://review.openstack.org/#/c/571535/ > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Nov 28 16:54:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Nov 2018 10:54:00 -0600 Subject: [Openstack-operators] Fwd: Nova hypervisor uuid In-Reply-To: References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com> Message-ID: On 11/28/2018 4:19 AM, Ignazio Cassano wrote: > Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me. > I am sure kvm nodes names are note changed. > Tables where uuid are duplicated are: > dataresource_providers in nova_api db > compute_nodes in nova db > Regards > Ignazio It would be easier if you simply dumped the result of a select query on the compute_nodes table where the duplicate nodes exist (you said duplicate UUIDs but I think you mean duplicate host/node names with different UUIDs, correct?). There is a unique constraint on host/hypervisor_hostname (nodename)/deleted: schema.UniqueConstraint( 'host', 'hypervisor_hostname', 'deleted', name="uniq_compute_nodes0host0hypervisor_hostname0deleted"), So I'm wondering if the deleted field is not 0 on one of those because if one is marked as deleted, then the compute service will create a new compute_nodes table record on startup (and associated resource provider). -- Thanks, Matt From ignaziocassano at gmail.com Thu Nov 29 06:49:21 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 29 Nov 2018 07:49:21 +0100 Subject: [Openstack-operators] Fwd: Nova hypervisor uuid In-Reply-To: References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com> Message-ID: Hello Mattm Yes I mean sometimes I have same host/node names with different uuid in compute_nodes table in nova database I must delete nodes with uuid those not match with nova-hypervisor list command. 
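To spot duplicates I use a rough check like this in the nova cell database
(counting non-deleted rows per host/node name):

  select host, hypervisor_hostname, count(*)
  from compute_nodes
  where deleted = 0
  group by host, hypervisor_hostname
  having count(*) > 1;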
At this time I have the following: MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes; +---------------------+--------------------------------------+---------+ | hypervisor_hostname | uuid | deleted | +---------------------+--------------------------------------+---------+ | tst2-kvm02 | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d | 0 | | tst2-kvm01 | ce27803b-06cd-44a7-b927-1fa42c813b0f | 0 | +---------------------+--------------------------------------+---------+ 2 rows in set (0,00 sec) But sometimes old uuid are inserted in the table . I deleted again them. I restarted kvm nodes and now the table is ok. I also restarded each controller and the tables is ok. I do not know because 3 days ago I had same compute nodes names with different uuids. Thanks and Regards Ignazio Il giorno mer 28 nov 2018 alle ore 17:54 Matt Riedemann ha scritto: > On 11/28/2018 4:19 AM, Ignazio Cassano wrote: > > Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me. > > I am sure kvm nodes names are note changed. > > Tables where uuid are duplicated are: > > dataresource_providers in nova_api db > > compute_nodes in nova db > > Regards > > Ignazio > > It would be easier if you simply dumped the result of a select query on > the compute_nodes table where the duplicate nodes exist (you said > duplicate UUIDs but I think you mean duplicate host/node names with > different UUIDs, correct?). > > There is a unique constraint on host/hypervisor_hostname > (nodename)/deleted: > > schema.UniqueConstraint( > 'host', 'hypervisor_hostname', 'deleted', > name="uniq_compute_nodes0host0hypervisor_hostname0deleted"), > > So I'm wondering if the deleted field is not 0 on one of those because > if one is marked as deleted, then the compute service will create a new > compute_nodes table record on startup (and associated resource provider). > > -- > > Thanks, > > Matt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Nov 29 15:28:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Nov 2018 09:28:46 -0600 Subject: [Openstack-operators] Fwd: Nova hypervisor uuid In-Reply-To: References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com> Message-ID: <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com> On 11/29/2018 12:49 AM, Ignazio Cassano wrote: > Hello Mattm > Yes I mean sometimes I have same host/node names with different uuid in > compute_nodes table in nova database > I must delete nodes with uuid those not match with nova-hypervisor list > command. > At this time I have the following: > MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes; > +---------------------+--------------------------------------+---------+ > | hypervisor_hostname | uuid                                 | deleted | > +---------------------+--------------------------------------+---------+ > | tst2-kvm02          | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d |       0 | > | tst2-kvm01          | ce27803b-06cd-44a7-b927-1fa42c813b0f |       0 | > +---------------------+--------------------------------------+---------+ > 2 rows in set (0,00 sec) > > > But sometimes old uuid are inserted in the table . > I deleted again them. > I restarted kvm nodes and now the table is ok. > I also restarded each controller and the tables is ok. > I do not know because 3 days ago I had same compute nodes names with > different uuids. 
From mriedemos at gmail.com Thu Nov 29 15:28:46 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 29 Nov 2018 09:28:46 -0600
Subject: [Openstack-operators] Fwd: Nova hypervisor uuid
In-Reply-To:
References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>
Message-ID: <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com>

On 11/29/2018 12:49 AM, Ignazio Cassano wrote:
> Hello Matt,
> Yes, I mean I sometimes have the same host/node names with different
> UUIDs in the compute_nodes table in the nova database.
> I must delete the nodes whose UUIDs do not match the output of the
> nova hypervisor-list command.
> At this time I have the following:
> MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes;
> +---------------------+--------------------------------------+---------+
> | hypervisor_hostname | uuid                                 | deleted |
> +---------------------+--------------------------------------+---------+
> | tst2-kvm02          | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d |       0 |
> | tst2-kvm01          | ce27803b-06cd-44a7-b927-1fa42c813b0f |       0 |
> +---------------------+--------------------------------------+---------+
> 2 rows in set (0,00 sec)
>
> But sometimes the old UUIDs get inserted into the table again, and I
> deleted them again.
> I restarted the KVM nodes and now the table is OK.
> I also restarted each controller and the tables are OK.
> I do not know why 3 days ago I had the same compute node names with
> different UUIDs.
>
> Thanks and Regards
> Ignazio

OK I guess if it happens again, please get the
host/hypervisor_hostname/uuid/deleted values from the compute_nodes
table before you clean up any entries.

Also, when you're deleting the resources from the DB, are you doing it
in the DB directly or via the DELETE /os-services/{service_id} API?
Because the latter cleans up the other resources related to the
nova-compute service (the services table record, the compute_nodes table
record, the related resource_providers table record in placement, and
the host_mappings table record in the nova API DB). The resource
provider/host mappings cleanup when deleting a compute service is a more
recent bug fix, though, which depending on your release you might not have:

https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac80

--

Thanks,

Matt
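For completeness, the API route Matt mentions can be hit directly. In
this sketch the compute endpoint (controller:8774) is an assumption, and
microversion 2.53 or later is requested so that the service is addressed
by UUID:

# grab a token using credentials already loaded in the environment
$ TOKEN=$(openstack token issue -f value -c id)

# deleting the nova-compute service also removes the services and
# compute_nodes records, and (with the fix linked above) the resource
# provider and host mapping
$ curl -X DELETE \
    -H "X-Auth-Token: $TOKEN" \
    -H "X-OpenStack-Nova-API-Version: 2.53" \
    http://controller:8774/v2.1/os-services/<service_uuid>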
From ignaziocassano at gmail.com Thu Nov 29 16:27:36 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Thu, 29 Nov 2018 17:27:36 +0100
Subject: [Openstack-operators] Fwd: Nova hypervisor uuid
In-Reply-To: <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com>
References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>
 <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com>
Message-ID:

Hi Matt,
I did it in the DB directly.
I am using Queens now.
Is there any python client command to delete the old records, or must I
use the API?
Thanks & Regards
Ignazio

On Thu 29 Nov 2018 at 16:28, Matt Riedemann wrote:

> On 11/29/2018 12:49 AM, Ignazio Cassano wrote:
> > Hello Matt,
> > Yes, I mean I sometimes have the same host/node names with different
> > UUIDs in the compute_nodes table in the nova database.
> > I must delete the nodes whose UUIDs do not match the output of the
> > nova hypervisor-list command.
> > At this time I have the following:
> > MariaDB [nova]> select hypervisor_hostname,uuid,deleted from compute_nodes;
> > +---------------------+--------------------------------------+---------+
> > | hypervisor_hostname | uuid                                 | deleted |
> > +---------------------+--------------------------------------+---------+
> > | tst2-kvm02          | 802b21c2-11fb-4426-86b9-bf25c8a5ae1d |       0 |
> > | tst2-kvm01          | ce27803b-06cd-44a7-b927-1fa42c813b0f |       0 |
> > +---------------------+--------------------------------------+---------+
> > 2 rows in set (0,00 sec)
> >
> > But sometimes the old UUIDs get inserted into the table again, and I
> > deleted them again.
> > I restarted the KVM nodes and now the table is OK.
> > I also restarted each controller and the tables are OK.
> > I do not know why 3 days ago I had the same compute node names with
> > different UUIDs.
>
> OK I guess if it happens again, please get the
> host/hypervisor_hostname/uuid/deleted values from the compute_nodes
> table before you clean up any entries.
>
> Also, when you're deleting the resources from the DB, are you doing it
> in the DB directly or via the DELETE /os-services/{service_id} API?
> Because the latter cleans up the other resources related to the
> nova-compute service (the services table record, the compute_nodes table
> record, the related resource_providers table record in placement, and
> the host_mappings table record in the nova API DB). The resource
> provider/host mappings cleanup when deleting a compute service is a more
> recent bug fix, though, which depending on your release you might not have:
>
> https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac80
>
> --
>
> Thanks,
>
> Matt

From mriedemos at gmail.com Thu Nov 29 16:28:53 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 29 Nov 2018 10:28:53 -0600
Subject: [Openstack-operators] Fwd: Nova hypervisor uuid
In-Reply-To:
References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>
 <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com>
Message-ID: <097b1056-a883-48ec-8bc6-15267a987342@gmail.com>

On 11/29/2018 10:27 AM, Ignazio Cassano wrote:
> I did it in the DB directly.
> I am using Queens now.
> Is there any python client command to delete the old records, or must I
> use the API?

You can use the CLI:

https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-service-delete

https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/compute-service.html#compute-service-delete

--

Thanks,

Matt
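In practice, the commands behind those two docs look like the following;
the service IDs are placeholders for whatever the service list reports:

# with python-novaclient
$ nova service-list --binary nova-compute
$ nova service-delete <service-id>

# or with python-openstackclient
$ openstack compute service list --service nova-compute
$ openstack compute service delete <service-id>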
From ignaziocassano at gmail.com Thu Nov 29 17:22:21 2018
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Thu, 29 Nov 2018 18:22:21 +0100
Subject: [Openstack-operators] Fwd: Nova hypervisor uuid
In-Reply-To: <097b1056-a883-48ec-8bc6-15267a987342@gmail.com>
References: <63283d3b-0661-7620-aaea-8ffbe9e483d8@gmail.com>
 <26bd8ef8-293e-20c6-0bf0-83ab076f843e@gmail.com>
 <097b1056-a883-48ec-8bc6-15267a987342@gmail.com>
Message-ID:

Many thanks, Matt.
If the issue happens again, I hope the openstack commands will show the
duplicate entries and let me clean them.
When it happened, the nova hypervisor-list command did not show
duplicated entries; I saw them only in the database.
Regards
Ignazio

On Thu 29 Nov 2018 at 17:28, Matt Riedemann wrote:

> On 11/29/2018 10:27 AM, Ignazio Cassano wrote:
> > I did it in the DB directly.
> > I am using Queens now.
> > Is there any python client command to delete the old records, or must I
> > use the API?
>
> You can use the CLI:
>
> https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-service-delete
>
> https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/compute-service.html#compute-service-delete
>
> --
>
> Thanks,
>
> Matt

From mbooth at redhat.com Fri Nov 30 12:06:56 2018
From: mbooth at redhat.com (Matthew Booth)
Date: Fri, 30 Nov 2018 12:06:56 +0000
Subject: [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful?
Message-ID:

I have a request to do $SUBJECT in relation to a V2V workflow. The use
case here is conversion of a VM/physical machine which was previously
powered off. We want to move its data, but we don't want to be powering
on stuff which wasn't previously on.

This would involve an api change, and a hopefully very small change in
drivers to support it. Technically I don't see it as an issue.

However, is it a change we'd be willing to accept? Is there any good
reason not to do this? Are there any less esoteric workflows which
might use this feature?

Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

From mnaser at vexxhost.com Fri Nov 30 14:40:00 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 30 Nov 2018 09:40:00 -0500
Subject: [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful?
In-Reply-To:
References:
Message-ID:

On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote:

> I have a request to do $SUBJECT in relation to a V2V workflow. The use
> case here is conversion of a VM/physical machine which was previously
> powered off. We want to move its data, but we don't want to be powering
> on stuff which wasn't previously on.
>
> This would involve an api change, and a hopefully very small change in
> drivers to support it. Technically I don't see it as an issue.
>
> However, is it a change we'd be willing to accept? Is there any good
> reason not to do this? Are there any less esoteric workflows which
> might use this feature?

If you upload an image of said VM which you don't boot, you'd really be
accomplishing the same thing, no?

Unless you want to be in a state where you want the VM to be there but
sitting in SHUTOFF state.

> Matt
> --
> Matthew Booth
> Red Hat OpenStack Engineer, Compute DFG
>
> Phone: +442070094448 (UK)
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com
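Mohammed's image-based alternative might look like this in practice.
This is a sketch; the image file, flavor, and network names are made up
for illustration:

# store the converted disk as an image instead of a stopped VM
$ openstack image create --disk-format qcow2 --container-format bare \
    --file converted-disk.qcow2 migrated-vm-image

# boot from it later, only when the VM is actually wanted
$ openstack server create --image migrated-vm-image --flavor m1.medium \
    --network private migrated-vm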
From openstack at nemebean.com Fri Nov 30 16:34:47 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 30 Nov 2018 10:34:47 -0600
Subject: [Openstack-operators] [openstack-dev] [nova] Would an api option to create an instance without powering on be useful?
In-Reply-To:
References:
Message-ID: <81627e78-4f92-2909-f209-284b4847ae09@nemebean.com>

On 11/30/18 6:06 AM, Matthew Booth wrote:
> I have a request to do $SUBJECT in relation to a V2V workflow. The use
> case here is conversion of a VM/physical machine which was previously
> powered off. We want to move its data, but we don't want to be powering
> on stuff which wasn't previously on.
>
> This would involve an api change, and a hopefully very small change in
> drivers to support it. Technically I don't see it as an issue.
>
> However, is it a change we'd be willing to accept? Is there any good
> reason not to do this? Are there any less esoteric workflows which
> might use this feature?

I don't know if it qualifies as less esoteric, but I would use this for
OVB[1]. When we create the "baremetal" VMs there's no need to actually
power them on since the first thing we do with them is shut them down
again. Their initial footprint is pretty small so it's not a huge deal,
but it is another potential use case for this feature.

1: https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html

From smooney at redhat.com Fri Nov 30 21:28:39 2018
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 30 Nov 2018 21:28:39 +0000
Subject: [Openstack-operators] [openstack-dev] [nova] Would an api option to create an instance without powering on be useful?
In-Reply-To:
References:
Message-ID: <58ed5a65770e77128b2b14a819ec37a4dc8401d8.camel@redhat.com>

On Fri, 2018-11-30 at 09:40 -0500, Mohammed Naser wrote:
>
> On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote:
> > I have a request to do $SUBJECT in relation to a V2V workflow. The use
> > case here is conversion of a VM/physical machine which was previously
> > powered off. We want to move its data, but we don't want to be powering
> > on stuff which wasn't previously on.
> >
> > This would involve an api change, and a hopefully very small change in
> > drivers to support it. Technically I don't see it as an issue.
> >
> > However, is it a change we'd be willing to accept? Is there any good
> > reason not to do this? Are there any less esoteric workflows which
> > might use this feature?
>
> If you upload an image of said VM which you don't boot, you'd really be
> accomplishing the same thing, no?
>
> Unless you want to be in a state where you want the VM to be there but
> sitting in SHUTOFF state.

I think the intent was to have a VM ready to go with IPs/ports, volumes
etc. all created so you can quickly start it when needed.

If that is the case, another alternative which might be more public
cloud friendly from a wallet perspective would be the ability to create
a shelved instance. That way all the ports etc. would be logically
created, but it would not be consuming any compute resources.

> > Matt
> > --
> > Matthew Booth
> > Red Hat OpenStack Engineer, Compute DFG
> >
> > Phone: +442070094448 (UK)
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
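There is currently no way to create an instance directly in the shelved
state, so the closest approximation of Sean's suggestion today is to
boot and immediately shelve. This is a sketch with made-up names; note
the instance still powers on briefly, which is exactly what the
proposed option would avoid:

# boot, wait for ACTIVE, then shelve; ports, addresses and volumes stay
# logically allocated while no compute resources are consumed
$ openstack server create --wait --image migrated-vm-image \
    --flavor m1.medium --network private migrated-vm
$ openstack server shelve migrated-vm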