From thierry at openstack.org Tue Sep 4 08:12:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 4 Sep 2018 10:12:47 +0200 Subject: [Openstack-operators] [ptg] ptgbot HOWTO Message-ID: <12e26b51-a418-0df6-c1be-cc577252aa23@openstack.org>

Hi everyone,

In a few days some of us will meet in Denver for the 4th OpenStack PTG. The event is made up of several 'tracks' (organized around a specific team/group or a specific theme). Topics of discussion are loosely scheduled in those tracks, based on the needs of the attendees. This allows us to maximize attendee productivity, but the downside is that it can make the event a bit confusing to navigate. To mitigate that issue, we are using an IRC bot to expose what's currently happening at the event at the following page:

http://ptg.openstack.org/ptg.html

It is therefore useful to have a volunteer in each room who makes use of the PTG bot to communicate what's happening. This is done by joining the #openstack-ptg IRC channel on Freenode and voicing commands to the bot.

How to keep attendees informed of what's being discussed in your room
---------------------------------------------------------------------

To indicate what's currently being discussed, you will use the track name hashtag (found in the "Scheduled tracks" section on the above page), with the 'now' command:

#TRACK now <topic>

Example: #swift now brainstorming improvements to the ring

You can also mention other track names to make sure to get people's attention when the topic cuts across tracks:

#ops-meetup now discussing #cinder pain points

There can only be one 'now' entry for a given track at a time.

To indicate what will be discussed next, you can enter one or more 'next' commands:

#TRACK next <topic>

Example: #api-sig next at 2pm we'll be discussing pagination woes

Note that in order to keep content current, entering a new 'now' command for a track will erase any 'next' entry for that track.

Finally, if you want to clear all 'now' and 'next' entries for your track, you can issue the 'clean' command:

#TRACK clean

Example: #ironic clean

How to book reservable rooms
----------------------------

Like at every PTG, in Denver we will have additional reservable space for extra unscheduled discussions. In addition, some of the smaller teams do not have any pre-scheduled space, and will be relying solely on this feature to book the time that makes the most sense for them. Those teams are Chef OpenStack (#chef), LOCI (#loci), OpenStackClient (#osc), Puppet OpenStack (#puppet), Release Management (#relmgt), Requirements (#requirements), and Designate (#designate).

The PTG bot page shows which track is allocated to which room, as well as available reservable space, with a slot code (room name - time slot) that you can use to issue a 'book' command to the PTG bot:

#TRACK book <slot code>

Example: #relmgt book Ballroom C-TueA2

Any track can book additional space and time using this system. All slots are 1h45 long.

If your topic of discussion does not fall into an existing track, it is easy to add a track on the fly. Just ask the PTG bot admins (ttx, diablo_rojo, infra...) to create a track for you (which they can do by getting op rights and issuing a ~add command).
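Putting it all together, a day in a single room might look something like the following sequence of messages to the bot (the topics here are purely illustrative; the slot code is the one from the example above):

#relmgt book Ballroom C-TueA2
#relmgt now reviewing the stein release schedule
#relmgt next at 2pm: library freeze exceptions
#relmgt clean
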
For more information on the bot commands, please see: https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst -- Thierry Carrez (ttx) From christophe.sauthier at objectif-libre.com Tue Sep 4 09:50:13 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Tue, 04 Sep 2018 11:50:13 +0200 Subject: [Openstack-operators] =?utf-8?q?=5Bcloudkitty=5D_Anyone_running_C?= =?utf-8?q?loudkitty_with_SSL=3F?= In-Reply-To: References: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> Message-ID: <3cae92b4e8c94577e5d90d8f83f8b46b@objectif-libre.com> Hello Thanks for those elements. It is really surprising because as you can imagine this is something we set up many times... I'll take care to set up the same environment than you and I'll let you know if I am facing the same issues... I am trying to do that quickly... Regards Christophe ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com https://www.objectif-libre.com | @objectiflibre Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause Le 2018-08-31 23:40, jonmills at gmail.com a écrit : > On Fri, 2018-08-31 at 23:20 +0200, Christophe Sauthier wrote: >> Hello Jonathan >> >> Can you describe a little more your setup (release/method of >> installation/linux distribution) /issues that you are facing ? > > > It is OpenStack Queens, on CentOS 7.5, using the packages from the > centos-cloud repo (which I suppose is the same is RDO). > > # uname -msr > Linux 3.10.0-862.3.2.el7.x86_64 x86_64 > > # rpm -qa |grep cloudkitty |sort > openstack-cloudkitty-api-7.0.0-1.el7.noarch > openstack-cloudkitty-common-7.0.0-1.el7.noarch > openstack-cloudkitty-processor-7.0.0-1.el7.noarch > openstack-cloudkitty-ui-7.0.0-1.el7.noarch > python2-cloudkittyclient-1.2.0-1.el7.noarch > > It is 'deployed' with custom puppet code only. I follow exactly the > installation guides posted here: > https://docs.openstack.org/cloudkitty/queens/index.html > > I'd prefer not to post full config files, but my [keystone_authtoken] > section of cloudkitty.conf is identical (aside from service > credentials) to the ones found in my glance, nova, cinder, neutron, > gnocchi, ceilometer, etc, all of those services are working perfectly. > > > My processor.log file is full of > > 2018-08-31 16:38:04.086 30471 WARNING cloudkitty.orchestrator [-] > Error > while collecting service network.floating: SSL exception connecting to > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > verify failed')],)",): SSLError: SSL exception connecting to > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > verify failed')],)",) > 2018-08-31 16:38:04.094 30471 WARNING cloudkitty.orchestrator [-] > Error > while collecting service image: SSL exception connecting to > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > verify failed')],)",): SSLError: SSL exception connecting to > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > verify failed')],)",) > > and so on > > > But, I mean, there's other little things too. 
I can see from running > > 'openstack --debug rating info-config-get' > > that it never even loads the cacert from my env, so it fails talking > to > keystone trying to get a token; the request never even gets to the > cloudkitty api endpoint. > > > >> >> Because we have deployed it/used it many times with SSL without >> issue... >> >> It could be great also that you step up on #cloudkitty to discuss it. >> >> Christophe >> >> ---- >> Christophe Sauthier >> CEO >> >> Objectif Libre : Au service de votre Cloud >> >> +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com >> >> https://www.objectif-libre.com | @objectiflibre >> Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause >> >> Le 2018-08-31 23:15, jonmills at gmail.com a écrit : >>> Anyone out there have Cloudkitty successfully working with SSL? By >>> which I mean that Cloudkitty is able to talk to keystone over https >>> without cert errors, and also talk to SSL'd rabbitmq? Oh, and the >>> client tools also? >>> >>> Asking for a friend... >>> >>> >>> >>> Jonathan >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jonmills at gmail.com Tue Sep 4 12:37:34 2018 From: jonmills at gmail.com (Jonathan Mills) Date: Tue, 4 Sep 2018 08:37:34 -0400 Subject: [Openstack-operators] [cloudkitty] Anyone running Cloudkitty with SSL? In-Reply-To: <3cae92b4e8c94577e5d90d8f83f8b46b@objectif-libre.com> References: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> <3cae92b4e8c94577e5d90d8f83f8b46b@objectif-libre.com> Message-ID: Christophe, Thank you, we really appreciate you looking into this, and I will try to help you as much as I can, because we really need to have this software working, soon. So here's something that, to me, is very telling # printenv |grep OS_CACERT OS_CACERT=/etc/openldap/cacerts/gpcprod_root_ca.pem ^^^ here you can see that my self-signed CA cert is loaded into my environment, having sourced my openrc file Now I'm going to invoke the cloudkitty client with debug, and grep for 'curl' to see what it's actually doing: # openstack --debug rating info-config-get 2>&1 |grep -b1 curl 9774-Get auth_ref 9787:REQ: curl -g -i --cacert "/etc/openldap/cacerts/gpcprod_root_ca.pem" -X GET https://keystone.gpcprod:5000/v3 -H "Accept: application/json" -H "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" 10014-Starting new HTTPS connection (1): keystone.gpcprod -- 16319-run(Namespace()) 16336:REQ: curl -g -i -X GET https://keystone.gpcprod:5000/v3 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" 16461-Starting new HTTPS connection (1): keystone.gpcprod ^^^ you can see that the first time, it correctly forms the curl, and that works fine. But the second time (and the User-Agent has changed), it never even passes the --cacert option to curl at all. 
The results then are predictable: Starting new HTTPS connection (1): keystone.gpcprod SSL exception connecting to https://keystone.gpcprod:5000/v3: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/site-packages/cliff/command.py", line 184, in run return_code = self.take_action(parsed_args) or 0 File "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/shell_cli.py", line 78, in take_action shell.do_info_config_get(ckclient, parsed_args) File "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/shell.py", line 93, in do_info_config_get utils.print_dict(cc.config.get_config(), dict_property="Section") File "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/core.py", line 88, in get_config out = self.api.get(self.base_url).json() File "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", line 359, in get return self.client_request("GET", url, **kwargs) File "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", line 349, in client_request self, method, url, **kwargs) File "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", line 248, in client_request self.authenticate() File "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", line 319, in authenticate self.auth_plugin.authenticate(self) File "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/auth.py", line 201, in authenticate self._do_authenticate(http_client) File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line 191, in _do_authenticate ks_session = _get_keystone_session(**ks_kwargs) File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line 87, in _get_keystone_session v2_auth_url, v3_auth_url = _discover_auth_versions(ks_session, auth_url) File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line 38, in _discover_auth_versions ks_discover = discover.Discover(session=session, auth_url=auth_url) File "/usr/lib/python2.7/site-packages/keystoneclient/discover.py", line 178, in __init__ authenticated=authenticated) File "/usr/lib/python2.7/site-packages/keystoneclient/_discover.py", line 143, in __init__ authenticated=authenticated) File "/usr/lib/python2.7/site-packages/keystoneclient/_discover.py", line 38, in get_version_data resp = session.get(url, headers=headers, authenticated=authenticated) File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 535, in get return self.request(url, 'GET', **kwargs) File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 428, in request resp = send(**kwargs) File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 466, in _send_request raise exceptions.SSLError(msg) SSLError: SSL exception connecting to https://keystone.gpcprod:5000/v3: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",) clean_up CliInfoGetConfig: SSL exception connecting to https://keystone.gpcprod:5000/v3: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",) Jonathan On Tue, Sep 4, 2018 at 5:50 AM Christophe Sauthier < christophe.sauthier at objectif-libre.com> wrote: > Hello > > Thanks for 
those elements. > > It is really surprising because as you can imagine this is something we > set up many times... > I'll take care to set up the same environment than you and I'll let you > know if I am facing the same issues... I am trying to do that quickly... > > Regards > > Christophe > > ---- > Christophe Sauthier > CEO > > Objectif Libre : Au service de votre Cloud > > +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com > > https://www.objectif-libre.com | @objectiflibre > Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause > > Le 2018-08-31 23:40, jonmills at gmail.com a écrit : > > On Fri, 2018-08-31 at 23:20 +0200, Christophe Sauthier wrote: > >> Hello Jonathan > >> > >> Can you describe a little more your setup (release/method of > >> installation/linux distribution) /issues that you are facing ? > > > > > > It is OpenStack Queens, on CentOS 7.5, using the packages from the > > centos-cloud repo (which I suppose is the same is RDO). > > > > # uname -msr > > Linux 3.10.0-862.3.2.el7.x86_64 x86_64 > > > > # rpm -qa |grep cloudkitty |sort > > openstack-cloudkitty-api-7.0.0-1.el7.noarch > > openstack-cloudkitty-common-7.0.0-1.el7.noarch > > openstack-cloudkitty-processor-7.0.0-1.el7.noarch > > openstack-cloudkitty-ui-7.0.0-1.el7.noarch > > python2-cloudkittyclient-1.2.0-1.el7.noarch > > > > It is 'deployed' with custom puppet code only. I follow exactly the > > installation guides posted here: > > https://docs.openstack.org/cloudkitty/queens/index.html > > > > I'd prefer not to post full config files, but my [keystone_authtoken] > > section of cloudkitty.conf is identical (aside from service > > credentials) to the ones found in my glance, nova, cinder, neutron, > > gnocchi, ceilometer, etc, all of those services are working perfectly. > > > > > > My processor.log file is full of > > > > 2018-08-31 16:38:04.086 30471 WARNING cloudkitty.orchestrator [-] > > Error > > while collecting service network.floating: SSL exception connecting to > > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > > verify failed')],)",): SSLError: SSL exception connecting to > > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > > verify failed')],)",) > > 2018-08-31 16:38:04.094 30471 WARNING cloudkitty.orchestrator [-] > > Error > > while collecting service image: SSL exception connecting to > > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > > verify failed')],)",): SSLError: SSL exception connecting to > > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: > > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate > > verify failed')],)",) > > > > and so on > > > > > > But, I mean, there's other little things too. I can see from running > > > > 'openstack --debug rating info-config-get' > > > > that it never even loads the cacert from my env, so it fails talking > > to > > keystone trying to get a token; the request never even gets to the > > cloudkitty api endpoint. > > > > > > > >> > >> Because we have deployed it/used it many times with SSL without > >> issue... > >> > >> It could be great also that you step up on #cloudkitty to discuss it. 
> >> > >> Christophe > >> > >> ---- > >> Christophe Sauthier > >> CEO > >> > >> Objectif Libre : Au service de votre Cloud > >> > >> +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com > >> > >> https://www.objectif-libre.com | @objectiflibre > >> Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause > >> > >> Le 2018-08-31 23:15, jonmills at gmail.com a écrit : > >>> Anyone out there have Cloudkitty successfully working with SSL? By > >>> which I mean that Cloudkitty is able to talk to keystone over https > >>> without cert errors, and also talk to SSL'd rabbitmq? Oh, and the > >>> client tools also? > >>> > >>> Asking for a friend... > >>> > >>> > >>> > >>> Jonathan > >>> > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Tue Sep 4 14:36:01 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 4 Sep 2018 16:36:01 +0200 Subject: [Openstack-operators] [publiccloud-wg] Meeting tomorrow for Public Cloud WG Message-ID: <97dd2292-cea9-29e0-4d0e-b33ac8a5bc76@citynetwork.eu> Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all tomorrow at 0700 UTC - IRC channel #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From rico.lin.guanyu at gmail.com Tue Sep 4 16:14:42 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 5 Sep 2018 00:14:42 +0800 Subject: [Openstack-operators] [openstack-dev][heat] Heat PTG Message-ID: Dear all As PTG is near. It's time to settle down the PTG format for Heat. Here is the *PTG etherpad*: https://etherpad.openstack.org/p/2018-Denver-PTG-Heat This time we will run with *physical + online for all sessions*. The online link for sessions will post on etherpad before the session begins. *We will only use Wednesday and Thursday, and our discussion will try to be Asia friendly*, which means any sessions require the entire team effort needs to happen in the morning. Also* feel free to add topic suggestion* if you would like to raise any discussion. Otherwise, I see you at PTG(physical/online). I'm *welcome any User/Ops feedbacks* as well, so feel free to leave any message for us. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeology.lab at gmail.com Wed Sep 5 01:23:48 2018 From: codeology.lab at gmail.com (Cody) Date: Tue, 4 Sep 2018 21:23:48 -0400 Subject: [Openstack-operators] [tripleo]Render deployment plans with customized settings Message-ID: Hi everyone, How to render a deployment plan with customized network and role files? I was unable to pass those files with '-n' and '-r' options with the command 'openstack overcloud plan create'. Thank you for the help. 
Regards, Cody From mriedemos at gmail.com Wed Sep 5 14:56:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Sep 2018 09:56:59 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On 9/5/2018 8:47 AM, Mohammed Naser wrote: > Could placement not do what happened for a while when the nova_api > database was created? Can you be more specific? I'm having a brain fart here and not remembering what you are referring to with respect to the nova_api DB. > > I say this because I know that moving the database is a huge task for > us, considering how big it can be in certain cases for us, and it > means control plane outage too I'm pretty sure you were in the room in YVR when we talked about how operators were going to do the database migration and were mostly OK with what was discussed, which was a lot will just copy and take the downtime (I think CERN said around 10 minutes for them, but they aren't a public cloud either), but others might do something more sophisticated and nova shouldn't try to pick the best fit for all. I'm definitely interested in what you do plan to do for the database migration to minimize downtime. +openstack-operators ML since this is an operators discussion now. -- Thanks, Matt From mnaser at vexxhost.com Wed Sep 5 15:03:03 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 5 Sep 2018 11:03:03 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann wrote: > > On 9/5/2018 8:47 AM, Mohammed Naser wrote: > > Could placement not do what happened for a while when the nova_api > > database was created? > > Can you be more specific? I'm having a brain fart here and not > remembering what you are referring to with respect to the nova_api DB. I think there was a period in time where the nova_api database was created where entires would try to get pulled out from the original nova database and then checking nova_api if it doesn't exist afterwards (or vice versa). One of the cases that this was done to deal with was for things like instance types or flavours. I don't know the exact details but I know that older instance types exist in the nova db and the newer ones are sitting in nova_api. Something along those lines? > > > > I say this because I know that moving the database is a huge task for > > us, considering how big it can be in certain cases for us, and it > > means control plane outage too > > I'm pretty sure you were in the room in YVR when we talked about how > operators were going to do the database migration and were mostly OK > with what was discussed, which was a lot will just copy and take the > downtime (I think CERN said around 10 minutes for them, but they aren't > a public cloud either), but others might do something more sophisticated > and nova shouldn't try to pick the best fit for all. 
If we're provided the list of tables used by placement, we could considerably make the downtime smaller because we don't have to pull in the other huge tables like instances/build requests/etc What happens if things like server deletes happen while the placement service is down? > I'm definitely interested in what you do plan to do for the database > migration to minimize downtime. At this point, I'm thinking turn off placement, setup the new one, do the migration of the placement-specific tables (this can be a straightforward documented task OR it would be awesome if it was a placement command (something along the lines of `placement-manage db import_from_nova`) which would import all the right things The idea of having a command would be *extremely* useful for deployment tools in automating the process and it also allows the placement team to selectively decide what they want to onboard? Just throwing ideas here. > +openstack-operators ML since this is an operators discussion now. > > -- > > Thanks, > > Matt -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mriedemos at gmail.com Wed Sep 5 15:19:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Sep 2018 10:19:23 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On 9/5/2018 10:03 AM, Mohammed Naser wrote: > On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann wrote: >> On 9/5/2018 8:47 AM, Mohammed Naser wrote: >>> Could placement not do what happened for a while when the nova_api >>> database was created? >> Can you be more specific? I'm having a brain fart here and not >> remembering what you are referring to with respect to the nova_api DB. > I think there was a period in time where the nova_api database was created > where entires would try to get pulled out from the original nova database and > then checking nova_api if it doesn't exist afterwards (or vice versa). One > of the cases that this was done to deal with was for things like instance types > or flavours. > > I don't know the exact details but I know that older instance types exist in > the nova db and the newer ones are sitting in nova_api. Something along > those lines? Yeah that more about supporting online data migrations *within* nova where new records were created in the API DB and old records would be looked up in both the API DB and then if not found there, in the cell (traditional nova DB). But you'd also be running the "nova-manage db online_data_migrations" CLI to force the migration of the records from the cell DB to the API DB. With Placement split out of nova, we can't really do that. You could point placement at the nova_api DB so it can pull existing records, but it would continue to create new records in the nova_api DB rather than the placement DB and at some point you have to make that data migration. Maybe you were thinking something like have temporary fallback code in placement such that if a record isn't found in the placement database, it queries a configured nova_api database? That'd be a ton of work at this point, and if it was something we were going to do, we should have agreed on that in YVR several months ago, definitely pre-extraction. 
> >>> I say this because I know that moving the database is a huge task for >>> us, considering how big it can be in certain cases for us, and it >>> means control plane outage too >> I'm pretty sure you were in the room in YVR when we talked about how >> operators were going to do the database migration and were mostly OK >> with what was discussed, which was a lot will just copy and take the >> downtime (I think CERN said around 10 minutes for them, but they aren't >> a public cloud either), but others might do something more sophisticated >> and nova shouldn't try to pick the best fit for all. > If we're provided the list of tables used by placement, we could considerably > make the downtime smaller because we don't have to pull in the other huge > tables like instances/build requests/etc There are no instances records in the API DB, maybe you mean instance_mappings? But yes I get the point. > > What happens if things like server deletes happen while the placement service > is down? The DELETE /allocations/{consumer_id} requests from nova to placement will fail with some keystoneauth1 exception, but because of our old friend @safe_connect we likely won't fail the server delete because we squash the exception from KSA: https://github.com/openstack/nova/blob/0f102089dd0b27c7d35f0cbba87332414032c0a4/nova/scheduler/client/report.py#L2069 However, you'd still have allocations in placement against resource providers (compute nodes) for instances that no longer exist, which means you're available capacity for scheduling new requests is diminished until those bogus allocations are purged from placement, which will take some scripting. In other words, not good things. > >> I'm definitely interested in what you do plan to do for the database >> migration to minimize downtime. > At this point, I'm thinking turn off placement, setup the new one, do > the migration > of the placement-specific tables (this can be a straightforward documented task > OR it would be awesome if it was a placement command (something along > the lines of `placement-manage db import_from_nova`) which would import all > the right things You wouldn't also stop nova-api while doing this? Otherwise you're going to get into the data/resource tracking mess described above which will require some post-migration cleanup scripting. > > The idea of having a command would be*extremely* useful for deployment tools > in automating the process and it also allows the placement team to selectively > decide what they want to onboard? > > Just throwing ideas here. > -- Thanks, Matt From dms at danplanet.com Wed Sep 5 16:41:31 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 05 Sep 2018 09:41:31 -0700 Subject: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: (Mohammed Naser's message of "Wed, 5 Sep 2018 11:03:03 -0400") References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: > I think there was a period in time where the nova_api database was created > where entires would try to get pulled out from the original nova database and > then checking nova_api if it doesn't exist afterwards (or vice versa). One > of the cases that this was done to deal with was for things like instance types > or flavours. > > I don't know the exact details but I know that older instance types exist in > the nova db and the newer ones are sitting in nova_api. Something along > those lines? 
Yep, we've moved entire databases before in nova with minimal disruption to the users. Not just flavors, but several pieces of data came out of the "main" database and into the api database transparently. It's doable, but with placement being split to a separate project/repo/whatever, there's not really any option for being graceful about it in this case. > At this point, I'm thinking turn off placement, setup the new one, do > the migration > of the placement-specific tables (this can be a straightforward documented task > OR it would be awesome if it was a placement command (something along > the lines of `placement-manage db import_from_nova`) which would import all > the right things > > The idea of having a command would be *extremely* useful for deployment tools > in automating the process and it also allows the placement team to selectively > decide what they want to onboard? Well, it's pretty cut-and-dried as all the tables in nova-api are either for nova or placement, so there's not much confusion about what belongs. I'm not sure that doing this import in python is really the most efficient way. I agree a placement-manage command would be ideal from an "easy button" point of view, but I think a couple lines of bash that call mysqldump are likely to vastly outperform us doing it natively in python. We could script exec()s of those commands from python, but.. I think I'd rather just see that as a shell script that people can easily alter/test on their own. Just curious, but in your case would the service catalog entry change at all? If you stand up the new placement in the exact same spot, it shouldn't, but I imagine some people will have the catalog entry change slightly (even if just because of a VIP or port change). Am I remembering correctly that the catalog can get cached in various places such that much of nova would need a restart to notice? --Dan From jimmy at openstack.org Thu Sep 6 00:52:06 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 05 Sep 2018 19:52:06 -0500 Subject: [Openstack-operators] 6 days left for the Forum Brainstorming Period... Message-ID: <5B907A36.3060901@openstack.org> Hello All! The Forum Brainstorming session ends September 11 and the topic submission phase begins September 12. Thank you to all of the projects that have created a wiki and begun the Brainstorming Phase. I'd like to encourage projects that have not yet created an etherpad to do so at https://wiki.openstack.org/wiki/Forum/Berlin2018 This is an opportunity to get feedback, vet ideas, and garner support from the community on your ideas. Don't rely only on a PTL to make the agenda... step on up and place the items you consider important front and center :) If you have questions or concerns about the process, please don't hesitate to reach out. Cheers, Jimmy From gmann at ghanshyammann.com Thu Sep 6 08:35:04 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Sep 2018 17:35:04 +0900 Subject: [Openstack-operators] [openstack-dev] [openstack-operator] [qa] [forum] [berlin] QA Brainstorming Topic ideas for Berlin 2018 Message-ID: <165ae054131.ba353a7c58848.5452108263583664063@ghanshyammann.com> Hi All, I have created the below etherpad to collect the forum ideas related to QA for Berlin Summit. Please write up your ideas with your irc name on etherpad. 
https://etherpad.openstack.org/p/berlin-stein-forum-qa-brainstorming -gmann From zioproto at gmail.com Thu Sep 6 11:42:28 2018 From: zioproto at gmail.com (Saverio Proto) Date: Thu, 6 Sep 2018 13:42:28 +0200 Subject: [Openstack-operators] leaving Openstack mailing lists Message-ID: Hello, I will be leaving this mailing list in a few days. I am going to a new job and I will not be involved with Openstack at least in the short term future. Still, it was great working with the Openstack community in the past few years. If you need to reach me about any bug/patch/review that I submitted in the past, just write directly to my email. I will try to give answers. Cheers Saverio From blair.bethwaite at gmail.com Thu Sep 6 11:59:10 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 6 Sep 2018 23:59:10 +1200 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: References: Message-ID: Good luck with whatever you are doing next Saverio, you've been a great asset to the community and will be missed! On Thu, 6 Sep 2018 at 23:43, Saverio Proto wrote: > Hello, > > I will be leaving this mailing list in a few days. > > I am going to a new job and I will not be involved with Openstack at > least in the short term future. > Still, it was great working with the Openstack community in the past few > years. > > If you need to reach me about any bug/patch/review that I submitted in > the past, just write directly to my email. I will try to give answers. > > Cheers > > Saverio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Cheers, ~Blairo -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Sep 6 12:31:56 2018 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 6 Sep 2018 13:31:56 +0100 Subject: [Openstack-operators] [blazar] Blazar Forum session brainstorming etherpad Message-ID: Hi everyone, I created an etherpad [1] to gather Berlin Forum session ideas for the Blazar project, or resource reservation in general. Please contribute! Thanks, Pierre [1] https://etherpad.openstack.org/p/Berlin-stein-forum-blazar-brainstorming From mnaser at vexxhost.com Thu Sep 6 12:33:44 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 6 Sep 2018 08:33:44 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update In-Reply-To: References: <76a2e6a2-7e7b-54a8-9f7f-742f15bce033@gmail.com> <91736df2-e020-400e-14c8-0e31ad3f962c@gmail.com> <62d1b308-720a-3f50-eb24-fefe52333e5e@gmail.com> Message-ID: On Wed, Sep 5, 2018 at 12:41 PM Dan Smith wrote: > > > I think there was a period in time where the nova_api database was created > > where entires would try to get pulled out from the original nova database and > > then checking nova_api if it doesn't exist afterwards (or vice versa). One > > of the cases that this was done to deal with was for things like instance types > > or flavours. > > > > I don't know the exact details but I know that older instance types exist in > > the nova db and the newer ones are sitting in nova_api. Something along > > those lines? > > Yep, we've moved entire databases before in nova with minimal disruption > to the users. Not just flavors, but several pieces of data came out of > the "main" database and into the api database transparently. 
It's > doable, but with placement being split to a separate > project/repo/whatever, there's not really any option for being graceful > about it in this case. > > > At this point, I'm thinking turn off placement, setup the new one, do > > the migration > > of the placement-specific tables (this can be a straightforward documented task > > OR it would be awesome if it was a placement command (something along > > the lines of `placement-manage db import_from_nova`) which would import all > > the right things > > > > The idea of having a command would be *extremely* useful for deployment tools > > in automating the process and it also allows the placement team to selectively > > decide what they want to onboard? > > Well, it's pretty cut-and-dried as all the tables in nova-api are either > for nova or placement, so there's not much confusion about what belongs. > > I'm not sure that doing this import in python is really the most > efficient way. I agree a placement-manage command would be ideal from an > "easy button" point of view, but I think a couple lines of bash that > call mysqldump are likely to vastly outperform us doing it natively in > python. We could script exec()s of those commands from python, but.. I > think I'd rather just see that as a shell script that people can easily > alter/test on their own. > > Just curious, but in your case would the service catalog entry change at > all? If you stand up the new placement in the exact same spot, it > shouldn't, but I imagine some people will have the catalog entry change > slightly (even if just because of a VIP or port change). Am I > remembering correctly that the catalog can get cached in various places > such that much of nova would need a restart to notice? We already have placement in the catalog and it's behind a load balancer so changing the backends resolves things right away, so we likely won't be needing any restarts (and I don't think OSA will either because it uses the same model). > --Dan -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amy at demarco.com Thu Sep 6 13:18:26 2018 From: amy at demarco.com (Amy) Date: Thu, 6 Sep 2018 08:18:26 -0500 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: References: Message-ID: Saverio, It was a pleasure working with you on the UC. Good luck in the new position and hopefully you’ll be back. Thanks for all that you did, Amy (spotz) Sent from my iPhone > On Sep 6, 2018, at 6:59 AM, Blair Bethwaite wrote: > > Good luck with whatever you are doing next Saverio, you've been a great asset to the community and will be missed! > >> On Thu, 6 Sep 2018 at 23:43, Saverio Proto wrote: >> Hello, >> >> I will be leaving this mailing list in a few days. >> >> I am going to a new job and I will not be involved with Openstack at >> least in the short term future. >> Still, it was great working with the Openstack community in the past few years. >> >> If you need to reach me about any bug/patch/review that I submitted in >> the past, just write directly to my email. I will try to give answers. 
>> >> Cheers >> >> Saverio >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -- > Cheers, > ~Blairo > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Sep 6 13:31:56 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Sep 2018 15:31:56 +0200 Subject: [Openstack-operators] ocata nova /etc/nova/policy.json Message-ID: Hi everyone, I installed openstack ocata on centos and I saw /etc/nova/policy.json coontains the following: { } I created an instance in a a project "admin" with user admin that belogns to admin project I created a demo project with a user demo with "user" role. Using command lines (openstack server list --all-projects) the user demo can list the admin instances and can also delete one of them. I think this is a bug and a nova policy.json must be created with some rules for avoiding the above. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Thu Sep 6 14:41:21 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 6 Sep 2018 07:41:21 -0700 Subject: [Openstack-operators] ocata nova /etc/nova/policy.json In-Reply-To: References: Message-ID: On 09/06/2018 06:31 AM, Ignazio Cassano wrote: > I installed openstack ocata on centos and I saw /etc/nova/policy.json > coontains the following: > { > } > > I created an instance in a a project "admin" with user admin that > belogns to admin project > > I created a demo project with a user demo with "user" role. > > Using command lines (openstack server list --all-projects) the user demo > can list the admin instances and can also delete one of them. > > I think this is a bug and a nova policy.json must be created with some > rules for avoiding the above. See https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html You have something else going on ... ~iain From ignaziocassano at gmail.com Thu Sep 6 14:53:10 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Sep 2018 16:53:10 +0200 Subject: [Openstack-operators] ocata nova /etc/nova/policy.json In-Reply-To: References: Message-ID: Thanks but I made a mistake because I forgot to change user variables before deleting the instance. User belonging to user role cannot delete instances of other projects. Sorry for my mistake Regards Ignazio Il giorno gio 6 set 2018 alle ore 16:41 iain MacDonnell < iain.macdonnell at oracle.com> ha scritto: > > > On 09/06/2018 06:31 AM, Ignazio Cassano wrote: > > I installed openstack ocata on centos and I saw /etc/nova/policy.json > > coontains the following: > > { > > } > > > > I created an instance in a a project "admin" with user admin that > > belogns to admin project > > > > I created a demo project with a user demo with "user" role. > > > > Using command lines (openstack server list --all-projects) the user demo > > can list the admin instances and can also delete one of them. > > > > I think this is a bug and a nova policy.json must be created with some > > rules for avoiding the above. 
> > See > > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html > > You have something else going on ... > > ~iain > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Thu Sep 6 15:20:43 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Thu, 6 Sep 2018 11:20:43 -0400 Subject: [Openstack-operators] Draft Ops Meetup schedule for Denver PTG Message-ID: Hello Everyone, The Ops Meetups team is happy to announce we've put together a schedule for the ops meetup days at next week's OpenStack PTG, please see the attached PDF. Not all moderators are confirmed and the schedule is subject to further change for other reasons, so if you have feedback please share in this email thread. After working hard all Monday, a bunch of operators and other openstack folk are considering venturing to the Wynkoop Brewing Co. for refreshments and perhaps a game of pool. This is not currently sponsored, but should still be a fun outing. See you in Denver Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ops Meetup Planning (PHL, YVR, PAO, TYO, MAN, AUS, NYC, BCN, MIL, MEC, DEN) - Denver.pdf Type: application/pdf Size: 62245 bytes Desc: not available URL: From jimmy at openstack.org Thu Sep 6 15:57:46 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 06 Sep 2018 10:57:46 -0500 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: References: Message-ID: <5B914E7A.5050107@openstack.org> Thank you Saverio! It was a pressure working with you, if only briefly. Best of luck at your new gig and hope to see you around OpenStack land soon! Cheers, Jimmy Amy wrote: > Saverio, > > It was a pleasure working with you on the UC. Good luck in the new > position and hopefully you’ll be back. > > Thanks for all that you did, > > Amy (spotz) > > Sent from my iPhone > > On Sep 6, 2018, at 6:59 AM, Blair Bethwaite > wrote: > >> Good luck with whatever you are doing next Saverio, you've been a >> great asset to the community and will be missed! >> >> On Thu, 6 Sep 2018 at 23:43, Saverio Proto > > wrote: >> >> Hello, >> >> I will be leaving this mailing list in a few days. >> >> I am going to a new job and I will not be involved with Openstack at >> least in the short term future. >> Still, it was great working with the Openstack community in the >> past few years. >> >> If you need to reach me about any bug/patch/review that I >> submitted in >> the past, just write directly to my email. I will try to give >> answers. 
>> >> Cheers >> >> Saverio >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> >> -- >> Cheers, >> ~Blairo >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Sep 6 16:02:39 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 06 Sep 2018 11:02:39 -0500 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: <5B914E7A.5050107@openstack.org> References: <5B914E7A.5050107@openstack.org> Message-ID: <5B914F9F.4080705@openstack.org> Make that a pleasure. Not a pressure. :\ Jimmy McArthur wrote: > Thank you Saverio! It was a pressure working with you, if only > briefly. Best of luck at your new gig and hope to see you around > OpenStack land soon! > > Cheers, > Jimmy > > Amy wrote: >> Saverio, >> >> It was a pleasure working with you on the UC. Good luck in the new >> position and hopefully you’ll be back. >> >> Thanks for all that you did, >> >> Amy (spotz) >> >> Sent from my iPhone >> >> On Sep 6, 2018, at 6:59 AM, Blair Bethwaite >> > wrote: >> >>> Good luck with whatever you are doing next Saverio, you've been a >>> great asset to the community and will be missed! >>> >>> On Thu, 6 Sep 2018 at 23:43, Saverio Proto >> > wrote: >>> >>> Hello, >>> >>> I will be leaving this mailing list in a few days. >>> >>> I am going to a new job and I will not be involved with Openstack at >>> least in the short term future. >>> Still, it was great working with the Openstack community in the >>> past few years. >>> >>> If you need to reach me about any bug/patch/review that I >>> submitted in >>> the past, just write directly to my email. I will try to give >>> answers. >>> >>> Cheers >>> >>> Saverio >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> >>> >>> -- >>> Cheers, >>> ~Blairo >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Thu Sep 6 16:16:33 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 6 Sep 2018 16:16:33 +0000 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: <5B914F9F.4080705@openstack.org> References: <5B914E7A.5050107@openstack.org> <5B914F9F.4080705@openstack.org> Message-ID: <0703186C-47C1-46D5-BC8C-BC086BCF511F@cern.ch> Saverio, And thanks for all your hard work with the openstack community, especially the Swiss OpenStack user group (https://www.meetup.com/openstack-ch/) Hope to have a chance to work again together in the future. Tim From: Jimmy McArthur Date: Thursday, 6 September 2018 at 18:06 To: Amy Cc: "openstack-oper." Subject: Re: [Openstack-operators] leaving Openstack mailing lists Make that a pleasure. Not a pressure. :\ Jimmy McArthur wrote: Thank you Saverio! It was a pressure working with you, if only briefly. Best of luck at your new gig and hope to see you around OpenStack land soon! Cheers, Jimmy Amy wrote: Saverio, It was a pleasure working with you on the UC. Good luck in the new position and hopefully you’ll be back. Thanks for all that you did, Amy (spotz) Sent from my iPhone On Sep 6, 2018, at 6:59 AM, Blair Bethwaite > wrote: Good luck with whatever you are doing next Saverio, you've been a great asset to the community and will be missed! On Thu, 6 Sep 2018 at 23:43, Saverio Proto > wrote: Hello, I will be leaving this mailing list in a few days. I am going to a new job and I will not be involved with Openstack at least in the short term future. Still, it was great working with the Openstack community in the past few years. If you need to reach me about any bug/patch/review that I submitted in the past, just write directly to my email. I will try to give answers. Cheers Saverio _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Cheers, ~Blairo _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Thu Sep 6 16:19:28 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 6 Sep 2018 12:19:28 -0400 Subject: [Openstack-operators] [manila] retrospective and forum brainstorm etherpads Message-ID: <20180906161928.zhja4m4qhzpxshta@barron.net> Devs, Ops, community: We're going to start off the manila PTG sessions Monday with a retrospective on the Rocky cycle, using this etherpad [1]. Please enter your thoughts on what went well and what we should improve in Stein so that we take it into consideration. It's also time (until next Wednesday) to brainstorm topics for Berlin Forum. Please record these here [2]. We'll discuss this subject at the PTG as well. Thanks! 
-- Tom Barron (tbarron) [1] https://etherpad.openstack.org/p/manila-rocky-retrospective [2] https://etherpad.openstack.org/p/manila-berlin-forum-brainstorm From jpenick at gmail.com Thu Sep 6 17:14:24 2018 From: jpenick at gmail.com (James Penick) Date: Thu, 6 Sep 2018 10:14:24 -0700 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? Message-ID: Hey folks, Does anyone have experience using zookeeper or redis to handle HA failover in cinder clusters? I know there's docs on pacemaker, however we already have the other two installed and don't want to add yet another component to package and maintain in our clusters. Thanks! -James -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 6 19:31:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 14:31:01 -0500 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: References: <5B86CF2E.5010708@openstack.org> Message-ID: <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > wrote: > > > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > session) the entire community congregates to share opinions on how > to make OpenStack achieve its integration engine goal > > > Just to clarify some speculation going on in IRC: this is an example, > right? Not a new thing being announced? > > // jim FYI for those that didn't see this on the other ML: http://lists.openstack.org/pipermail/foundation/2018-August/002617.html -- Thanks, Matt From fungi at yuggoth.org Thu Sep 6 19:56:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 19:56:53 +0000 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> Message-ID: <20180906195653.xarf2dusohaki55t@yuggoth.org> On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: > On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > > wrote: > > > > > > Examples of typical sessions that make for a great Forum: > > > > Strategic, whole-of-community discussions, to think about the big > > picture, including beyond just one release cycle and new technologies > > > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > > session) the entire community congregates to share opinions on how > > to make OpenStack achieve its integration engine goal > > > > > > Just to clarify some speculation going on in IRC: this is an example, > > right? Not a new thing being announced? > > > > // jim > > FYI for those that didn't see this on the other ML: > > http://lists.openstack.org/pipermail/foundation/2018-August/002617.html [...] While I agree that's a great post to point out to all corners of the community, I don't see what it has to do with whether "OpenStack One Platform for containers/VMs/Bare Metal" was an example forum topic. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Thu Sep 6 20:03:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 15:03:52 -0500 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <20180906195653.xarf2dusohaki55t@yuggoth.org> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> <20180906195653.xarf2dusohaki55t@yuggoth.org> Message-ID: <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> On 9/6/2018 2:56 PM, Jeremy Stanley wrote: > On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: >> On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: >>> On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur >> > wrote: >>> >>> >>> Examples of typical sessions that make for a great Forum: >>> >>> Strategic, whole-of-community discussions, to think about the big >>> picture, including beyond just one release cycle and new technologies >>> >>> e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic >>> session) the entire community congregates to share opinions on how >>> to make OpenStack achieve its integration engine goal >>> >>> >>> Just to clarify some speculation going on in IRC: this is an example, >>> right? Not a new thing being announced? >>> >>> // jim >> FYI for those that didn't see this on the other ML: >> >> http://lists.openstack.org/pipermail/foundation/2018-August/002617.html > [...] > > While I agree that's a great post to point out to all corners of the > community, I don't see what it has to do with whether "OpenStack One > Platform for containers/VMs/Bare Metal" was an example forum topic. Because if I'm not mistaken it was the impetus for the hullabaloo in the tc channel that was related to the foundation ML post. -- Thanks, Matt From mriedemos at gmail.com Thu Sep 6 20:58:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Sep 2018 15:58:41 -0500 Subject: [Openstack-operators] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction Message-ID: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> I wanted to recap some upgrade-specific stuff from today outside of the other [1] technical extraction thread. Chris has a change up for review [2] which prompted the discussion. That change makes placement only work with placement.conf, not nova.conf, but does get a passing tempest run in the devstack patch [3]. The main issue here is upgrades. If you think of this like deprecating config options, the old config options continue to work for a release and then are dropped after a full release (or 3 months across boundaries for CDers) [4]. Given that, Chris's patch would break the standard deprecation policy. Clearly one simple way outside of code to make that work is just copy and rename nova.conf to placement.conf and voila. But that depends on *all* deployment/config tooling to get that right out of the gate. The other obvious thing is the database. The placement repo code as-is today still has the check for whether or not it should use the placement database but falls back to using the nova_api database [5]. So technically you could point the extracted placement at the same nova_api database and it should work. However, at some point deployers will clearly need to copy the placement-related tables out of the nova_api DB to a new placement DB and make sure the 'migrate_version' table is dropped so that placement DB schema versions can reset to 1. 
With respect to grenade and making this work in our own upgrade CI testing, we have I think two options (which might not be mutually exclusive): 1. Make placement support using nova.conf if placement.conf isn't found for Stein with lots of big warnings that it's going away in T. Then Rocky nova.conf with the nova_api database configuration just continues to work for placement in Stein. I don't think we then have any grenade changes to make, at least in Stein for upgrading *from* Rocky. Assuming fresh devstack installs in Stein use placement.conf and a placement-specific database, then upgrades from Stein to T should also be OK with respect to grenade, but likely punts the cut-over issue for all other deployment projects (because we don't CI with grenade doing Rocky->Stein->T, or FFU in other words). 2. If placement doesn't support nova.conf in Stein, then grenade will require an (exceptional) [6] from-rocky upgrade script which will (a) write out placement.conf fresh and (b) run a DB migration script, likely housed in the placement repo, to create the placement database and copy the placement-specific tables out of the nova_api database. Any script like this is likely needed regardless of what we do in grenade because deployers will need to eventually do this once placement would drop support for using nova.conf (if we went with option 1). That's my attempt at a summary. It's going to be very important that operators and deployment project contributors weigh in here if they have strong preferences either way, and note that we can likely do both options above - grenade could do the fresh cutover from rocky to stein but we allow running with nova.conf and nova_api DB in placement in stein with plans to drop that support in T. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/subject.html#134184 [2] https://review.openstack.org/#/c/600157/ [3] https://review.openstack.org/#/c/600162/ [4] https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html#requirements [5] https://github.com/openstack/placement/blob/fb7c1909/placement/db_api.py#L27 [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade -- Thanks, Matt From fungi at yuggoth.org Thu Sep 6 21:06:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Sep 2018 21:06:50 +0000 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> References: <5B86CF2E.5010708@openstack.org> <2b8ade00-b686-8fb8-e303-9ac25898b33b@gmail.com> <20180906195653.xarf2dusohaki55t@yuggoth.org> <077492a5-5875-2a5a-0ed6-7529bbb74f91@gmail.com> Message-ID: <20180906210650.ss7p4k67viqiu6wg@yuggoth.org> On 2018-09-06 15:03:52 -0500 (-0500), Matt Riedemann wrote: > On 9/6/2018 2:56 PM, Jeremy Stanley wrote: > > On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote: > > > On 8/29/2018 1:08 PM, Jim Rollenhagen wrote: > > > > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > > > > wrote: > > > > > > > > > > > > Examples of typical sessions that make for a great Forum: > > > > > > > > Strategic, whole-of-community discussions, to think about the big > > > > picture, including beyond just one release cycle and new technologies > > > > > > > > e.g. 
OpenStack One Platform for containers/VMs/Bare Metal (Strategic > > > > session) the entire community congregates to share opinions on how > > > > to make OpenStack achieve its integration engine goal > > > > > > > > > > > > Just to clarify some speculation going on in IRC: this is an example, > > > > right? Not a new thing being announced? > > > > > > > > // jim > > > FYI for those that didn't see this on the other ML: > > > > > > http://lists.openstack.org/pipermail/foundation/2018-August/002617.html > > [...] > > > > While I agree that's a great post to point out to all corners of the > > community, I don't see what it has to do with whether "OpenStack One > > Platform for containers/VMs/Bare Metal" was an example forum topic. > > Because if I'm not mistaken it was the impetus for the hullabaloo in the tc > channel that was related to the foundation ML post. It would be more accurate to say that community surprise over the StarlingX mention in Vancouver keynotes caused some people to (either actually or merely in half-jest) start looking for subtext everywhere indicating the next big surprise announcement. The discussion[*] in #openstack-tc readily acknowledged that most of its participants didn't think "OpenStack One Platform for containers/VMs/Bare Metal" was an actual proposal for a forum discussion much less announcement of a new project, but were just looking for an opportunity to show feigned alarm and sarcasm. The most recent discussion[**] leading up to the foundation ML "OSF Open Infrastructure Projects" update occurred the previous week. That E-mail did go out the day after the forum topic brainstorming example discussion, but was unrelated (and already in the process of being put together by then). [*] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T16:55:37 [**] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-23.log.html#t2018-08-23T16:23:00 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Sep 6 21:16:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 06 Sep 2018 17:16:34 -0400 Subject: [Openstack-operators] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: <1536268318-sup-2751@lrrr.local> Excerpts from Matt Riedemann's message of 2018-09-06 15:58:41 -0500: > I wanted to recap some upgrade-specific stuff from today outside of the > other [1] technical extraction thread. > > Chris has a change up for review [2] which prompted the discussion. > > That change makes placement only work with placement.conf, not > nova.conf, but does get a passing tempest run in the devstack patch [3]. > > The main issue here is upgrades. If you think of this like deprecating > config options, the old config options continue to work for a release > and then are dropped after a full release (or 3 months across boundaries > for CDers) [4]. Given that, Chris's patch would break the standard > deprecation policy. Clearly one simple way outside of code to make that > work is just copy and rename nova.conf to placement.conf and voila. But > that depends on *all* deployment/config tooling to get that right out of > the gate. > > The other obvious thing is the database. 
The placement repo code as-is > today still has the check for whether or not it should use the placement > database but falls back to using the nova_api database [5]. So > technically you could point the extracted placement at the same nova_api > database and it should work. However, at some point deployers will > clearly need to copy the placement-related tables out of the nova_api DB > to a new placement DB and make sure the 'migrate_version' table is > dropped so that placement DB schema versions can reset to 1. > > With respect to grenade and making this work in our own upgrade CI > testing, we have I think two options (which might not be mutually > exclusive): > > 1. Make placement support using nova.conf if placement.conf isn't found > for Stein with lots of big warnings that it's going away in T. Then > Rocky nova.conf with the nova_api database configuration just continues > to work for placement in Stein. I don't think we then have any grenade > changes to make, at least in Stein for upgrading *from* Rocky. Assuming > fresh devstack installs in Stein use placement.conf and a > placement-specific database, then upgrades from Stein to T should also > be OK with respect to grenade, but likely punts the cut-over issue for > all other deployment projects (because we don't CI with grenade doing > Rocky->Stein->T, or FFU in other words). Making placement read from both files should be pretty straightforward, right? It's possible to pass default_config_files and default_config_dirs to oslo.config, and the functions that build the original defaults are part of the public API (find_config_files and find_config_dirs in oslo_config.cfg) so the placement service can call them twice (with different "project" arguments) and merge the results before initializing the ConfigOpts instance. Doug > > 2. If placement doesn't support nova.conf in Stein, then grenade will > require an (exceptional) [6] from-rocky upgrade script which will (a) > write out placement.conf fresh and (b) run a DB migration script, likely > housed in the placement repo, to create the placement database and copy > the placement-specific tables out of the nova_api database. Any script > like this is likely needed regardless of what we do in grenade because > deployers will need to eventually do this once placement would drop > support for using nova.conf (if we went with option 1). > > That's my attempt at a summary. It's going to be very important that > operators and deployment project contributors weigh in here if they have > strong preferences either way, and note that we can likely do both > options above - grenade could do the fresh cutover from rocky to stein > but we allow running with nova.conf and nova_api DB in placement in > stein with plans to drop that support in T. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/subject.html#134184 > [2] https://review.openstack.org/#/c/600157/ > [3] https://review.openstack.org/#/c/600162/ > [4] > https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html#requirements > [5] > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api.py#L27 > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade > From jonmills at gmail.com Thu Sep 6 21:38:01 2018 From: jonmills at gmail.com (Jonathan Mills) Date: Thu, 6 Sep 2018 17:38:01 -0400 Subject: [Openstack-operators] [cloudkitty] Anyone running Cloudkitty with SSL? 
In-Reply-To: References: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> <3cae92b4e8c94577e5d90d8f83f8b46b@objectif-libre.com> Message-ID: Quick follow-up, just to close the loop on this thread. I found that there had been a lot of recent code changes/improvements in Cloudkitty 8.x, which is from the Rocky release. So on a hunch, I decided to see if I could run the Rocky version of Cloudkitty with the rest of OpenStack Queens. On CentOS 7.5, you need to upgrade python2-six from version 1.10 to 1.11 -- that's the only RPM dependency thing, and the change seems to have no effect on the rest of Queens. Other than that, Cloudkitty from Rocky installs just fine on CentOS 7.5 controller nodes running other parts of Queens. The packages (from http://mirror.centos.org/centos/7/cloud/x86_64/openstack-rocky) are: openstack-cloudkitty-api-8.0.0-1.el7.noarch openstack-cloudkitty-common-8.0.0-1.el7.noarch openstack-cloudkitty-processor-8.0.0-1.el7.noarch openstack-cloudkitty-ui-8.0.0-1.el7.noarch openstack-cloudkitty-ui-doc-8.0.0-1.el7.noarch python2-cloudkittyclient-2.0.0-1.el7.noarch After the upgrade, I saw immediate improvement in Cloudkitty's handling of SSL, but there was still a snag with the Horizon dashboard plugin. The folks at Objectif Libre (Christophe, Luka, Sebastien) have been working with me on the problem: I sent a bunch of debug output yesterday, and this afternoon they produced the following patches: https://review.openstack.org/#/c/600510/ https://review.openstack.org/#/c/600515/ According to my testing, everything now works! That was a fast turnaround! So my thanks, again, to Christophe and Objectif Libre. If you are going to be at SC18, feel free to stop by the NASA booth and you may well get to see their software in action. Jonathan Mills NASA Goddard Space Flight Center On Tue, Sep 4, 2018 at 8:37 AM Jonathan Mills wrote: > Christophe, > > Thank you, we really appreciate you looking into this, and I will try to > help you as much as I can, because we really need to have this software > working, soon. > > So here's something that, to me, is very telling > > # printenv |grep OS_CACERT > OS_CACERT=/etc/openldap/cacerts/gpcprod_root_ca.pem > > ^^^ here you can see that my self-signed CA cert is loaded into my > environment, having sourced my openrc file > > Now I'm going to invoke the cloudkitty client with debug, and grep for > 'curl' to see what it's actually doing: > > # openstack --debug rating info-config-get 2>&1 |grep -b1 curl > 9774-Get auth_ref > 9787:REQ: curl -g -i --cacert "/etc/openldap/cacerts/gpcprod_root_ca.pem" > -X GET https://keystone.gpcprod:5000/v3 -H "Accept: application/json" -H > "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.14.2 > CPython/2.7.5" > 10014-Starting new HTTPS connection (1): keystone.gpcprod > -- > 16319-run(Namespace()) > 16336:REQ: curl -g -i -X GET https://keystone.gpcprod:5000/v3 -H "Accept: > application/json" -H "User-Agent: python-keystoneclient" > 16461-Starting new HTTPS connection (1): keystone.gpcprod > > ^^^ you can see that the first time, it correctly forms the curl, and that > works fine. But the second time (and the User-Agent has changed), it never > even passes the --cacert option to curl at all. 
The results then are > predictable: > > Starting new HTTPS connection (1): keystone.gpcprod > SSL exception connecting to https://keystone.gpcprod:5000/v3: ("bad > handshake: Error([('SSL routines', 'ssl3_get_server_certificate', > 'certificate verify failed')],)",) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in > run_subcommand > result = cmd.run(parsed_args) > File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line > 41, in run > return super(Command, self).run(parsed_args) > File "/usr/lib/python2.7/site-packages/cliff/command.py", line 184, in > run > return_code = self.take_action(parsed_args) or 0 > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/shell_cli.py", line > 78, in take_action > shell.do_info_config_get(ckclient, parsed_args) > File "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/shell.py", > line 93, in do_info_config_get > utils.print_dict(cc.config.get_config(), dict_property="Section") > File "/usr/lib/python2.7/site-packages/cloudkittyclient/v1/core.py", > line 88, in get_config > out = self.api.get(self.base_url).json() > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", > line 359, in get > return self.client_request("GET", url, **kwargs) > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", > line 349, in client_request > self, method, url, **kwargs) > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", > line 248, in client_request > self.authenticate() > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/client.py", > line 319, in authenticate > self.auth_plugin.authenticate(self) > File > "/usr/lib/python2.7/site-packages/cloudkittyclient/apiclient/auth.py", line > 201, in authenticate > self._do_authenticate(http_client) > File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line > 191, in _do_authenticate > ks_session = _get_keystone_session(**ks_kwargs) > File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line > 87, in _get_keystone_session > v2_auth_url, v3_auth_url = _discover_auth_versions(ks_session, > auth_url) > File "/usr/lib/python2.7/site-packages/cloudkittyclient/client.py", line > 38, in _discover_auth_versions > ks_discover = discover.Discover(session=session, auth_url=auth_url) > File "/usr/lib/python2.7/site-packages/keystoneclient/discover.py", line > 178, in __init__ > authenticated=authenticated) > File "/usr/lib/python2.7/site-packages/keystoneclient/_discover.py", > line 143, in __init__ > authenticated=authenticated) > File "/usr/lib/python2.7/site-packages/keystoneclient/_discover.py", > line 38, in get_version_data > resp = session.get(url, headers=headers, authenticated=authenticated) > File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line > 535, in get > return self.request(url, 'GET', **kwargs) > File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line > 428, in request > resp = send(**kwargs) > File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line > 466, in _send_request > raise exceptions.SSLError(msg) > SSLError: SSL exception connecting to https://keystone.gpcprod:5000/v3: > ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', > 'certificate verify failed')],)",) > clean_up CliInfoGetConfig: SSL exception connecting to > https://keystone.gpcprod:5000/v3: ("bad handshake: Error([('SSL > routines', 'ssl3_get_server_certificate', 
'certificate verify failed')],)",) > > > Jonathan > > On Tue, Sep 4, 2018 at 5:50 AM Christophe Sauthier < > christophe.sauthier at objectif-libre.com> wrote: > >> Hello >> >> Thanks for those elements. >> >> It is really surprising because as you can imagine this is something we >> set up many times... >> I'll take care to set up the same environment than you and I'll let you >> know if I am facing the same issues... I am trying to do that quickly... >> >> Regards >> >> Christophe >> >> ---- >> Christophe Sauthier >> CEO >> >> Objectif Libre : Au service de votre Cloud >> >> +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com >> >> https://www.objectif-libre.com | @objectiflibre >> Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause >> >> Le 2018-08-31 23:40, jonmills at gmail.com a écrit : >> > On Fri, 2018-08-31 at 23:20 +0200, Christophe Sauthier wrote: >> >> Hello Jonathan >> >> >> >> Can you describe a little more your setup (release/method of >> >> installation/linux distribution) /issues that you are facing ? >> > >> > >> > It is OpenStack Queens, on CentOS 7.5, using the packages from the >> > centos-cloud repo (which I suppose is the same is RDO). >> > >> > # uname -msr >> > Linux 3.10.0-862.3.2.el7.x86_64 x86_64 >> > >> > # rpm -qa |grep cloudkitty |sort >> > openstack-cloudkitty-api-7.0.0-1.el7.noarch >> > openstack-cloudkitty-common-7.0.0-1.el7.noarch >> > openstack-cloudkitty-processor-7.0.0-1.el7.noarch >> > openstack-cloudkitty-ui-7.0.0-1.el7.noarch >> > python2-cloudkittyclient-1.2.0-1.el7.noarch >> > >> > It is 'deployed' with custom puppet code only. I follow exactly the >> > installation guides posted here: >> > https://docs.openstack.org/cloudkitty/queens/index.html >> > >> > I'd prefer not to post full config files, but my [keystone_authtoken] >> > section of cloudkitty.conf is identical (aside from service >> > credentials) to the ones found in my glance, nova, cinder, neutron, >> > gnocchi, ceilometer, etc, all of those services are working perfectly. >> > >> > >> > My processor.log file is full of >> > >> > 2018-08-31 16:38:04.086 30471 WARNING cloudkitty.orchestrator [-] >> > Error >> > while collecting service network.floating: SSL exception connecting to >> > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: >> > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate >> > verify failed')],)",): SSLError: SSL exception connecting to >> > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: >> > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate >> > verify failed')],)",) >> > 2018-08-31 16:38:04.094 30471 WARNING cloudkitty.orchestrator [-] >> > Error >> > while collecting service image: SSL exception connecting to >> > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: >> > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate >> > verify failed')],)",): SSLError: SSL exception connecting to >> > https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: >> > Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate >> > verify failed')],)",) >> > >> > and so on >> > >> > >> > But, I mean, there's other little things too. I can see from running >> > >> > 'openstack --debug rating info-config-get' >> > >> > that it never even loads the cacert from my env, so it fails talking >> > to >> > keystone trying to get a token; the request never even gets to the >> > cloudkitty api endpoint. 
>> > >> > >> > >> >> >> >> Because we have deployed it/used it many times with SSL without >> >> issue... >> >> >> >> It could be great also that you step up on #cloudkitty to discuss it. >> >> >> >> Christophe >> >> >> >> ---- >> >> Christophe Sauthier >> >> CEO >> >> >> >> Objectif Libre : Au service de votre Cloud >> >> >> >> +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com >> >> >> >> https://www.objectif-libre.com | @objectiflibre >> >> Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause >> >> >> >> Le 2018-08-31 23:15, jonmills at gmail.com a écrit : >> >>> Anyone out there have Cloudkitty successfully working with SSL? By >> >>> which I mean that Cloudkitty is able to talk to keystone over https >> >>> without cert errors, and also talk to SSL'd rabbitmq? Oh, and the >> >>> client tools also? >> >>> >> >>> Asking for a friend... >> >>> >> >>> >> >>> >> >>> Jonathan >> >>> >> >>> >> >>> _______________________________________________ >> >>> OpenStack-operators mailing list >> >>> OpenStack-operators at lists.openstack.org >> >>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rochelle.grober at huawei.com Fri Sep 7 00:39:59 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 7 Sep 2018 00:39:59 +0000 Subject: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: Sounds like an important discussion to have with the operators in Denver. Should put this on the schedule for the Ops meetup. --Rocky > -----Original Message----- > From: Matt Riedemann [mailto:mriedemos at gmail.com] > Sent: Thursday, September 06, 2018 1:59 PM > To: OpenStack Development Mailing List (not for usage questions) > ; openstack- > operators at lists.openstack.org > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade- > specific news on extraction > > I wanted to recap some upgrade-specific stuff from today outside of the > other [1] technical extraction thread. > > Chris has a change up for review [2] which prompted the discussion. > > That change makes placement only work with placement.conf, not > nova.conf, but does get a passing tempest run in the devstack patch [3]. > > The main issue here is upgrades. If you think of this like deprecating config > options, the old config options continue to work for a release and then are > dropped after a full release (or 3 months across boundaries for CDers) [4]. > Given that, Chris's patch would break the standard deprecation policy. Clearly > one simple way outside of code to make that work is just copy and rename > nova.conf to placement.conf and voila. But that depends on *all* > deployment/config tooling to get that right out of the gate. > > The other obvious thing is the database. The placement repo code as-is > today still has the check for whether or not it should use the placement > database but falls back to using the nova_api database [5]. So technically you > could point the extracted placement at the same nova_api database and it > should work. However, at some point deployers will clearly need to copy the > placement-related tables out of the nova_api DB to a new placement DB and > make sure the 'migrate_version' table is dropped so that placement DB > schema versions can reset to 1. 
> > With respect to grenade and making this work in our own upgrade CI testing, > we have I think two options (which might not be mutually > exclusive): > > 1. Make placement support using nova.conf if placement.conf isn't found for > Stein with lots of big warnings that it's going away in T. Then Rocky nova.conf > with the nova_api database configuration just continues to work for > placement in Stein. I don't think we then have any grenade changes to make, > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack installs > in Stein use placement.conf and a placement-specific database, then > upgrades from Stein to T should also be OK with respect to grenade, but > likely punts the cut-over issue for all other deployment projects (because we > don't CI with grenade doing > Rocky->Stein->T, or FFU in other words). > > 2. If placement doesn't support nova.conf in Stein, then grenade will require > an (exceptional) [6] from-rocky upgrade script which will (a) write out > placement.conf fresh and (b) run a DB migration script, likely housed in the > placement repo, to create the placement database and copy the placement- > specific tables out of the nova_api database. Any script like this is likely > needed regardless of what we do in grenade because deployers will need to > eventually do this once placement would drop support for using nova.conf (if > we went with option 1). > > That's my attempt at a summary. It's going to be very important that > operators and deployment project contributors weigh in here if they have > strong preferences either way, and note that we can likely do both options > above - grenade could do the fresh cutover from rocky to stein but we allow > running with nova.conf and nova_api DB in placement in stein with plans to > drop that support in T. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018- > September/subject.html#134184 > [2] https://review.openstack.org/#/c/600157/ > [3] https://review.openstack.org/#/c/600162/ > [4] > https://governance.openstack.org/tc/reference/tags/assert_follows- > standard-deprecation.html#requirements > [5] > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api > .py#L27 > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of- > upgrade > > -- > > Thanks, > > Matt > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emccormick at cirrusseven.com Fri Sep 7 01:29:00 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 6 Sep 2018 21:29:00 -0400 Subject: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: On Thu, Sep 6, 2018, 8:40 PM Rochelle Grober wrote: > Sounds like an important discussion to have with the operators in Denver. > Should put this on the schedule for the Ops meetup. > > --Rocky > We are planning to attend the upgrade sessions on Monday as a group. How about we put it there? 
-Erik > > > -----Original Message----- > > From: Matt Riedemann [mailto:mriedemos at gmail.com] > > Sent: Thursday, September 06, 2018 1:59 PM > > To: OpenStack Development Mailing List (not for usage questions) > > ; openstack- > > operators at lists.openstack.org > > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade- > > specific news on extraction > > > > I wanted to recap some upgrade-specific stuff from today outside of the > > other [1] technical extraction thread. > > > > Chris has a change up for review [2] which prompted the discussion. > > > > That change makes placement only work with placement.conf, not > > nova.conf, but does get a passing tempest run in the devstack patch [3]. > > > > The main issue here is upgrades. If you think of this like deprecating > config > > options, the old config options continue to work for a release and then > are > > dropped after a full release (or 3 months across boundaries for CDers) > [4]. > > Given that, Chris's patch would break the standard deprecation policy. > Clearly > > one simple way outside of code to make that work is just copy and rename > > nova.conf to placement.conf and voila. But that depends on *all* > > deployment/config tooling to get that right out of the gate. > > > > The other obvious thing is the database. The placement repo code as-is > > today still has the check for whether or not it should use the placement > > database but falls back to using the nova_api database [5]. So > technically you > > could point the extracted placement at the same nova_api database and it > > should work. However, at some point deployers will clearly need to copy > the > > placement-related tables out of the nova_api DB to a new placement DB and > > make sure the 'migrate_version' table is dropped so that placement DB > > schema versions can reset to 1. > > > > With respect to grenade and making this work in our own upgrade CI > testing, > > we have I think two options (which might not be mutually > > exclusive): > > > > 1. Make placement support using nova.conf if placement.conf isn't found > for > > Stein with lots of big warnings that it's going away in T. Then Rocky > nova.conf > > with the nova_api database configuration just continues to work for > > placement in Stein. I don't think we then have any grenade changes to > make, > > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack > installs > > in Stein use placement.conf and a placement-specific database, then > > upgrades from Stein to T should also be OK with respect to grenade, but > > likely punts the cut-over issue for all other deployment projects > (because we > > don't CI with grenade doing > > Rocky->Stein->T, or FFU in other words). > > > > 2. If placement doesn't support nova.conf in Stein, then grenade will > require > > an (exceptional) [6] from-rocky upgrade script which will (a) write out > > placement.conf fresh and (b) run a DB migration script, likely housed in > the > > placement repo, to create the placement database and copy the placement- > > specific tables out of the nova_api database. Any script like this is > likely > > needed regardless of what we do in grenade because deployers will need to > > eventually do this once placement would drop support for using nova.conf > (if > > we went with option 1). > > > > That's my attempt at a summary. 
It's going to be very important that > > operators and deployment project contributors weigh in here if they have > > strong preferences either way, and note that we can likely do both > options > > above - grenade could do the fresh cutover from rocky to stein but we > allow > > running with nova.conf and nova_api DB in placement in stein with plans > to > > drop that support in T. > > > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018- > > September/subject.html#134184 > > [2] https://review.openstack.org/#/c/600157/ > > [3] https://review.openstack.org/#/c/600162/ > > [4] > > https://governance.openstack.org/tc/reference/tags/assert_follows- > > standard-deprecation.html#requirements > > [5] > > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api > > .py#L27 > > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of- > > upgrade > > > > -- > > > > Thanks, > > > > Matt > > > > __________________________________________________________ > > ________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > > request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Sep 7 02:28:22 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 07 Sep 2018 11:28:22 +0900 Subject: [Openstack-operators] ocata nova /etc/nova/policy.json In-Reply-To: References: Message-ID: <165b1dbe2cf.fc68a02387293.1250600793363409393@ghanshyammann.com> ---- On Thu, 06 Sep 2018 23:53:10 +0900 Ignazio Cassano wrote ---- > Thanks but I made a mistake because I forgot to change user variables before deleting the instance. A user belonging to the user role cannot delete instances of other projects. Sorry for my mistake. Regards, Ignazio

On the policy side, Nova has policy in code now. As for showing all projects' servers, nova has a policy rule [1] that controls the --all-projects parameter. By default it is 'admin' only, so the demo user cannot see other projects' instances until this rule is modified in your policy.json

[1] os_compute_api:servers:index:get_all_tenants os_compute_api:servers:detail:get_all_tenants https://docs.openstack.org/nova/latest/configuration/policy.html

-gmann

> > Il giorno gio 6 set 2018 alle ore 16:41 iain MacDonnell ha scritto: > > > On 09/06/2018 06:31 AM, Ignazio Cassano wrote: > > I installed openstack ocata on centos and I saw /etc/nova/policy.json > > contains the following: > > { > > } > > > > I created an instance in a project "admin" with user admin that > > belongs to admin project > > > > I created a demo project with a user demo with "user" role. > > > > Using command lines (openstack server list --all-projects) the user demo > > can list the admin instances and can also delete one of them. > > > > I think this is a bug and a nova policy.json must be created with some > > rules for avoiding the above. > > See > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html > > You have something else going on ...
> > ~iain > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mriedemos at gmail.com Fri Sep 7 14:24:39 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Sep 2018 09:24:39 -0500 Subject: [Openstack-operators] leaving Openstack mailing lists In-Reply-To: References: Message-ID: On 9/6/2018 6:42 AM, Saverio Proto wrote: > Hello, > > I will be leaving this mailing list in a few days. > > I am going to a new job and I will not be involved with Openstack at > least in the short term future. > Still, it was great working with the Openstack community in the past few years. > > If you need to reach me about any bug/patch/review that I submitted in > the past, just write directly to my email. I will try to give answers. > > Cheers > > Saverio Good luck on the new thing. From a developer perspective, I appreciated you putting the screws to us from time to time, since it helps re-align priorities. -- Thanks, Matt From dms at danplanet.com Fri Sep 7 15:17:56 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 07 Sep 2018 08:17:56 -0700 Subject: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: > The other obvious thing is the database. The placement repo code as-is > today still has the check for whether or not it should use the > placement database but falls back to using the nova_api database > [5]. So technically you could point the extracted placement at the > same nova_api database and it should work. However, at some point > deployers will clearly need to copy the placement-related tables out > of the nova_api DB to a new placement DB and make sure the > 'migrate_version' table is dropped so that placement DB schema > versions can reset to 1. I think it's wrong to act like placement and nova-api schemas are the same. One is a clone of the other at a point in time, and technically it will work today. However the placement db sync tool won't do the right thing, and I think we run the major risk of operators not fully grokking what is going on here, seeing that pointing placement at nova-api "works" and move on. Later, when we add the next placement db migration (which could technically happen in stein), they will either screw their nova-api schema, or mess up their versioning, or be unable to apply the placement change. > With respect to grenade and making this work in our own upgrade CI > testing, we have I think two options (which might not be mutually > exclusive): > > 1. Make placement support using nova.conf if placement.conf isn't > found for Stein with lots of big warnings that it's going away in > T. Then Rocky nova.conf with the nova_api database configuration just > continues to work for placement in Stein. I don't think we then have > any grenade changes to make, at least in Stein for upgrading *from* > Rocky. 
Assuming fresh devstack installs in Stein use placement.conf > and a placement-specific database, then upgrades from Stein to T > should also be OK with respect to grenade, but likely punts the > cut-over issue for all other deployment projects (because we don't CI > with grenade doing Rocky->Stein->T, or FFU in other words). As I have said above and in the review, I really think this is the wrong approach. At the current point of time, the placement schema is a clone of the nova-api schema, and technically they will work. At the first point that placement evolves its schema, that will no longer be a workable solution, unless we also evolve nova-api's database in lockstep. > 2. If placement doesn't support nova.conf in Stein, then grenade will > require an (exceptional) [6] from-rocky upgrade script which will (a) > write out placement.conf fresh and (b) run a DB migration script, > likely housed in the placement repo, to create the placement database > and copy the placement-specific tables out of the nova_api > database. Any script like this is likely needed regardless of what we > do in grenade because deployers will need to eventually do this once > placement would drop support for using nova.conf (if we went with > option 1). Yep, and I'm asserting that we should write that script, make grenade do that step, and confirm that it works. I think operators should do that step during the stein upgrade because that's where the fork/split of history and schema is happening. I'll volunteer to do the grenade side at least. Maybe it would help to call out specifically that, IMHO, this can not and should not follow the typical config deprecation process. It's not a simple case of just making sure we "find" the nova-api database in the various configs. The problem is that _after_ the split, they are _not_ the same thing and should not be considered as the same. Thus, I think to avoid major disaster and major time sink for operators later, we need to impose the minor effort now to make sure that they don't take the process of deploying a new service lightly. Jay's original relatively small concern was that deploying a new placement service and failing to properly configure it would result in a placement running with the default, empty, sqlite database. That's a valid concern, and I think all we need to do is make sure we fail in that case, explaining the situation. We just had a hangout on the topic and I think we've come around to the consensus that just removing the default-to-empty-sqlite behavior is the right thing to do. Placement won't magically find nova.conf if it exists and jump into its database, and it also won't do the silly thing of starting up with an empty database if the very important config step is missed in the process of deploying placement itself. Operators will have to deploy the new package and do the database surgery (which we will provide instructions and a script for) as part of that process, but there's really no other sane alternative without changing the current agreed-to plan regarding the split. Is everyone okay with the above summary of the outcome? 
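For anyone following along at home, the "very important config step" here is
essentially just pointing placement at its own database (and its own service
credentials) in placement.conf. A minimal sketch, assuming a MySQL backend
and that the section/option names match what is in the extracted repo:

[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

The keystone_authtoken settings and the rest would be carried over from the
existing deployment as appropriate.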
--Dan From mnaser at vexxhost.com Fri Sep 7 15:24:51 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 7 Sep 2018 11:24:51 -0400 Subject: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction In-Reply-To: References: <93f6eacd-f612-2cd8-28ea-1bce0286c8b7@gmail.com> Message-ID: On Fri, Sep 7, 2018 at 11:18 AM Dan Smith wrote: > > > The other obvious thing is the database. The placement repo code as-is > > today still has the check for whether or not it should use the > > placement database but falls back to using the nova_api database > > [5]. So technically you could point the extracted placement at the > > same nova_api database and it should work. However, at some point > > deployers will clearly need to copy the placement-related tables out > > of the nova_api DB to a new placement DB and make sure the > > 'migrate_version' table is dropped so that placement DB schema > > versions can reset to 1. > > I think it's wrong to act like placement and nova-api schemas are the > same. One is a clone of the other at a point in time, and technically it > will work today. However the placement db sync tool won't do the right > thing, and I think we run the major risk of operators not fully grokking > what is going on here, seeing that pointing placement at nova-api > "works" and move on. Later, when we add the next placement db migration > (which could technically happen in stein), they will either screw their > nova-api schema, or mess up their versioning, or be unable to apply the > placement change. > > > With respect to grenade and making this work in our own upgrade CI > > testing, we have I think two options (which might not be mutually > > exclusive): > > > > 1. Make placement support using nova.conf if placement.conf isn't > > found for Stein with lots of big warnings that it's going away in > > T. Then Rocky nova.conf with the nova_api database configuration just > > continues to work for placement in Stein. I don't think we then have > > any grenade changes to make, at least in Stein for upgrading *from* > > Rocky. Assuming fresh devstack installs in Stein use placement.conf > > and a placement-specific database, then upgrades from Stein to T > > should also be OK with respect to grenade, but likely punts the > > cut-over issue for all other deployment projects (because we don't CI > > with grenade doing Rocky->Stein->T, or FFU in other words). > > As I have said above and in the review, I really think this is the wrong > approach. At the current point of time, the placement schema is a clone > of the nova-api schema, and technically they will work. At the first point > that placement evolves its schema, that will no longer be a workable > solution, unless we also evolve nova-api's database in lockstep. > > > 2. If placement doesn't support nova.conf in Stein, then grenade will > > require an (exceptional) [6] from-rocky upgrade script which will (a) > > write out placement.conf fresh and (b) run a DB migration script, > > likely housed in the placement repo, to create the placement database > > and copy the placement-specific tables out of the nova_api > > database. Any script like this is likely needed regardless of what we > > do in grenade because deployers will need to eventually do this once > > placement would drop support for using nova.conf (if we went with > > option 1). > > Yep, and I'm asserting that we should write that script, make grenade do > that step, and confirm that it works. 
I think operators should do that > step during the stein upgrade because that's where the fork/split of > history and schema is happening. I'll volunteer to do the grenade side > at least. > > Maybe it would help to call out specifically that, IMHO, this can not > and should not follow the typical config deprecation process. It's not a > simple case of just making sure we "find" the nova-api database in the > various configs. The problem is that _after_ the split, they are _not_ > the same thing and should not be considered as the same. Thus, I think > to avoid major disaster and major time sink for operators later, we need > to impose the minor effort now to make sure that they don't take the > process of deploying a new service lightly. I think that's a valid different approach. I'd be okay with this if the appropriate scripts and documentation is out there. In this case, Grenade stuff will be really useful asset to look over upgrades with. > Jay's original relatively small concern was that deploying a new > placement service and failing to properly configure it would result in a > placement running with the default, empty, sqlite database. That's a > valid concern, and I think all we need to do is make sure we fail in > that case, explaining the situation. If it's a hard fail, that seems reasonable and ensures no surprises occur during the upgrade or much later. > We just had a hangout on the topic and I think we've come around to the > consensus that just removing the default-to-empty-sqlite behavior is the > right thing to do. Placement won't magically find nova.conf if it exists > and jump into its database, and it also won't do the silly thing of > starting up with an empty database if the very important config step is > missed in the process of deploying placement itself. Operators will have > to deploy the new package and do the database surgery (which we will > provide instructions and a script for) as part of that process, but > there's really no other sane alternative without changing the current > agreed-to plan regarding the split. > > Is everyone okay with the above summary of the outcome? I've dropped my -1 from this given the discussion https://review.openstack.org/#/c/600157/ > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From jimmy at openstack.org Fri Sep 7 20:32:48 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 07 Sep 2018 15:32:48 -0500 Subject: [Openstack-operators] [ptls] [user survey] User Survey Privacy Message-ID: <5B92E070.1000201@openstack.org> Hi PTLs! A recent question came up regarding public sharing of the Project-Specific feedback questions on the OpenStack User Survey. The short answer is... this is a great idea! This information is meant to help projects improve and the information is not meant to be kept secret. Oddly enough, nobody asked before lbragstad, so thanks for asking! The long answer... I would like to add a little bit of background on the user survey and how we treat the data. Part of the agreement we make with users that fill out the User Survey is we will keep their data anonymized. 
As a result, when we publish data on the website[1] we ensure the user can see data from no fewer than 10 companies at a time. Additionally, the User Committee, who helps with the data analysis, sign an NDA before reviewing any data, which helps to preserve user privacy. All that said, the questions for PTLs are framed as "Project Feedback", so the expectation and hope is that PTLs will not only use it to improve their projects, but will also share it amongst other relevant projects. As excited as we are to have you share this data with the community, we do want to make sure there is nothing that would reveal the identity of the survey taker. We've already vetted the English content, but we are still waiting on translations to finish up. So, if you decide to share the data publicly, please only share the English content for the time being. Feel free to reference this email or hit us up on the user-committee at lists.openstack.org Beyond that, we encourage you to follow in Keystone's footsteps and share this feedback with the mailing list, at the PTG, or even with a buddy. We hope it's valuable to your project and the community at large! Net: PTLs, please share the project feedback publicly (e.g. on the mailing lists) now (with the above caveats). Cheers, Jimmy [1] https://www.openstack.org/analytics -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Sep 10 03:58:08 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 10 Sep 2018 04:58:08 +0100 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? In-Reply-To: References: Message-ID: <20180910035808.GA26854@arabian.linksys.moosehall> Hi James, James Penick wrote: >Hey folks, > Does anyone have experience using zookeeper or redis to handle HA failover >in cinder clusters? I'm guessing you mean failover of an active/passive cinder-volume service? >I know there's docs on pacemaker, however we already >have the other two installed and don't want to add yet another component to >package and maintain in our clusters. I'm afraid I don't, but if you make any progress on this, please let me know as it would be great to document: - how this would work - any pros and cons vs. Pacemaker and maybe I can help with that. One particular question: if the node running the service becomes unreachable, is it safe to fail it over straight away, or is fencing required first? (I'm pretty sure I've asked this same question before, but I can't remember the answer - sorry!) From jungleboyj at gmail.com Mon Sep 10 20:18:37 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 10 Sep 2018 15:18:37 -0500 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? In-Reply-To: <20180910035808.GA26854@arabian.linksys.moosehall> References: <20180910035808.GA26854@arabian.linksys.moosehall> Message-ID: <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> On 9/9/2018 10:58 PM, Adam Spiers wrote: > Hi James, > > James Penick wrote: >> Hey folks, >> Does anyone have experience using zookeeper or redis to handle HA >> failover >> in cinder clusters? > > I'm guessing you mean failover of an active/passive cinder-volume > service? > >> I know there's docs on pacemaker, however we already >> have the other two installed and don't want to add yet another >> component to >> package and maintain in our clusters. > > I'm afraid I don't, but if you make any progress on this, please let > me know as it would be great to document: > >  - how this would work >  - any pros and cons vs. 
Pacemaker > > and maybe I can help with that. > > One particular question: if the node running the service becomes > unreachable, is it safe to fail it over straight away, or is fencing > required first?  (I'm pretty sure I've asked this same question > before, but I can't remember the answer - sorry!) James, I echo Adam's input.  I have only heard of people implementing with pacemaker but there is no reason that this couldn't be tried with other HA solutions. If you are able to try it and document it would be great to add documentation here:  [1] Also, Gorka Eguileor is a good contact as he has been doing much of the work on HA Cinder though his focus is on Active/Active HA. Let us know if you have any further questions. Thanks! Jay From mrhillsman at gmail.com Mon Sep 10 20:30:32 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 10 Sep 2018 15:30:32 -0500 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? In-Reply-To: <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> References: <20180910035808.GA26854@arabian.linksys.moosehall> <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> Message-ID: Additionally if you require some resources to test this against OpenLab is a great resource - https://openlabtesting.org provides more info - https://github.com/theopenlab/resource-requests/issues/new - is where you can skip having to go through the site to do so On Mon, Sep 10, 2018 at 3:19 PM Jay S Bryant wrote: > > On 9/9/2018 10:58 PM, Adam Spiers wrote: > > Hi James, > > > > James Penick wrote: > >> Hey folks, > >> Does anyone have experience using zookeeper or redis to handle HA > >> failover > >> in cinder clusters? > > > > I'm guessing you mean failover of an active/passive cinder-volume > > service? > > > >> I know there's docs on pacemaker, however we already > >> have the other two installed and don't want to add yet another > >> component to > >> package and maintain in our clusters. > > > > I'm afraid I don't, but if you make any progress on this, please let > > me know as it would be great to document: > > > > - how this would work > > - any pros and cons vs. Pacemaker > > > > and maybe I can help with that. > > > > One particular question: if the node running the service becomes > > unreachable, is it safe to fail it over straight away, or is fencing > > required first? (I'm pretty sure I've asked this same question > > before, but I can't remember the answer - sorry!) > James, > > I echo Adam's input. I have only heard of people implementing with > pacemaker but there is no reason that this couldn't be tried with other > HA solutions. > > If you are able to try it and document it would be great to add > documentation here: [1] > > Also, Gorka Eguileor is a good contact as he has been doing much of the > work on HA Cinder though his focus is on Active/Active HA. > > Let us know if you have any further questions. > > Thanks! > Jay > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpenick at gmail.com Mon Sep 10 20:39:07 2018 From: jpenick at gmail.com (James Penick) Date: Mon, 10 Sep 2018 14:39:07 -0600 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? 
In-Reply-To: <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> References: <20180910035808.GA26854@arabian.linksys.moosehall> <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> Message-ID: Ah ok so this is a case of no ones documented it, but it's do-able. If anyone out there has done it we'd be happy to take your notes! Otherwise we'll figure it out and upstream the process. thanks! -James On Mon, Sep 10, 2018 at 2:18 PM Jay S Bryant wrote: > > On 9/9/2018 10:58 PM, Adam Spiers wrote: > > Hi James, > > > > James Penick wrote: > >> Hey folks, > >> Does anyone have experience using zookeeper or redis to handle HA > >> failover > >> in cinder clusters? > > > > I'm guessing you mean failover of an active/passive cinder-volume > > service? > > > >> I know there's docs on pacemaker, however we already > >> have the other two installed and don't want to add yet another > >> component to > >> package and maintain in our clusters. > > > > I'm afraid I don't, but if you make any progress on this, please let > > me know as it would be great to document: > > > > - how this would work > > - any pros and cons vs. Pacemaker > > > > and maybe I can help with that. > > > > One particular question: if the node running the service becomes > > unreachable, is it safe to fail it over straight away, or is fencing > > required first? (I'm pretty sure I've asked this same question > > before, but I can't remember the answer - sorry!) > James, > > I echo Adam's input. I have only heard of people implementing with > pacemaker but there is no reason that this couldn't be tried with other > HA solutions. > > If you are able to try it and document it would be great to add > documentation here: [1] > > Also, Gorka Eguileor is a good contact as he has been doing much of the > work on HA Cinder though his focus is on Active/Active HA. > > Let us know if you have any further questions. > > Thanks! > Jay > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Sep 10 21:32:07 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 10 Sep 2018 16:32:07 -0500 Subject: [Openstack-operators] Cinder HA with zookeeper or redis? In-Reply-To: References: <20180910035808.GA26854@arabian.linksys.moosehall> <24b2e94c-9e03-eedc-6211-acac4412dc31@gmail.com> Message-ID: <4199bddf-bbd3-c500-803a-5aedc1e741d4@gmail.com> James, Sorry, I forgot to include the link to our HA documentation in the earlier e-mail: https://docs.openstack.org/cinder/latest/contributor/high_availability.html Jay On 9/10/2018 3:39 PM, James Penick wrote: > Ah ok so this is a case of no ones documented it, but it's do-able. > > If anyone out there has done it we'd be happy to take your notes! > Otherwise we'll figure it out and upstream the process. > > thanks! > -James > > On Mon, Sep 10, 2018 at 2:18 PM Jay S Bryant > wrote: > > > On 9/9/2018 10:58 PM, Adam Spiers wrote: > > Hi James, > > > > James Penick > wrote: > >> Hey folks, > >> Does anyone have experience using zookeeper or redis to handle HA > >> failover > >> in cinder clusters? > > > > I'm guessing you mean failover of an active/passive cinder-volume > > service? > > > >> I know there's docs on pacemaker, however we already > >> have the other two installed and don't want to add yet another > >> component to > >> package and maintain in our clusters. > > > > I'm afraid I don't, but if you make any progress on this, please let > > me know as it would be great to document: > > > >  - how this would work > >  - any pros and cons vs. 
Pacemaker > > > > and maybe I can help with that. > > > > One particular question: if the node running the service becomes > > unreachable, is it safe to fail it over straight away, or is fencing > > required first?  (I'm pretty sure I've asked this same question > > before, but I can't remember the answer - sorry!) > James, > > I echo Adam's input.  I have only heard of people implementing with > pacemaker but there is no reason that this couldn't be tried with > other > HA solutions. > > If you are able to try it and document it would be great to add > documentation here:  [1] > > Also, Gorka Eguileor is a good contact as he has been doing much > of the > work on HA Cinder though his focus is on Active/Active HA. > > Let us know if you have any further questions. > > Thanks! > Jay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Sep 10 23:10:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 10 Sep 2018 17:10:59 -0600 Subject: [Openstack-operators] [upgrade] request for pre-upgrade check for db purge Message-ID: I created a nova bug [1] to track a request that came up in the upgrades SIG room at the PTG today [2] and would like to see if there is any feedback from other operators/developers that weren't part of the discussion. The basic problem is that failing to archive/purge deleted records* from the database can make upgrades much slower during schema migrations. Anecdotes from the room mentioned that it can be literally impossible to complete upgrades for keystone and heat in certain scenarios if you don't purge the database first. The request was that a configurable limit gets added to each service which is checked as part of the service's pre-upgrade check routine [3] and warn if the number of records to purge is over that limit. For example, the nova-status upgrade check could warn if there are over 100000 deleted records total across all cells databases. Maybe cinder would have something similar for deleted volumes. Keystone could have something for revoked tokens. Another idea in the room was flagging on records over a certain age limit. For example, if there are deleted instances in nova that were deleted >1 year ago. How do people feel about this? It seems pretty straight-forward to me. If people are generally in favor of this, then the question is what would be sane defaults - or should we not assume a default and force operators to opt into this? * nova delete doesn't actually delete the record from the instances table, it flips a value to hide it - you have to archive/purge those records to get them out of the main table. [1] https://bugs.launchpad.net/nova/+bug/1791824 [2] https://etherpad.openstack.org/p/upgrade-sig-ptg-stein [3] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html -- Thanks, Matt From mihalis68 at gmail.com Mon Sep 10 23:53:59 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Mon, 10 Sep 2018 17:53:59 -0600 Subject: [Openstack-operators] revamped ops meetup day 2 Message-ID: Hi All, We (ops meetups team) got several additional suggestions for ops meetups session, so we've attempted to revamp day 2 to fit them in, please see https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=981527336 Given the timing, we'll attempt to confirm the rest of the day starting at 9am over coffee. If you're moderating something tomorrow please check out the adjusted times. 
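Picking up James' zookeeper/redis question from the Cinder HA thread above: tooz, the coordination library OpenStack already ships, supports both ZooKeeper and Redis backends, so one hedged way to sketch an active/passive failover helper is a simple per-cluster leader election. Everything below is illustrative only — the backend URL, member and group names, and the idea of starting cinder-volume from the callback are assumptions, not a documented or agreed approach:

    import time
    from tooz import coordination

    # Illustrative values: a Redis URL such as 'redis://controller:6379'
    # would work the same way with the redis driver.
    coordinator = coordination.get_coordinator(
        'zookeeper://controller:2181', b'volume-host-a')
    coordinator.start(start_heart=True)

    group = b'cinder-volume-failover'
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    def on_elected(event):
        # Hypothetical hook: start (or unpause) the local cinder-volume
        # service once this member wins the election.
        print('%s is now the active cinder-volume host' % event.group_id)

    coordinator.watch_elected_as_leader(group, on_elected)

    while True:
        # run_watchers() invokes on_elected when this member is elected.
        coordinator.run_watchers()
        time.sleep(1)

Whichever backend is used, Adam's fencing question still applies: winning the election does not by itself guarantee that the previously active node has stopped touching the storage backend.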
If something doesn't work for you we'll try and swap sessions to make it work. Cheers Chris, Erik, Sean -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Tue Sep 11 15:01:49 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 11 Sep 2018 08:01:49 -0700 Subject: [Openstack-operators] [openstack-dev] [upgrade] request for pre-upgrade check for db purge In-Reply-To: (Matt Riedemann's message of "Mon, 10 Sep 2018 17:10:59 -0600") References: Message-ID: > How do people feel about this? It seems pretty straight-forward to > me. If people are generally in favor of this, then the question is > what would be sane defaults - or should we not assume a default and > force operators to opt into this? I dunno, adding something to nova.conf that is only used for nova-status like that seems kinda weird to me. It's just a warning/informational sort of thing so it just doesn't seem worth the complication to me. Moving it to an age thing set at one year seems okay, and better than making the absolute limit more configurable. Any reason why this wouldn't just be a command line flag to status if people want it to behave in a specific way from a specific tool? --Dan From mriedemos at gmail.com Tue Sep 11 22:27:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 11 Sep 2018 16:27:12 -0600 Subject: [Openstack-operators] [openstack-dev] [upgrade] request for pre-upgrade check for db purge In-Reply-To: References: Message-ID: <87668fc4-c2a2-9b0e-8c3e-4843319cbd87@gmail.com> On 9/11/2018 9:01 AM, Dan Smith wrote: > I dunno, adding something to nova.conf that is only used for nova-status > like that seems kinda weird to me. It's just a warning/informational > sort of thing so it just doesn't seem worth the complication to me. It doesn't seem complicated to me, I'm not sure why the config is weird, but maybe just because it's config-driven CLI behavior? > > Moving it to an age thing set at one year seems okay, and better than > making the absolute limit more configurable. > > Any reason why this wouldn't just be a command line flag to status if > people want it to behave in a specific way from a specific tool? I always think of the pre-upgrade checks as release-specific and we could drop the old ones at some point, so that's why I wasn't thinking about adding check-specific options to the command - but since we also say it's OK to run "nova-status upgrade check" to verify a green install, it's probably good to leave the old checks in place, i.e. you're likely always going to want those cells v2 and placement checks we added in ocata even long after ocata EOL. -- Thanks, Matt From mihalis68 at gmail.com Tue Sep 11 22:55:50 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 11 Sep 2018 16:55:50 -0600 Subject: [Openstack-operators] Finishing off feedback and Berlin planning? Message-ID: For those of us still at the PTG, we have a bit more to usefully discuss about this PTG, Berlin Forum topics etc. Perhaps we can use the same room (Aspen) tomorrow (Wednesday) and get a bit more done? We have the room, just no projector. If you can join on Wednesday, what time works? Shintaro will leave after Wednesday and anyone remotely near North or South Carolina may well also want to get out, understandably. There's a couple of items Lance Bragstad wants to go over at 9.30 and then some ops will be heading to the UC meeting. So maybe something after lunch? 
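On Matt's pre-upgrade purge check request above: the cleanup tooling already exists (nova-manage db archive_deleted_rows moves soft-deleted rows into the shadow tables), so the new piece is essentially a counting step wired into the status command. Below is a minimal illustrative sketch of such a check, assuming nova's convention that a non-zero "deleted" column marks a soft-deleted row; the table name, threshold and connection URL are placeholders, not a proposed implementation:

    from sqlalchemy import create_engine, text

    def check_soft_deleted_rows(db_url, table='instances', limit=100000):
        """Warn when more than `limit` soft-deleted rows remain in `table`."""
        engine = create_engine(db_url)
        with engine.connect() as conn:
            count = conn.execute(
                text("SELECT COUNT(*) FROM %s WHERE deleted != 0" % table)
            ).scalar()
        if count > limit:
            return ('Warning: %d soft-deleted rows in %s; archive/purge '
                    'before upgrading' % (count, table))
        return 'Success: %d soft-deleted rows in %s' % (count, table)

    # e.g. check_soft_deleted_rows('mysql+pymysql://nova:***@dbhost/nova')

Whether the threshold comes from nova.conf or a command-line flag is exactly the open question in Dan's and Matt's exchange above.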
Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Sep 12 15:47:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 09:47:27 -0600 Subject: [Openstack-operators] Open letter/request to TC candidates (and existing elected officials) Message-ID: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Rather than take a tangent on Kristi's candidacy thread [1], I'll bring this up separately. Kristi said: "Ultimately, this list isn’t exclusive and I’d love to hear your and other people's opinions about what you think the I should focus on." Well since you asked... Some feedback I gave to the public cloud work group yesterday was to get their RFE/bug list ranked from the operator community (because some of the requests are not exclusive to public cloud), and then put pressure on the TC to help project manage the delivery of the top issue. I would like all of the SIGs to do this. The upgrades SIG should rank and socialize their #1 issue that needs attention from the developer community - maybe that's better upgrade CI testing for deployment projects, maybe it's getting the pre-upgrade checks goal done for Stein. The UC should also be doing this; maybe that's the UC saying, "we need help on closing feature gaps in openstack client and/or the SDK". I don't want SIGs to bombard the developers with *all* of their requirements, but I want to get past *talking* about the *same* issues *every* time we get together. I want each group to say, "this is our top issue and we want developers to focus on it." For example, the extended maintenance resolution [2] was purely birthed from frustration about talking about LTS and stable branch EOL every time we get together. It's also the responsibility of the operator and user communities to weigh in on proposed release goals, but the TC should be actively trying to get feedback from those communities about proposed goals, because I bet operators and users don't care about mox removal [3]. I want to see the TC be more of a cross-project project management group, like a group of Ildikos and what she did between nova and cinder to get volume multi-attach done, which took persistent supervision to herd the cats and get it delivered. Lance is already trying to do this with unified limits. Doug is doing this with the python3 goal. I want my elected TC members to be pushing tangible technical deliverables forward. I don't find any value in the TC debating ad nauseam about visions and constellations and "what is openstack?". Scope will change over time depending on who is contributing to openstack, we should just accept this. And we need to realize that if we are failing to deliver value to operators and users, they aren't going to use openstack and then "what is openstack?" won't matter because no one will care. So I encourage all elected TC members to work directly with the various SIGs to figure out their top issue and then work on managing those deliverables across the community because the TC is particularly well suited to do so given the elected position. I realize political and bureaucratic "how should openstack deal with x?" things will come up, but those should not be the priority of the TC. So instead of philosophizing about things like, "should all compute agents be in a single service with a REST API" for hours and hours, every few months - immediately ask, "would doing that get us any closer to achieving top technical priority x?" 
Because if not, or it's so fuzzy in scope that no one sees the way forward, document a decision and then drop it. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html [2] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html -- Thanks, Matt From zhipengh512 at gmail.com Wed Sep 12 15:59:24 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 12 Sep 2018 09:59:24 -0600 Subject: [Openstack-operators] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: Well Public Cloud WG has prepared the ammo as you know and to discuss with TC on Friday :) A hundred percent with you on this matter. On Wed, Sep 12, 2018 at 9:47 AM Matt Riedemann wrote: > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... > > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. I want each group to say, "this is our top > issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. 
> > So I encourage all elected TC members to work directly with the various > SIGs to figure out their top issue and then work on managing those > deliverables across the community because the TC is particularly well > suited to do so given the elected position. I realize political and > bureaucratic "how should openstack deal with x?" things will come up, > but those should not be the priority of the TC. So instead of > philosophizing about things like, "should all compute agents be in a > single service with a REST API" for hours and hours, every few months - > immediately ask, "would doing that get us any closer to achieving top > technical priority x?" Because if not, or it's so fuzzy in scope that no > one sees the way forward, document a decision and then drop it. > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Sep 12 18:25:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 12 Sep 2018 20:25:47 +0200 Subject: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Matt Riedemann wrote: > [...] > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > [...] I agree that we generally need more of those cross-project champions, and generally TC members are in a good position to do that kind of work. The TC itself is also in a good position to "bless" those initiatives and give them some amount of priority (with our limited influence). I'm just a bit worried to limit that role to the elected TC members. 
If we say "it's the role of the TC to do cross-project PM in OpenStack" then we artificially limit the number of people who would sign up to do that kind of work. You mention Ildiko and Lance: they did that line of work without being elected. So I would definitely support having champions to drive SIG cross-project priorities, and use the TC both to support them and as a natural pool of champion candidates -- I would just avoid tying the role to the elected group? -- Thierry Carrez (ttx) From lbragstad at gmail.com Wed Sep 12 18:41:20 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 12 Sep 2018 12:41:20 -0600 Subject: [Openstack-operators] [all] Consistent policy names Message-ID: The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official before starting to rename policies. [0] https://etherpad.openstack.org/p/consistent-policy-names -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Sep 12 18:52:35 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 12 Sep 2018 18:52:35 +0000 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: So +1 Tim From: Lance Bragstad Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 12 September 2018 at 20:43 To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators Subject: [openstack-dev] [all] Consistent policy names The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official before starting to rename policies. [0] https://etherpad.openstack.org/p/consistent-policy-names -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emccormick at cirrusseven.com Wed Sep 12 21:07:19 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 12 Sep 2018 15:07:19 -0600 Subject: [Openstack-operators] Ops Forum Session Brainstorming Message-ID: Hello everyone, I have set up an etherpad to collect Ops related session ideas for the Forum at the Berlin Summit. Please suggest any topics that you would like to see covered, and +1 existing topics you like. https://etherpad.openstack.org/p/ops-forum-stein Cheers, Erik From dms at danplanet.com Wed Sep 12 21:30:12 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 12 Sep 2018 14:30:12 -0700 Subject: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> (Thierry Carrez's message of "Wed, 12 Sep 2018 20:25:47 +0200") References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: > I'm just a bit worried to limit that role to the elected TC members. If > we say "it's the role of the TC to do cross-project PM in OpenStack" > then we artificially limit the number of people who would sign up to do > that kind of work. You mention Ildiko and Lance: they did that line of > work without being elected. Why would saying that we _expect_ the TC members to do that work limit such activities only to those that are on the TC? I would expect the TC to take on the less-fun or often-neglected efforts that we all know are needed but don't have an obvious champion or sponsor. I think we expect some amount of widely-focused technical or project leadership from TC members, and certainly that expectation doesn't prevent others from leading efforts (even in the areas of proposing TC resolutions, etc) right? --Dan From davanum at gmail.com Wed Sep 12 21:41:45 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 12 Sep 2018 15:41:45 -0600 Subject: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: On Wed, Sep 12, 2018 at 3:30 PM Dan Smith wrote: > > I'm just a bit worried to limit that role to the elected TC members. If > > we say "it's the role of the TC to do cross-project PM in OpenStack" > > then we artificially limit the number of people who would sign up to do > > that kind of work. You mention Ildiko and Lance: they did that line of > > work without being elected. > > Why would saying that we _expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. > > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? > +1 Dan! 
> --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 12 21:55:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 21:55:28 +0000 Subject: [Openstack-operators] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: [...] > So I encourage all elected TC members to work directly with the > various SIGs to figure out their top issue and then work on > managing those deliverables across the community because the TC is > particularly well suited to do so given the elected position. [...] I almost agree with you. I think the OpenStack TC members should be actively engaged in recruiting and enabling interested people in the community to do those things, but I don't think such work should be solely the domain of the TC and would hate to give the impression that you must be on the TC to have such an impact. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zhipengh512 at gmail.com Wed Sep 12 22:03:12 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 12 Sep 2018 16:03:12 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > [...] > > So I encourage all elected TC members to work directly with the > > various SIGs to figure out their top issue and then work on > > managing those deliverables across the community because the TC is > > particularly well suited to do so given the elected position. > [...] > > I almost agree with you. I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. > -- > Jeremy Stanley > Jeremy, this is not to say that one must be on the TC to have such an impact, it is that TC has the duty more than anyone else to get this specific cross-project goal done. I would even argue it is not the job description of TC to enable/recruit, but to just do it. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Sep 12 22:14:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 22:14:17 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> On 2018-09-12 16:03:12 -0600 (-0600), Zhipeng Huang wrote: > On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley wrote: > > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote: > > [...] > > > So I encourage all elected TC members to work directly with the > > > various SIGs to figure out their top issue and then work on > > > managing those deliverables across the community because the TC is > > > particularly well suited to do so given the elected position. > > [...] > > > > I almost agree with you. I think the OpenStack TC members should be > > actively engaged in recruiting and enabling interested people in the > > community to do those things, but I don't think such work should be > > solely the domain of the TC and would hate to give the impression > > that you must be on the TC to have such an impact. > > Jeremy, this is not to say that one must be on the TC to have such an > impact, it is that TC has the duty more than anyone else to get this > specific cross-project goal done. I would even argue it is not the job > description of TC to enable/recruit, but to just do it. I think Doug's work leading the Python 3 First effort is a great example. He has helped find and enable several other goal champions to collaborate on this. I appreciate the variety of other things Doug already does with his available time and would rather he not stop doing those things to spend all his time acting as a project manager. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Wed Sep 12 23:00:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:00:04 -0600 Subject: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> Message-ID: On 9/12/2018 3:30 PM, Dan Smith wrote: >> I'm just a bit worried to limit that role to the elected TC members. If >> we say "it's the role of the TC to do cross-project PM in OpenStack" >> then we artificially limit the number of people who would sign up to do >> that kind of work. You mention Ildiko and Lance: they did that line of >> work without being elected. > Why would saying that we_expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. 
> > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? Absolutely. I'm not saying the cross-project project management should be restricted to or solely the responsibility of the TC. It's obvious there are people outside of the TC that have already been doing this - and no it's not always elected PTLs either. What I want is elected TC members to prioritize driving technical deliverables to completion based on ranked input from operators/users/SIGs over philosophical debates and politics/bureaucracy and help to complete the technical tasks if possible. -- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:01:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:01:42 -0600 Subject: [Openstack-operators] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> Message-ID: <970b673d-be91-f763-86a1-31f5e9ce52a3@gmail.com> On 9/12/2018 3:55 PM, Jeremy Stanley wrote: > I almost agree with you. I think the OpenStack TC members should be > actively engaged in recruiting and enabling interested people in the > community to do those things, but I don't think such work should be > solely the domain of the TC and would hate to give the impression > that you must be on the TC to have such an impact. See my reply to Thierry. This isn't what I'm saying. But I expect the elected TC members to be *much* more *directly* involved in managing and driving hard cross-project technical deliverables. -- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:03:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:03:10 -0600 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> Message-ID: On 9/12/2018 4:14 PM, Jeremy Stanley wrote: > I think Doug's work leading the Python 3 First effort is a great > example. He has helped find and enable several other goal champions > to collaborate on this. I appreciate the variety of other things > Doug already does with his available time and would rather he not > stop doing those things to spend all his time acting as a project > manager. I specifically called out what Doug is doing as an example of things I want to see the TC doing. I want more/all TC members doing that. -- Thanks, Matt From fungi at yuggoth.org Wed Sep 12 23:06:54 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Sep 2018 23:06:54 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912221417.hxmsq6smyaxvvyqo@yuggoth.org> Message-ID: <20180912230654.5ldabmmtxlusrxep@yuggoth.org> On 2018-09-12 17:03:10 -0600 (-0600), Matt Riedemann wrote: > On 9/12/2018 4:14 PM, Jeremy Stanley wrote: > > I think Doug's work leading the Python 3 First effort is a great > > example. 
He has helped find and enable several other goal champions > > to collaborate on this. I appreciate the variety of other things > > Doug already does with his available time and would rather he not > > stop doing those things to spend all his time acting as a project > > manager. > > I specifically called out what Doug is doing as an example of > things I want to see the TC doing. I want more/all TC members > doing that. With that I was replying to Zhipeng Huang's message which you have trimmed above, specifically countering the assertion that recruiting others to help with these efforts is a waste of time and that TC members should simply do all the work themselves instead. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Wed Sep 12 23:50:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:50:30 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180912231338.f2v5so7jelg3am7y@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> Message-ID: <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> On 9/12/2018 5:13 PM, Jeremy Stanley wrote: > Sure, and I'm saying that instead I think the influence of TC > members_can_ be more valuable in finding and helping additional > people to do these things rather than doing it all themselves, and > it's not just about the limited number of available hours in the day > for one person versus many. The successes goal champions experience, > the connections they make and the elevated reputation they gain > throughout the community during the process of these efforts builds > new leaders for us all. Again, I'm not saying TC members should be doing all of the work themselves. That's not realistic, especially when critical parts of any major effort are going to involve developers from projects on which none of the TC members are active contributors (e.g. nova). I want to see TC members herd cats, for lack of a better analogy, and help out technically (with code) where possible. Given the repeated mention of how the "help wanted" list continues to not draw in contributors, I think the recruiting role of the TC should take a back seat to actually stepping in and helping work on those items directly. For example, Sean McGinnis is taking an active role in the operators guide and other related docs that continue to be discussed at every face to face event since those docs were dropped from openstack-manuals (in Pike). I think it's fair to say that the people generally elected to the TC are those most visible in the community (it's a popularity contest) and those people are generally the most visible because they have the luxury of working upstream the majority of their time. As such, it's their duty to oversee and spend time working on the hard cross-project technical deliverables that operators and users are asking for, rather than think of an infinite number of ways to try and draw *others* to help work on those gaps. 
As I think it's the role of a PTL within a given project to have a finger on the pulse of the technical priorities of that project and manage the developers involved (of which the PTL certainly may be one), it's the role of the TC to do the same across openstack as a whole. If a PTL doesn't have the time or willingness to do that within their project, they shouldn't be the PTL. The same goes for TC members IMO. -- Thanks, Matt From mriedemos at gmail.com Wed Sep 12 23:52:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 12 Sep 2018 17:52:06 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> Message-ID: <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> On 9/12/2018 5:32 PM, Melvin Hillsman wrote: > We basically spent the day focusing on two things specific to what you > bring up and are in agreement with you regarding action not just talk > around feedback and outreach. [1] > We wiped the agenda clean, discussed our availability (set reasonable > expectations), and revisited how we can be more diligent and successful > around these two principles which target your first comment, "...get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue..." > > I will not get into much detail because again this response is specific > to a portion of your email so in keeping with feedback and outreach the > UC is making it a point to be intentional. We have already got action > items [2] which target the concern you raise. We have agreed to hold > each other accountable and adjusted our meeting structure to facilitate > being successful. > > Not that the UC (elected members) are the only ones who can do this but > we believe it is our responsibility to; regardless of what anyone else > does. The UC is also expected to enlist others and hopefully through our > efforts others are encouraged participate and enlist others. > > [1] https://etherpad.openstack.org/p/uc-stein-ptg > [2] https://etherpad.openstack.org/p/UC-Election-Qualifications Awesome, thank you Melvin and others on the UC. -- Thanks, Matt From mrhillsman at gmail.com Thu Sep 13 02:08:10 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 12 Sep 2018 20:08:10 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <6e3b031f-c450-22fd-391c-c71c8ad827cd@gmail.com> Message-ID: You're welcome! -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 On Wed, Sep 12, 2018, 5:52 PM Matt Riedemann wrote: > On 9/12/2018 5:32 PM, Melvin Hillsman wrote: > > We basically spent the day focusing on two things specific to what you > > bring up and are in agreement with you regarding action not just talk > > around feedback and outreach. 
[1] > > We wiped the agenda clean, discussed our availability (set reasonable > > expectations), and revisited how we can be more diligent and successful > > around these two principles which target your first comment, "...get > > their RFE/bug list ranked from the operator community (because some of > > the requests are not exclusive to public cloud), and then put pressure > > on the TC to help project manage the delivery of the top issue..." > > > > I will not get into much detail because again this response is specific > > to a portion of your email so in keeping with feedback and outreach the > > UC is making it a point to be intentional. We have already got action > > items [2] which target the concern you raise. We have agreed to hold > > each other accountable and adjusted our meeting structure to facilitate > > being successful. > > > > Not that the UC (elected members) are the only ones who can do this but > > we believe it is our responsibility to; regardless of what anyone else > > does. The UC is also expected to enlist others and hopefully through our > > efforts others are encouraged participate and enlist others. > > > > [1] https://etherpad.openstack.org/p/uc-stein-ptg > > [2] https://etherpad.openstack.org/p/UC-Election-Qualifications > > Awesome, thank you Melvin and others on the UC. > > -- > > Thanks, > > Matt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Sep 13 14:19:21 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Sep 2018 23:19:21 +0900 Subject: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <165d34cf822.b5f4da7688669.7192778226044204749@ghanshyammann.com> ---- On Thu, 13 Sep 2018 00:47:27 +0900 Matt Riedemann wrote ---- > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... > > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. I want each group to say, "this is our top > issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. 
It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. I agree with this, and I feel it is real value we can add given the current situation where contributors are scarce in almost all of the projects. When we set goals for a cycle, we should give user/operator/SIG priorities weight in the selection checklist and categorize each goal with a tag such as "user-oriented" or "coding-oriented" (benefiting only developers and code maintenance). We could then concentrate more on the first category and leave the second more to the project teams, which can plan those items according to their own bandwidth and priorities. I am not saying code/developer-oriented goals should not be initiated by the TC, just that they should sit lower on the priority list. -gmann > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. 
> > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Kevin.Fox at pnnl.gov Thu Sep 13 16:14:22 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 13 Sep 2018 16:14:22 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> , Message-ID: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> How about stated this way, Its the tc's responsibility to get it done. Either by delegating the activity, or by doing it themselves. But either way, it needs to get done. Its a ball that has been dropped too much in OpenStacks history. If no one is ultimately responsible, balls will keep getting dropped. Thanks, Kevin ________________________________________ From: Matt Riedemann [mriedemos at gmail.com] Sent: Wednesday, September 12, 2018 4:00 PM To: Dan Smith; Thierry Carrez Cc: OpenStack Development Mailing List (not for usage questions); openstack-sigs at lists.openstack.org; openstack-operators at lists.openstack.org Subject: Re: [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) On 9/12/2018 3:30 PM, Dan Smith wrote: >> I'm just a bit worried to limit that role to the elected TC members. If >> we say "it's the role of the TC to do cross-project PM in OpenStack" >> then we artificially limit the number of people who would sign up to do >> that kind of work. You mention Ildiko and Lance: they did that line of >> work without being elected. > Why would saying that we_expect_ the TC members to do that work limit > such activities only to those that are on the TC? I would expect the TC > to take on the less-fun or often-neglected efforts that we all know are > needed but don't have an obvious champion or sponsor. > > I think we expect some amount of widely-focused technical or project > leadership from TC members, and certainly that expectation doesn't > prevent others from leading efforts (even in the areas of proposing TC > resolutions, etc) right? Absolutely. I'm not saying the cross-project project management should be restricted to or solely the responsibility of the TC. It's obvious there are people outside of the TC that have already been doing this - and no it's not always elected PTLs either. What I want is elected TC members to prioritize driving technical deliverables to completion based on ranked input from operators/users/SIGs over philosophical debates and politics/bureaucracy and help to complete the technical tasks if possible. 
-- Thanks, Matt _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From zhipengh512 at gmail.com Thu Sep 13 16:38:31 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 13 Sep 2018 10:38:31 -0600 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Sep 13, 2018 at 10:15 AM Fox, Kevin M wrote: > How about stated this way, > Its the tc's responsibility to get it done. Either by delegating the > activity, or by doing it themselves. But either way, it needs to get done. > Its a ball that has been dropped too much in OpenStacks history. If no one > is ultimately responsible, balls will keep getting dropped. > > Thanks, > Kevin > +1 Kevin -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel at cassi.ba Thu Sep 13 19:58:28 2018 From: samuel at cassi.ba (Samuel Cassiba) Date: Thu, 13 Sep 2018 12:58:28 -0700 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <03705d03-d986-285a-8b17-c2ae554ed11d@openstack.org> <1A3C52DFCD06494D8528644858247BF01C19A62A@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Sep 13, 2018 at 9:14 AM, Fox, Kevin M wrote: > How about stated this way, > Its the tc's responsibility to get it done. Either by delegating the activity, or by doing it themselves. But either way, it needs to get done. Its a ball that has been dropped too much in OpenStacks history. If no one is ultimately responsible, balls will keep getting dropped. > > Thanks, > Kevin I see the role of TC the same way I do the PTL hat, but on more of a meta scale: too much direct involvement can stifle things. On the inverse, not enough involvement can result in people saying one's work is legacy, to be nice, or dead, at worst. All too often, we humans get hung up on the definitions of words, sometimes to the point of inaction. It seems only when someone says sod it do things move forward, regardless of anyone's level of involvement. I look to TC as the group that sets the tone, de facto product owners, to paraphrase from OpenStack's native tongue. The more hands-on an individual is with the output, TC or not, a perception arises that a given effort needs only that person's attention; thereby, setting a much different narrative than might otherwise be immediately noticed or desired. The place I see TC is making sure that there is meaningful progress on agreed-upon efforts, however that needs to exist. 
Sometimes that might be recruiting, but I don't see browbeating social media to be particularly valuable from an individual standpoint. Sometimes that would be collaborating through code, if it comes down to it. From an overarching perspective, I view hands-on coding by TC to be somewhat of a last resort effort due to individual commitments. Perceptions surrounding actions, like the oft used 'stepping up' phrase, creates an effect where people do not carve out enough time to effect change, becoming too busy, repeat ad infinitum. Best, Samuel From fungi at yuggoth.org Thu Sep 13 20:44:29 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 13 Sep 2018 20:44:29 +0000 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> Message-ID: <20180913204428.bydeuacugcydpfxj@yuggoth.org> On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote: [...] > Again, I'm not saying TC members should be doing all of the work > themselves. That's not realistic, especially when critical parts > of any major effort are going to involve developers from projects > on which none of the TC members are active contributors (e.g. > nova). I want to see TC members herd cats, for lack of a better > analogy, and help out technically (with code) where possible. I can respect that. I think that OpenStack made a mistake in naming its community management governance body the "technical" committee. I do agree that having TC members engage in activities with tangible outcomes is preferable, and that the needs of the users of its software should weigh heavily in prioritization decisions, but those are not the only problems our community faces nor is it as if there are no other responsibilities associated with being a TC member. > Given the repeated mention of how the "help wanted" list continues > to not draw in contributors, I think the recruiting role of the TC > should take a back seat to actually stepping in and helping work > on those items directly. For example, Sean McGinnis is taking an > active role in the operators guide and other related docs that > continue to be discussed at every face to face event since those > docs were dropped from openstack-manuals (in Pike). I completely agree that the help wanted list hasn't worked out well in practice. It was based on requests from the board of directors to provide some means of communicating to their business-focused constituency where resources would be most useful to the project. We've had a subsequent request to reorient it to be more like a set of job descriptions along with clearer business use cases explaining the benefit to them of contributing to these efforts. In my opinion it's very much the responsibility of the TC to find ways to accomplish these sorts of things as well. > I think it's fair to say that the people generally elected to the > TC are those most visible in the community (it's a popularity > contest) and those people are generally the most visible because > they have the luxury of working upstream the majority of their > time. 
As such, it's their duty to oversee and spend time working > on the hard cross-project technical deliverables that operators > and users are asking for, rather than think of an infinite number > of ways to try and draw *others* to help work on those gaps. But not everyone who is funded for full-time involvement with the community is necessarily "visible" in ways that make them electable. Higher-profile involvement in such activities over time is what gets them the visibility to be more easily elected to governance positions via "popularity contest" mechanics. > As I think it's the role of a PTL within a given project to have a > finger on the pulse of the technical priorities of that project > and manage the developers involved (of which the PTL certainly may > be one), it's the role of the TC to do the same across openstack > as a whole. If a PTL doesn't have the time or willingness to do > that within their project, they shouldn't be the PTL. The same > goes for TC members IMO. Completely agree, I think we might just disagree on where to strike the balance of purely technical priorities for the TC (as I personally think the TC is somewhat incorrectly named). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From johnsomor at gmail.com Thu Sep 13 23:45:36 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 13 Sep 2018 17:45:36 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" which maps to the "os--api::" format. I selected it as it uses the service-type[1], references the API resource, and then the method. So it maps well to the API reference[2] for the service. [0] https://docs.openstack.org/octavia/latest/configuration/policy.html [1] https://service-types.openstack.org/ [2] https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer Michael On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > > So +1 > > > > Tim > > > > From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 12 September 2018 at 20:43 > To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators > Subject: [openstack-dev] [all] Consistent policy names > > > > The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. > > > > The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). > > > > Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official because starting to renaming policies. 
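[Illustrative note: a rule following the naming format Michael describes above could be registered with oslo.policy roughly as in the sketch below. The check string, description and operation path are placeholders rather than Octavia's actual defaults, and the name itself is exactly what this thread is debating.]

    from oslo_policy import policy

    # Placeholder values; only the name format is the point here.
    rules = [
        policy.DocumentedRuleDefault(
            name='os_load-balancer_api:loadbalancer:post',
            check_str='rule:admin_or_owner',
            description='Create a load balancer.',
            operations=[{'path': '/v2/lbaas/loadbalancers',
                         'method': 'POST'}],
        ),
    ]

    def list_rules():
        # Returned via an entry point so tooling such as
        # oslopolicy-sample-generator can pick the rules up.
        return rules

[A service-type-only convention, as raised later in this thread, would change only the 'name' value, e.g. 'load-balancer:loadbalancer:post'.]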
> > > > [0] https://etherpad.openstack.org/p/consistent-policy-names > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From davanum at gmail.com Fri Sep 14 14:45:05 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 14 Sep 2018 08:45:05 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <20180913204428.bydeuacugcydpfxj@yuggoth.org> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> <20180912215528.kpkxrg7ifaagoyvy@yuggoth.org> <20180912231338.f2v5so7jelg3am7y@yuggoth.org> <9ed16b6f-bc3a-4de3-bbbd-db62ac1ec32d@gmail.com> <20180913204428.bydeuacugcydpfxj@yuggoth.org> Message-ID: Folks, Sorry for the top post - Those of you that are still at PTG, please feel free to drop in to the Clear Creek room today. Thanks, Dims On Thu, Sep 13, 2018 at 2:44 PM Jeremy Stanley wrote: > On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote: > [...] > > Again, I'm not saying TC members should be doing all of the work > > themselves. That's not realistic, especially when critical parts > > of any major effort are going to involve developers from projects > > on which none of the TC members are active contributors (e.g. > > nova). I want to see TC members herd cats, for lack of a better > > analogy, and help out technically (with code) where possible. > > I can respect that. I think that OpenStack made a mistake in naming > its community management governance body the "technical" committee. > I do agree that having TC members engage in activities with tangible > outcomes is preferable, and that the needs of the users of its > software should weigh heavily in prioritization decisions, but those > are not the only problems our community faces nor is it as if there > are no other responsibilities associated with being a TC member. > > > Given the repeated mention of how the "help wanted" list continues > > to not draw in contributors, I think the recruiting role of the TC > > should take a back seat to actually stepping in and helping work > > on those items directly. For example, Sean McGinnis is taking an > > active role in the operators guide and other related docs that > > continue to be discussed at every face to face event since those > > docs were dropped from openstack-manuals (in Pike). > > I completely agree that the help wanted list hasn't worked out well > in practice. It was based on requests from the board of directors to > provide some means of communicating to their business-focused > constituency where resources would be most useful to the project. > We've had a subsequent request to reorient it to be more like a set > of job descriptions along with clearer business use cases explaining > the benefit to them of contributing to these efforts. In my opinion > it's very much the responsibility of the TC to find ways to > accomplish these sorts of things as well. > > > I think it's fair to say that the people generally elected to the > > TC are those most visible in the community (it's a popularity > > contest) and those people are generally the most visible because > > they have the luxury of working upstream the majority of their > > time. 
As such, it's their duty to oversee and spend time working > > on the hard cross-project technical deliverables that operators > > and users are asking for, rather than think of an infinite number > > of ways to try and draw *others* to help work on those gaps. > > But not everyone who is funded for full-time involvement with the > community is necessarily "visible" in ways that make them electable. > Higher-profile involvement in such activities over time is what gets > them the visibility to be more easily elected to governance > positions via "popularity contest" mechanics. > > > As I think it's the role of a PTL within a given project to have a > > finger on the pulse of the technical priorities of that project > > and manage the developers involved (of which the PTL certainly may > > be one), it's the role of the TC to do the same across openstack > > as a whole. If a PTL doesn't have the time or willingness to do > > that within their project, they shouldn't be the PTL. The same > > goes for TC members IMO. > > Completely agree, I think we might just disagree on where to strike > the balance of purely technical priorities for the TC (as I > personally think the TC is somewhat incorrectly named). > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Sep 14 14:46:36 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 14 Sep 2018 08:46:36 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson wrote: > In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" > which maps to the "os--api::" format. > Thanks for explaining the justification, Michael. I'm curious if anyone has context on the "os-" part of the format? I've seen that pattern in a couple different projects. Does anyone know about its origin? Was it something we converted to our policy names because of API names/paths? > > I selected it as it uses the service-type[1], references the API > resource, and then the method. So it maps well to the API reference[2] > for the service. > > [0] https://docs.openstack.org/octavia/latest/configuration/policy.html > [1] https://service-types.openstack.org/ > [2] > https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer > > Michael > On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > > > > So +1 > > > > > > > > Tim > > > > > > > > From: Lance Bragstad > > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > > Date: Wednesday, 12 September 2018 at 20:43 > > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org>, OpenStack Operators < > openstack-operators at lists.openstack.org> > > Subject: [openstack-dev] [all] Consistent policy names > > > > > > > > The topic of having consistent policy names has popped up a few times > this week. Ultimately, if we are to move forward with this, we'll need a > convention. 
To help with that a little bit I started an etherpad [0] that > includes links to policy references, basic conventions *within* that > service, and some examples of each. I got through quite a few projects this > morning, but there are still a couple left. > > > > > > > > The idea is to look at what we do today and see what conventions we can > come up with to move towards, which should also help us determine how much > each convention is going to impact services (e.g. picking a convention that > will cause 70% of services to rename policies). > > > > > > > > Please have a look and we can discuss conventions in this thread. If we > come to agreement, I'll start working on some documentation in oslo.policy > so that it's somewhat official because starting to renaming policies. > > > > > > > > [0] https://etherpad.openstack.org/p/consistent-policy-names > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeology.lab at gmail.com Fri Sep 14 15:41:38 2018 From: codeology.lab at gmail.com (Cody) Date: Fri, 14 Sep 2018 11:41:38 -0400 Subject: [Openstack-operators] [TripleO] undercloud sshd config override Message-ID: Hello folks, I installed TripleO undercloud on a machine with a pre-existing sshd_config that disabled root and password login. The file was rewritten by Puppet after the undercloud installation and was made to allow for both options. This is not a good default practice. Is there a way to set the undercloud to respect any pre-existing sshd_config settings? Thank you to all. Regards, Cody From mriedemos at gmail.com Fri Sep 14 17:24:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 14 Sep 2018 11:24:03 -0600 Subject: [Openstack-operators] [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <594cca34-a710-0c4b-200b-45f892e98581@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> <100034a8-57f3-1eea-a792-97ca1328967c@gmail.com> <594cca34-a710-0c4b-200b-45f892e98581@gmail.com> Message-ID: On 3/28/2018 4:35 PM, Jay Pipes wrote: > On 03/28/2018 03:35 PM, Matt Riedemann wrote: >> On 3/27/2018 10:37 AM, Jay Pipes wrote: >>> >>> If we want to actually fix the issue once and for all, we need to >>> make availability zones a real thing that has a permanent identifier >>> (UUID) and store that permanent identifier in the instance (not the >>> instance metadata). >>> >>> Or we can continue to paper over major architectural weaknesses like >>> this. >> >> Stepping back a second from the rest of this thread, what if we do the >> hard fail bug fix thing, which could be backported to stable branches, >> and then we have the option of completely re-doing this with aggregate >> UUIDs as the key rather than the aggregate name? Because I think the >> former could get done in Rocky, but the latter probably not. 
> > I'm fine with that (and was fine with it before, just stating that > solving the problem long-term requires different thinking) > > Best, > -jay Just FYI for anyone that cared about this thread, we agreed at the Stein PTG to resolve the immediate bug [1] by blocking AZ renames while the AZ has instances in it. There won't be a microversion for that change and we'll be able to backport it (with a release note I suppose). [1] https://bugs.launchpad.net/nova/+bug/1782539 -- Thanks, Matt From zhipengh512 at gmail.com Fri Sep 14 17:49:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 14 Sep 2018 11:49:40 -0600 Subject: [Openstack-operators] [tc]Global Reachout Proposal Message-ID: Hi all, Follow up the diversity discussion we had in the tc session this morning [0], I've proposed a resolution on facilitating technical community in large to engage in global reachout for OpenStack more efficiently. Your feedbacks are welcomed. Whether this should be a new resolution or not at the end of the day, this is a conversation worthy to have. [0] https://review.openstack.org/602697 -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Sep 14 20:16:16 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 14 Sep 2018 14:16:16 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: I don't know for sure, but I assume it is short for "OpenStack" and prefixing OpenStack policies vs. third party plugin policies for documentation purposes. I am guilty of borrowing this from existing code examples[0]. [0] http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html Michael On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad wrote: > > > > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson wrote: >> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >> which maps to the "os--api::" format. > > > Thanks for explaining the justification, Michael. > > I'm curious if anyone has context on the "os-" part of the format? I've seen that pattern in a couple different projects. Does anyone know about its origin? Was it something we converted to our policy names because of API names/paths? > >> >> >> I selected it as it uses the service-type[1], references the API >> resource, and then the method. So it maps well to the API reference[2] >> for the service. 
>> >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html >> [1] https://service-types.openstack.org/ >> [2] https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >> >> Michael >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >> > >> > So +1 >> > >> > >> > >> > Tim >> > >> > >> > >> > From: Lance Bragstad >> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> > Date: Wednesday, 12 September 2018 at 20:43 >> > To: "OpenStack Development Mailing List (not for usage questions)" , OpenStack Operators >> > Subject: [openstack-dev] [all] Consistent policy names >> > >> > >> > >> > The topic of having consistent policy names has popped up a few times this week. Ultimately, if we are to move forward with this, we'll need a convention. To help with that a little bit I started an etherpad [0] that includes links to policy references, basic conventions *within* that service, and some examples of each. I got through quite a few projects this morning, but there are still a couple left. >> > >> > >> > >> > The idea is to look at what we do today and see what conventions we can come up with to move towards, which should also help us determine how much each convention is going to impact services (e.g. picking a convention that will cause 70% of services to rename policies). >> > >> > >> > >> > Please have a look and we can discuss conventions in this thread. If we come to agreement, I'll start working on some documentation in oslo.policy so that it's somewhat official because starting to renaming policies. >> > >> > >> > >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From rico.lin.guanyu at gmail.com Fri Sep 14 21:57:22 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 14 Sep 2018 15:57:22 -0600 Subject: [Openstack-operators] [Openstack-sigs][openstack-dev][all]Expose SIGs/WGs as single window for Users/Ops scenario Message-ID: The Idea has been raising around (from me or from Matt's ML), so I would like to give people more update on this (in terms of what I have been raising, what people have been feedbacks, and what init idea I can collect or I have as actions. *Why are we doing this?* The basic concept for this is to allow users/ops get a single window for important scenario/user cases or issues (here's an example [1])into traceable tasks in single story/place and ask developers be responsible (by changing the mission of government policy) to co-work on that task. SIGs/WGs are so desired to get feedbacks or use cases, so as to project teams (not gonna speak for all projects/SIGs/WGs but we like to collect for more idea for sure). And the project team got a central place to develop for specific user requirements (Edge, NFV, Self-healing, K8s). 
One more idea on this is that we can also use SIGs and WGs as a place for cross-project docs and those documents can be some more general information on how a user can plan for that area (again Edge, NFV, Self-healing, K8s). There also needs clear information to Users/Ops about what's the dependency cross projects which involved. Also, a potential way to expose more projects. From this step, we can plan to cross-project gating ( in projects gate or periodic) implementation *So what's triggering and feedback:* - This idea has been raising as a topic in K8S SIG, Self-healing SIG session. Feedback from K8s-sig and Self-healing-sig are generally looking forward to this. SIGs appears are desired to get use cases and user issues (I didn't so through this idea to rest SIGs/WGs yet, but place leave feedback if you're in that group). Most because this can value up SIGs/WGs on what they're interesting on. - This idea has been raising as a topic in Ops-meetup session Most of ops think it will be super if actually anyone willing to handle their own issues. The concerns about this are that we have to make some structure or guidelines to avoid a crazy number of useless issues (maybe like setup template for issues). Another feedback from an operator is that he concerns about ops should just try to go through everything in detail by themselves and contact to teams by themselves. IMO it depends on teams to set template and say you must have some specific information or even figure out which project should be in charge of which failed. - This idea has been raising as a topic in TC session Public cloud WGs also got this idea as well (and they done a good job!), appears it's a very preferred way for them. What happens to them is public cloud WG collect bunch number of use cases, but would like to see immediate actions or a traceable way to keep tracing those task. Doug: It might be hard to push developers to SIGs/WGs, but SIGs/WGs can always raise the cross-project forum. Also, it's important to let people know that who they can talk to. Melvin: Make it easier for everyone, and give a visibility. How can we possible to make one thing done is very important. Thierry: Have a way to expose the top priority which is important for OpenStack. - Also, raise to some PTLs and UCs. Generally good, Amy (super cute UC member) do ask the concern about there are manual works to bind tasks to cross bug tracing platform (like if you like to create a story in Self-healing SIG, and said it's relative to Heat, and Neutron. you create a task for Heat in that story, but you need to create a launchpad bug and link it to that story.). That issue might in now still need to be manually done, but what we might able to change is to consider migrate most of the relative teams to a single channel in long-term. I didn't get the chance to reach most of PTLs but do hope this is the place PTLs can also share their feedbacks. - There are ML in Self-healing-sig [2] not like a lot of feedback to this ML, but generally looks good *What are the actions we can do right away:* - Please give feedback to us - Give a forum for this topic for all to discuss this (I already add a brainstorm in TC etherpad, but it's across projects, UCs, TCs, WGs, SIGs). 
- Set up a cross-committee discuss for restructuring missions to make sure teams are responsible for hep on development, SIGs/WGs are responsible to trace task as story level and help to trigger cross-project discussion, and operators are responsible to follow the structure to send issues and provide valuable information. - We can also do an experiment on try on SIGs/WGs who and the relative projects are willing to join this for a while and see how the outcomes and adjust on them. - Can we set cross-projects as a goal for a group of projects instead of only community goal? - Also if this is a nice idea, we can have a guideline for SIGs/WGs to like suggest how they can have a cross-project gate, have a way to let users/ops to file story/issue in a format that is useful, or how to trigger the attention from other projects to join this. These are what I got from PTG, but let's start from here together and scratch what's done shall we!! P.S. Sorry about the bad writing, but have to catch a flight. [1] https://storyboard.openstack.org/#!/story/2002684 [2] http://lists.openstack.org/pipermail/openstack-sigs/2018-July/000432.html -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Sep 14 23:25:19 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 14 Sep 2018 17:25:19 -0600 Subject: [Openstack-operators] [nova][publiccloud-wg] Proposal to shelve on stop/suspend Message-ID: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS. The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. 
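[Illustrative note: purely as a sketch of what the proposed behaviour could look like on the wire, assuming a placeholder parameter name and microversion -- the blueprint explicitly leaves naming open, and none of this exists in Nova today.]

    import requests

    NOVA = 'https://nova.example.com/v2.1'
    SERVER_ID = '<server-uuid>'
    HEADERS = {
        'X-Auth-Token': '<token>',
        # A new microversion would gate the new behaviour; 2.99 is a placeholder.
        'X-OpenStack-Nova-API-Version': '2.99',
    }

    # Today the stop action body is simply {"os-stop": null}; the proposal
    # would accept a value such as 'auto', 'offload' or 'retain'.
    body = {'os-stop': {'offload_policy': 'auto'}}

    resp = requests.post('%s/servers/%s/action' % (NOVA, SERVER_ID),
                         json=body, headers=HEADERS)
    resp.raise_for_status()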
Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? [1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt From zhipengh512 at gmail.com Sat Sep 15 00:51:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 14 Sep 2018 18:51:40 -0600 Subject: [Openstack-operators] [tc][uc]Community Wide Long Term Goals Message-ID: Hi, Based upon the discussion we had at the TC session in the afternoon, I'm starting to draft a patch to add long term goal mechanism into governance. It is by no means a complete solution at the moment (still have not thought through the execution method yet to make sure the outcome), but feel free to provide your feedback at https://review.openstack.org/#/c/602799/ . -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Sat Sep 15 03:16:27 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 14 Sep 2018 21:16:27 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: Ok - yeah, I'm not sure what the history behind that is either... I'm mainly curious if that's something we can/should keep or if we are opposed to dropping 'os' and 'api' from the convention (e.g. load-balancer:loadbalancer:post as opposed to os_load-balancer_api:loadbalancer:post) and just sticking with the service-type? On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson wrote: > I don't know for sure, but I assume it is short for "OpenStack" and > prefixing OpenStack policies vs. third party plugin policies for > documentation purposes. > > I am guilty of borrowing this from existing code examples[0]. 
> > [0] > http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html > > Michael > On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad > wrote: > > > > > > > > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson > wrote: > >> > >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" > >> which maps to the "os--api::" format. > > > > > > Thanks for explaining the justification, Michael. > > > > I'm curious if anyone has context on the "os-" part of the format? I've > seen that pattern in a couple different projects. Does anyone know about > its origin? Was it something we converted to our policy names because of > API names/paths? > > > >> > >> > >> I selected it as it uses the service-type[1], references the API > >> resource, and then the method. So it maps well to the API reference[2] > >> for the service. > >> > >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html > >> [1] https://service-types.openstack.org/ > >> [2] > https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer > >> > >> Michael > >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: > >> > > >> > So +1 > >> > > >> > > >> > > >> > Tim > >> > > >> > > >> > > >> > From: Lance Bragstad > >> > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > >> > Date: Wednesday, 12 September 2018 at 20:43 > >> > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org>, OpenStack Operators < > openstack-operators at lists.openstack.org> > >> > Subject: [openstack-dev] [all] Consistent policy names > >> > > >> > > >> > > >> > The topic of having consistent policy names has popped up a few times > this week. Ultimately, if we are to move forward with this, we'll need a > convention. To help with that a little bit I started an etherpad [0] that > includes links to policy references, basic conventions *within* that > service, and some examples of each. I got through quite a few projects this > morning, but there are still a couple left. > >> > > >> > > >> > > >> > The idea is to look at what we do today and see what conventions we > can come up with to move towards, which should also help us determine how > much each convention is going to impact services (e.g. picking a convention > that will cause 70% of services to rename policies). > >> > > >> > > >> > > >> > Please have a look and we can discuss conventions in this thread. If > we come to agreement, I'll start working on some documentation in > oslo.policy so that it's somewhat official because starting to renaming > policies. 
> >> > > >> > > >> > > >> > [0] https://etherpad.openstack.org/p/consistent-policy-names > >> > > >> > _______________________________________________ > >> > OpenStack-operators mailing list > >> > OpenStack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Sat Sep 15 12:38:07 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 15 Sep 2018 12:38:07 +0000 Subject: [Openstack-operators] [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend In-Reply-To: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> References: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> Message-ID: <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> One extra user motivation that came up during past forums was to have a different quota for shelved instances (or remove them from the project quota all together). Currently, I believe that a shelved instance still counts towards the instances/cores quota thus the reduction of usage by the user is not reflected in the quotas. One discussion at the time was that the user is still reserving IPs so it is not zero resource usage and the instances still occupy storage. (We disabled shelving for other reasons so I'm not able to check easily) Tim -----Original Message----- From: Matt Riedemann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, 15 September 2018 at 01:27 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS. The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. 
The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? [1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Tim.Bell at cern.ch Sat Sep 15 14:51:26 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Sat, 15 Sep 2018 14:51:26 +0000 Subject: [Openstack-operators] [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend In-Reply-To: <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> References: <80609709-7b11-f920-5a2b-2b980e936cf3@gmail.com> <01331699-F5B4-44AF-91CF-95416A44910B@cern.ch> Message-ID: <5D0C9FC3-38EF-4F8E-B6F0-7B3B7DD508C0@cern.ch> Found the previous discussion at http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html from 2016. Tim -----Original Message----- From: Tim Bell Date: Saturday, 15 September 2018 at 14:38 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend One extra user motivation that came up during past forums was to have a different quota for shelved instances (or remove them from the project quota all together). 
Currently, I believe that a shelved instance still counts towards the instances/cores quota thus the reduction of usage by the user is not reflected in the quotas. One discussion at the time was that the user is still reserving IPs so it is not zero resource usage and the instances still occupy storage. (We disabled shelving for other reasons so I'm not able to check easily) Tim -----Original Message----- From: Matt Riedemann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Saturday, 15 September 2018 at 01:27 To: "OpenStack Development Mailing List (not for usage questions)" , "openstack-operators at lists.openstack.org" , "openstack-sigs at lists.openstack.org" Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor frees up resources on the host but nova continues to track those resources as being used on the host so the scheduler can't put more servers there. What operators would like to do is that when a user stops a server, nova actually shelve offloads the server from the host so they can schedule new servers on that host. On start/resume of the server, nova would find a new host for the server. This also came up in Vancouver where operators would like to free up limited expensive resources like GPUs when the server is stopped. This is also the behavior in AWS. The problem with shelve is that it's great for operators but users just don't use it, maybe because they don't know what it is and stop works just fine. So how do you get users to opt into shelving their server? I've proposed a high-level blueprint [3] where we'd add a new (microversioned) parameter to the stop API with three options: * auto * offload * retain Naming is obviously up for debate. The point is we would default to auto and if auto is used, the API checks a config option to determine the behavior - offload or retain. By default we would retain for backward compatibility. For users that don't care, they get auto and it's fine. For users that do care, they either (1) don't opt into the microversion or (2) specify the specific behavior they want. I don't think we need to expose what the cloud's configuration for auto is because again, if you don't care then it doesn't matter and if you do care, you can opt out of this. "How do we get users to use the new microversion?" I'm glad you asked. Well, nova CLI defaults to using the latest available microversion negotiated between the client and the server, so by default, anyone using "nova stop" would get the 'auto' behavior (assuming the client and server are new enough to support it). Long-term, openstack client plans on doing the same version negotiation. As for the server status changes, if the server is stopped and shelved, the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I believe this is fine especially if a user is not being specific and doesn't care about the actual backend behavior. On start, the API would allow starting (unshelving) shelved offloaded (rather than just stopped) instances. Trying to hide shelved servers as stopped in the API would be overly complex IMO so I don't want to try and mask that. 
It is possible that a user that stopped and shelved their server could hit a NoValidHost when starting (unshelving) the server, but that really shouldn't happen in a cloud that's configuring nova to shelve by default because if they are doing this, their SLA needs to reflect they have the capacity to unshelve the server. If you can't honor that SLA, don't shelve by default. So, what are the general feelings on this before I go off and start writing up a spec? [1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681 [2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679 [3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jean-philippe at evrard.me Sun Sep 16 13:08:53 2018 From: jean-philippe at evrard.me (Jean-philippe Evrard) Date: Sun, 16 Sep 2018 15:08:53 +0200 Subject: [Openstack-operators] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: <228f6b57-ab51-ad4d-6ea8-fb1f94775030@evrard.me> > For example, the extended maintenance resolution [2] was purely > birthed from frustration about talking about LTS and stable branch EOL > every time we get together. It's also the responsibility of the > operator and user communities to weigh in on proposed release goals, > but the TC should be actively trying to get feedback from those > communities about proposed goals, because I bet operators and users > don't care about mox removal [3]. As the TC is currently vouching for the goals of a cycle, I strongly agree that there is need for the TC to be in-line with the what our users are asking, and those converting business requirements to technical decisions. I strongly agree the TC should be in contact with the UC and SIGs, as both are representing user focuses (the former one is more global, while the latter is more contextual). > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and > cinder to get volume multi-attach done, which took persistent > supervision to herd the cats and get it delivered. Lance is already > trying to do this with unified limits. Doug is doing this with the > python3 goal. I want my elected TC members to be pushing tangible > technical deliverables forward. Multiple points in that. 1) I agree with the "I want my elected TC members to be pushing tangible technical deliverables forward.". 2) I acknowledge the fact that not all the TC members are in a position like Ildikos or Doug. I am glad I am in an employer that believe in contributing upstream and lets me enough room to do so, being given a good incentive to do so. 3) "Necessity" + "sufficiency" vs expectations. I'd like TC members to give their times to push technical changes forward. But I would also hope non TC members would doing so. So, I am fine with Dan's opinion: _expected_ to work on improving technically OpenStack, and therefore helping PTLs (and other people) to focus on their work/"other side of the pond". 4) If you are to think TC as companies' project managers, I would think this view is incomplete. 
At best it would be program managers and/or product owners that can/should take a project manager role. The problem with that notion is that project managers have 3 axis to play with (time, scope, and cost), where TC members only have one with community goals (scope, as time is constrained to a cycle, and cost is unclear/outside PM hands). If you've been to that position for a long time, you know this cannot be healthy and very demoralizing. For me, there is a small link that can _wrongly_ be done: as the TC is an official "organism" of OpenStack, it could as some point be expected to deliver these projects intact in a timely fashion without having the resources to do so. So, for me, the best way to think the goals should be a 'best effort' work, and everyone championing is expected to do their best. I think we are good at that for now, and doesn't need change. If you change the mindset into being expected to deliver (as this could become a very strong force for openstack), I'd say there are two risks: - More time involved in PM duties to gather resources upfront - Less deliverables proposed, as some could be higher risk and therefore not tried. - Possible finger pointing to "this champion didn't manage to achieve its goals" or diluted goals when no resource available. I am therefore not sure we'll able to go to that mindset in the current way OpenStack and companies are organized. > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Agree. I have an opinion of what OpenStack is, but I don't believe discussing it over the mailing list in this message would be helpful in any way. We can have this chat over drinks to see if we are aligned, but I am not sure it matters :p > So I encourage all elected TC members to work directly with the > various SIGs to figure out their top issue and then work on managing > those deliverables across the community because the TC is particularly > well suited to do so given the elected position. Agreed. > I realize political and bureaucratic "how should openstack deal with > x?" things will come up, but those should not be the priority of the TC. Your question is unclear to me. If users want x, this is a good feedback for TC, and therefore should be passed to projects. If x is 'how things are done technically in a project', I do not believe TC have to deal with that: maybe some tc members would deal with it, but not as tc members, more said projects contributors. if x is a governance of OpenStack topic, I would hope tc would get involved the earliest possible. > So instead of philosophizing about things like, "should all compute > agents be in a single service with a REST API" for hours and hours, > every few months - immediately ask, "would doing that get us any > closer to achieving top technical priority x?" Because if not, or it's > so fuzzy in scope that no one sees the way forward, document a > decision and then drop it. That rises a point of having global design document and decisions, so that we learn better. There is still a lot of tribal knowledge in OpenStack, and we should remove that over time by setting up the right processes. I'd be happy to discuss that with you to have a real/more complete understanding of what you mean there. 
Jean-Philippe Evrard (evrardjp) From zhipengh512 at gmail.com Sun Sep 16 14:28:13 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 16 Sep 2018 07:28:13 -0700 Subject: [Openstack-operators] [tc][uc]Community Wide Long Term Goals In-Reply-To: References: Message-ID: Just a quick update, the execution part of the proposal has been added in patch-2 , so if you have the similar concern shared in Matt's open letter , please help review and comment. On Fri, Sep 14, 2018, 5:51 PM Zhipeng Huang wrote: > Hi, > > Based upon the discussion we had at the TC session in the afternoon, I'm > starting to draft a patch to add long term goal mechanism into governance. > It is by no means a complete solution at the moment (still have not thought > through the execution method yet to make sure the outcome), but feel free > to provide your feedback at https://review.openstack.org/#/c/602799/ . > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Sep 17 02:47:07 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sun, 16 Sep 2018 20:47:07 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg wrote: > I am generally opposed to needlessly prefixing things with "os". > > I would advocate to drop it. > > > On Fri, Sep 14, 2018, 20:17 Lance Bragstad wrote: > >> Ok - yeah, I'm not sure what the history behind that is either... >> >> I'm mainly curious if that's something we can/should keep or if we are >> opposed to dropping 'os' and 'api' from the convention (e.g. >> load-balancer:loadbalancer:post as opposed to >> os_load-balancer_api:loadbalancer:post) and just sticking with the >> service-type? >> >> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson >> wrote: >> >>> I don't know for sure, but I assume it is short for "OpenStack" and >>> prefixing OpenStack policies vs. third party plugin policies for >>> documentation purposes. >>> >>> I am guilty of borrowing this from existing code examples[0]. >>> >>> [0] >>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html >>> >>> Michael >>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad >>> wrote: >>> > >>> > >>> > >>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson >>> wrote: >>> >> >>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >>> >> which maps to the "os--api::" format. >>> > >>> > >>> > Thanks for explaining the justification, Michael. >>> > >>> > I'm curious if anyone has context on the "os-" part of the format? >>> I've seen that pattern in a couple different projects. Does anyone know >>> about its origin? Was it something we converted to our policy names because >>> of API names/paths? 
>>> > >>> >> >>> >> >>> >> I selected it as it uses the service-type[1], references the API >>> >> resource, and then the method. So it maps well to the API reference[2] >>> >> for the service. >>> >> >>> >> [0] >>> https://docs.openstack.org/octavia/latest/configuration/policy.html >>> >> [1] https://service-types.openstack.org/ >>> >> [2] >>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >>> >> >>> >> Michael >>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >>> >> > >>> >> > So +1 >>> >> > >>> >> > >>> >> > >>> >> > Tim >>> >> > >>> >> > >>> >> > >>> >> > From: Lance Bragstad >>> >> > Reply-To: "OpenStack Development Mailing List (not for usage >>> questions)" >>> >> > Date: Wednesday, 12 September 2018 at 20:43 >>> >> > To: "OpenStack Development Mailing List (not for usage questions)" < >>> openstack-dev at lists.openstack.org>, OpenStack Operators < >>> openstack-operators at lists.openstack.org> >>> >> > Subject: [openstack-dev] [all] Consistent policy names >>> >> > >>> >> > >>> >> > >>> >> > The topic of having consistent policy names has popped up a few >>> times this week. Ultimately, if we are to move forward with this, we'll >>> need a convention. To help with that a little bit I started an etherpad [0] >>> that includes links to policy references, basic conventions *within* that >>> service, and some examples of each. I got through quite a few projects this >>> morning, but there are still a couple left. >>> >> > >>> >> > >>> >> > >>> >> > The idea is to look at what we do today and see what conventions we >>> can come up with to move towards, which should also help us determine how >>> much each convention is going to impact services (e.g. picking a convention >>> that will cause 70% of services to rename policies). >>> >> > >>> >> > >>> >> > >>> >> > Please have a look and we can discuss conventions in this thread. >>> If we come to agreement, I'll start working on some documentation in >>> oslo.policy so that it's somewhat official because starting to renaming >>> policies. 
>>> >> > >>> >> > >>> >> > >>> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >>> >> > >>> >> > _______________________________________________ >>> >> > OpenStack-operators mailing list >>> >> > OpenStack-operators at lists.openstack.org >>> >> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >>> >> >>> __________________________________________________________________________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > _______________________________________________ >>> > OpenStack-operators mailing list >>> > OpenStack-operators at lists.openstack.org >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Sep 17 15:41:25 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 17 Sep 2018 09:41:25 -0600 Subject: [Openstack-operators] [TripleO] undercloud sshd config override In-Reply-To: References: Message-ID: On Fri, Sep 14, 2018 at 9:41 AM, Cody wrote: > Hello folks, > > I installed TripleO undercloud on a machine with a pre-existing > sshd_config that disabled root and password login. The file was > rewritten by Puppet after the undercloud installation and was made to > allow for both options. This is not a good default practice. Is there > a way to set the undercloud to respect any pre-existing sshd_config > settings? > It depends on the version you're using. The basics are that you'll have to provide your sshd_config to the undercloud installation so that it can be merged with the one from tripleo. For >= Rocky you can use a custom_env_file to provide an updated SshServerOptions. The default can be viewed: https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/sshd.yaml#L41 For <= Queens you can use a hieradata override to specify an override for tripleo::profile::base::sshd::options. The defaults can be viewed: https://github.com/openstack/instack-undercloud/blob/ed96987af5a77579366b27a44d94442f33cd811a/elements/puppet-stack-config/os-apply-config/etc/puppet/hieradata/RedHat.yaml#L3 Thanks, -Alex > Thank you to all. > > Regards, > Cody > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From codeology.lab at gmail.com Mon Sep 17 15:56:18 2018 From: codeology.lab at gmail.com (Cody) Date: Mon, 17 Sep 2018 11:56:18 -0400 Subject: [Openstack-operators] [TripleO] undercloud sshd config override In-Reply-To: References: Message-ID: That solved my problem. Thank you so much, Alex. 
Best regards, Cody On Mon, Sep 17, 2018 at 11:42 AM Alex Schultz wrote: > > On Fri, Sep 14, 2018 at 9:41 AM, Cody wrote: > > Hello folks, > > > > I installed TripleO undercloud on a machine with a pre-existing > > sshd_config that disabled root and password login. The file was > > rewritten by Puppet after the undercloud installation and was made to > > allow for both options. This is not a good default practice. Is there > > a way to set the undercloud to respect any pre-existing sshd_config > > settings? > > > > It depends on the version you're using. The basics are that you'll > have to provide your sshd_config to the undercloud installation so > that it can be merged with the one from tripleo. > > For >= Rocky you can use a custom_env_file to provide an updated > SshServerOptions. The default can be viewed: > https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/sshd.yaml#L41 > > For <= Queens you can use a hieradata override to specify an override > for tripleo::profile::base::sshd::options. The defaults can be > viewed: https://github.com/openstack/instack-undercloud/blob/ed96987af5a77579366b27a44d94442f33cd811a/elements/puppet-stack-config/os-apply-config/etc/puppet/hieradata/RedHat.yaml#L3 > > Thanks, > -Alex > > > Thank you to all. > > > > Regards, > > Cody > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jimmy at openstack.org Mon Sep 17 16:13:47 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 17 Sep 2018 11:13:47 -0500 Subject: [Openstack-operators] Forum Topic Submission Period Message-ID: <5B9FD2BB.3060806@openstack.org> Hello Everyone! The Forum Topic Submission session started September 12 and will run through September 26th. Now is the time to wrangle the topics you gathered during your Brainstorming Phase and start pushing forum topics through. Don't rely only on a PTL to make the agenda... step on up and place the items you consider important front and center. As you may have noticed on the Forum Wiki (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP tool this year. We did our best to remove Summit specific language, but if you notice something, just know that you are submitting to the Forum. URL is here: https://www.openstack.org/summit/berlin-2018/call-for-presentations Looking forward to seeing everyone's submissions! If you have questions or concerns about the process, please don't hesitate to reach out. Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Mon Sep 17 22:31:26 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Mon, 17 Sep 2018 18:31:26 -0400 Subject: [Openstack-operators] Network metadata+userdata and rate limits Message-ID: <7FFCDA97-87DC-46C3-B3DF-F8393122F422@planethoster.info> Hi, We’ve been providing our VMs with metadata from the network for quite some time now and lately we’ve realized that when we reboot compute nodes for updates (so roughly 200 VMs are rebooting at once), some VM can’t access the metadata server. I believe this could be because of the nova-api ratelimiting, but I was unable to find proofs in the logs (No 403 forbidden at all in the log file). So, I was wondering : -Can I disable the nova-api ratelimiting from paste-api.ini just as it was possible in kilo? 
-Can I prevent cloud-init from running its latest user-data when it doesn’t receive metadata? We currently use Pike on centos 7. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Sep 18 01:58:36 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 18 Sep 2018 09:58:36 +0800 Subject: [Openstack-operators] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials) In-Reply-To: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> References: <5511f82b-80d9-5818-b53f-3e7abe7adf93@gmail.com> Message-ID: Hope you all safely travel back to home now. Here is the summarize from some discussions (as much as I can trigger or attend) in PTG for SIGs/WGs expose and some idea for action, http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html I also like the idea to at least expose the information of SIGs/WGs right away. Feel free to give your feedback. And not like the following message matters to anyone, but just in case. I believe this is a goal for all group in the community so just don't let who your duty, position, or full hand of good tasks to limit what you think about the relative of this goal with you. Give your positive or negative opinions to help us get a better shape. On Wed, Sep 12, 2018 at 11:47 PM Matt Riedemann wrote: > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring > this up separately. > > Kristi said: > > "Ultimately, this list isn’t exclusive and I’d love to hear your and > other people's opinions about what you think the I should focus on." > > Well since you asked... > > Some feedback I gave to the public cloud work group yesterday was to get > their RFE/bug list ranked from the operator community (because some of > the requests are not exclusive to public cloud), and then put pressure > on the TC to help project manage the delivery of the top issue. I would > like all of the SIGs to do this. The upgrades SIG should rank and > socialize their #1 issue that needs attention from the developer > community - maybe that's better upgrade CI testing for deployment > projects, maybe it's getting the pre-upgrade checks goal done for Stein. > The UC should also be doing this; maybe that's the UC saying, "we need > help on closing feature gaps in openstack client and/or the SDK". I > don't want SIGs to bombard the developers with *all* of their > requirements, but I want to get past *talking* about the *same* issues > *every* time we get together. I want each group to say, "this is our top > issue and we want developers to focus on it." For example, the extended > maintenance resolution [2] was purely birthed from frustration about > talking about LTS and stable branch EOL every time we get together. It's > also the responsibility of the operator and user communities to weigh in > on proposed release goals, but the TC should be actively trying to get > feedback from those communities about proposed goals, because I bet > operators and users don't care about mox removal [3]. > > I want to see the TC be more of a cross-project project management > group, like a group of Ildikos and what she did between nova and cinder > to get volume multi-attach done, which took persistent supervision to > herd the cats and get it delivered. Lance is already trying to do this > with unified limits. Doug is doing this with the python3 goal. 
I want my > elected TC members to be pushing tangible technical deliverables forward. > > I don't find any value in the TC debating ad nauseam about visions and > constellations and "what is openstack?". Scope will change over time > depending on who is contributing to openstack, we should just accept > this. And we need to realize that if we are failing to deliver value to > operators and users, they aren't going to use openstack and then "what > is openstack?" won't matter because no one will care. > > So I encourage all elected TC members to work directly with the various > SIGs to figure out their top issue and then work on managing those > deliverables across the community because the TC is particularly well > suited to do so given the elected position. I realize political and > bureaucratic "how should openstack deal with x?" things will come up, > but those should not be the priority of the TC. So instead of > philosophizing about things like, "should all compute agents be in a > single service with a REST API" for hours and hours, every few months - > immediately ask, "would doing that get us any closer to achieving top > technical priority x?" Because if not, or it's so fuzzy in scope that no > one sees the way forward, document a decision and then drop it. > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html > [2] > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html > > -- > > Thanks, > > Matt > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Sep 18 02:26:57 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 18 Sep 2018 11:26:57 +0900 Subject: [Openstack-operators] [tc]Global Reachout Proposal In-Reply-To: References: Message-ID: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> ---- On Sat, 15 Sep 2018 02:49:40 +0900 Zhipeng Huang wrote ---- > Hi all, > Follow up the diversity discussion we had in the tc session this morning [0], I've proposed a resolution on facilitating technical community in large to engage in global reachout for OpenStack more efficiently. > Your feedbacks are welcomed. Whether this should be a new resolution or not at the end of the day, this is a conversation worthy to have. > [0] https://review.openstack.org/602697 I like that we are discussing the Global Reachout things which i personally feel is very important. There are many obstacle to have a standard global communication way. Honestly saying, there cannot be any standard communication channel which can accommodate different language, cultures , company/govt restriction. So the better we can do is best solution. I can understand that IRC cannot be used in China which is very painful and mostly it is used weChat. But there are few key points we need to consider for any social app to use? - Technical discussions which needs more people to participate and need ref of links etc cannot be done on mobile app. You need desktop version of that app. - Many of the social app have # of participation, invitation, logging restriction. - Those apps are not restricted to other place. 
- It does not split the community members among more than one app or exiting channel. With all those point, we need to think what all communication channel we really want to promote as community. IMO, we should educate and motivate people to participate over existing channel like IRC, ML as much as possible. At least ML does not have any issue about usage. Ambassador and local user groups people can play a critical role here or local developers (i saw Alex volunteer for nova discussion in china) and they can ask them to start communication in ML or if they cannot then they can start the thread and proxy for them. I know slack is being used for Japan community and most of the communication there is in Japanese so i cannot help there even I join it. When talking to Akira (Japan Ambassador ) and as per him most of the developers do communicate in IRC, ML but users hesitate to do so because of culture and language. So if proposal is to participate community (Developers, TC, UC, Ambassador, User Group members etc) in local chat app and encourage people to move to ML etc then it is great idea. But if we want to promote all different chat app as community practice then, it can lead to lot of other problems than solving the current one. For example: It will divide the technical discussion etc -gmann > -- > Zhipeng (Howard) Huang > Standard EngineerIT Standard & Patent/IT Product LineHuawei Technologies Co,. LtdEmail: huangzhipeng at huawei.comOffice: Huawei Industrial Base, Longgang, Shenzhen > (Previous) > Research AssistantMobile Ad-Hoc Network Lab, Calit2University of California, IrvineEmail: zhipengh at uci.eduOffice: Calit2 Building Room 2402 > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From tobias.rydberg at citynetwork.eu Tue Sep 18 12:05:11 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Tue, 18 Sep 2018 14:05:11 +0200 Subject: [Openstack-operators] [publiccloud-wg] Meeting tomorrow Message-ID: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> Hi everyone, Don't forget that we have a meeting tomorrow at 0700 UTC at IRC channel #openstack-publiccloud. See you all there! Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From fungi at yuggoth.org Tue Sep 18 12:40:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 12:40:50 +0000 Subject: [Openstack-operators] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> Message-ID: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: [...] > I can understand that IRC cannot be used in China which is very > painful and mostly it is used weChat. [...] I have yet to hear anyone provide first-hand confirmation that access to Freenode's IRC servers is explicitly blocked by the mainland Chinese government. 
There has been a lot of speculation that the usual draconian corporate firewall policies (surprise, the rest of the World gets to struggle with those too, it's not just a problem in China) are blocking a variety of messaging protocols from workplace networks and the people who encounter this can't tell the difference because they're already accustomed to much of their other communications being blocked at the border. I too have heard from someone who's heard from someone that "IRC can't be used in China" but the concrete reasons why continue to be missing from these discussions. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Tue Sep 18 13:23:27 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 18 Sep 2018 09:23:27 -0400 Subject: [Openstack-operators] OpenStack Ops Meetups team meeting in ~40 minutes Message-ID: Calendar link http://eavesdrop.openstack.org/calendars/ops-meetup-team.ics Join us on #openstack-operators to discuss last weeks embedded ops meetup at the Denver PTG, the upcoming Forum at the Summit in Berlin this November and possible meetups in 2019. Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Sep 18 13:57:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Sep 2018 13:57:58 +0000 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <20180918135758.2h6fqhwc3ika3xpf@yuggoth.org> On 2018-09-18 14:52:28 +0200 (+0200), Sylvain Bauza wrote: [...] > Why are we discussing about WeChat now? Is that because a large > set of our contributors *can't* access IRC or because they > *prefer* any other? Until we get confirmation either way, I'm going to work under the assumption that there are actual network barriers to using IRC for these contributors and that it's not just a matter of preference. I mainly want to know the source of these barriers because that will determine how to go about addressing them. If it's restrictions imposed by employers, it may be hard for employees to raise the issue in predominantly confrontation-averse cultures. The First Contact SIG is working on a document which outlines the communications and workflows used by our community with a focus on explaining to managers and other staff at contributing organizations what allowances they can make to ease and improve the experience of those they've tasked with working upstream. If the barriers are instead imposed by national government, then urging contributors within those borders to flaunt the law and interact with the rest of our community over IRC is not something which should be taken lightly. That's not to say it can't be solved, but the topic then is a much more political one and our community may not be an appropriate venue for those discussions. > In the past, we made clear for a couple of times why IRC is our > communication channel. I don't see those reasons to be invalid > now, but I'm still open to understand the problems about why our > community becomes de facto fragmented. I think the extended community is already fragmented across a variety of discussion fora. Some watch for relevant hashtags on Twitter and engage in discussions there. 
I gather there's an unofficial OpenStack Slack channel where lots of newcomers show up to ask questions because they assume the OpenStack community relies on Slack the same way the Kubernetes community does, and so a few volunteers from our community hang out there and try to redirect questions to more appropriate places. I've also heard tell of an OpenStack subReddit which some stackers help moderate and try to provide damage control/correct misstatements there. I don't think these are necessarily a problem, and the members of our community who work to spread accurate information to these places are in many cases helping reduce the actual degree of fragmentation. I'm still trying to make up my mind on 602697 which is why I haven't weighed in on the proposal yet. So far I feel like it probably doesn't bring anything new, since we already declare how and where official discussion takes place and the measure doesn't make any attempt to change that. We also don't regulate where unofficial discussions are allowed to take place, and so it doesn't open up any new possibilities which were previously disallowed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Tue Sep 18 13:59:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Sep 2018 15:59:51 +0200 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <204f0abc-6391-3001-deae-b14a8de6710f@openstack.org> Sylvain Bauza wrote: > > > Le mar. 18 sept. 2018 à 14:41, Jeremy Stanley > a écrit : > > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > [...] > > I can understand that IRC cannot be used in China which is very > > painful and mostly it is used weChat. > [...] > > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation > that the usual draconian corporate firewall policies (surprise, the > rest of the World gets to struggle with those too, it's not just a > problem in China) are blocking a variety of messaging protocols from > workplace networks and the people who encounter this can't tell the > difference because they're already accustomed to much of their other > communications being blocked at the border. I too have heard from > someone who's heard from someone that "IRC can't be used in China" > but the concrete reasons why continue to be missing from these > discussions. > > Thanks fungi, that's the crux of the problem I'd like to see discussed > in the governance change. > In this change, it states the non-use of existing and official > communication tools as to be "cumbersome". See my comment on PS1, I > thought the original concern was technical. > > Why are we discussing about WeChat now ? Is that because a large set of > our contributors *can't* access IRC or because they *prefer* any other ? > In the past, we made clear for a couple of times why IRC is our > communication channel. I don't see those reasons to be invalid now, but > I'm still open to understand the problems about why our community > becomes de facto fragmented. Agreed, I'm still trying to grasp the issue we are trying to solve here. 
We really need to differentiate between technical blockers (firewall), cultural blockers (language) and network effect preferences (preferred platform). We should definitely try to address technical blockers, as we don't want to exclude anyone. We can also allow for a bit of flexibility in the tools used in our community, to accommodate cultural blockers as much as we possibly can (keeping in mind that in the end, the code has to be written, proposed and discussed in a single language). We can even encourage community members to reach out on local social networks... But I'm reluctant to pass an official resolution to recommend that TC members engage on specific platforms because "everyone is there". -- Thierry Carrez (ttx) From mihalis68 at gmail.com Tue Sep 18 18:13:09 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 18 Sep 2018 14:13:09 -0400 Subject: [Openstack-operators] Denver Ops Meetup post-mortem Message-ID: Hello All, Last week we had a successful Ops Meetup embedded in the OpenStack Project Team Gathering in Denver. Despite generally being a useful gathering, there were definitely lessons learned and things to work on, so I thought it would be useful to share a post-mortem. I encourage everyone to share their thoughts on this as well. What went well: - some of the sessions were great and a lot of progress was made - overall attendance in the ops room was good - more developers were able to join the discussions - facilities were generally fine - some operators leveraged being at PTG to have useful involvement in other sessions/discussions such as Keystone, User Committee, Self-Healing SIG, not to mention the usual "hallway conversations", and similarly some project devs were able to bring pressing questions directly to operators. What didn't go so well: - Merging into upgrade SIG didn't go particularly well - fewer ops attended (in particular there were fewer from outside the US) - Some of the proposed sessions were not well vetted - some ops who did attend stated the event identity was diluted, it was less attractive - we tried to adjust the day 2 schedule to include late submissions, however it was probably too late in some cases I don't think it's so important to drill down into all the whys and wherefores of how we fell down here except to say that the ops meetups team is a small bunch of volunteers all with day jobs (presumably just like everyone else on this mailing list). The usual, basically. Much more important : what will be done to improve things going forward: - The User Committee has offered to get involved with the technical content. In particular to bring forward topics from other relevant events into the ops meetup planning process, and then take output from ops meetups forward to subsequent events. We (ops meetup team) have welcomed this. - The Ops Meetups Team will endeavor to start topic selection earlier and have a more critical approach. Having a longer list of possible sessions (when starting with material from earlier events) should make it at least possible to devise a better agenda. Agenda quality drives attendance to some extent and so can ensure a virtuous circle. - We need to work out whether we're doing fixed schedule events (similar to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but grafting one onto the other ad-hoc clearly is a terrible idea. This needs more discussion. 
- The Ops Meetups Team continues to explore strange new worlds, or at least get in touch with more and more OpenStack operators to find out what the meetups team and these events could do for them and hence drive the process better. One specific work item here is to help the (widely disparate) operator community with technical issues such as getting setup with the openstack git/gerrit and IRC. The latter is the preferred way for the community to meet, but is particularly difficult now with the registered nickname requirement. We will add help documentation on how to get over this hurdle. - YOUR SUGGESTION HERE Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Sep 18 19:27:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 14:27:03 -0500 Subject: [Openstack-operators] Are we ready to put stable/ocata into extended maintenance mode? Message-ID: The release page says Ocata is planned to go into extended maintenance mode on Aug 27 [1]. There really isn't much to this except it means we don't do releases for Ocata anymore [2]. There is a caveat that project teams that do not wish to maintain stable/ocata after this point can immediately end of life the branch for their project [3]. We can still run CI using tags, e.g. if keystone goes ocata-eol, devstack on stable/ocata can still continue to install from stable/ocata for nova and the ocata-eol tag for keystone. Having said that, if there is no undue burden on the project team keeping the lights on for stable/ocata, I would recommend not tagging the stable/ocata branch end of life at this point. So, questions that need answering are: 1. Should we cut a final release for projects with stable/ocata branches before going into extended maintenance mode? I tend to think "yes" to flush the queue of backports. In fact, [3] doesn't mention it, but the resolution said we'd tag the branch [4] to indicate it has entered the EM phase. 2. Are there any projects that would want to skip EM and go directly to EOL (yes this feels like a Monopoly question)? [1] https://releases.openstack.org/ [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance [4] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life -- Thanks, Matt From sean.mcginnis at gmx.com Tue Sep 18 19:29:40 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 18 Sep 2018 14:29:40 -0500 Subject: [Openstack-operators] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: <20180918192940.GA10869@sm-workstation> On Tue, Sep 18, 2018 at 02:27:03PM -0500, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance mode > on Aug 27 [1]. There really isn't much to this except it means we don't do > releases for Ocata anymore [2]. There is a caveat that project teams that do > not wish to maintain stable/ocata after this point can immediately end of > life the branch for their project [3]. We can still run CI using tags, e.g. > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > install from stable/ocata for nova and the ocata-eol tag for keystone. 
> Having said that, if there is no undue burden on the project team keeping > the lights on for stable/ocata, I would recommend not tagging the > stable/ocata branch end of life at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata branches > before going into extended maintenance mode? I tend to think "yes" to flush > the queue of backports. In fact, [3] doesn't mention it, but the resolution > said we'd tag the branch [4] to indicate it has entered the EM phase. > > 2. Are there any projects that would want to skip EM and go directly to EOL > (yes this feels like a Monopoly question)? > > [1] https://releases.openstack.org/ > [2] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > [4] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > -- > > Thanks, > > Matt I have a patch that's been pending for marking it as extended maintenance: https://review.openstack.org/#/c/598164/ That's just the state for Ocata. You raise some other good points here that I am curious to see input on. Sean From aschultz at redhat.com Tue Sep 18 19:30:20 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 18 Sep 2018 13:30:20 -0600 Subject: [Openstack-operators] [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: On Tue, Sep 18, 2018 at 1:27 PM, Matt Riedemann wrote: > The release page says Ocata is planned to go into extended maintenance mode > on Aug 27 [1]. There really isn't much to this except it means we don't do > releases for Ocata anymore [2]. There is a caveat that project teams that do > not wish to maintain stable/ocata after this point can immediately end of > life the branch for their project [3]. We can still run CI using tags, e.g. > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > install from stable/ocata for nova and the ocata-eol tag for keystone. > Having said that, if there is no undue burden on the project team keeping > the lights on for stable/ocata, I would recommend not tagging the > stable/ocata branch end of life at this point. > > So, questions that need answering are: > > 1. Should we cut a final release for projects with stable/ocata branches > before going into extended maintenance mode? I tend to think "yes" to flush > the queue of backports. In fact, [3] doesn't mention it, but the resolution > said we'd tag the branch [4] to indicate it has entered the EM phase. > > 2. Are there any projects that would want to skip EM and go directly to EOL > (yes this feels like a Monopoly question)? 
> I believe TripleO would like to EOL instead of EM for Ocata as indicated by the thead http://lists.openstack.org/pipermail/openstack-dev/2018-September/134671.html Thanks, -Alex > [1] https://releases.openstack.org/ > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > [4] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From codeology.lab at gmail.com Tue Sep 18 21:42:07 2018 From: codeology.lab at gmail.com (Cody) Date: Tue, 18 Sep 2018 17:42:07 -0400 Subject: [Openstack-operators] [tripleo]Pacemaker in split controller mode In-Reply-To: References: <20180831190634.GA2221@holtby> Message-ID: Hello, I have a follow up question. According to the TripleO docs for RHOPS 13/Queens, "you cannot scale up or scale down a custom role that contains OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services." Does that only apply to the role itself or all the pcmk-managed services within the role? For instance, if I already split out the DB and messaging services with a custom role, can I later create another custom role just to scale up the DB service? Thank you very much. Regards, Cody On Fri, Aug 31, 2018 at 3:52 PM Cody wrote: > > Got it! Thank you, Michele. > > Cheers, > Cody > On Fri, Aug 31, 2018 at 3:07 PM Michele Baldessari wrote: > > > > Hi, > > > > On Fri, Aug 31, 2018 at 01:46:43PM -0400, Cody wrote: > > > A quick question on TripleO. If I take any pacemaker managed services > > > (e.g. database) from the monolithic controller role and put them onto > > > another cluster, would that cluster be managed as a separate pacemaker > > > cluster? > > > > No, if you split off any pcmk-managed services to a separate role they > > will still be managed by a single pacemaker cluster. Since Ocata we have > > composable HA roles, so you can split off DB/messaging/etc to separate > > nodes (roles). They will be all part of a single cluster. > > > > cheers, > > Michele > > -- > > Michele Baldessari > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mriedemos at gmail.com Tue Sep 18 22:30:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 18 Sep 2018 17:30:05 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: <5B9FD2BB.3060806@openstack.org> References: <5B9FD2BB.3060806@openstack.org> Message-ID: <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > Hello Everyone! > > The Forum Topic Submission session started September 12 and will run > through September 26th.  Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum topics > through. Don't rely only on a PTL to make the agenda... step on up and > place the items you consider important front and center. 
> > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, but > if you notice something, just know that you are submitting to the > Forum.  URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy Just a process question. I submitted a presentation for the normal marketing blitz part of the summit which wasn't accepted (I'm still dealing with this emotionally, btw...) but when I look at the CFP link for Forum topics, my thing shows up there as "Received" so does that mean my non-Forum-at-all submission is now automatically a candidate for the Forum because that would not be my intended audience (only suits and big wigs please). -- Thanks, Matt From jimmy at openstack.org Tue Sep 18 22:40:27 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 18 Sep 2018 17:40:27 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> References: <5B9FD2BB.3060806@openstack.org> <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> Message-ID: <5BA17EDB.5060701@openstack.org> Hey Matt, Matt Riedemann wrote: > > Just a process question. Good question. > I submitted a presentation for the normal marketing blitz part of the > summit which wasn't accepted (I'm still dealing with this emotionally, > btw...) If there's anything I can do... > but when I look at the CFP link for Forum topics, my thing shows up > there as "Received" so does that mean my non-Forum-at-all submission > is now automatically a candidate for the Forum because that would not > be my intended audience (only suits and big wigs please). Forum Submissions would be considered separate and non-Forum submissions will not be considered for the Forum. The submission process is based on the track you submit to and, in the case of the Forum, we separate this track out from the rest of the submission process. If you think there is still something funky, send me a note via speakersupport at openstack.org or jimmy at openstack.org and I'll work through it with you. Cheers, Jimmy From emccormick at cirrusseven.com Wed Sep 19 00:54:53 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 18 Sep 2018 20:54:53 -0400 Subject: [Openstack-operators] Ops Forum Session Brainstorming In-Reply-To: References: Message-ID: This is a friendly reminder for anyone wishing to see Ops-focused sessions in Berlin to get your submissions in soon. We have a couple things there that came out of the PTG, but that's it so far. See below for details. Cheers, Erik On Wed, Sep 12, 2018, 5:07 PM Erik McCormick wrote: > Hello everyone, > > I have set up an etherpad to collect Ops related session ideas for the > Forum at the Berlin Summit. Please suggest any topics that you would > like to see covered, and +1 existing topics you like. > > https://etherpad.openstack.org/p/ops-forum-stein > > Cheers, > Erik > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Wed Sep 19 04:35:52 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 19 Sep 2018 12:35:52 +0800 Subject: [Openstack-operators] [User-committee] [publiccloud-wg] Meeting tomorrow In-Reply-To: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> References: <70976fdd-3d0f-dafa-a792-4cb4daf96af1@citynetwork.eu> Message-ID: cc'ed sig list. Kind reminder for the meeting about 2 and half hours away, we will do a review of the denver ptg summary [0] and then go over the forum sessions which we want to propose [1] This is an EU/APAC friendly meeting so please do join us if you are in the region :) [0]https://etherpad.openstack.org/p/publiccloud-wg-stein-ptg-summary [1]https://etherpad.openstack.org/p/BER-forum-public-cloud On Tue, Sep 18, 2018 at 8:05 PM Tobias Rydberg < tobias.rydberg at citynetwork.eu> wrote: > Hi everyone, > > Don't forget that we have a meeting tomorrow at 0700 UTC at IRC channel > #openstack-publiccloud. > > See you all there! > > Cheers, > Tobias > > -- > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Wed Sep 19 13:13:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 19 Sep 2018 08:13:24 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: References: <5B9FD2BB.3060806@openstack.org> <5b5a669d-144c-bcc2-306c-c6410ef705ef@gmail.com> <5BA17EDB.5060701@openstack.org> Message-ID: <5BA24B74.8010301@openstack.org> Sylvain Bauza wrote: > > > Le mer. 19 sept. 2018 à 00:41, Jimmy McArthur > a écrit : SNIP > > > Same as I do :-) Unrelated point, for the first time in all the > Summits I know, I wasn't able to know the track chairs for a specific > track. Ideally, I'd love to reach them in order to know what they > disliked in my proposal. They were listed on an Etherpad that was listed under Presentation Selection Process in the CFP navigation. That has since been overwritten w/ Forum Selection Process, so let me try to dig that up. We publish the Track Chairs every year. > SNIP > > I have another question, do you know why we can't propose a Forum > session with multiple speakers ? Is this a bug or an expected > behaviour ? In general, there is only one moderator for a Forum > session, but in the past, I clearly remember we had some sessions that > were having multiple moderators (for various reasons). Correct. Forum sessions aren't meant to have speakers like a normal presentation. They are all set up parliamentary style w/ one or more moderators. However, the moderator can manage the room any way they'd like. If you want to promote the people that will be in the room, this can be added to the abstract. 
> > -Sylvain > > > Cheers, > Jimmy > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Sep 19 18:10:05 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 19 Sep 2018 13:10:05 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: johnsom (from octavia) had a good idea, which was to use the service types that are defined already [0]. I like this for three reasons, specifically. First, it's already a known convention for services that we can just reuse. Second, it includes a spacing convention (e.g. load-balancer vs load_balancer). Third, it's relatively short since it doesn't include "os" or "api". So long as there isn't any objection to that, we can start figuring out how we want to do the method and resource parts. I pulled some policies into a place where I could try and query them for specific patterns and existing usage [1]. With the representation that I have (nova, neutron, glance, cinder, keystone, mistral, and octavia): - *create* is favored over post (105 occurrences to 7) - *list* is favored over get_all (74 occurrences to 28) - *update* is favored over put/patch (91 occurrences to 10) >From this perspective, using the HTTP method might be slightly redundant for projects using the DocumentedRuleDefault object from oslo.policy since it contains the URL and method for invoking the policy. It also might differ depending on the service implementing the API (some might use put instead of patch to update a resource). Conversely, using the HTTP method in the policy name itself doesn't require use of DocumentedRuleDefault, although its usage is still recommended. Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? [0] https://service-types.openstack.org/service-types.json [1] https://gist.github.com/lbragstad/5000b46f27342589701371c88262c35b#file-policy-names-yaml On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: > If we consider dropping "os", should we entertain dropping "api", too? Do > we have a good reason to keep "api"? > > I wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > > On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg > wrote: > >> I am generally opposed to needlessly prefixing things with "os". >> >> I would advocate to drop it. >> >> >> On Fri, Sep 14, 2018, 20:17 Lance Bragstad wrote: >> >>> Ok - yeah, I'm not sure what the history behind that is either... >>> >>> I'm mainly curious if that's something we can/should keep or if we are >>> opposed to dropping 'os' and 'api' from the convention (e.g. >>> load-balancer:loadbalancer:post as opposed to >>> os_load-balancer_api:loadbalancer:post) and just sticking with the >>> service-type? 
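To make the convention concrete, here is a rough sketch of what a rule defined this way might look like with oslo.policy's DocumentedRuleDefault (the object mentioned above). The names follow the proposed service-type:resource:method pattern; the check strings and API paths below are illustrative placeholders, not any project's actual defaults:

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            # "<service-type>:<resource>:<method>", using the verb "create"
            # rather than the HTTP method "post".
            name='load-balancer:loadbalancer:create',
            check_str='rule:admin_or_owner',  # illustrative placeholder
            description='Create a load balancer.',
            operations=[{'path': '/v2/lbaas/loadbalancers', 'method': 'POST'}],
        ),
        policy.DocumentedRuleDefault(
            name='load-balancer:loadbalancer:list',
            check_str='rule:admin_or_owner',  # illustrative placeholder
            description='List load balancers.',
            operations=[{'path': '/v2/lbaas/loadbalancers', 'method': 'GET'}],
        ),
    ]


    def list_rules():
        return rules

Because DocumentedRuleDefault already records the path and HTTP method, spelling the verb as create/list/update in the policy name stays readable even where a service happens to use PUT instead of PATCH for updates.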
>>> >>> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson >>> wrote: >>> >>>> I don't know for sure, but I assume it is short for "OpenStack" and >>>> prefixing OpenStack policies vs. third party plugin policies for >>>> documentation purposes. >>>> >>>> I am guilty of borrowing this from existing code examples[0]. >>>> >>>> [0] >>>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html >>>> >>>> Michael >>>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad >>>> wrote: >>>> > >>>> > >>>> > >>>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson >>>> wrote: >>>> >> >>>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post" >>>> >> which maps to the "os--api::" format. >>>> > >>>> > >>>> > Thanks for explaining the justification, Michael. >>>> > >>>> > I'm curious if anyone has context on the "os-" part of the format? >>>> I've seen that pattern in a couple different projects. Does anyone know >>>> about its origin? Was it something we converted to our policy names because >>>> of API names/paths? >>>> > >>>> >> >>>> >> >>>> >> I selected it as it uses the service-type[1], references the API >>>> >> resource, and then the method. So it maps well to the API >>>> reference[2] >>>> >> for the service. >>>> >> >>>> >> [0] >>>> https://docs.openstack.org/octavia/latest/configuration/policy.html >>>> >> [1] https://service-types.openstack.org/ >>>> >> [2] >>>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer >>>> >> >>>> >> Michael >>>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell wrote: >>>> >> > >>>> >> > So +1 >>>> >> > >>>> >> > >>>> >> > >>>> >> > Tim >>>> >> > >>>> >> > >>>> >> > >>>> >> > From: Lance Bragstad >>>> >> > Reply-To: "OpenStack Development Mailing List (not for usage >>>> questions)" >>>> >> > Date: Wednesday, 12 September 2018 at 20:43 >>>> >> > To: "OpenStack Development Mailing List (not for usage questions)" >>>> , OpenStack Operators < >>>> openstack-operators at lists.openstack.org> >>>> >> > Subject: [openstack-dev] [all] Consistent policy names >>>> >> > >>>> >> > >>>> >> > >>>> >> > The topic of having consistent policy names has popped up a few >>>> times this week. Ultimately, if we are to move forward with this, we'll >>>> need a convention. To help with that a little bit I started an etherpad [0] >>>> that includes links to policy references, basic conventions *within* that >>>> service, and some examples of each. I got through quite a few projects this >>>> morning, but there are still a couple left. >>>> >> > >>>> >> > >>>> >> > >>>> >> > The idea is to look at what we do today and see what conventions >>>> we can come up with to move towards, which should also help us determine >>>> how much each convention is going to impact services (e.g. picking a >>>> convention that will cause 70% of services to rename policies). >>>> >> > >>>> >> > >>>> >> > >>>> >> > Please have a look and we can discuss conventions in this thread. >>>> If we come to agreement, I'll start working on some documentation in >>>> oslo.policy so that it's somewhat official because starting to renaming >>>> policies. 
>>>> >> > >>>> >> > >>>> >> > >>>> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names >>>> >> > >>>> >> > _______________________________________________ >>>> >> > OpenStack-operators mailing list >>>> >> > OpenStack-operators at lists.openstack.org >>>> >> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >> >>>> >> >>>> __________________________________________________________________________ >>>> >> OpenStack Development Mailing List (not for usage questions) >>>> >> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> > _______________________________________________ >>>> > OpenStack-operators mailing list >>>> > OpenStack-operators at lists.openstack.org >>>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.j.ivey at gmail.com Wed Sep 19 20:52:35 2018 From: david.j.ivey at gmail.com (David Ivey) Date: Wed, 19 Sep 2018 16:52:35 -0400 Subject: [Openstack-operators] zun-ui install Message-ID: Hi, I am having issues getting zun-ui to work in my environment. it is a multinode deployment with queens on Ubuntu16.04 . I installed zun-ui based on the instructions from the stable/queens branch at https://github.com/openstack/zun-ui. I can confirm that everything works with openstack-dashboard, heat-dashboard, designate-dashboard before adding zun-ui. Turning debug on gives me the following error. 
Request Method: POST
Request URL: http://10.10.5.161/horizon/auth/login/
Django Version: 1.11.15
Python Version: 2.7.12
Installed Applications: ['openstack_dashboard.dashboards.project', 'zun_ui', 'heat_dashboard', 'designatedashboard', 'openstack_dashboard.dashboards.admin', 'openstack_dashboard.dashboards.identity', 'openstack_dashboard.dashboards.settings', 'openstack_dashboard', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'django_pyscss', 'openstack_dashboard.django_pyscss_fix', 'compressor', 'horizon', 'openstack_auth']
Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'horizon.middleware.OperationLogMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'horizon.middleware.HorizonMiddleware', 'horizon.themes.ThemeMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerClientMiddleware', 'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerMiddleware')

Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/exception.py" in inner
  41. response = get_response(request)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in _legacy_get_response
  244. response = middleware_method(request)
File "/usr/share/openstack-dashboard/horizon/middleware/base.py" in process_request
  52. if not hasattr(request, "user") or not request.user.is_authenticated():

Exception Type: TypeError at /auth/login/
Exception Value: 'bool' object is not callable

Other than that I do not see any other errors, except the "Something went wrong! An unexpected error has occurred. Try refreshing the page. If that doesn't help, contact your local administrator." message when I go to the dashboard.

Thanks in advance
David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yuxcer at gmail.com Wed Sep 19 23:53:29 2018
From: yuxcer at gmail.com (Xingchao)
Date: Thu, 20 Sep 2018 11:53:29 +1200
Subject: [Openstack-operators] [openstack-dev] [horizon] Dashboard memory leaks
Message-ID: 

Hi All,

Recently, we found that the server hosting the horizon dashboard has been hit by OOM several times, caused by the horizon services. After restarting the dashboard, memory usage climbs very quickly if we access the /project/network_topology/ path.

*How to reproduce*

Log into the dashboard and go to the 'Network Topology' tab, then leave it there (autorefresh 10s by default) and monitor the memory changes on the host.

*Versions and Components*

Dashboard: Stable/Pike
Server: uWSGI 1.9.17-1
OS: Ubuntu 14.04 trusty
Python: 2.7.6

As the memoized code has changed little since Pike, you should also be able to reproduce this on the Queens/Rocky releases.

*The investigation*

The root cause of the memory leak is the decorator memoized (horizon/utils/memoized.py), which is used to cache function calls in Horizon. After disabling it, the memory growth is under control.
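For anyone who has not read that module, here is a minimal sketch of the weakref-keyed memoization pattern being described (a simplified illustration of the general technique, not Horizon's exact code):

    import functools
    import weakref


    def memoized(func):
        # One cache dict per decorated function; it lives as long as the
        # process (or uWSGI worker) does.
        cache = {}

        @functools.wraps(func)
        def wrapper(*args):
            key = []
            for arg in args:
                try:
                    # Weak-referenceable arguments are wrapped so the key
                    # does not keep the argument itself alive.
                    key.append(weakref.ref(arg))
                except TypeError:
                    # Simple values (str, int, ...) cannot be weakly
                    # referenced, so they are stored directly.
                    key.append(arg)
            key = tuple(key)
            if key not in cache:
                # The cached *value* is a normal, strong reference: it
                # stays in the dict until something removes the entry.
                cache[key] = func(*args)
            return cache[key]

        return wrapper

The weak references only make the keys collectable; whether an entry is ever evicted depends on those keys actually dying, so values keyed on plain strings or on long-lived objects simply accumulate in the worker.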
The following is a comparison of the memory change (measured with guppy) for each request of /project/network_topology:

- original (no code change): 684kb
- do garbage collection manually: 185kb
- disable memoized cache: 10kb

As we know, memoized uses weakref to cache objects. A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else. In memory we can see lots of weakref objects; the following is an example:

Partition of a set of 394 objects. Total size = 37824 bytes.
 Index  Count   %     Size    %  Cumulative   %  Kind (class / dict of class)
     0    197  50    18912   50       18912   50  _cffi_backend.CDataGCP
     1    197  50    18912   50       37824  100  weakref.KeyedRef

But the rest of them are not. The following shows the change in memory objects per /project/network_topology access, with garbage collection run manually:

Partition of a set of 1017 objects. Total size = 183680 bytes.
 Index  Count   %     Size    %  Cumulative   %  Referrers by Kind (class / dict of class)
     0    419  41    58320   32       58320   32  dict (no owner)
     1    100  10    23416   13       81736   44  list
     2    135  13    15184    8       96920   53
     3      2   0     6704    4      103624   56  urllib3.connection.VerifiedHTTPSConnection
     4      2   0     6704    4      110328   60  urllib3.connectionpool.HTTPSConnectionPool
     5      1   0     3352    2      113680   62  novaclient.v2.client.Client
     6      2   0     2096    1      115776   63  OpenSSL.SSL.Connection
     7      2   0     2096    1      117872   64  OpenSSL.SSL.Context
     8      2   0     2096    1      119968   65  Queue.LifoQueue
     9     12   1     2096    1      122064   66  dict of urllib3.connectionpool.HTTPSConnectionPool

Most of them are dicts. The following are the dicts sorted by class; as you can see, most of them are not weakref objects:

Partition of a set of 419 objects. Total size = 58320 bytes.
 Index  Count   %     Size    %  Cumulative   %  Class
     0    362  86    50712   87       50712   87  unicode
     1     27   6     3736    6       54448   93  list
     2      5   1     2168    4       56616   97  dict
     3     22   5     1448    2       58064  100  str
     4      2   0      192    0       58256  100  weakref.KeyedRef
     5      1   0       64    0       58320  100  keystoneauth1.discover.Discover

*The issue*

So the problem is that memoized does not work the way we expect. It allocates memory to cache objects, but some of them cannot be released.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org Thu Sep 20 02:04:45 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 19 Sep 2018 21:04:45 -0500
Subject: [Openstack-operators] [openstack-dev] Fwd: Denver Ops Meetup post-mortem
In-Reply-To: References: Message-ID: <5BA3003D.7020405@openstack.org>

Thanks for the thorough write-up as well as the detailed feedback. I'm including some of my notes from the Ops Meetup Feedback session just a bit below, as well as some comments inline. One of the critical things that would help both the Ops and Dev community is to have a holistic sense of what the Ops Meetup goals are.

* Were the goals well defined ahead of the event?
* Were they achieved and/or how can the larger OpenStack community help them achieve them?

From our discussion at the Feedback session, this isn't something that has been tracked in the past. Having actionable, measurable goals coming out of the Ops Meetup could go a long way towards helping the projects realize them. Per our discussion, being able to present this list to the User Committee would be a good step forward for each event.
I wasn't able to attend the entire time, but a couple of interesting notes: * The knowledge of deployment tools seemed pretty fragmented and it seemed like there was a desire for more clear and comprehensive documentation comparing the different deployment options, as well as documentation about how to get started with a POC. * Bare Metal in the Datacenter: It was clear that we need more Ironic 101 content and education, including how to get started, system requirements, etc. We can dig up presentations from previous Summits and also talked to TheJulia about potentially hosting a community meeting or producing another video leading up to the Berlin Summit. * Here are the notes from the sessions in case anyone on the ops list is interested: https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 It looks like there were some action items documented at the bottom of this etherpad: https://etherpad.openstack.org/p/ops-denver-2018-further-work Ops Meetup Feedback Takeways from Feedback Session not covered below (mostly from https://etherpad.openstack.org/p/uc-stein-ptg) Chris Morgan wrote: --SNIP -- > What went well > > - some of the sessions were great and a lot of progress was made > - overall attendance in the ops room was good We had to add 5 tables to accommodate the additional attendees. It was a great crowd! > - more developers were able to join the discussions Given that this is something that wouldn't happen at a normal Ops Meetup, is there a way that would meet the Ops Community needs that we could help facilitate this int he future? > - facilities were generally fine > - some operators leveraged being at PTG to have useful involvement in > other sessions/discussions such as Keystone, User Committee, > Self-Healing SIG, not to mention the usual "hallway conversations", > and similarly some project devs were able to bring pressing questions > directly to operators. > > What didn't go so well: > > - Merging into upgrade SIG didn't go particularly well This is a tough one b/c of the fluidity of the PTG. Agreed that one can end up missing a good chunk of the discussion. OTOH, the flexibility of hte event is what allows great discussions to take place. In the future, I think better coordination w/ specific project teams + updating the PTGBot could help make sure the schedules are in synch. > - fewer ops attended (in particular there were fewer from outside the US) Do you have demographics on the Ops Meetup in Japan or NY? Curious to know how those compare to what we saw in Denver. If there is more promotion needed, or indeed these just end up being more continent/regionally focused? > - Some of the proposed sessions were not well vetted Are there any suggestions on how to improve this moving forward? Perhaps a CFP style submission process, with a small vetting group, could help this situation? My understanding was the Tokyo event, co-located with OpenStack Days, didn't suffer this problem. > - some ops who did attend stated the event identity was diluted, it > was less attractive I'd love some more info on this. Please have these people reach out to let me know how we can fix this in the future. Even if we decide not to hold another Ops Meetup at a PTG, this is relevant to how we run events. 
> - we tried to adjust the day 2 schedule to include late submissions, > however it was probably too late in some cases > > I don't think it's so important to drill down into all the whys and > wherefores of how we fell down here except to say that the ops meetups > team is a small bunch of volunteers all with day jobs (presumably just > like everyone else on this mailing list). The usual, basically. > > Much more important : what will be done to improve things going forward: > > - The User Committee has offered to get involved with the technical > content. In particular to bring forward topics from other relevant > events into the ops meetup planning process, and then take output from > ops meetups forward to subsequent events. We (ops meetup team) have > welcomed this. This is super critical IMO. One of the things we discussed at the Ops Meetup Feedback session (co-located w/ the UC Meeting) was to provide actionable list of takeaways from the meetup as well as measurable list of how you'd like to see them fixed. From the conversation, this isn't something that has occurred before at Ops Meetups, but I think this would be a huge step forward in working towards a solution to your problems. > > - The Ops Meetups Team will endeavor to start topic selection earlier > and have a more critical approach. Having a longer list of possible > sessions (when starting with material from earlier events) should make > it at least possible to devise a better agenda. Agenda quality drives > attendance to some extent and so can ensure a virtuous circle. Agreed 100%. For the Forum, we start about 2 months out. I think it's worth looking at that process to see if anything can be gained there. I'm very happy to assist with advice on this one... > > - We need to work out whether we're doing fixed schedule events > (similar to previous mid-cycle Ops Meetups) or fully flexible > PTG-style events, but grafting one onto the other ad-hoc clearly is a > terrible idea. This needs more discussion. +1 > > - The Ops Meetups Team continues to explore strange new worlds, or at > least get in touch with more and more OpenStack operators to find out > what the meetups team and these events could do for them and hence > drive the process better. One specific work item here is to help the > (widely disparate) operator community with technical issues such as > getting setup with the openstack git/gerrit and IRC. The latter is the > preferred way for the community to meet, but is particularly difficult > now with the registered nickname requirement. We will add help > documentation on how to get over this hurdle. The IRC issues haven't affected me, fortunately. I’d love to hear from anyone who attended, so we can share the learnings and discuss next steps…whether that means investing in documentation/education, proposing Forum sessions for the Berlin Summit, etc. Cheers, Jimmy > > - YOUR SUGGESTION HERE > > Chris > > -- > Chris Morgan > > > > -- > Chris Morgan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anteaya at anteaya.info Thu Sep 20 04:03:48 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 20 Sep 2018 00:03:48 -0400 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: <20180918124049.jw7xbufikxfx3w37@yuggoth.org> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> Message-ID: <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info> On 2018-09-18 08:40 AM, Jeremy Stanley wrote: > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > [...] >> I can understand that IRC cannot be used in China which is very >> painful and mostly it is used weChat. > [...] > > I have yet to hear anyone provide first-hand confirmation that > access to Freenode's IRC servers is explicitly blocked by the > mainland Chinese government. There has been a lot of speculation > that the usual draconian corporate firewall policies (surprise, the > rest of the World gets to struggle with those too, it's not just a > problem in China) are blocking a variety of messaging protocols from > workplace networks and the people who encounter this can't tell the > difference because they're already accustomed to much of their other > communications being blocked at the border. I too have heard from > someone who's heard from someone that "IRC can't be used in China" > but the concrete reasons why continue to be missing from these > discussions. > I'll reply to this email arbitrarily in order to comply with Zhipeng Huang's wishes that the conversation concerned with understanding the actual obstacles to communication takes place on the mailing list. I do hope I am posting to the correct thread. In response to part of your comment on the patch at https://review.openstack.org/#/c/602697/ which you posted about 5 hours ago you said "@Anita you are absolutely right it is only me stuck my head out speaks itself the problem I stated in the patch. Many of the community tools that we are comfortable with are not that accessible to a broader ecosystem. And please assured that I meant I refer the patch to the Chinese community, as Leong also did on the ML, to try to bring them over to join the convo." and I would like to reply. I would like to say that I am honoured by your generosity. Thank you. Now, when the Chinese community consumes the patch, as well as the conversation in the comments, please encourage folks to ask for clarification if any descriptions or phrases don't make sense to them. One of the best ways of ensuring clear communication is to start off slowly and take the time to ask what the other side means. It can seem tedious and a waste of time, but I have found it to be very educational and helpful in understanding how the other person perceives the situation. It also helps me to understand how I am creating obstacles in ways that I talk. Taking time to clarify helps me to adjust how I am speaking so that my meaning is more likely to be understood by the group to which I am trying to offer my perspective. I do appreciate that many people are trying to avoid embarrassment, but I have never found any way to understand people in a culture that is not the one I group up in, other than embarrassing myself and working through it. Usually I find the group I am wanting to understand is more than willing to rescue me from my embarrassment and support me in my learning. 
In a strange way, the embarrassment is kind of helpful in order to create understanding between myself and those people I am trying to understand. Thank you, Anita From rico.lin.guanyu at gmail.com Thu Sep 20 05:22:17 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 20 Sep 2018 13:22:17 +0800 Subject: [Openstack-operators] [openstack-dev][heat] We need more help for actions and review. And some PTG update for Heat Message-ID: Dear all As we reach Stein and start to discuss what we should do in the next cycle, I would like to raise voice for what kind of help we need and what target we planning. BTW, If you really can't start your contact with us with any English (we don't mind any English skill you are) *First of all, we need more developers, and reviewer. I would very much like to give Heat core reviewer title to people if anyone provides fare quality of reviews. So please help us review patches. Let me know if you would like to be a part but got no clue on how can you started.* Second, we need more help to achieve actions. Here I make a list of actions base on what we discuss from PTG [1]. I mark some of them with (*) if it looks like an easy contribution: - (*) Move interop tempest tests to a separate repo - Move python3 functional job to python3.6 - (*) Implement upgrade check - (*) Copy templates from Cue project into the heat-templates repo - (*) Add Stein template versions - (*) Do document improvement or add documents for: - (*) Heat Event Notification list - Nice to have our own document and provide a link to [2] - default heat service didn't enable notification, so might be mention and link to Notify page - (*) Autoscaling doc - (*) AutoHealing doc - (*) heat agent & heat container agent - (*) external resource - (*) Upgrade guideline - (*) Move document from wiki to in repo document - (*) Fix live properties (observe reality) feature and make sure all resource works - remove any legacy pattern from .zuul.yaml - Improve autoscaling and self-healing - Create Tempest test for self-healing scenario (around Heat integration) - (*) Examine all resource type and help to update if they do not sync up with physical resource If you like to learn more of any above tasks, just reach out to me and other core members, and we're more than happy to give you the background and guideline to any of above tasks. Also, welcome to join our meeting and raise topics for any tasks. We actually got more tasks that need to be achieved (didn't list here maybe because it's already start implemented or still under planning), so if you didn't see any interesting task above, you can reach out to me and let me know which specific area you're interested in. Also, you might want to go through [1] or talk to other team members to see if any more comments added in before you start working on any task. Now here are some targets that we start to discuss or work in progress - Multi-cloud support - Within [5], we propose the ability to do multi-cloud orchestration, and the follow-on discussion is how can we provide the ability to use customized SSL options for multi-cloud or multi-region orchestration without violating any security concerns. What we plan to do now (after discussing with security sig (Barbican team)) is to only support cacert for SSL which is less sensitive. Use a template file to store that cacert and give it to keystone session for providing SSL ability to connections. If that sounds like a good idea to all without much concerns, I will implement it asap. 
- Autoscaling and self-healing improvement - This is a big complex task for sure and kind of relative to multiple projects. We got a fair number of users using Autoscaling feature, but not much for self-healing for now. So we will focus on each feature and the integration of two feature separately. - First, Heat got the ability to orchestrate autoscaling, but we need to improve the stability. Still go around our code base to see how can we modulize current implementation, and how can we improve from here, but will update more information for all. We also starting to discuss autoscaling integration [3], which hopefully we can get a better solution and combine forces from Heat and Senlin as a long-term target. Please give your feedback if you also care about this target. - For self-healing, we propose some co-work on cross-project gatting in Self-healing-sig, which we still not generate tempest test out, but assume we can start to set up job and raise discussion for how can we help projects to adopt that job. Also, we got discussions with Octavia team ([7], and [8]) and Monasca team about adopting the ability to support event alarm/notification. Which we plan to put into actions. If anyone also think those are important features, please provide your development resources so we can get those feature done in this cycle. - For integrating two scenarios, I try to add more tasks into [6] and eliminate as many as we can. Also, plan to work on document these scenarios down, so everyone can play with autoscaling+self-healing easily. - Glance resource update - We deprecate image resource in Heat for very long time, and now Glance got the ability to download images by URL, we should be able to adopt new image service and renew/add our Image resources. What's missing is the support of this feature in Devstack for we can use it to test on the gate. There's already discussion raised in ML [9] and in PTG [10]. So hopefully we can help to provide a better test before we adopt the feature. - Non-convergence mode deprecation discussion and User survey update - In PTG UC meeting, UC decides to renew User survey for projects. And Heat now already prepared a new question [4] for it. The reason why we raise that question is that we really like to learn from ops/users about what's adoption rate of convergence mode before we deprecated the non-convergence(legacy) mode. We gonna use that data to decide whether or not we're ready for next action. - KeyPair Issue in Heat Stack - A user-scope resource like KeyPair is a known issue for Heat (because all our actions are project-scope). For example, when User A creates Keypair+Instance in Stack. That keypair is specific user A specific. If we update that stack by User B, keypair will not be accessible (since user B didn't get any authorize to get that keypair). Unless User B can access the same keypair or another Keypair with same name and content. - For action and propose solutions, we gonna send a known issue note for users. Also will try to propose either of these two possible solutions, to make Barbican integrated with Nova Keypair, or allow Keypair to change its scope. I aware there already discussion in Nova team about changing to project-scope, but now we kind of waiting for that discussion to generate actions before we can say this issue is covered. - And more - Again, it's not possible to talk about all feature or plan in a single ML. So please take a look at our storyboard [11] if you like to see anything to be improved. 
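Coming back to the multi-cloud support item at the top of this list: a
minimal sketch of the "cacert handed to a keystone session" idea is
below. The URL, credentials and file path are made-up placeholders, and
how exactly the cacert gets from a template property onto disk is the
part still being discussed, so treat this as illustration only:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Authenticate against the remote cloud's Keystone (placeholder values)
    auth = v3.Password(
        auth_url='https://remote-cloud.example.com:5000/v3',
        username='orchestrator',
        password='secret',
        project_name='demo',
        user_domain_name='Default',
        project_domain_name='Default')

    # keystoneauth sessions accept a path to a CA bundle via 'verify',
    # so only the (less sensitive) CA certificate needs to be stored.
    sess = session.Session(auth=auth,
                           verify='/etc/heat/remote-cloud-cacert.pem')

The nice property is that no client key or certificate has to be passed
around, which is what made cacert-only support look acceptable from a
security point of view.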
Also, it always accelerates tasks when we got more resources to put on. So help us to develop, review, or provide any feedback are very very welcome! For any feedback added in etherpad but didn't get any comments, I will try to raise discussion in meeting for them. And last but not least, we got some sessions in Berlin for a project update and Onboarding. And potentially also have a ops/users feedback forum, and an autoscaling integration forum (if we actually been accepted ). So please let me know how you like to have those sessions to be taken in place, and what you wish to hear/learn from our sessions? [1] https://etherpad.openstack.org/p/2018-Denver-PTG-Heat [2] https://wiki.openstack.org/wiki/SystemUsageData#orchestration.stack..7Bcreate.2Cupdate.2Cdelete.2Csuspend.2Cresume.7D..7Bstart.2Cerror.2Cend.7D : [3] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback [4] https://etherpad.openstack.org/p/heat-user-survey-brainstrom [5] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/multiple-cloud-support [6] https://storyboard.openstack.org/#!/story/2003690 [7] https://storyboard.openstack.org/#!/story/2003782 [8] https://storyboard.openstack.org/#!/story/2003773 [9] http://lists.openstack.org/pipermail/openstack-dev/2018-August/134019.html [10] https://etherpad.openstack.org/p/stein-ptg-glance-planning [11] storyboard.openstack.org/#!/project/989 -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Sep 20 08:39:43 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 20 Sep 2018 16:39:43 +0800 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal In-Reply-To: <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info> References: <165ea808b5b.e023df6f175915.5632015242635271704@ghanshyammann.com> <20180918124049.jw7xbufikxfx3w37@yuggoth.org> <96743b2c-7d12-0769-9176-746c2d4edbbe@anteaya.info> Message-ID: Thanks Anita, will definitely do as you kindly suggested :) On Thu, Sep 20, 2018, 12:04 PM Anita Kuno wrote: > On 2018-09-18 08:40 AM, Jeremy Stanley wrote: > > On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote: > > [...] > >> I can understand that IRC cannot be used in China which is very > >> painful and mostly it is used weChat. > > [...] > > > > I have yet to hear anyone provide first-hand confirmation that > > access to Freenode's IRC servers is explicitly blocked by the > > mainland Chinese government. There has been a lot of speculation > > that the usual draconian corporate firewall policies (surprise, the > > rest of the World gets to struggle with those too, it's not just a > > problem in China) are blocking a variety of messaging protocols from > > workplace networks and the people who encounter this can't tell the > > difference because they're already accustomed to much of their other > > communications being blocked at the border. I too have heard from > > someone who's heard from someone that "IRC can't be used in China" > > but the concrete reasons why continue to be missing from these > > discussions. > > > > I'll reply to this email arbitrarily in order to comply with Zhipeng > Huang's wishes that the conversation concerned with understanding the > actual obstacles to communication takes place on the mailing list. I do > hope I am posting to the correct thread. 
> > In response to part of your comment on the patch at > https://review.openstack.org/#/c/602697/ which you posted about 5 hours > ago you said "@Anita you are absolutely right it is only me stuck my > head out speaks itself the problem I stated in the patch. Many of the > community tools that we are comfortable with are not that accessible to > a broader ecosystem. And please assured that I meant I refer the patch > to the Chinese community, as Leong also did on the ML, to try to bring > them over to join the convo." and I would like to reply. > > I would like to say that I am honoured by your generosity. Thank you. > Now, when the Chinese community consumes the patch, as well as the > conversation in the comments, please encourage folks to ask for > clarification if any descriptions or phrases don't make sense to them. > One of the best ways of ensuring clear communication is to start off > slowly and take the time to ask what the other side means. It can seem > tedious and a waste of time, but I have found it to be very educational > and helpful in understanding how the other person perceives the > situation. It also helps me to understand how I am creating obstacles in > ways that I talk. > > Taking time to clarify helps me to adjust how I am speaking so that my > meaning is more likely to be understood by the group to which I am > trying to offer my perspective. I do appreciate that many people are > trying to avoid embarrassment, but I have never found any way to > understand people in a culture that is not the one I group up in, other > than embarrassing myself and working through it. Usually I find the > group I am wanting to understand is more than willing to rescue me from > my embarrassment and support me in my learning. In a strange way, the > embarrassment is kind of helpful in order to create understanding > between myself and those people I am trying to understand. > > Thank you, Anita > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Thu Sep 20 09:16:34 2018 From: john at johngarbutt.com (John Garbutt) Date: Thu, 20 Sep 2018 10:16:34 +0100 Subject: [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter Message-ID: Hi, Following on from the PTG discussions, I wanted to bring everyone's attention to Nova's plans to deprecate ComputeCapabilitiesFilter, including most of the the integration with Ironic Capabilities. To be specific, this is my proposal in code form: https://review.openstack.org/#/c/603102/ Once the code we propose to deprecate is removed we will stop using capabilities pushed up from Ironic for 'scheduling', but we would still pass capabilities request in the flavor down to Ironic (until we get some standard traits and/or deploy templates sorted for things like UEFI). 
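To make the contrast concrete, here is a rough sketch of the two styles
(the node, flavor, resource class and trait names are purely
illustrative, not anything Nova or Ironic define):

    # capabilities-based matching (relies on the ComputeCapabilitiesFilter
    # that is proposed for deprecation)
    openstack baremetal node set $NODE_UUID \
        --property capabilities='boot_mode:uefi'
    openstack flavor set bm.large --property capabilities:boot_mode='uefi'

    # placement-based matching: a resource class plus a required custom
    # trait (the resource class 'baremetal.large' appears in placement
    # as CUSTOM_BAREMETAL_LARGE)
    openstack baremetal node set $NODE_UUID --resource-class baremetal.large
    openstack baremetal node add trait $NODE_UUID CUSTOM_BOOT_MODE_UEFI
    openstack flavor set bm.large \
        --property resources:CUSTOM_BAREMETAL_LARGE=1 \
        --property trait:CUSTOM_BOOT_MODE_UEFI=required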
Functionally, we believe all use cases can be replaced by using the simpler placement traits (this is more efficient than post placement filtering done using capabilities): https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html Please note the recent addition of forbidden traits that helps improve the usefulness of the above approach: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html For example, a flavor request for GPUs >= 2 could be replaced by a custom trait trait that reports if a given Ironic node has CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't want to use traits for this, but that is a discussion for another day) but it is the example that keeps being raised in discussions on this topic. The main reason for reaching out in this email is to ask if anyone has needs that the ResourceClass and Traits scheme does not currently address, or can think of a problem with a transition to the newer approach. Many thanks, John Garbutt IRC: johnthetubaguy -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Thu Sep 20 09:43:00 2018 From: john at johngarbutt.com (John Garbutt) Date: Thu, 20 Sep 2018 10:43:00 +0100 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: tl;dr +1 consistent names I would make the names mirror the API ... because the Operator setting them knows the API, not the code Ignore the crazy names in Nova, I certainly hate them Lance Bragstad wrote: > I'm curious if anyone has context on the "os-" part of the format? My memory of the Nova policy mess... * Nova's policy rules traditionally followed the patterns of the code ** Yes, horrible, but it happened. * The code used to have the OpenStack API and the EC2 API, hence the "os" * API used to expand with extensions, so the policy name is often based on extensions ** note most of the extension code has now gone, including lots of related policies * Policy in code was focused on getting us to a place where we could rename policy ** Whoop whoop by the way, it feels like we are really close to something sensible now! Lance Bragstad wrote: > Thoughts on using create, list, update, and delete as opposed to post, > get, put, patch, and delete in the naming convention? > I could go either way as I think about "list servers" in the API. But my preference is for the URL stub and POST, GET, etc. On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote: > If we consider dropping "os", should we entertain dropping "api", too? Do >> we have a good reason to keep "api"? >> I wouldn't be opposed to simple service types (e.g "compute" or >> "loadbalancer"). >> > +1 The API is known as "compute" in api-ref, so the policy should be for "compute", etc. From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: * move to new consistent policy name, deprecate existing name * hardcode scope check to project, system or user ** (user, yes... 
keypairs, yuck, but its how they work) ** deprecate in rule scope checks, which are largely bogus in Nova anyway * make read/write/admin distinction ** therefore adding the "noop" role, amount other things Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 20 14:19:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 09:19:46 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: <5B9FD2BB.3060806@openstack.org> References: <5B9FD2BB.3060806@openstack.org> Message-ID: <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> On 9/17/2018 11:13 AM, Jimmy McArthur wrote: > The Forum Topic Submission session started September 12 and will run > through September 26th.  Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum topics > through. Don't rely only on a PTL to make the agenda... step on up and > place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, but > if you notice something, just know that you are submitting to the > Forum.  URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. Another question. In the before times, when we just had that simple form to submit forum sessions and then the TC/UC/Foundation reviewed the list and picked the sessions, it was very simple to see what other sessions were proposed and say, "oh good someone is covering this already, I don't need to worry about it". With the move to the CFP forms like the summit sessions, that is no longer available, as far as I know. There have been at least a few cases this week where someone has said, "this might be a good topic, but keystone is probably already covering it, or $FOO SIG is probably already covering it", but without herding the cats to ask and find out who is all doing what, it's hard to know. Is there some way we can get back to having a public view of what has been proposed for the forum so we an avoid overlap, or at worst not proposing something because people assume someone else is going to cover it? -- Thanks, Matt From mriedemos at gmail.com Thu Sep 20 15:00:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 10:00:54 -0500 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: Message-ID: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> On 9/20/2018 4:16 AM, John Garbutt wrote: > Following on from the PTG discussions, I wanted to bring everyone's > attention to Nova's plans to deprecate ComputeCapabilitiesFilter, > including most of the the integration with Ironic Capabilities. > > To be specific, this is my proposal in code form: > https://review.openstack.org/#/c/603102/ > > Once the code we propose to deprecate is removed we will stop using > capabilities pushed up from Ironic for 'scheduling', but we would still > pass capabilities request in the flavor down to Ironic (until we get > some standard traits and/or deploy templates sorted for things like UEFI). 
> > Functionally, we believe all use cases can be replaced by using the > simpler placement traits (this is more efficient than post placement > filtering done using capabilities): > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html > > Please note the recent addition of forbidden traits that helps improve > the usefulness of the above approach: > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html > > For example, a flavor request for GPUs >= 2 could be replaced by a > custom trait trait that reports if a given Ironic node has > CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't > want to use traits for this, but that is a discussion for another day) > but it is the example that keeps being raised in discussions on this topic. > > The main reason for reaching out in this email is to ask if anyone has > needs that the ResourceClass and Traits scheme does not currently > address, or can think of a problem with a transition to the newer approach. I left a few comments in the change, but I'm assuming as part of the deprecation we'd remove the filter from the default enabled_filters list so new installs don't automatically get warnings during scheduling? Another thing is about existing flavors configured for these capabilities-scoped specs. Are you saying during the deprecation we'd continue to use those even if the filter is disabled? In the review I had suggested that we add a pre-upgrade check which inspects the flavors and if any of these are found, we report a warning meaning those flavors need to be updated to use traits rather than capabilities. Would that be reasonable? -- Thanks, Matt From jimmy at openstack.org Thu Sep 20 15:23:09 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 20 Sep 2018 10:23:09 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> References: <5B9FD2BB.3060806@openstack.org> <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> Message-ID: <5BA3BB5D.3060404@openstack.org> Matt, Another good question... Matt Riedemann wrote: > On 9/17/2018 11:13 AM, Jimmy McArthur wrote: >> SNIP > > Another question. In the before times, when we just had that simple > form to submit forum sessions and then the TC/UC/Foundation reviewed > the list and picked the sessions, it was very simple to see what other > sessions were proposed and say, "oh good someone is covering this > already, I don't need to worry about it". With the move to the CFP > forms like the summit sessions, that is no longer available, as far as > I know. There have been at least a few cases this week where someone > has said, "this might be a good topic, but keystone is probably > already covering it, or $FOO SIG is probably already covering it", but > without herding the cats to ask and find out who is all doing what, > it's hard to know. > > Is there some way we can get back to having a public view of what has > been proposed for the forum so we an avoid overlap, or at worst not > proposing something because people assume someone else is going to > cover it? This is basically the CFP equivalent: https://www.openstack.org/summit/berlin-2018/vote-for-speakers Voting isn't necessary, of course, but it should allow you to see submissions as they roll in. Does this work for your purposes? 
Thanks, Jimmy From mriedemos at gmail.com Thu Sep 20 16:27:25 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 20 Sep 2018 11:27:25 -0500 Subject: [Openstack-operators] [openstack-dev] Forum Topic Submission Period In-Reply-To: <5BA3BB5D.3060404@openstack.org> References: <5B9FD2BB.3060806@openstack.org> <51580429-12ad-04b8-0efa-e11a14eaa87b@gmail.com> <5BA3BB5D.3060404@openstack.org> Message-ID: On 9/20/2018 10:23 AM, Jimmy McArthur wrote: > This is basically the CFP equivalent: > https://www.openstack.org/summit/berlin-2018/vote-for-speakers  Voting > isn't necessary, of course, but it should allow you to see submissions > as they roll in. > > Does this work for your purposes? Yup, that should do it, thanks! -- Thanks, Matt From fungi at yuggoth.org Thu Sep 20 16:32:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 Sep 2018 16:32:49 +0000 Subject: [Openstack-operators] [all] We're combining the lists! (was: Bringing the community together...) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> tl;dr: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. Now on to the details... The original proposal[1] I cross-posted to these lists in August received overwhelmingly positive feedback (indeed only one strong objection[2] was posted, thanks Thomas for speaking up, and my apologies in advance if this makes things less convenient for you), which is unusual since our community usually tends to operate on silent assent and tacit agreement. Seeing what we can only interpret as majority consensus for the plan among the people reading messages posted to these lists, a group of interested individuals met last week in the Infrastructure team room at the PTG to work out the finer details[3]. We devised a phased timeline: During the first phase (which begins with this announcement) the new openstack-discuss mailing list will accept subscriptions but not posts. Its short and full descriptions indicate this, as does the welcome message sent to all new subscribers during this phase. The list is configured for "emergency moderation" mode so that all posts, even those from subscribers, immediately land in the moderation queue and can be rejected with an appropriate message. We strongly recommend everyone who is on any of the current general openstack, openstack-dev, openstack-operators and openstack-sigs lists subscribe to openstack-discuss during this phase in order to avoid missing any messages to the new list. Phase one lasts roughly one month and ends on Monday November 19, just after the OpenStack Stein Summit in Berlin. The second phase picks up at the end of the first. During this phase, emergency moderation is no longer in effect and subscribers can post to the list normally (non-subscribers are subject to moderation of course in order to limit spam). 
Any owners/moderators from the original lists who wish it will be added to the new one to collaborate on moderation tasks. At this time the openstack-discuss list address itself will be subscribed to posts from the openstack, openstack-dev, openstack-operators and openstack-sigs mailing lists so anyone who wishes to unsubscribe from those can do so at any time during this phase without missing any replies sent there. The list descriptions and welcome message will also be updated to their production prose. Phase two runs for two weeks ending on Monday December 3. The third and final phase begins at the end of the second, when further posts to the general openstack, openstack-dev, openstack-operators and openstack-sigs lists will be refused and the descriptions for those lists updated to indicate they're indefinitely retired from use. The old archives will still be preserved of course, but no new content will appear in them. A note about DMARC/DKIM: during the planning discussion we also spoke briefly about the problems we encounter on the current lists whereby subscriber MTAs which check DKIM signatures appearing in some posts reject them and cause those subscribers to get unsubscribed after too many of these bounces. While reviewing the various possible mitigation options available to us, we eventually resolved that the least objectionable solution was to cease modifying the list subject and body. As such, for the new openstack-discuss list you won't see [openstack-discuss] prepended to message subjects, and there will be no list footer block added to the message body. Rest assured the usual RFC 2369 List-* headers[4] will still be added so MUAs can continue to take filtering actions based on them as on our other lists. I'm also including a couple of FAQs which have come up over the course of this... Why make a new list instead of just directing people to join an existing one such as the openstack general ML? For one, the above list behavior change to address DMARC/DKIM issues is a good reason to want a new list; making those changes to any of the existing lists is already likely to be disruptive anyway as subscribers may be relying on the subject mangling for purposes of filtering list traffic. Also as noted earlier in the thread for the original proposal, we have many suspected defunct subscribers who are not bouncing (either due to abandoned mailboxes or MTAs black-holing them) so this is a good opportunity to clean up the subscriber list and reduce the overall amount of E-mail unnecessarily sent by the server. Why not simply auto-subscribe everyone from the four older lists to the new one and call it a day? Well, I personally would find it rude if a list admin mass-subscribed me to a mailing list I hadn't directly requested. Doing so may even be illegal in some jurisdictions (we could probably make a case that it's warranted, but it's cleaner to not need to justify such an action). Much like the answer to the previous question, the changes in behavior (and also in the list name itself) are likely to cause lots of subscribers to need to update their message filtering rules anyway. I know by default it would all start landing in my main inbox, and annoy me mightily. What subject tags are we going to be using to identify messages of interest and to be able to skip those we don't care about? We're going to continuously deploy a list of recommended subject tags in a visible space, either on the listserv's WebUI or the Infra Manual and link to it liberally. 
There is already an initial set of suggestions[5] being brainstormed, so feel free to add any there you feel might be missing. It's not yet been decided whether we'll also include these in the Mailman "Topics" configuration to enable server-side filtering on them (as there's a good chance we'll be unable to continue supporting that after an upgrade to Mailman 3), so for now it's best to assume you may need to add them to your client-side filters if you rely on that capability. If you have any further questions, please feel free to respond to this announcement so we can make sure they're answered. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000493.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/134074.html [3] https://etherpad.openstack.org/p/infra-ptg-denver-2018 [4] https://www.ietf.org/rfc/rfc2369.txt [5] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mrhillsman at gmail.com Thu Sep 20 22:30:32 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 20 Sep 2018 17:30:32 -0500 Subject: [Openstack-operators] Capturing Feedback/Input Message-ID: Hey everyone, During the TC meeting at the PTG we discussed the ideal way to capture user-centric feedback; particular from our various groups like SIGs, WGs, etc. Options that were mentioned ranged from a wiki page to a standalone solution like discourse. While there is no perfect solution it was determined that Storyboard could facilitate this. It would play out where there is a project group openstack-uc? and each of the SIGs, WGs, etc would have a project under this group; if I am wrong someone else in the room correct me. The entire point is a first step (maybe final) in centralizing user-centric feedback that does not require any extra overhead be it cost, time, or otherwise. Just kicking off a discussion so others have a chance to chime in before anyone pulls the plug or pushes the button on anything and we settle as a community on what makes sense. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Sep 20 22:40:56 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 21 Sep 2018 06:40:56 +0800 Subject: [Openstack-operators] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: References: Message-ID: big +1, really look forward to the storyboard setup On Fri, Sep 21, 2018 at 6:31 AM Melvin Hillsman wrote: > Hey everyone, > > During the TC meeting at the PTG we discussed the ideal way to capture > user-centric feedback; particular from our various groups like SIGs, WGs, > etc. > > Options that were mentioned ranged from a wiki page to a standalone > solution like discourse. > > While there is no perfect solution it was determined that Storyboard could > facilitate this. It would play out where there is a project group > openstack-uc? and each of the SIGs, WGs, etc would have a project under > this group; if I am wrong someone else in the room correct me. > > The entire point is a first step (maybe final) in centralizing > user-centric feedback that does not require any extra overhead be it cost, > time, or otherwise. 
Just kicking off a discussion so others have a chance > to chime in before anyone pulls the plug or pushes the button on anything > and we settle as a community on what makes sense. > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 21 00:21:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 20 Sep 2018 19:21:53 -0500 Subject: [Openstack-operators] [openstack-dev] Capturing Feedback/Input In-Reply-To: References: Message-ID: <20180921002152.GB16789@sm-workstation> On Thu, Sep 20, 2018 at 05:30:32PM -0500, Melvin Hillsman wrote: > Hey everyone, > > During the TC meeting at the PTG we discussed the ideal way to capture > user-centric feedback; particular from our various groups like SIGs, WGs, > etc. > > Options that were mentioned ranged from a wiki page to a standalone > solution like discourse. > > While there is no perfect solution it was determined that Storyboard could > facilitate this. It would play out where there is a project group > openstack-uc? and each of the SIGs, WGs, etc would have a project under > this group; if I am wrong someone else in the room correct me. > > The entire point is a first step (maybe final) in centralizing user-centric > feedback that does not require any extra overhead be it cost, time, or > otherwise. Just kicking off a discussion so others have a chance to chime > in before anyone pulls the plug or pushes the button on anything and we > settle as a community on what makes sense. > > -- > Kind regards, > > Melvin Hillsman I think Storyboard would be a good place to manage SIG/WG feedback. It will take some time before the majority of projects have moved over from Launchpad, but once they do, this will make it much easier to track SIG initiatives all the way through to code implementation. From iwamoto at valinux.co.jp Fri Sep 21 01:36:56 2018 From: iwamoto at valinux.co.jp (IWAMOTO Toshihiro) Date: Fri, 21 Sep 2018 10:36:56 +0900 Subject: [Openstack-operators] [neutron] heads up to long time ovs users... Message-ID: <20180921013656.31737B3BA8@mail.valinux.co.jp> The neutron team is finally removing the ovs-ofctl option. https://review.openstack.org/#/c/599496/ The ovs-ofctl of_interface option wasn't default since Newton and was deprecated in Pike. So, if you are a long time ovs-agent user and upgrading to a new coming release, you must switch from the ovs-ofctl implementation to the native implementation and are affected by the following issue. https://bugs.launchpad.net/neutron/+bug/1793354 The loss of communication mentioned in this bug report would be a few seconds to a few minutes depending on the number of network interfaces. It happens when an ovs-agent is restarted with the new of_interface (so only once during the upgrade) and persists until the network interfaces are set up. 
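For anyone who wants to check whether they are affected before
upgrading, the option lives in the OVS agent configuration. A minimal
sketch is below (the file path varies by distribution and deployment
tool):

    # e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    # 'ovs-ofctl' is the legacy implementation being removed; 'native'
    # has been the default since Newton, so removing the line entirely
    # also selects the native implementation.
    of_interface = native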
Please speak up if you cannot tolerate this during upgrades. IIUC, this bug is unfixable and I'd like to move forward as maintaining two of_interface implementation is a burden for the neutron team. -- IWAMOTO Toshihiro From gmann at ghanshyammann.com Fri Sep 21 07:10:15 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 21 Sep 2018 16:10:15 +0900 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: Message-ID: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt wrote ---- > tl;dr+1 consistent names > I would make the names mirror the API... because the Operator setting them knows the API, not the codeIgnore the crazy names in Nova, I certainly hate them Big +1 on consistent naming which will help operator as well as developer to maintain those. > > Lance Bragstad wrote: > > I'm curious if anyone has context on the "os-" part of the format? > > My memory of the Nova policy mess...* Nova's policy rules traditionally followed the patterns of the code > ** Yes, horrible, but it happened.* The code used to have the OpenStack API and the EC2 API, hence the "os"* API used to expand with extensions, so the policy name is often based on extensions** note most of the extension code has now gone, including lots of related policies* Policy in code was focused on getting us to a place where we could rename policy** Whoop whoop by the way, it feels like we are really close to something sensible now! > Lance Bragstad wrote: > Thoughts on using create, list, update, and delete as opposed to post, get, put, patch, and delete in the naming convention? > I could go either way as I think about "list servers" in the API.But my preference is for the URL stub and POST, GET, etc. > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:If we consider dropping "os", should we entertain dropping "api", too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple service types (e.g "compute" or "loadbalancer"). > +1The API is known as "compute" in api-ref, so the policy should be for "compute", etc. Agree on mapping the policy name with api-ref as much as possible. Other than policy name having 'os-', we have 'os-' in resource name also in nova API url like /os-agents, /os-aggregates etc (almost every resource except servers , flavors). As we cannot get rid of those from API url, we need to keep the same in policy naming too? or we can have policy name like compute:agents:create/post but that mismatch from api-ref where agents resource url is os-agents. Also we have action API (i know from nova not sure from other services) like POST /servers/{server_id}/action {addSecurityGroup} and their current policy name is all inconsistent. few have policy name including their resource name like "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in policy name like "os_compute_api:os-admin-actions:reset_state" and few has direct action name like "os_compute_api:os-console-output" May be we can make them consistent with :: or any better opinion. > From: Lance Bragstad > The topic of having consistent policy names has popped up a few times this week. > > I would love to have this nailed down before we go through all the policy rules again. In my head I hope in Nova we can go through each policy rule and do the following: > * move to new consistent policy name, deprecate existing name* hardcode scope check to project, system or user** (user, yes... 
keypairs, yuck, but its how they work)** deprecate in rule scope checks, which are largely bogus in Nova anyway* make read/write/admin distinction** therefore adding the "noop" role, amount other things + policy granularity. It is good idea to make the policy improvement all together and for all rules as you mentioned. But my worries is how much load it will be on operator side to migrate all policy rules at same time? What will be the deprecation period etc which i think we can discuss on proposed spec - https://review.openstack.org/#/c/547850 -gmann > Thanks,John __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mrhillsman at gmail.com Fri Sep 21 17:55:09 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 21 Sep 2018 12:55:09 -0500 Subject: [Openstack-operators] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: <1537546393-sup-9882@lrrr.local> References: <1537540740-sup-4229@lrrr.local> <1537546393-sup-9882@lrrr.local> Message-ID: On Fri, Sep 21, 2018 at 11:16 AM Doug Hellmann wrote: > Excerpts from Melvin Hillsman's message of 2018-09-21 10:18:26 -0500: > > On Fri, Sep 21, 2018 at 9:41 AM Doug Hellmann > wrote: > > > > > Excerpts from Melvin Hillsman's message of 2018-09-20 17:30:32 -0500: > > > > Hey everyone, > > > > > > > > During the TC meeting at the PTG we discussed the ideal way to > capture > > > > user-centric feedback; particular from our various groups like SIGs, > WGs, > > > > etc. > > > > > > > > Options that were mentioned ranged from a wiki page to a standalone > > > > solution like discourse. > > > > > > > > While there is no perfect solution it was determined that Storyboard > > > could > > > > facilitate this. It would play out where there is a project group > > > > openstack-uc? and each of the SIGs, WGs, etc would have a project > under > > > > this group; if I am wrong someone else in the room correct me. > > > > > > > > The entire point is a first step (maybe final) in centralizing > > > user-centric > > > > feedback that does not require any extra overhead be it cost, time, > or > > > > otherwise. Just kicking off a discussion so others have a chance to > chime > > > > in before anyone pulls the plug or pushes the button on anything and > we > > > > settle as a community on what makes sense. > > > > > > > > > > I like the idea of tracking the information in storyboard. That > > > said, one of the main purposes of creating SIGs was to separate > > > those groups from the appearance that they were "managed" by the > > > TC or UC. So, rather than creating a UC-focused project group, if > > > we need a single project group at all, I would rather we call it > > > "SIGs" or something similar. > > > > > > > What you bring up re appearances makes sense definitely. Maybe we call it > > openstack-feedback since the purpose is focused on that and I actually > > looked at -uc as user-centric rather than user-committee; but > appearances :) > > Feedback implies that SIGs aren't engaged in creating OpenStack, though, > and I think that's the perception we're trying to change. > > > I think limiting it to SIGs will well, limit it to SIGs, and again could > > appear to be specific to those groups rather than for example the Public > > Cloud WG or Financial Team. > > OK, I thought those groups were SIGs. 
> > Maybe we're overthinking the organization on this. What is special about > the items that would be on this list compared to items opened directly > against projects? > Yeah unfortunately we do have a tendency to overthink/complicate things. Not saying Storyboard is the right tool but suggested rather than having something extra to maintain was what I understood. There are at least 3 things that were to be addressed: - single pane so folks know where to provide/see updates - it is not a catchall/dumpsite - something still needs to be flushed out/prioritized (Public Cloud WG's missing features spreadsheet for example) - not specific to a single project (i thought this was a given since there is already a process/workflow for single project) I could very well be wrong so I am open to be corrected. From my perspective the idea in the room was to not circumvent anything internal but rather make it easy for external viewers, passerbys, etc. When feedback is gathered from Ops Meetup, OpenStack Days, Local meetups/events, we discussed putting that here as well. > > Doug > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Sep 21 19:24:32 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 Sep 2018 19:24:32 +0000 Subject: [Openstack-operators] [Openstack-sigs] Capturing Feedback/Input In-Reply-To: References: <1537540740-sup-4229@lrrr.local> <1537546393-sup-9882@lrrr.local> Message-ID: <20180921192432.k23x2u3w7626cder@yuggoth.org> On 2018-09-21 12:55:09 -0500 (-0500), Melvin Hillsman wrote: [...] > Yeah unfortunately we do have a tendency to overthink/complicate > things. Not saying Storyboard is the right tool but suggested > rather than having something extra to maintain was what I > understood. There are at least 3 things that were to be addressed: > > - single pane so folks know where to provide/see updates Not all OpenStack projects use the same task trackers currently and there's no guarantee that they ever will, so this is a best effort only. Odds are you may wind up duplicating some information also present in the Nova project on Launchpad, the Tripleo project on Trello and the Foobly project on Bugzilla (I made this last one up, in case it's not obvious). > - it is not a catchall/dumpsite If it looks generic enough, it will become that unless there are people actively devoted to triaging and pruning submissions to curate them... a tedious and thankless long-term commitment, to be sure. > - something still needs to be flushed out/prioritized (Public > Cloud WG's missing features spreadsheet for example) This is definitely a good source of input, but still needs someone to determine which various projects/services the tasks for them get slotted into and then help prioritizing and managing spec submissions on a per-team basis. > - not specific to a single project (i thought this was a given > since there is already a process/workflow for single project) The way to do that on storyboard.openstack.org is to give it a project of its own. Basically just couple it to a new, empty Git repository and then the people doing these tasks still have the option of also putting that repository to some use later (for example, to house their workflow documentation). 
> I could very well be wrong so I am open to be corrected. From my > perspective the idea in the room was to not circumvent anything > internal but rather make it easy for external viewers, passerbys, > etc. When feedback is gathered from Ops Meetup, OpenStack Days, > Local meetups/events, we discussed putting that here as well. It seems a fine plan, just keep in mind that documenting and publishing feedback doesn't magically translate into developers acting on any of it (and this is far from the first time it's been attempted). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amotoki at gmail.com Sat Sep 22 03:55:14 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sat, 22 Sep 2018 12:55:14 +0900 Subject: [Openstack-operators] [openstack-dev] [neutron] heads up to long time ovs users... In-Reply-To: <20180921013656.31737B3BA8@mail.valinux.co.jp> References: <20180921013656.31737B3BA8@mail.valinux.co.jp> Message-ID: The important point of this notice is that packet drops will happen when switching of_interface option from ovs-ofctl (which was the default value in the old releases) to native (which is the current default ). Once neutron drops the option, if deployers use the legacy value "ovs-ofctl", they will hit some packet losses when upgrading neutron to Stein. We have no actual data on large deployments so far and don't know how this change impacts real deployments. Your feedback would be really appreciated. Best regards, Akihiro Motoki (irc: amotoki) 2018年9月21日(金) 10:37 IWAMOTO Toshihiro : > The neutron team is finally removing the ovs-ofctl option. > > https://review.openstack.org/#/c/599496/ > > The ovs-ofctl of_interface option wasn't default since Newton and was > deprecated in Pike. > > So, if you are a long time ovs-agent user and upgrading to a new > coming release, you must switch from the ovs-ofctl implementation to > the native implementation and are affected by the following issue. > > https://bugs.launchpad.net/neutron/+bug/1793354 > > The loss of communication mentioned in this bug report would be a few > seconds to a few minutes depending on the number of network > interfaces. It happens when an ovs-agent is restarted with the new > of_interface (so only once during the upgrade) and persists until the > network interfaces are set up. > > Please speak up if you cannot tolerate this during upgrades. > > IIUC, this bug is unfixable and I'd like to move forward as > maintaining two of_interface implementation is a burden for the > neutron team. > > -- > IWAMOTO Toshihiro > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Sat Sep 22 16:54:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 22 Sep 2018 11:54:20 -0500 Subject: [Openstack-operators] [all] Nominations for the "T" Release name Message-ID: <20180922165419.GD5096@thor.bakeyournoodle.com> Hey everybody, Once again, it is time for us to pick a name for our "T" release. Since the associated Summit will be in Denver, the Geographic Location has been chosen as "Colorado" (State). Nominations are now open. 
Please add suitable names to https://wiki.openstack.org/wiki/Release_Naming/T_Proposals between now and 2018-10-15 23:59 UTC. In case you don't remember the rules: * Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. * The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. * The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. * The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. Let the naming begin. Tony. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kchamart at redhat.com Mon Sep 24 13:22:50 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 24 Sep 2018 15:22:50 +0200 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release Message-ID: <20180924132250.GW28120@paraplu> Hey folks, Before we bump the agreed upon[1] minimum versions for libvirt and QEMU for 'Stein', we need to do the tedious work of picking the NEXT_MIN_* versions for the 'T' (which is still in the naming phase) release, which will come out in the autumn (Sep-Nov) of 2019. Proposal -------- Looking at the DistroSupportMatrix[2], it seems like we can pick the libvirt and QEMU versions supported by the next LTS release of Ubuntu -- 18.04; "Bionic", which are: libvirt: 4.0.0 QEMU: 2.11 Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the above versions. And it seems reasonable to assume that the enterprise distribtions will also ship the said versions pretty soon; but let's double-confirm below. Considerations and open questions --------------------------------- (a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: "IBM announced that KVM for IBM z will be withdrawn, effective March 31, 2018 [...] development will not only continue unaffected, but the options for users grow, especially with the recent addition of SuSE to the existing support in Ubuntu." The message seems to be: "use a regular distribution". So this is covered, if we a version based on other distributions. (b) Oracle Linux: Can you please confirm if you'll be able to release libvirt and QEMU to 4.0.0 and 2.11, respectively? (c) SLES: Same question as above. Assuming Oracle Linux and SLES confirm, please let us know if there are any objections if we pick NEXT_MIN_* versions for the OpenStack 'T' release to be libvirt: 4.0.0 and QEMU: 2.11. 
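(Side note for operators who want to check where an existing deployment stands
against the proposed minimums -- a quick check on a compute node looks roughly
like this; binary and package names vary a bit between distros, so adjust
accordingly:

    $ libvirtd --version            # e.g. "libvirtd (libvirt) 4.0.0"
    $ qemu-system-x86_64 --version  # e.g. "QEMU emulator version 2.11.0"

On RHEL/CentOS the QEMU binary is typically /usr/libexec/qemu-kvm instead.
Anything at or above 4.0.0 / 2.11 would be unaffected by the proposed bump.)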
* * * A refresher on libvirt and QEMU release schedules ------------------------------------------------- - There will be at least 12 libvirt releases (_excluding_ maintenance releases) by Autumn 2019. A new libvirt release comes out every month[4]. - And there will be about 4 releases of QEMU. A new QEMU release comes out once every four months. [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b -- Pick next minimum libvirt / QEMU versions for "Stein" [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html [4] https://libvirt.org/downloads.html#schedule -- /kashyap From jimmy at openstack.org Mon Sep 24 15:19:59 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 24 Sep 2018 10:19:59 -0500 Subject: [Openstack-operators] [Forum] Forum Topic Submission Period - Time Running out! Message-ID: <5BA9009F.6000405@openstack.org> Just a reminder that there is a little more than 60 hours left to submit your forum topics. Please make haste to the Forum submission tool: https://www.openstack.org/summit/berlin-2018/call-for-presentations Cheers, Jimmy > Jimmy McArthur > September 17, 2018 at 11:13 AM > Hello Everyone! > > The Forum Topic Submission session started September 12 and will run > through September 26th. Now is the time to wrangle the topics you > gathered during your Brainstorming Phase and start pushing forum > topics through. Don't rely only on a PTL to make the agenda... step on > up and place the items you consider important front and center. > > As you may have noticed on the Forum Wiki > (https://wiki.openstack.org/wiki/Forum), we're reusing the normal CFP > tool this year. We did our best to remove Summit specific language, > but if you notice something, just know that you are submitting to the > Forum. URL is here: > > https://www.openstack.org/summit/berlin-2018/call-for-presentations > > Looking forward to seeing everyone's submissions! > > If you have questions or concerns about the process, please don't > hesitate to reach out. > > Cheers, > Jimmy > > _______________________________________________ > Openstack-track-chairs mailing list > Openstack-track-chairs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Mon Sep 24 16:11:42 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Mon, 24 Sep 2018 09:11:42 -0700 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release In-Reply-To: <20180924132250.GW28120@paraplu> References: <20180924132250.GW28120@paraplu> Message-ID: On 09/24/2018 06:22 AM, Kashyap Chamarthy wrote: > (b) Oracle Linux: Can you please confirm if you'll be able to > release libvirt and QEMU to 4.0.0 and 2.11, respectively? Hi Kashyap, Those are already available at: http://yum.oracle.com/repo/OracleLinux/OL7/developer/kvm/utils/x86_64/index.html ~iain From jaypipes at gmail.com Mon Sep 24 17:12:21 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 24 Sep 2018 13:12:21 -0400 Subject: [Openstack-operators] [openstack-dev] [penstack-dev]Discussion about the future of OpenStack in China In-Reply-To: References: Message-ID: <65bb8c01-dda8-601e-786e-9a998a99ddeb@gmail.com> Fred, I had a hard time understanding the articles. 
I'm not sure if you used Google Translate to do the translation from Chinese to English, but I personally found both of them difficult to follow. There were a couple points that I did manage to decipher, though. One thing that both articles seemed to say was that OpenStack doesn't meet public (AWS-ish) cloud use cases and OpenStack doesn't compare favorably to VMWare either. Is there a large contingent of Chinese OpenStack users that expect OpenStack to be a free (as in beer) version of VMware technology? What are the 3 most important features that Chinese OpenStack users would like to see included in OpenStack projects? Thanks, -jay On 09/24/2018 11:10 AM, Fred Li wrote: > Hi folks, > > Recently there are several blogs which discussed about the future of > OpenStack. If I was not wrong, the first one is > "OpenStack-8-year-itch"[1], and you can find its English version > attached. Thanks to google translation. The second one is > "5-years-my-opinion-on-OpenStack" [2] with English version attached as > well. Please translate the 3 to 6 and read them if you are interested. > > I don't want to judge anything here. I just want to share as they are > quite hot discussion and I think it is valuable for the whole community, > not part of community to know. > > [1] https://mp.weixin.qq.com/s/GM5cMOl0q3hb_6_eEiixzA > [2] https://mp.weixin.qq.com/s/qZkE4o_BHBPlbIjekjDRKw > [3] https://mp.weixin.qq.com/s/svX4z3JM5ArQ57A1jFoyLw > [4] https://mp.weixin.qq.com/s/Nyb0OxI2Z7LxDpofTTyWOg > [5] https://mp.weixin.qq.com/s/5GV4i8kyedHSbCxCO1VBRw > [6] https://mp.weixin.qq.com/s/yeBcMogumXKGQ0KyKrgbqA > -- > Regards > Fred Li (李永乐) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jichenjc at cn.ibm.com Tue Sep 25 05:51:30 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 25 Sep 2018 13:51:30 +0800 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for'T' release In-Reply-To: <20180924132250.GW28120@paraplu> References: <20180924132250.GW28120@paraplu> Message-ID: >>>(a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: >>> "IBM announced that KVM for IBM z will be withdrawn, effective March >>> 31, 2018 [...] development will not only continue unaffected, but >>> the options for users grow, especially with the recent addition of >>> SuSE to the existing support in Ubuntu." >>> The message seems to be: "use a regular distribution". So this is >>> covered, if we a version based on other distributions. Yes, IBM don't have a product on s390x anymore per [3] indicated, we are cooperating with distro in enablement and for openstack, KVM on z has its own 3rd CI maintaining by us per [5] [5] http://ci-watch.tintri.com/project?project=nova (IBM zKVM CI ) Best Regards! 
Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Kashyap Chamarthy To: openstack-operators at lists.openstack.org, openstack-dev at lists.openstack.org Date: 09/24/2018 09:28 PM Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release Hey folks, Before we bump the agreed upon[1] minimum versions for libvirt and QEMU for 'Stein', we need to do the tedious work of picking the NEXT_MIN_* versions for the 'T' (which is still in the naming phase) release, which will come out in the autumn (Sep-Nov) of 2019. Proposal -------- Looking at the DistroSupportMatrix[2], it seems like we can pick the libvirt and QEMU versions supported by the next LTS release of Ubuntu -- 18.04; "Bionic", which are: libvirt: 4.0.0 QEMU: 2.11 Debian, Fedora, Ubuntu (Bionic), openSUSE currently already ship the above versions. And it seems reasonable to assume that the enterprise distribtions will also ship the said versions pretty soon; but let's double-confirm below. Considerations and open questions --------------------------------- (a) KVM for IBM z Systems: John Garbutt pointed out[3] on IRC that: "IBM announced that KVM for IBM z will be withdrawn, effective March 31, 2018 [...] development will not only continue unaffected, but the options for users grow, especially with the recent addition of SuSE to the existing support in Ubuntu." The message seems to be: "use a regular distribution". So this is covered, if we a version based on other distributions. (b) Oracle Linux: Can you please confirm if you'll be able to release libvirt and QEMU to 4.0.0 and 2.11, respectively? (c) SLES: Same question as above. Assuming Oracle Linux and SLES confirm, please let us know if there are any objections if we pick NEXT_MIN_* versions for the OpenStack 'T' release to be libvirt: 4.0.0 and QEMU: 2.11. * * * A refresher on libvirt and QEMU release schedules ------------------------------------------------- - There will be at least 12 libvirt releases (_excluding_ maintenance releases) by Autumn 2019. A new libvirt release comes out every month[4]. - And there will be about 4 releases of QEMU. A new QEMU release comes out once every four months. [1] http://git.openstack.org/cgit/openstack/nova/commit/?h=master&id=28d337b -- Pick next minimum libvirt / QEMU versions for "Stein" [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://kvmonz.blogspot.com/2017/03/kvm-for-ibm-z-withdrawal.html [4] https://libvirt.org/downloads.html#schedule -- /kashyap _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From mihalis68 at gmail.com Tue Sep 25 13:00:43 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 25 Sep 2018 09:00:43 -0400 Subject: [Openstack-operators] ops meetups team meeting in 30 minutes Message-ID: Hey All, The Ops Meetups team meeting is in 30 minutes on #openstack-operators Forum submissions for the Denver summit are due TODAY, please see the links on today's agenda here : https://etherpad.openstack.org/p/ops-meetups-team Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Sep 25 13:06:50 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 25 Sep 2018 09:06:50 -0400 Subject: [Openstack-operators] ops meetups team meeting in 30 minutes In-Reply-To: References: Message-ID: Oops my mistake, it's in almost an hour from now, sorry On Tue, Sep 25, 2018 at 9:00 AM Chris Morgan wrote: > Hey All, > The Ops Meetups team meeting is in 30 minutes on #openstack-operators > > Forum submissions for the Denver summit are due TODAY, please see the > links on today's agenda here : > https://etherpad.openstack.org/p/ops-meetups-team > > Chris > > -- > Chris Morgan > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at johngarbutt.com Tue Sep 25 13:36:18 2018 From: john at johngarbutt.com (John Garbutt) Date: Tue, 25 Sep 2018 14:36:18 +0100 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> Message-ID: On Thu, 20 Sep 2018 at 16:02, Matt Riedemann wrote: > On 9/20/2018 4:16 AM, John Garbutt wrote: > > Following on from the PTG discussions, I wanted to bring everyone's > > attention to Nova's plans to deprecate ComputeCapabilitiesFilter, > > including most of the the integration with Ironic Capabilities. > > > > To be specific, this is my proposal in code form: > > https://review.openstack.org/#/c/603102/ > > > > Once the code we propose to deprecate is removed we will stop using > > capabilities pushed up from Ironic for 'scheduling', but we would still > > pass capabilities request in the flavor down to Ironic (until we get > > some standard traits and/or deploy templates sorted for things like > UEFI). > > > > Functionally, we believe all use cases can be replaced by using the > > simpler placement traits (this is more efficient than post placement > > filtering done using capabilities): > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html > > > > Please note the recent addition of forbidden traits that helps improve > > the usefulness of the above approach: > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html > > > > For example, a flavor request for GPUs >= 2 could be replaced by a > > custom trait trait that reports if a given Ironic node has > > CUSTOM_MORE_THAN_2_GPUS. That is a bad example (longer term we don't > > want to use traits for this, but that is a discussion for another day) > > but it is the example that keeps being raised in discussions on this > topic. 
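(To make that concrete: the trait-based equivalent would be wired up with
something roughly like the commands below. The CLI syntax is from memory of the
Queens/Rocky-era docs and the flavor names are placeholders, so double-check
before relying on it:

    # tag the Ironic node with a custom trait
    openstack baremetal node add trait <node-uuid> CUSTOM_MORE_THAN_2_GPUS

    # then require (or forbid) the trait via flavor extra specs,
    # instead of a capabilities:* extra spec
    openstack flavor set bm.gpu --property trait:CUSTOM_MORE_THAN_2_GPUS=required
    openstack flavor set bm.nogpu --property trait:CUSTOM_MORE_THAN_2_GPUS=forbidden

The bm.gpu/bm.nogpu flavors are made up for the example.)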
> > > > The main reason for reaching out in this email is to ask if anyone has > > needs that the ResourceClass and Traits scheme does not currently > > address, or can think of a problem with a transition to the newer > approach. > > I left a few comments in the change, but I'm assuming as part of the > deprecation we'd remove the filter from the default enabled_filters list > so new installs don't automatically get warnings during scheduling? > +1 Good point, we totally need to do that. > Another thing is about existing flavors configured for these > capabilities-scoped specs. Are you saying during the deprecation we'd > continue to use those even if the filter is disabled? In the review I > had suggested that we add a pre-upgrade check which inspects the flavors > and if any of these are found, we report a warning meaning those flavors > need to be updated to use traits rather than capabilities. Would that be > reasonable? > I like the idea of a warning, but there are features that have not yet moved to traits: https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html There is a more general plan that will help, but its not quite ready yet: https://review.openstack.org/#/c/504952/ As such, I think we can't get pull the plug on flavors including capabilities and passing them to Ironic, but (after a cycle of deprecation) I think we can now stop pushing capabilities from Ironic into Nova and using them for placement. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonmills at gmail.com Tue Sep 25 14:23:32 2018 From: jonmills at gmail.com (Jonathan Mills) Date: Tue, 25 Sep 2018 10:23:32 -0400 Subject: [Openstack-operators] [horizon] Odd behavior with image upload in Queens Message-ID: Hello all, I am troubleshooting an odd behavior in Horizon in Queens on CentOS 7.5 (RDO packages). When I upload images via Horizon, they fail to become associated with the tenant of the user who uploaded them. Instead, they appear only as 'Image from Other Project'. It can be fixed if you manually run "openstack image set --project=whatever someimage", but to users looking at the dashboard, it looks like an error. Uploading images via the glance client does not result in this problem, so I'm working from the assumption that the problem lies within the Horizon config. Moreover, these are small images (like Cirros) -- it isn't that the upload is failing due to size or anything; horizon is doing something odd while registering it. 
The only clues I've found are these messages in my horizon error logs: [Tue Sep 25 14:06:21.147196 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [compute_extension:aggregates] does not exist [Tue Sep 25 14:06:21.148665 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [default] does not exist [Tue Sep 25 14:06:21.178761 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [admin_and_matching_domain_id] does not exist [Tue Sep 25 14:06:21.180502 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [default] does not exist [Tue Sep 25 14:06:21.184129 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [admin_and_matching_domain_id] does not exist [Tue Sep 25 14:06:21.185885 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [default] does not exist [Tue Sep 25 14:06:21.189637 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [admin_and_matching_domain_id] does not exist [Tue Sep 25 14:06:21.191408 2018] [:error] [pid 23679] DEBUG:oslo_policy.policy:Rule [default] does not exist I've been trying to understand if this is somehow an oslo_policy problem. The keystone service now has a policy.json file that only contains "{}", because of policy in code, etc. But horizon still has a legacy keystone_policy.json file. Is that a mismatch? Google searches have not turned up much, so I'd be very appreciative if anyone has seen something like this before... Cheers, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Sep 25 17:08:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 25 Sep 2018 12:08:03 -0500 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> Message-ID: <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> On 9/25/2018 8:36 AM, John Garbutt wrote: > Another thing is about existing flavors configured for these > capabilities-scoped specs. Are you saying during the deprecation we'd > continue to use those even if the filter is disabled? In the review I > had suggested that we add a pre-upgrade check which inspects the > flavors > and if any of these are found, we report a warning meaning those > flavors > need to be updated to use traits rather than capabilities. Would > that be > reasonable? > > > I like the idea of a warning, but there are features that have not yet > moved to traits: > https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html > > There is a more general plan that will help, but its not quite ready yet: > https://review.openstack.org/#/c/504952/ > > As such, I think we can't get pull the plug on flavors including > capabilities and passing them to Ironic, but (after a cycle of > deprecation) I think we can now stop pushing capabilities from Ironic > into Nova and using them for placement. Forgive my ignorance, but if traits are not on par with capabilities, why are we deprecating the capabilities filter? 
-- Thanks, Matt From jp.methot at planethoster.info Tue Sep 25 23:26:08 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Tue, 25 Sep 2018 19:26:08 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup Message-ID: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> Hi, Are there some recommendations regarding kernel settings configuration for openvswitch? We’ve just been hit by what we believe may be an attack of some kind we have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an attack, or whatever it was. Best regards, Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Sep 25 23:37:56 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 25 Sep 2018 19:37:56 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> Message-ID: Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? I've hit lots of kernel limits under those conditions before OVS itself ever noticed. Anything in dmesg, journal or neutron logs of interest? On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot < jp.methot at planethoster.info> wrote: > Hi, > > Are there some recommendations regarding kernel settings configuration for > openvswitch? We’ve just been hit by what we believe may be an attack of > some kind we have never seen before and we’re wondering if there’s a way to > optimize our network nodes kernel for openvswitch operation and thus > minimize the impact of such an attack, or whatever it was. > > Best regards, > > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Tue Sep 25 23:49:18 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Tue, 25 Sep 2018 19:49:18 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> Message-ID: <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> This particular message makes it sound as if openvswitch is getting overloaded. Sep 23 03:54:08 network1 ovsdb-server: ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity probe after 5.01 seconds, disconnecting A lot of those keep appear, and openvswitch always reconnects almost instantly though. I’ve done some research about that particular message, but it didn’t give me anything I can use to fix it. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 25 sept. 2018 à 19:37, Erik McCormick a écrit : > > Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? 
I've hit lots of kernel limits under those conditions before OVS itself ever noticed. Anything in dmesg, journal or neutron logs of interest? > > On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot > wrote: > Hi, > > Are there some recommendations regarding kernel settings configuration for openvswitch? We’ve just been hit by what we believe may be an attack of some kind we have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an attack, or whatever it was. > > Best regards, > > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Sep 26 07:32:53 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 26 Sep 2018 08:32:53 +0100 Subject: [Openstack-operators] [scientific] IRC meeting today: Keycloak and federated authentication, SIG in Berlin Message-ID: <8312DC1E-3800-4E7B-820D-98FA30A63BDD@telfer.org> Hi All - We have an IRC meeting today at 1100 UTC in channel #openstack-meeting. Everyone is welcome. This week we are gathering requirements and sharing experiences on using Keycloak for simplifying federated authentication. We also have Berlin forum proposals to discuss. The full agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_September_26th_2018 Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ebiibe82 at gmail.com Wed Sep 26 12:06:06 2018 From: ebiibe82 at gmail.com (Amit Kumar) Date: Wed, 26 Sep 2018 17:36:06 +0530 Subject: [Openstack-operators] [OpenStack][Neutron][SFC] Regarding SFC support on provider VLAN N/W Message-ID: Hi All, We are using Ocata release and we have installed networking-sfc for Service Function Chaining functionality. Installation was successful and then we tried to create port pairs on VLAN N/W and it failed. We tried creating port-pairs on VXLAN based N/W and it worked. So, is it that SFC functionality is supported only on VXLAN based N/Ws? Regards, Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.leinen at switch.ch Wed Sep 26 15:48:34 2018 From: simon.leinen at switch.ch (Simon Leinen) Date: Wed, 26 Sep 2018 17:48:34 +0200 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> ("Jean-Philippe \=\?utf-8\?Q\?M\=C3\=A9thot\=22's\?\= message of "Tue, 25 Sep 2018 19:49:18 -0400") References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> Message-ID: Jean-Philippe Méthot writes: > This particular message makes it sound as if openvswitch is getting overloaded. > Sep 23 03:54:08 network1 ovsdb-server: ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity probe after 5.01 seconds, disconnecting We get these as well :-( > A lot of those keep appear, and openvswitch always reconnects almost > instantly though. I’ve done some research about that particular > message, but it didn’t give me anything I can use to fix it. Would be interested in solutions as well. 
But I'm sceptical whether kernel settings can help here, because the timeout/slowness seems to be located in the user-space/control-plane parts of Open vSwitch, i.e. OVSDB. -- Simon. > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > Le 25 sept. 2018 à 19:37, Erik McCormick a écrit : > Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? I've hit lots of kernel limits under those conditions before OVS itself ever > noticed. Anything in dmesg, journal or neutron logs of interest? > On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot wrote: > Hi, > Are there some recommendations regarding kernel settings configuration for openvswitch? We’ve just been hit by what we believe may be an attack of some kind we > have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an > attack, or whatever it was. > Best regards, > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From doug at doughellmann.com Wed Sep 26 15:58:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 26 Sep 2018 11:58:35 -0400 Subject: [Openstack-operators] [goals][tc][ptl][uc] starting goal selection for T series Message-ID: It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html From mihalis68 at gmail.com Wed Sep 26 17:00:37 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 Sep 2018 13:00:37 -0400 Subject: [Openstack-operators] ops meetup team meeting 2018-9-25 (minutes) Message-ID: There was an ops meetups team meeting yesteryday on #openstack-operators. Minutes linked below. Please note that submissions for the forum in Berlin this November close today. 
If you were thinking of adding to the planning etherpad for Ops-related sessions, it's too late for that now, please go directly to the official submission tool : https://www.openstack.org/summit-login/login?BackURL=%2Fsummit%2Fberlin-2018%2Fcall-for-presentations Meeting ended Tue Sep 25 14:51:06 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:51 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.html 10:51 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.txt 10:51 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.log.html Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Sep 26 18:55:49 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 26 Sep 2018 18:55:49 +0000 Subject: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: Message-ID: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. 
If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jp.methot at planethoster.info Wed Sep 26 19:16:33 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Wed, 26 Sep 2018 15:16:33 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> Message-ID: <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> Yes, I notice that every time that message appears, at least a few packets get dropped and some of our instances pop up in nagios, even though they are reachable 1 or 2 seconds after. It’s really causing us some issues as we can’t ensure proper network quality for our customers. Have you noticed the same? By that point I think it may be best to contact openvswitch directly since it seems to be an issue with their component. I am about to do that and hope I don’t get sent back to the openstack mailing list. I would really like to know what this probe is and why it disconnects constantly under load. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 26 sept. 2018 à 11:48, Simon Leinen a écrit : > > Jean-Philippe Méthot writes: >> This particular message makes it sound as if openvswitch is getting overloaded. >> Sep 23 03:54:08 network1 ovsdb-server: ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity probe after 5.01 seconds, disconnecting > > We get these as well :-( > >> A lot of those keep appear, and openvswitch always reconnects almost >> instantly though. I’ve done some research about that particular >> message, but it didn’t give me anything I can use to fix it. > > Would be interested in solutions as well. But I'm sceptical whether > kernel settings can help here, because the timeout/slowness seems to be > located in the user-space/control-plane parts of Open vSwitch, > i.e. OVSDB. > -- > Simon. > >> Jean-Philippe Méthot >> Openstack system administrator >> Administrateur système Openstack >> PlanetHoster inc. > >> Le 25 sept. 2018 à 19:37, Erik McCormick a écrit : > >> Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? I've hit lots of kernel limits under those conditions before OVS itself ever >> noticed. Anything in dmesg, journal or neutron logs of interest? > >> On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot wrote: > >> Hi, > >> Are there some recommendations regarding kernel settings configuration for openvswitch? 
We’ve just been hit by what we believe may be an attack of some kind we >> have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an >> attack, or whatever it was. > >> Best regards, > >> Jean-Philippe Méthot >> Openstack system administrator >> Administrateur système Openstack >> PlanetHoster inc. > >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Wed Sep 26 19:16:47 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 26 Sep 2018 19:16:47 +0000 Subject: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: , <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1AF6BB@EX10MBOX03.pnnl.gov> +1 :) ________________________________________ From: Tim Bell [Tim.Bell at cern.ch] Sent: Wednesday, September 26, 2018 11:55 AM To: OpenStack Development Mailing List (not for usage questions); openstack-operators; openstack-sigs Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. 
Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From Arkady.Kanevsky at dell.com Wed Sep 26 19:22:21 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 26 Sep 2018 19:22:21 +0000 Subject: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <0ec720bd34de4ff09ffad24b7887edfc@AUSX13MPS308.AMER.DELL.COM> +1 -----Original Message----- From: Tim Bell [mailto:Tim.Bell at cern.ch] Sent: Wednesday, September 26, 2018 1:56 PM To: OpenStack Development Mailing List (not for usage questions); openstack-operators; openstack-sigs Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive information. Doug, Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. To give it some context and the motivation: At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. 
I would strongly support a goal which targets - All new projects should have the end user facing functionality fully exposed via the unified client - Existing projects should aim to close the gap within 'N' cycles (N to be defined) - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. Tim -----Original Message----- From: Doug Hellmann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 26 September 2018 at 18:00 To: openstack-dev , openstack-operators , openstack-sigs Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series It's time to start thinking about community-wide goals for the T series. We use community-wide goals to achieve visible common changes, push for basic levels of consistency and user experience, and efficiently improve certain areas where technical debt payments have become too high - across all OpenStack projects. Community input is important to ensure that the TC makes good decisions about the goals. We need to consider the timing, cycle length, priority, and feasibility of the suggested goals. If you are interested in proposing a goal, please make sure that before the summit it is described in the tracking etherpad [1] and that you have started a mailing list thread on the openstack-dev list about the proposal so that everyone in the forum session [2] has an opportunity to consider the details. The forum session is only one step in the selection process. See [3] for more details. Doug [1] https://etherpad.openstack.org/p/community-goals [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 [3] https://governance.openstack.org/tc/goals/index.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From jomlowe at iu.edu Wed Sep 26 19:30:13 2018 From: jomlowe at iu.edu (Mike Lowe) Date: Wed, 26 Sep 2018 15:30:13 -0400 Subject: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <58804C82-9E57-4A69-BB66-BCD1C6FFB441@iu.edu> +1 I encountered the negative effects of the disparity between the cinder cli and OpenStack cli just an hour before receiving Tim’s reply. The missing features of OpenStack client relative to individual project clients trip me up multiple times per week on average. Sent from my iPad > On Sep 26, 2018, at 2:55 PM, Tim Bell wrote: > > > Doug, > > Thanks for raising this. 
I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. 
> > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mgagne at calavera.ca Wed Sep 26 19:40:45 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Wed, 26 Sep 2018 15:40:45 -0400 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: +1 Yes please! -- Mathieu On Wed, Sep 26, 2018 at 2:56 PM Tim Bell wrote: > > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. 
We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From tpb at dyncloud.net Wed Sep 26 20:27:52 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 26 Sep 2018 16:27:52 -0400 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <20180926202752.j56dtfyahnw4triq@barron.net> On 26/09/18 18:55 +0000, Tim Bell wrote: > >Doug, > >Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > >To give it some context and the motivation: > >At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > >One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). Tim, First, I endorse this goal. That said, lack of coverage of Manila in the OpenStack client was articulated as a need (by CERN and others) during the Vancouver Forum. At the recent Manila PTG we set addressing this technical debt as a Stein cycle goal, as well as OpenStack SDK integration for Manila. -- Tom Barron (tbarron) > In other cases, there are subsets of the function which require the native project client. > >I would strongly support a goal which targets > >- All new projects should have the end user facing functionality fully exposed via the unified client >- Existing projects should aim to close the gap within 'N' cycles (N to be defined) >- Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) >- Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > >The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. 
> >It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > >Tim > >-----Original Message----- >From: Doug Hellmann >Reply-To: "OpenStack Development Mailing List (not for usage questions)" >Date: Wednesday, 26 September 2018 at 18:00 >To: openstack-dev , openstack-operators , openstack-sigs >Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. > > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >_______________________________________________ >openstack-sigs mailing list >openstack-sigs at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From mriedemos at gmail.com Wed Sep 26 20:44:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 26 Sep 2018 15:44:53 -0500 Subject: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> Message-ID: <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> On 9/26/2018 3:01 PM, Doug Hellmann wrote: > Monty Taylor writes: > >> On 09/26/2018 01:55 PM, Tim Bell wrote: >>> Doug, >>> >>> Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. I would personally like to thank the person that put that goal in the etherpad...they must have had amazing foresight and unparalleled modesty. >>> >>> To give it some context and the motivation: >>> >>> At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). >>> >>> One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. 
>>> >>> I would strongly support a goal which targets >>> >>> - All new projects should have the end user facing functionality fully exposed via the unified client >>> - Existing projects should aim to close the gap within 'N' cycles (N to be defined) >>> - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) >>> - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) >>> >>> The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. >>> >>> It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. >> ++ >> >> It's also worth noting that we're REALLY close to a 1.0 of openstacksdk >> (all the patches are in flight, we just need to land them) - and once >> we've got that we'll be in a position to start shifting >> python-openstackclient to using openstacksdk instead of python-*client. >> >> This will have the additional benefit that, once we've migrated CLIs to >> python-openstackclient as per this goal, and once we've migrated >> openstackclient itself to openstacksdk, the number of different >> libraries one needs to install to interact with openstack will be >> _dramatically_ lower. > Would it be useful to have the SDK work in OSC as a prerequisite to the > goal work? I would hate to have folks have to write a bunch of things > twice. > > Do we have any sort of list of which projects aren't currently being > handled by OSC? If we could get some help building such a list, that > would help us understand the scope of the work. I started documenting the compute API gaps in OSC last release [1]. It's a big gap and needs a lot of work, even for existing CLIs (the cold/live migration CLIs in OSC are a mess, and you can't even boot from volume where nova creates the volume for you). That's also why I put something into the etherpad about the OSC core team even being able to handle an onslaught of changes for a goal like this. > > As far as admin features, I think we've been hesitant to add those to > OSC in the past, but I can see the value. I wonder if having them in a > separate library makes sense? Or is it better to have commands in the > tool that regular users can't access, and just report the permission > error when they try to run the command? I thought the same, and we talked about this at the Austin summit, but OSC is inconsistent about this (you can live migrate a server but you can't evacuate it - there is no CLI for evacuation). It also came up at the Stein PTG with Dean in the nova room giving us some direction. [2] I believe the summary of that discussion was: a) to deal with the core team sprawl, we could move the compute stuff out of python-openstackclient and into an osc-compute plugin (like the osc-placement plugin for the placement service); then we could create a new core team which would have python-openstackclient-core as a superset b) Dean suggested that we close the compute API gaps in the SDK first, but that could take a long time as well...but it sounded like we could use the SDK for things that existed in the SDK and use novaclient for things that didn't yet exist in the SDK This might be a candidate for one of these multi-release goals that the TC started talking about at the Stein PTG. 
I could see something like this being a goal for Stein: "Each project owns its own osc- plugin for OSC CLIs" That deals with the core team and sprawl issue, especially with stevemar being gone and dtroyer being distracted by shiny x-men bird related things. That also seems relatively manageable for all projects to do in a single release. Having a single-release goal of "close all gaps across all service types" is going to be extremely tough for any older projects that had CLIs before OSC was created (nova/cinder/glance/keystone). For newer projects, like placement, it's not a problem because they never created any other CLI outside of OSC. [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc [2] https://etherpad.openstack.org/p/nova-ptg-stein (~L721) -- Thanks, Matt From melwittt at gmail.com Wed Sep 26 21:48:49 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 26 Sep 2018 14:48:49 -0700 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> Message-ID: <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> On Tue, 25 Sep 2018 12:08:03 -0500, Matt Riedemann wrote: > On 9/25/2018 8:36 AM, John Garbutt wrote: >> Another thing is about existing flavors configured for these >> capabilities-scoped specs. Are you saying during the deprecation we'd >> continue to use those even if the filter is disabled? In the review I >> had suggested that we add a pre-upgrade check which inspects the >> flavors >> and if any of these are found, we report a warning meaning those >> flavors >> need to be updated to use traits rather than capabilities. Would >> that be >> reasonable? >> >> >> I like the idea of a warning, but there are features that have not yet >> moved to traits: >> https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html >> >> There is a more general plan that will help, but its not quite ready yet: >> https://review.openstack.org/#/c/504952/ >> >> As such, I think we can't get pull the plug on flavors including >> capabilities and passing them to Ironic, but (after a cycle of >> deprecation) I think we can now stop pushing capabilities from Ironic >> into Nova and using them for placement. > > Forgive my ignorance, but if traits are not on par with capabilities, > why are we deprecating the capabilities filter? I would like to know the answer to this as well. 
-melanie From rochelle.grober at huawei.com Wed Sep 26 23:17:58 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Wed, 26 Sep 2018 23:17:58 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch>, Message-ID: 1B24E0FB-005A-4A86-AF27-6659D912A07F Oh, very definitely +1000 -------------------------------------------------- Rochelle Grober Rochelle Grober M: +1-6508889722(preferred) E: rochelle.grober at huawei.com 2012实验室-硅谷研究所技术规划及合作部 2012 Laboratories-Silicon Valley Technology Planning & Cooperation,Silicon Valley Research Center From:Mathieu Gagné To:openstack-sigs at lists.openstack.org, Cc:OpenStack Development Mailing List (not for usage questions),OpenStack Operators, Date:2018-09-26 12:41:24 Subject:Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series +1 Yes please! -- Mathieu On Wed, Sep 26, 2018 at 2:56 PM Tim Bell wrote: > > > Doug, > > Thanks for raising this. I'd like to highlight the goal "Finish moving legacy python-*client CLIs to python-openstackclient" from the etherpad and propose this for a T/U series goal. > > To give it some context and the motivation: > > At CERN, we have more than 3000 users of the OpenStack cloud. We write an extensive end user facing documentation which explains how to use the OpenStack along with CERN specific features (such as workflows for requesting projects/quotas/etc.). > > One regular problem we come across is that the end user experience is inconsistent. In some cases, we find projects which are not covered by the unified OpenStack client (e.g. Manila). In other cases, there are subsets of the function which require the native project client. > > I would strongly support a goal which targets > > - All new projects should have the end user facing functionality fully exposed via the unified client > - Existing projects should aim to close the gap within 'N' cycles (N to be defined) > - Many administrator actions would also benefit from integration (reader roles are end users too so list and show need to be covered too) > - Users should be able to use a single openrc for all interactions with the cloud (e.g. not switch between password for some CLIs and Kerberos for OSC) > > The end user perception of a solution will be greatly enhanced by a single command line tool with consistent syntax and authentication framework. > > It may be a multi-release goal but it would really benefit the cloud consumers and I feel that goals should include this audience also. > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 26 September 2018 at 18:00 > To: openstack-dev , openstack-operators , openstack-sigs > Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series > > It's time to start thinking about community-wide goals for the T series. > > We use community-wide goals to achieve visible common changes, push for > basic levels of consistency and user experience, and efficiently improve > certain areas where technical debt payments have become too high - > across all OpenStack projects. Community input is important to ensure > that the TC makes good decisions about the goals. We need to consider > the timing, cycle length, priority, and feasibility of the suggested > goals. 
> > If you are interested in proposing a goal, please make sure that before > the summit it is described in the tracking etherpad [1] and that you > have started a mailing list thread on the openstack-dev list about the > proposal so that everyone in the forum session [2] has an opportunity to > consider the details. The forum session is only one step in the > selection process. See [3] for more details. > > Doug > > [1] https://etherpad.openstack.org/p/community-goals > [2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814 > [3] https://governance.openstack.org/tc/goals/index.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs _______________________________________________ openstack-sigs mailing list openstack-sigs at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Sep 27 06:32:57 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 27 Sep 2018 08:32:57 +0200 Subject: [Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG Message-ID: Hi everyone, Time for a new meeting for PCWG - today (27th) 1400 UTC in #openstack-publiccloud! Agenda found at https://etherpad.openstack.org/p/publiccloud-wg We will again have a short brief from the PTG for those of you that missed that last week. Also, time to start planning for the upcoming summit - forum sessions submitted etc. Another important item on the agenda is the prio/ranking of our "missing features" list. We have identified a few cross project goals already that we see as important, but we need more operators to engage in this ranking. Talk to you later today! Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From kchamart at redhat.com Thu Sep 27 08:24:45 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 27 Sep 2018 10:24:45 +0200 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release In-Reply-To: References: <20180924132250.GW28120@paraplu> Message-ID: <20180927082445.GA28294@paraplu> On Mon, Sep 24, 2018 at 09:11:42AM -0700, iain MacDonnell wrote: > > > On 09/24/2018 06:22 AM, Kashyap Chamarthy wrote: > > (b) Oracle Linux: Can you please confirm if you'll be able to > > release libvirt and QEMU to 4.0.0 and 2.11, respectively? > > Hi Kashyap, > > Those are already available at: > > http://yum.oracle.com/repo/OracleLinux/OL7/developer/kvm/utils/x86_64/index.html Hi Iain, Thanks for confirming. 
When you get a moment, please update the "FIXME" for Oracle Linux: https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions -- /kashyap From thierry at openstack.org Thu Sep 27 09:30:28 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 27 Sep 2018 11:30:28 +0200 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: <3b40375e-8970-35fc-5941-b331d5ecaf63@openstack.org> First I think that is a great goal, but I want to pick up on Dean's comment: Dean Troyer wrote: > [...] > The OSC core team is very thin, yes, it seems as though companies > don't like to spend money on client-facing things...I'll be in the > hall following this thread should anyone want to talk... I think OSC (and client-facing tooling in general) is a great place for OpenStack users (deployers of OpenStack clouds) to contribute. It's a smaller territory, it's less time-consuming than the service side, they are the most obvious interested party, and a small, 20% time investment would have a dramatic impact. It's arguably difficult for OpenStack users to get involved in "OpenStack development": keeping track of what's happening in a large team is already likely to consume most of the time you can dedicate to it. But OSC is a specific, smaller area which would be a good match for the expertise and time availability of anybody running an OpenStack cloud that wants to contribute back and make OpenStack better. Shameless plug: I proposed a Forum session in Berlin to discuss "Getting OpenStack users involved in the project" -- and we'll discuss such areas that are a particularly good match for users to get involved. -- Thierry Carrez (ttx) From nicolas at lrasc.fr Thu Sep 27 13:25:43 2018 From: nicolas at lrasc.fr (nicolas at lrasc.fr) Date: Thu, 27 Sep 2018 15:25:43 +0200 Subject: [Openstack-operators] [OpenStack][Neutron][SFC] Regarding SFC support on provider VLAN N/W In-Reply-To: References: Message-ID: <8d8785eea2d9029ef36c13d13d8f7815@lrasc.fr> On 2018-09-26 14:06, Amit Kumar wrote: > Hi All, > > We are using Ocata release and we have installed networking-sfc for > Service Function Chaining functionality. Installation was successful > and then we tried to create port pairs on VLAN N/W and it failed. We > tried creating port-pairs on VXLAN based N/W and it worked. So, is it > that SFC functionality is supported only on VXLAN based N/Ws? > > Regards, > Amit > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Hi, I had similar problems with networking-sfc (not able to create port pair groups and not able to delete port pairs). I also had trouble understanding the documentation of networking-sfc. I sent a mail (see below) to the people listed in the doc and to commiters on the github repo, but I didn't get any answer. I am interested in any feedback about my questions below! TY! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ My previous email about networking-sfc begins here. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Hi, I want to test the Service Function Chaining SFC functionalities of OpenStack when using the networking_sfc driver. But I have some problems with reproducing the tutorial in the doc [1][2]. 
If I execute the command in the tuto [1][2], it fails. There is a chance that I miss something, either in the networking_sfc installation phase or in the tuto test config phase. If you could be kind enough to read the following, that could help me and maybe improve my understanding of the tutorial/doc. You need to read this with a text editor to see the figures. ################################# ## Installation of networking_sfc ################################# ## My environment First, I deploy my OpenStack env with the OpenStack Ansible framework. This is a quick description of my lab environment: OpenStack version : stable/queens OpenStack Ansible OSA version : 17.0.9.dev22 python env version : python2.7 operating system : Ubuntu Server 16.04 1 controller node, 1 dedicated neutron node, 2 computes nodes ## Installation of networking_sfc Then, I manually install [over my OSA deployment] and configure networking_sfc following these links: * https://docs.openstack.org/networking-sfc/latest/install/install.html * https://docs.openstack.org/releasenotes/networking-sfc/queens.html I install with pip (python2.7). First, I must source the right python venv (OSA is prepared for that [3]): ``` user at neutron-serveur: source /openstack/venvs/neutron-17.0.9/bin/activate ``` (NB: following [3], OSA should deploy OpenStack with networkin-sfc, but it did not work for me. Therefore I installed networkin-sfc manually.) Then I install networking-sfc: ``` (neutron-17.0.9) user at neutron-serveur: pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens networking-sfc==6.0.0 ``` The install seems to be ok (no error, only Ignoring python3.x version of soft). Then, I modify the neutron config files to meet this: https://docs.openstack.org/networking-sfc/latest/install/configuration.html ########################### ## Using networking_sfc CLI ########################### I want to reproduce the following steps to check my installation and get a better understanding: * [1] https://docs.openstack.org/newton/networking-guide/config-sfc.html * [2] https://docs.openstack.org/networking-sfc/latest/contributor/system_design_and_workflow.html But after reading this, I don't understand a few things. When I read the description of the example, this is what I understand: ``` +-------------+ +-----+ +-----+ +-----+ +-------------+ | service | | VM1 | | VM2 | | VM3 | | service | | VM vm1 |->--p1| SF1 |p2->--p3| SF2 |p4->--p5| SF3 |p6->--| VM vm2 | |22.1.20.1:23 | +-----+ +-----+ +-----+ |171.4.5.6:100| | Source | | Destination | +-------------+ +-------------+ ``` But when I read the next steps, this is what I see: ``` +-----+ +-----+ +-----+ | VM1 | | VM2 | | VM3 | 22.1.20.1:23->--p1| SF1 |p2->--p3| SF2 |p4->--p5| SF3 |p6->--171.4.5.6:100 +-----+ +-----+ +-----+ ``` Here I have several questions: 1. How do you configure the net1 network ? 2. Shouldn't we add an IP subnet to net1 ? Because I can not create an instance if there are no IP subnet. Maybe the 3 SFx instances VM1, 2 & 3 need 1 port for admin and 2 ports for their sfc port pair. 3. Where are the 2 objects (the 2 service VMs) with the IP address 22.1.20.1 and 172.4.5.6 ? 4. Is the proxy classifier enough to route/steer network traffic between the source and destination ? 
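For reference on the last question: in networking-sfc the flow classifier only selects which traffic enters the chain, while the steering itself comes from port pairs, one port pair group per hop, and a port chain that ties the groups to the classifier. A minimal sketch of those remaining calls, assuming the current `openstack sfc` plugin syntax; p1..p6 are the SF ports from the example, and PP*/PG*/PC1 are illustrative names:

```
# Pair each SF's ingress/egress Neutron ports (port names are the ones from the example)
$ openstack sfc port pair create --ingress p1 --egress p2 PP1
$ openstack sfc port pair create --ingress p3 --egress p4 PP2
$ openstack sfc port pair create --ingress p5 --egress p6 PP3

# One port pair group per hop in the chain
$ openstack sfc port pair group create --port-pair PP1 PG1
$ openstack sfc port pair group create --port-pair PP2 PG2
$ openstack sfc port pair group create --port-pair PP3 PG3

# The chain binds the groups to the classifier and steers matching traffic SF1 -> SF2 -> SF3
$ openstack sfc port chain create --port-pair-group PG1 --port-pair-group PG2 \
    --port-pair-group PG3 --flow-classifier FC1 PC1
```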
My guess is the following: if I want to test SFC feature with OpenStack and networking-sfc driver, maybe I need the following topology: ``` + + + + | | | | +---->---(X)---->-----+ | | | Router #1 | | | | | | | | +--->----+ | | | | | | | | | p1 | | | | +-----+ | | | | | VM1 | | | | | | SF1 +--- at IP-+ | | +----------+ | | | | | +---------+ | | Service | | +-----+ | | | Service | +--ps+ VM source| | p2 | +--pd+ VM Dest | | | 22.1.20.1| | | | | |171.4.5.6| | | TCP 23 | +---<----+ | | | TCP 100 | | +----------+ | | | +---------+ | +--->----+ | | | | | | | | | p3 | | | | +-----+ | | | | | VM2 | | | | | | SF2 +--- at IP-+ | | | | | | | | | +-----+ | | | | p4 | | | | | | | | +---<----+ | | | | | | | +--->----+ | | | | | | | | | p5 | | | | +-----+ | | | | | VM3 | | | | | | SF3 +--- at IP-+ | | | | | | | | | +-----+ | | | | p6 | | | | | | | | +---<----+ | | | | | | | | | | | | | | | +--->----(X)--->----~------>--------+ | | Router#2 | | | | | | | | | | +-----+-----+ +-----+-----+ +----+----+ +-----+-----+ Source Net SFC net1 SFC net admin Dest Net 22.1.20.0/24 Flow trafic L2 10.42.42.0/24 171.4.5.0/2 Openstack Tenant Openstack Tenant Openstack Tenant Openstack Tenant Network VxLAN Network VxLAN Network VxLAN Network VxLAN OvS driver OvS driver OvS driver OvS driver ``` This represent the network view in OpenStack for the 3 SF instances forming a service chain and for the source and destination network flow. For SF instance SF1, 2, 3: they have 3 ports * 1 admin port * 1 ingress port (p1, p3, p5) * 1 egress port (p2, p4, p6) Source and dest VM have only 1 port: * ps port for source VM * pd port for dest VM I have other questions with this view: 1. I am not sure how to connect the Source Net 22.1.20.0/24 and the SFC net1. Same for SFC net1 and Dest Net 171.4.5.0/24. Maybe it is enough to use the flow classifier with the logical port option (wich is mendatory when using the OvS driver, according to the doc): ``` $ openstack sfc flow classifier create \ --ethertype IPv4 \ --source-ip-prefix 22.1.20.1/32 \ --destination-ip-prefix 171.4.5.6/32 \ --protocol tcp \ --source-port 23:23 \ --destination-port 100:100 \ --logical-source-port id_ps \ --logical-destination-port id_pd \ FC1 ``` 2. Maybe I don't need the 2 neutron routers (Routers #1 and #2) because the FC1 classifier and the port chain figures out what to do with the network traffic (from 22.1.20.1 to 171.4.5.6). 3. And I am still a bit confuse on wether SFC net1 should have an IP subnet or not. My idea is to create an additional admin network separated from net1. 4. Maybe I need a SDN controller ? For the moment my OpenStack environment only use neutron. In an other environment, I have been trying to use Opendaylight as a neutron backend, but I have trouble with layer L3 network. Many thanks for your time reading this. Links: * [1] https://docs.openstack.org/newton/networking-guide/config-sfc.html * [2] https://docs.openstack.org/networking-sfc/latest/contributor/system_design_and_workflow.html * [3] https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-opendaylight.html ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End of my previous email about networking-sfc. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- Kind regards, Nicolas From doug at doughellmann.com Thu Sep 27 14:06:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 27 Sep 2018 10:06:06 -0400 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series In-Reply-To: References: <51B94DEF-2279-43E7-844B-48408DE11F41@cern.ch> <9b25b688-8286-c34d-1fc2-386f5ab93ec4@gmail.com> Message-ID: Dean Troyer writes: > On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann wrote: >> I started documenting the compute API gaps in OSC last release [1]. It's a >> big gap and needs a lot of work, even for existing CLIs (the cold/live >> migration CLIs in OSC are a mess, and you can't even boot from volume where >> nova creates the volume for you). That's also why I put something into the >> etherpad about the OSC core team even being able to handle an onslaught of >> changes for a goal like this. > > The OSC core team is very thin, yes, it seems as though companies > don't like to spend money on client-facing things...I'll be in the > hall following this thread should anyone want to talk... > > The migration commands are a mess, mostly because I got them wrong to > start with and we have only tried to patch it up, this is one area I > think we need to wipe clean and fix properly. Yay! Major version > release! I definitely think having details about the gaps would be a prerequisite for approving a goal, but I wonder if that's something 1 person could even do alone. Is this an area where a small team is needed? >> I thought the same, and we talked about this at the Austin summit, but OSC >> is inconsistent about this (you can live migrate a server but you can't >> evacuate it - there is no CLI for evacuation). It also came up at the Stein >> PTG with Dean in the nova room giving us some direction. [2] I believe the >> summary of that discussion was: > >> a) to deal with the core team sprawl, we could move the compute stuff out of >> python-openstackclient and into an osc-compute plugin (like the >> osc-placement plugin for the placement service); then we could create a new >> core team which would have python-openstackclient-core as a superset > > This is not my first choice but is not terrible either... We built cliff to be based on plugins to support this sort of work distribution, right? >> b) Dean suggested that we close the compute API gaps in the SDK first, but >> that could take a long time as well...but it sounded like we could use the >> SDK for things that existed in the SDK and use novaclient for things that >> didn't yet exist in the SDK > > Yup, this can be done in parallel. The unit of decision for use sdk > vs use XXXclient lib is per-API call. If the client lib can use an > SDK adapter/session it becomes even better. I think the priority for > what to address first should be guided by complete gaps in coverage > and the need for microversion-driven changes. > >> This might be a candidate for one of these multi-release goals that the TC >> started talking about at the Stein PTG. I could see something like this >> being a goal for Stein: >> >> "Each project owns its own osc- plugin for OSC CLIs" >> >> That deals with the core team and sprawl issue, especially with stevemar >> being gone and dtroyer being distracted by shiny x-men bird related things. >> That also seems relatively manageable for all projects to do in a single >> release. 
Having a single-release goal of "close all gaps across all service >> types" is going to be extremely tough for any older projects that had CLIs >> before OSC was created (nova/cinder/glance/keystone). For newer projects, >> like placement, it's not a problem because they never created any other CLI >> outside of OSC. Yeah, I agree this work is going to need to be split up. I'm still not sold on the idea of multi-cycle goals, personally. > I think the major difficulty here is simply how to migrate users from > today state to future state in a reasonable manner. If we could teach > OSC how to handle the same command being defined in multiple plugins > properly (hello entrypoints!) it could be much simpler as we could > start creating the new plugins and switch as the new command > implementations become available rather than having a hard cutover. > > Or maybe the definition of OSC v4 is as above and we just work at it > until complete and cut over at the end. Note that the current APIs > that are in-repo (Compute, Identity, Image, Network, Object, Volume) > are all implemented using the plugin structure, OSC v4 could start as > the breaking out of those without command changes (except new > migration commands!) and then the plugins all re-write and update at > their own tempo. Dang, did I just deconstruct my project? It sure sounds like it. Congratulations! I like the idea of moving the existing code into libraries, having python-openstackclient depend on them, and then asking project teams for more help with them. > One thing I don't like about that is we just replace N client libs > with N (or more) plugins now and the number of things a user must > install doesn't go down. I would like to hear from anyone who deals > with installing OSC if that is still a big deal or should I let go of > that worry? Don't package managers just deal with this? I can pip/yum/apt install something and get all of its dependencies, right? Doug From florian.engelmann at everyware.ch Thu Sep 27 15:58:55 2018 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Thu, 27 Sep 2018 17:58:55 +0200 Subject: [Openstack-operators] QoS Nova and Cinder Message-ID: Hi, starting a new instance on ephemeral storage all "quota:disk_*" setting are honored and work great with ceph as ephemeral backend and KVM as hypervisor. Starting a new instance with "--volume": --volume Create server using this volume as the boot disk the quota settings of the flavor are not honored. Questions: 1. Is there any way to tell nova to still honor the flavor quota settings if ---volume is used? 2. How to create a default volume type with an associated cinder qos to still have an option to prevent that volume to get unlimited iops? Thank you so much! All the best, Florian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From jaypipes at gmail.com Thu Sep 27 20:02:58 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 27 Sep 2018 16:02:58 -0400 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> Message-ID: On 09/26/2018 05:48 PM, melanie witt wrote: > On Tue, 25 Sep 2018 12:08:03 -0500, Matt Riedemann wrote: >> On 9/25/2018 8:36 AM, John Garbutt wrote: >>>      Another thing is about existing flavors configured for these >>>      capabilities-scoped specs. Are you saying during the deprecation >>> we'd >>>      continue to use those even if the filter is disabled? In the >>> review I >>>      had suggested that we add a pre-upgrade check which inspects the >>>      flavors >>>      and if any of these are found, we report a warning meaning those >>>      flavors >>>      need to be updated to use traits rather than capabilities. Would >>>      that be >>>      reasonable? >>> >>> >>> I like the idea of a warning, but there are features that have not yet >>> moved to traits: >>> https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html >>> >>> >>> There is a more general plan that will help, but its not quite ready >>> yet: >>> https://review.openstack.org/#/c/504952/ >>> >>> As such, I think we can't get pull the plug on flavors including >>> capabilities and passing them to Ironic, but (after a cycle of >>> deprecation) I think we can now stop pushing capabilities from Ironic >>> into Nova and using them for placement. >> >> Forgive my ignorance, but if traits are not on par with capabilities, >> why are we deprecating the capabilities filter? > > I would like to know the answer to this as well. In short, traits were never designed to be key/value pairs. They are simple strings indicating boolean capabilities. Ironic "capabilities" are key/value metadata pairs. *Some* of those Ironic "capabilities" are possible to create as boolean traits. For example, you can change the boot_mode=uefi and boot_mode=bios Ironic capabilities to be a trait called CUSTOM_BOOT_MODE_UEFI or CUSTOM_BOOT_MODE_BIOS [1]. Other Ironic "capabilities" are not, in fact, capabilities at all. Instead, they are just random key/value pairs that are not boolean in nature nor do they represent a capability of the baremetal hardware. A great example of this would be the proposed "deploy template" from [2]. This is nothing more than abusing the placement traits API in order to allow passthrough of instance configuration data from the nova flavor extra spec directly into the nodes.instance_info field in the Ironic database. It's a hack that is abusing the entire concept of the placement traits concept, IMHO. We should have a way *in Nova* of allowing instance configuration key/value information to be passed through to the virt driver's spawn() method, much the same way we provide for user_data that gets exposed after boot to the guest instance via configdrive or the metadata service API. 
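For the genuinely boolean cases above, the capability-to-trait move is largely mechanical. A rough sketch, assuming a traits-aware ironic API (1.37 or newer) and placeholder node/flavor names:

```
# Tag the bare metal node with a trait instead of a boot_mode capability
$ openstack baremetal node add trait node-1 CUSTOM_BOOT_MODE_UEFI

# Require it from the flavor; the scheduler then matches it through placement
$ openstack flavor set --property trait:CUSTOM_BOOT_MODE_UEFI=required bm.uefi
```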
What this deploy template thing is is just a hack to get around the fact that nova doesn't have a basic way of passing through some collated instance configuration key/value information, which is a darn shame and I'm really kind of annoyed with myself for not noticing this sooner. :( -jay [1] As I've asked for in the past, it would be great to have Ironic contributors push patches to the os-traits library for standardized baremetal capabilities like boot modes. Please do consider contributing there. [2] https://review.openstack.org/#/c/504952/16/specs/approved/deploy-templates.rst From jp.methot at planethoster.info Thu Sep 27 21:05:50 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Thu, 27 Sep 2018 17:05:50 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> Message-ID: I got some answers from the openvswitch mailing list, essentially indicating the issue is in the connection between neutron-openvswitch-agent and ovs. Here’s an output of ovs-vsctl list controller: _uuid               : ff2dca74-9628-43c8-b89c-8d2f1242dd3f connection_mode     : out-of-band controller_burst_limit: [] controller_rate_limit: [] enable_async_messages: [] external_ids        : {} inactivity_probe    : [] is_connected        : false local_gateway       : [] local_ip            : [] local_netmask       : [] max_backoff         : [] other_config        : {} role                : other status              : {last_error="Connection timed out", sec_since_connect="22", sec_since_disconnect="1", state=BACKOFF} target              : "tcp:127.0.0.1:6633 » So OVS is still working but the connection between neutron-openvswitch-agent and OVS gets interrupted somehow. It may also be linked to the HA vrrp switching host at random as the connection between both network nodes get severed. We also see SSH lagging momentarily. I’m starting to think that a limit of some kind in linux is reached, preventing connections from happening. However, I don’t think it’s max open file since the number of open files is nowhere close to what I’ve set it. Ideas? Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 26 sept. 2018 à 15:16, Jean-Philippe Méthot a écrit : > > Yes, I notice that every time that message appears, at least a few packets get dropped and some of our instances pop up in nagios, even though they are reachable 1 or 2 seconds after. It’s really causing us some issues as we can’t ensure proper network quality for our customers. Have you noticed the same? > > By that point I think it may be best to contact openvswitch directly since it seems to be an issue with their component. I am about to do that and hope I don’t get sent back to the openstack mailing list. I would really like to know what this probe is and why it disconnects constantly under load. > > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > > > > >> Le 26 sept. 2018 à 11:48, Simon Leinen > a écrit : >> >> Jean-Philippe Méthot writes: >>> This particular message makes it sound as if openvswitch is getting overloaded. 
>>> Sep 23 03:54:08 network1 ovsdb-server: ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity probe after 5.01 seconds, disconnecting >> >> We get these as well :-( >> >>> A lot of those keep appear, and openvswitch always reconnects almost >>> instantly though. I’ve done some research about that particular >>> message, but it didn’t give me anything I can use to fix it. >> >> Would be interested in solutions as well. But I'm sceptical whether >> kernel settings can help here, because the timeout/slowness seems to be >> located in the user-space/control-plane parts of Open vSwitch, >> i.e. OVSDB. >> -- >> Simon. >> >>> Jean-Philippe Méthot >>> Openstack system administrator >>> Administrateur système Openstack >>> PlanetHoster inc. >> >>> Le 25 sept. 2018 à 19:37, Erik McCormick > a écrit : >> >>> Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? I've hit lots of kernel limits under those conditions before OVS itself ever >>> noticed. Anything in dmesg, journal or neutron logs of interest? >> >>> On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot > wrote: >> >>> Hi, >> >>> Are there some recommendations regarding kernel settings configuration for openvswitch? We’ve just been hit by what we believe may be an attack of some kind we >>> have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an >>> attack, or whatever it was. >> >>> Best regards, >> >>> Jean-Philippe Méthot >>> Openstack system administrator >>> Administrateur système Openstack >>> PlanetHoster inc. >> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Sep 27 22:23:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 27 Sep 2018 17:23:26 -0500 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> Message-ID: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> On 9/27/2018 3:02 PM, Jay Pipes wrote: > A great example of this would be the proposed "deploy template" from > [2]. This is nothing more than abusing the placement traits API in order > to allow passthrough of instance configuration data from the nova flavor > extra spec directly into the nodes.instance_info field in the Ironic > database. It's a hack that is abusing the entire concept of the > placement traits concept, IMHO. 
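One OVS-side knob tied directly to those messages (a possible mitigation to experiment with, not a confirmed fix for this report): the inactivity probe interval is a per-connection setting, so it can be raised, or disabled, while the real bottleneck is tracked down. The 30000 ms value below is only an example, and the manager UUID is whatever `ovs-vsctl list manager` reports:

```
# OpenFlow connection between the agent and the integration bridge (0 disables the probe)
$ ovs-vsctl set controller br-int inactivity_probe=30000

# The ovsdb connection has the same column on its Manager record; list first, then set by UUID
$ ovs-vsctl list manager
$ ovs-vsctl set manager <manager-uuid> inactivity_probe=30000
```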
> > We should have a way *in Nova* of allowing instance configuration > key/value information to be passed through to the virt driver's spawn() > method, much the same way we provide for user_data that gets exposed > after boot to the guest instance via configdrive or the metadata service > API. What this deploy template thing is is just a hack to get around the > fact that nova doesn't have a basic way of passing through some collated > instance configuration key/value information, which is a darn shame and > I'm really kind of annoyed with myself for not noticing this sooner. :( We talked about this in Dublin through right? We said a good thing to do would be to have some kind of template/profile/config/whatever stored off in glare where schema could be registered on that thing, and then you pass a handle (ID reference) to that to nova when creating the (baremetal) server, nova pulls it down from glare and hands it off to the virt driver. It's just that no one is doing that work. -- Thanks, Matt From melwittt at gmail.com Thu Sep 27 22:49:47 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 27 Sep 2018 15:49:47 -0700 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote: > On 9/27/2018 3:02 PM, Jay Pipes wrote: >> A great example of this would be the proposed "deploy template" from >> [2]. This is nothing more than abusing the placement traits API in order >> to allow passthrough of instance configuration data from the nova flavor >> extra spec directly into the nodes.instance_info field in the Ironic >> database. It's a hack that is abusing the entire concept of the >> placement traits concept, IMHO. >> >> We should have a way *in Nova* of allowing instance configuration >> key/value information to be passed through to the virt driver's spawn() >> method, much the same way we provide for user_data that gets exposed >> after boot to the guest instance via configdrive or the metadata service >> API. What this deploy template thing is is just a hack to get around the >> fact that nova doesn't have a basic way of passing through some collated >> instance configuration key/value information, which is a darn shame and >> I'm really kind of annoyed with myself for not noticing this sooner. :( > > We talked about this in Dublin through right? We said a good thing to do > would be to have some kind of template/profile/config/whatever stored > off in glare where schema could be registered on that thing, and then > you pass a handle (ID reference) to that to nova when creating the > (baremetal) server, nova pulls it down from glare and hands it off to > the virt driver. It's just that no one is doing that work. If I understood correctly, that discussion was around adding a way to pass a desired hardware configuration to nova when booting an ironic instance. And that it's something that isn't yet possible to do using the existing ComputeCapabilitiesFilter. Someone please correct me if I'm wrong there. That said, I still don't understand why we are talking about deprecating the ComputeCapabilitiesFilter if there's no supported way to replace it yet. 
If boolean traits are not enough to replace it, then we need to hold off on deprecating it, right? Would the template/profile/config/whatever in glare approach replace what the ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly understanding this yet. -melanie From skaplons at redhat.com Fri Sep 28 07:03:46 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 28 Sep 2018 09:03:46 +0200 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> Message-ID: <3EC9A862-7474-48AA-B1B2-473C7F656A36@redhat.com> Hi, What version of Neutron and ovsdbapp You are using? IIRC there was such issue somewhere around Pike version, we saw it in functional tests quite often. But later with new ovsdbapp version I think that this problem was somehow solved. Maybe try newer version of ovsdbapp and check if it will be better. > Wiadomość napisana przez Jean-Philippe Méthot w dniu 27.09.2018, o godz. 23:05: > > I got some answers from the openvswitch mailing list, essentially indicating the issue is in the connection between neutron-openvswitch-agent and ovs. > > Here’s an output of ovs-vsctl list controller: > > _uuid               : ff2dca74-9628-43c8-b89c-8d2f1242dd3f > connection_mode     : out-of-band > controller_burst_limit: [] > controller_rate_limit: [] > enable_async_messages: [] > external_ids        : {} > inactivity_probe    : [] > is_connected        : false > local_gateway       : [] > local_ip            : [] > local_netmask       : [] > max_backoff         : [] > other_config        : {} > role                : other > status              : {last_error="Connection timed out", sec_since_connect="22", sec_since_disconnect="1", state=BACKOFF} > target              : "tcp:127.0.0.1:6633 » > > So OVS is still working but the connection between neutron-openvswitch-agent and OVS gets interrupted somehow. It may also be linked to the HA vrrp switching host at random as the connection between both network nodes get severed. We also see SSH lagging momentarily. I’m starting to think that a limit of some kind in linux is reached, preventing connections from happening. However, I don’t think it’s max open file since the number of open files is nowhere close to what I’ve set it. > > Ideas? > > Jean-Philippe Méthot > Openstack system administrator > Administrateur système Openstack > PlanetHoster inc. > > > > >> Le 26 sept. 2018 à 15:16, Jean-Philippe Méthot a écrit : >> >> Yes, I notice that every time that message appears, at least a few packets get dropped and some of our instances pop up in nagios, even though they are reachable 1 or 2 seconds after. It’s really causing us some issues as we can’t ensure proper network quality for our customers. Have you noticed the same? >> >> By that point I think it may be best to contact openvswitch directly since it seems to be an issue with their component. I am about to do that and hope I don’t get sent back to the openstack mailing list. I would really like to know what this probe is and why it disconnects constantly under load. >> >> Jean-Philippe Méthot >> Openstack system administrator >> Administrateur système Openstack >> PlanetHoster inc. >> >> >> >> >>> Le 26 sept. 
2018 à 11:48, Simon Leinen a écrit : >>> >>> Jean-Philippe Méthot writes: >>>> This particular message makes it sound as if openvswitch is getting overloaded. >>>> Sep 23 03:54:08 network1 ovsdb-server: ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity probe after 5.01 seconds, disconnecting >>> >>> We get these as well :-( >>> >>>> A lot of those keep appear, and openvswitch always reconnects almost >>>> instantly though. I’ve done some research about that particular >>>> message, but it didn’t give me anything I can use to fix it. >>> >>> Would be interested in solutions as well. But I'm sceptical whether >>> kernel settings can help here, because the timeout/slowness seems to be >>> located in the user-space/control-plane parts of Open vSwitch, >>> i.e. OVSDB. >>> -- >>> Simon. >>> >>>> Jean-Philippe Méthot >>>> Openstack system administrator >>>> Administrateur système Openstack >>>> PlanetHoster inc. >>> >>>> Le 25 sept. 2018 à 19:37, Erik McCormick a écrit : >>> >>>> Ate you getting any particular log messages that lead you to conclude your issue lies with OVS? I've hit lots of kernel limits under those conditions before OVS itself ever >>>> noticed. Anything in dmesg, journal or neutron logs of interest? >>> >>>> On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot wrote: >>> >>>> Hi, >>> >>>> Are there some recommendations regarding kernel settings configuration for openvswitch? We’ve just been hit by what we believe may be an attack of some kind we >>>> have never seen before and we’re wondering if there’s a way to optimize our network nodes kernel for openvswitch operation and thus minimize the impact of such an >>>> attack, or whatever it was. >>> >>>> Best regards, >>> >>>> Jean-Philippe Méthot >>>> Openstack system administrator >>>> Administrateur système Openstack >>>> PlanetHoster inc. >>> >>>> _______________________________________________ >>>> OpenStack-operators mailing list >>>> OpenStack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>>> _______________________________________________ >>>> OpenStack-operators mailing list >>>> OpenStack-operators at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators — Slawek Kaplonski Senior software engineer Red Hat From sbauza at redhat.com Fri Sep 28 09:11:19 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 28 Sep 2018 11:11:19 +0200 Subject: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter In-Reply-To: References: <4bc8f7ee-e076-8f36-dbe2-25007b00c555@gmail.com> <949ce39a-a7f2-f5f7-9c4e-6dfcf6250893@gmail.com> <0c5dae82-1cdf-15e1-6ce8-e4c627aaac96@gmail.com> <9d13db33-9b91-770d-f0a3-595da1c5533e@gmail.com> Message-ID: On Fri, Sep 28, 2018 at 12:50 AM melanie witt wrote: > On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote: > > On 9/27/2018 3:02 PM, Jay Pipes wrote: > >> A great example of this would be the proposed "deploy template" from > >> [2]. 
This is nothing more than abusing the placement traits API in order > >> to allow passthrough of instance configuration data from the nova flavor > >> extra spec directly into the nodes.instance_info field in the Ironic > >> database. It's a hack that is abusing the entire concept of the > >> placement traits concept, IMHO. > >> > >> We should have a way *in Nova* of allowing instance configuration > >> key/value information to be passed through to the virt driver's spawn() > >> method, much the same way we provide for user_data that gets exposed > >> after boot to the guest instance via configdrive or the metadata service > >> API. What this deploy template thing is is just a hack to get around the > >> fact that nova doesn't have a basic way of passing through some collated > >> instance configuration key/value information, which is a darn shame and > >> I'm really kind of annoyed with myself for not noticing this sooner. :( > > > > We talked about this in Dublin through right? We said a good thing to do > > would be to have some kind of template/profile/config/whatever stored > > off in glare where schema could be registered on that thing, and then > > you pass a handle (ID reference) to that to nova when creating the > > (baremetal) server, nova pulls it down from glare and hands it off to > > the virt driver. It's just that no one is doing that work. > > If I understood correctly, that discussion was around adding a way to > pass a desired hardware configuration to nova when booting an ironic > instance. And that it's something that isn't yet possible to do using > the existing ComputeCapabilitiesFilter. Someone please correct me if I'm > wrong there. > > That said, I still don't understand why we are talking about deprecating > the ComputeCapabilitiesFilter if there's no supported way to replace it > yet. If boolean traits are not enough to replace it, then we need to > hold off on deprecating it, right? Would the > template/profile/config/whatever in glare approach replace what the > ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly > understanding this yet. > > I just feel some new traits have to be defined, like Jay said, and some work has to be done on the Ironic side to make sure they expose them as traits and not by the old way. That leaves tho a question : does Ironic support custom capabilities ? If so, that leads to Jay's point about the key/pair information that's not intented for traits. If we all agree on the fact that traits shouldn't be allowed for key/value pairs, could we somehow imagine Ironic to change the customization mechanism to be boolean only ? Also, I'm a bit confused whether operators make use of Ironic capabilities for fancy operational queries, like the ones we have in https://github.com/openstack/nova/blob/3716752/nova/scheduler/filters/extra_specs_ops.py#L24-L35 and if Ironic correctly documents how to put such things into traits ? (eg. say CUSTOM_I_HAVE_MORE_THAN_2_GPUS) All of the above makes me a bit worried by a possible ComputeCapabilitiesFilter deprecation, if we aren't yet able to provide a clear upgrade path for our users. -Sylvain -melanie > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Fri Sep 28 13:24:34 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 28 Sep 2018 08:24:34 -0500 Subject: [Openstack-operators] OpenStack Summit Forum Submission Process Extended Message-ID: <5BAE2B92.4030409@openstack.org> Hello Everyone We are extended the Forum Submission process through September 30, 11:59pm Pacific (6:59am GMT). We've already gotten a ton of great submissions, but we want to leave the door open through the weekend in case we have any stragglers. Please submit your topics here: https://www.openstack.org/summit/berlin-2018/call-for-presentations If you'd like to review the submissions to date, you can go to https://www.openstack.org/summit/berlin-2018/vote-for-speakers. There is no voting period, this is just so Forum attendees can review the submissions to date. Thank you! Jimmy From lbragstad at gmail.com Fri Sep 28 13:49:32 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 08:49:32 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: Adding the operator list back in. On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad wrote: > Bumping this thread again and proposing two conventions based on the > discussion here. I propose we decide on one of the two following > conventions: > > *::* > > or > > *:_* > > Where is the corresponding service type of the project [0], > and is either create, get, list, update, or delete. I think > decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > I think the plurality of the resource should default to what makes sense > for the operation being carried out (e.g., list:foobars, create:foobar). > > I don't mind the first one because it's clear about what the delimiter is > and it doesn't look weird when projects have something like: > > ::: > > If folks are ok with this, I can start working on some documentation that > explains the motivation for this. Afterward, we can figure out how we want > to track this work. > > What color do you want the shed to be? > > [0] https://service-types.openstack.org/service-types.json > [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > > On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > >> >> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann >> wrote: >> >>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < >>> john at johngarbutt.com> wrote ---- >>> > tl;dr+1 consistent names >>> > I would make the names mirror the API... because the Operator setting >>> them knows the API, not the codeIgnore the crazy names in Nova, I certainly >>> hate them >>> >>> Big +1 on consistent naming which will help operator as well as >>> developer to maintain those. >>> >>> > >>> > Lance Bragstad wrote: >>> > > I'm curious if anyone has context on the "os-" part of the format? 
>>> > >>> > My memory of the Nova policy mess...* Nova's policy rules >>> traditionally followed the patterns of the code >>> > ** Yes, horrible, but it happened.* The code used to have the >>> OpenStack API and the EC2 API, hence the "os"* API used to expand with >>> extensions, so the policy name is often based on extensions** note most of >>> the extension code has now gone, including lots of related policies* Policy >>> in code was focused on getting us to a place where we could rename policy** >>> Whoop whoop by the way, it feels like we are really close to something >>> sensible now! >>> > Lance Bragstad wrote: >>> > Thoughts on using create, list, update, and delete as opposed to >>> post, get, put, patch, and delete in the naming convention? >>> > I could go either way as I think about "list servers" in the API.But >>> my preference is for the URL stub and POST, GET, etc. >>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad >>> wrote:If we consider dropping "os", should we entertain dropping "api", >>> too? Do we have a good reason to keep "api"?I wouldn't be opposed to simple >>> service types (e.g "compute" or "loadbalancer"). >>> > +1The API is known as "compute" in api-ref, so the policy should be >>> for "compute", etc. >>> >>> Agree on mapping the policy name with api-ref as much as possible. Other >>> than policy name having 'os-', we have 'os-' in resource name also in nova >>> API url like /os-agents, /os-aggregates etc (almost every resource except >>> servers , flavors). As we cannot get rid of those from API url, we need to >>> keep the same in policy naming too? or we can have policy name like >>> compute:agents:create/post but that mismatch from api-ref where agents >>> resource url is os-agents. >>> >> >> Good question. I think this depends on how the service does policy >> enforcement. >> >> I know we did something like this in keystone, which required policy >> names and method names to be the same: >> >> "identity:list_users": "..." >> >> Because the initial implementation of policy enforcement used a decorator >> like this: >> >> from keystone import controller >> >> @controller.protected >> def list_users(self): >> ... >> >> Having the policy name the same as the method name made it easier for the >> decorator implementation to resolve the policy needed to protect the API >> because it just looked at the name of the wrapped method. The advantage was >> that it was easy to implement new APIs because you only needed to add a >> policy, implement the method, and make sure you decorate the implementation. >> >> While this worked, we are moving away from it entirely. The decorator >> implementation was ridiculously complicated. Only a handful of keystone >> developers understood it. With the addition of system-scope, it would have >> only become more convoluted. It also enables a much more copy-paste pattern >> (e.g., so long as I wrap my method with this decorator implementation, >> things should work right?). Instead, we're calling enforcement within the >> controller implementation to ensure things are easier to understand. It >> requires developers to be cognizant of how different token types affect the >> resources within an API. That said, coupling the policy name to the method >> name is no longer a requirement for keystone. >> >> Hopefully, that helps explain why we needed them to match. 
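As a rough sketch of the explicit-enforcement pattern described above, using plain oslo.policy rather than keystone's actual internals, decoupling the policy name from the method name can look like this (the rule name, check string, and context handling are illustrative assumptions, not keystone's real defaults):

```python
from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
ENFORCER = policy.Enforcer(CONF)

# The policy name follows the API, not the Python method that implements it.
ENFORCER.register_default(
    policy.RuleDefault('identity:list_users', 'role:reader'))


def get_all_users(context):
    # Enforcement is an explicit call inside the controller body, so the
    # method name (get_all_users) and the policy name no longer have to
    # match. 'context' is assumed to be an oslo.context RequestContext.
    ENFORCER.authorize('identity:list_users',
                       target={},
                       creds=context.to_policy_values(),
                       do_raise=True)
    # ... look up and return users ...
```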
>> >> >>> >>> Also we have action API (i know from nova not sure from other services) >>> like POST /servers/{server_id}/action {addSecurityGroup} and their current >>> policy name is all inconsistent. few have policy name including their >>> resource name like "os_compute_api:os-flavor-access:add_tenant_access", few >>> has 'action' in policy name like >>> "os_compute_api:os-admin-actions:reset_state" and few has direct action >>> name like "os_compute_api:os-console-output" >>> >> >> Since the actions API relies on the request body and uses a single HTTP >> method, does it make sense to have the HTTP method in the policy name? It >> feels redundant, and we might be able to establish a convention that's more >> meaningful for things like action APIs. It looks like cinder has a similar >> pattern [0]. >> >> [0] >> https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action >> >> >>> >>> May be we can make them consistent with >>> :: or any better opinion. >>> >>> > From: Lance Bragstad > The topic of having >>> consistent policy names has popped up a few times this week. >>> > >>> > I would love to have this nailed down before we go through all the >>> policy rules again. In my head I hope in Nova we can go through each policy >>> rule and do the following: >>> > * move to new consistent policy name, deprecate existing name* >>> hardcode scope check to project, system or user** (user, yes... keypairs, >>> yuck, but its how they work)** deprecate in rule scope checks, which are >>> largely bogus in Nova anyway* make read/write/admin distinction** therefore >>> adding the "noop" role, amount other things >>> >>> + policy granularity. >>> >>> It is good idea to make the policy improvement all together and for all >>> rules as you mentioned. But my worries is how much load it will be on >>> operator side to migrate all policy rules at same time? What will be the >>> deprecation period etc which i think we can discuss on proposed spec - >>> https://review.openstack.org/#/c/547850 >> >> >> Yeah, that's another valid concern. I know at least one operator has >> weighed in already. I'm curious if operators have specific input here. >> >> It ultimately depends on if they override existing policies or not. If a >> deployment doesn't have any overrides, it should be a relatively simple >> change for operators to consume. >> >> >>> >>> >>> -gmann >>> >>> > Thanks,John >>> __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Fri Sep 28 14:33:15 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 28 Sep 2018 09:33:15 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: <20180928143314.GA18667@sm-workstation> > On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad wrote: > > > Bumping this thread again and proposing two conventions based on the > > discussion here. I propose we decide on one of the two following > > conventions: > > > > *::* > > > > or > > > > *:_* > > > > Where is the corresponding service type of the project [0], > > and is either create, get, list, update, or delete. I think > > decoupling the method from the policy name should aid in consistency, > > regardless of the underlying implementation. The HTTP method specifics can > > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > > > > I think the plurality of the resource should default to what makes sense > > for the operation being carried out (e.g., list:foobars, create:foobar). > > > > I don't mind the first one because it's clear about what the delimiter is > > and it doesn't look weird when projects have something like: > > > > ::: > > My initial preference was the second format, but you make a good point here about potential subactions. Either is fine with me - the main thing I would love to see is consistency in format. But based on this part, I vote for option 2. > > If folks are ok with this, I can start working on some documentation that > > explains the motivation for this. Afterward, we can figure out how we want > > to track this work. > > +1 thanks for working on this! From jp.methot at planethoster.info Fri Sep 28 14:53:08 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Fri, 28 Sep 2018 10:53:08 -0400 Subject: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup In-Reply-To: <3EC9A862-7474-48AA-B1B2-473C7F656A36@redhat.com> References: <19C41B45-CD0F-48CD-A350-1C03A61493D7@planethoster.info> <0351FCF1-DAAD-4954-83A5-502AA567D581@planethoster.info> <09ABAF86-5B29-4C99-8174-A5C200BFB0EB@planethoster.info> <3EC9A862-7474-48AA-B1B2-473C7F656A36@redhat.com> Message-ID: <42B3CC72-9B2E-47C6-A18F-6FAD60E1FAEF@planethoster.info> Thank you, I will try it next week (since today is Friday) and update this thread if it has fixed my issues. We are indeed using the latest RDO Pike, so ovsdbapp 0.4.3.1 . Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 28 sept. 2018 à 03:03, Slawomir Kaplonski a écrit : > > Hi, > > What version of Neutron and ovsdbapp You are using? IIRC there was such issue somewhere around Pike version, we saw it in functional tests quite often. But later with new ovsdbapp version I think that this problem was somehow solved. > Maybe try newer version of ovsdbapp and check if it will be better. -------------- next part -------------- An HTML attachment was scrubbed... 
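For anyone else chasing the same symptom, a quick way to confirm which ovsdbapp release neutron is actually importing (run from the same Python environment as neutron-server) is a snippet like the one below; the suggested fix is then simply to upgrade that package to a newer release.

```python
# Print the installed ovsdbapp version; run inside the neutron server's
# Python environment (virtualenv or container).
import pkg_resources

print(pkg_resources.get_distribution('ovsdbapp').version)
```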
URL: From lbragstad at gmail.com Fri Sep 28 18:54:01 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 13:54:01 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > wrote: > > > > Ideally I would like to see it in the form of least specific to most > specific. But more importantly in a way that there is no additional > delimiters between the service type and the resource. Finally, I do not > like the change of plurality depending on action type. > > > > I propose we consider > > > > ::[:] > > > > Example for keystone (note, action names below are strictly examples I > am fine with whatever form those actions take): > > identity:projects:create > > identity:projects:delete > > identity:projects:list > > identity:projects:get > > > > It keeps things simple and consistent when you're looking through > overrides / defaults. > > --Morgan > +1 -- I think the ordering if `resource` comes before > `action|subaction` will be more clean. > ++ These are excellent points. I especially like being able to omit the convention about plurality. Furthermore, I'd like to add that I think we should make the resource singular (e.g., project instead or projects). For example: compute:server:list compute:server:update compute:server:create compute:server:delete compute:server:action:reboot compute:server:action:confirm_resize (or confirm-resize) Otherwise, someone might mistake compute:servers:get, as "list". This is ultra-nick-picky, but something I thought of when seeing the usage of "get_all" in policy names in favor of "list." In summary, the new convention based on the most recent feedback should be: *::[:]* Rules: - service-type is always defined in the service types authority - resources are always singular Thanks to all for sticking through this tedious discussion. I appreciate it. > > /R > > Harry > > > > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad > wrote: > >> > >> Bumping this thread again and proposing two conventions based on the > discussion here. I propose we decide on one of the two following > conventions: > >> > >> :: > >> > >> or > >> > >> :_ > >> > >> Where is the corresponding service type of the project > [0], and is either create, get, list, update, or delete. I think > decoupling the method from the policy name should aid in consistency, > regardless of the underlying implementation. The HTTP method specifics can > still be relayed using oslo.policy's DocumentedRuleDefault object [1]. > >> > >> I think the plurality of the resource should default to what makes > sense for the operation being carried out (e.g., list:foobars, > create:foobar). > >> > >> I don't mind the first one because it's clear about what the delimiter > is and it doesn't look weird when projects have something like: > >> > >> ::: > >> > >> If folks are ok with this, I can start working on some documentation > that explains the motivation for this. Afterward, we can figure out how we > want to track this work. > >> > >> What color do you want the shed to be? 
> >> > >> [0] https://service-types.openstack.org/service-types.json > >> [1] > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule > >> > >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad > wrote: > >>> > >>> > >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > >>>> > >>>> ---- On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt < > john at johngarbutt.com> wrote ---- > >>>> > tl;dr+1 consistent names > >>>> > I would make the names mirror the API... because the Operator > setting them knows the API, not the codeIgnore the crazy names in Nova, I > certainly hate them > >>>> > >>>> Big +1 on consistent naming which will help operator as well as > developer to maintain those. > >>>> > >>>> > > >>>> > Lance Bragstad wrote: > >>>> > > I'm curious if anyone has context on the "os-" part of the > format? > >>>> > > >>>> > My memory of the Nova policy mess...* Nova's policy rules > traditionally followed the patterns of the code > >>>> > ** Yes, horrible, but it happened.* The code used to have the > OpenStack API and the EC2 API, hence the "os"* API used to expand with > extensions, so the policy name is often based on extensions** note most of > the extension code has now gone, including lots of related policies* Policy > in code was focused on getting us to a place where we could rename policy** > Whoop whoop by the way, it feels like we are really close to something > sensible now! > >>>> > Lance Bragstad wrote: > >>>> > Thoughts on using create, list, update, and delete as opposed to > post, get, put, patch, and delete in the naming convention? > >>>> > I could go either way as I think about "list servers" in the > API.But my preference is for the URL stub and POST, GET, etc. > >>>> > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad < > lbragstad at gmail.com> wrote:If we consider dropping "os", should we > entertain dropping "api", too? Do we have a good reason to keep "api"?I > wouldn't be opposed to simple service types (e.g "compute" or > "loadbalancer"). > >>>> > +1The API is known as "compute" in api-ref, so the policy should > be for "compute", etc. > >>>> > >>>> Agree on mapping the policy name with api-ref as much as possible. > Other than policy name having 'os-', we have 'os-' in resource name also in > nova API url like /os-agents, /os-aggregates etc (almost every resource > except servers , flavors). As we cannot get rid of those from API url, we > need to keep the same in policy naming too? or we can have policy name like > compute:agents:create/post but that mismatch from api-ref where agents > resource url is os-agents. > >>> > >>> > >>> Good question. I think this depends on how the service does policy > enforcement. > >>> > >>> I know we did something like this in keystone, which required policy > names and method names to be the same: > >>> > >>> "identity:list_users": "..." > >>> > >>> Because the initial implementation of policy enforcement used a > decorator like this: > >>> > >>> from keystone import controller > >>> > >>> @controller.protected > >>> def list_users(self): > >>> ... > >>> > >>> Having the policy name the same as the method name made it easier for > the decorator implementation to resolve the policy needed to protect the > API because it just looked at the name of the wrapped method. The advantage > was that it was easy to implement new APIs because you only needed to add a > policy, implement the method, and make sure you decorate the implementation. 
> >>> > >>> While this worked, we are moving away from it entirely. The decorator > implementation was ridiculously complicated. Only a handful of keystone > developers understood it. With the addition of system-scope, it would have > only become more convoluted. It also enables a much more copy-paste pattern > (e.g., so long as I wrap my method with this decorator implementation, > things should work right?). Instead, we're calling enforcement within the > controller implementation to ensure things are easier to understand. It > requires developers to be cognizant of how different token types affect the > resources within an API. That said, coupling the policy name to the method > name is no longer a requirement for keystone. > >>> > >>> Hopefully, that helps explain why we needed them to match. > >>> > >>>> > >>>> > >>>> Also we have action API (i know from nova not sure from other > services) like POST /servers/{server_id}/action {addSecurityGroup} and > their current policy name is all inconsistent. few have policy name > including their resource name like > "os_compute_api:os-flavor-access:add_tenant_access", few has 'action' in > policy name like "os_compute_api:os-admin-actions:reset_state" and few has > direct action name like "os_compute_api:os-console-output" > >>> > >>> > >>> Since the actions API relies on the request body and uses a single > HTTP method, does it make sense to have the HTTP method in the policy name? > It feels redundant, and we might be able to establish a convention that's > more meaningful for things like action APIs. It looks like cinder has a > similar pattern [0]. > >>> > >>> [0] > https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-actions-volumes-action > >>> > >>>> > >>>> > >>>> May be we can make them consistent with > :: or any better opinion. > >>>> > >>>> > From: Lance Bragstad > The topic of having > consistent policy names has popped up a few times this week. > >>>> > > >>>> > I would love to have this nailed down before we go through all the > policy rules again. In my head I hope in Nova we can go through each policy > rule and do the following: > >>>> > * move to new consistent policy name, deprecate existing name* > hardcode scope check to project, system or user** (user, yes... keypairs, > yuck, but its how they work)** deprecate in rule scope checks, which are > largely bogus in Nova anyway* make read/write/admin distinction** therefore > adding the "noop" role, amount other things > >>>> > >>>> + policy granularity. > >>>> > >>>> It is good idea to make the policy improvement all together and for > all rules as you mentioned. But my worries is how much load it will be on > operator side to migrate all policy rules at same time? What will be the > deprecation period etc which i think we can discuss on proposed spec - > https://review.openstack.org/#/c/547850 > >>> > >>> > >>> Yeah, that's another valid concern. I know at least one operator has > weighed in already. I'm curious if operators have specific input here. > >>> > >>> It ultimately depends on if they override existing policies or not. If > a deployment doesn't have any overrides, it should be a relatively simple > change for operators to consume. 
> >>> > >>>> > >>>> > >>>> > >>>> -gmann > >>>> > >>>> > Thanks,John > __________________________________________________________________________ > >>>> > OpenStack Development Mailing List (not for usage questions) > >>>> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > >>>> > >>>> > >>>> > __________________________________________________________________________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Sep 28 20:33:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 28 Sep 2018 15:33:18 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> Message-ID: <20180928203318.GA3769@sm-workstation> On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki wrote: > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > wrote: > > > > > > Ideally I would like to see it in the form of least specific to most > > specific. But more importantly in a way that there is no additional > > delimiters between the service type and the resource. Finally, I do not > > like the change of plurality depending on action type. > > > > > > I propose we consider > > > > > > ::[:] > > > > > > Example for keystone (note, action names below are strictly examples I > > am fine with whatever form those actions take): > > > identity:projects:create > > > identity:projects:delete > > > identity:projects:list > > > identity:projects:get > > > > > > It keeps things simple and consistent when you're looking through > > overrides / defaults. > > > --Morgan > > +1 -- I think the ordering if `resource` comes before > > `action|subaction` will be more clean. > > > Great idea. This is looking better and better. From johnsomor at gmail.com Fri Sep 28 22:07:01 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 28 Sep 2018 15:07:01 -0700 Subject: [Openstack-operators] [neutron][lbaas][neutron-lbaas][octavia] Update on the previously announced deprecation of neutron-lbaas and neutron-lbaas-dashboard Message-ID: During the Queens release cycle we announced the deprecation of neutron-lbaas and neutron-lbaas-dashboard[1]. 
Today we are announcing the expected end date for the neutron-lbaas and neutron-lbaas-dashboard deprecation cycles. During September 2019 or the start of the “U” OpenStack release cycle, whichever comes first, neutron-lbaas and neutron-lbaas-dashboard will be retired. This means the code will be be removed and will not be released as part of the "U" OpenStack release per the infrastructure team’s “retiring a project” process[2]. We continue to maintain a Frequently Asked Questions (FAQ) wiki page to help answer additional questions you may have about this process: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation For more information or if you have additional questions, please see the following resources: The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation The Octavia documentation: https://docs.openstack.org/octavia/latest/ Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the Freenode IRC network. Sending email to the OpenStack developer mailing list: openstack-dev [at] lists [dot] openstack [dot] org. Please prefix the subject with '[openstack-dev][Octavia]' Thank you for your support and patience during this transition, Michael Johnson Octavia PTL [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html [2] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project From lbragstad at gmail.com Fri Sep 28 22:23:30 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 28 Sep 2018 17:23:30 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Consistent policy names In-Reply-To: <20180928203318.GA3769@sm-workstation> References: <165faf6fc2f.f8e445e526276.843390207507347435@ghanshyammann.com> <20180928203318.GA3769@sm-workstation> Message-ID: Alright - I've worked up the majority of what we have in this thread and proposed a documentation patch for oslo.policy [0]. I think we're at the point where we can finish the rest of this discussion in gerrit if folks are ok with that. [0] https://review.openstack.org/#/c/606214/ On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis wrote: > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote: > > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki > wrote: > > > > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg > > > wrote: > > > > > > > > Ideally I would like to see it in the form of least specific to most > > > specific. But more importantly in a way that there is no additional > > > delimiters between the service type and the resource. Finally, I do not > > > like the change of plurality depending on action type. > > > > > > > > I propose we consider > > > > > > > > ::[:] > > > > > > > > Example for keystone (note, action names below are strictly examples > I > > > am fine with whatever form those actions take): > > > > identity:projects:create > > > > identity:projects:delete > > > > identity:projects:list > > > > identity:projects:get > > > > > > > > It keeps things simple and consistent when you're looking through > > > overrides / defaults. > > > > --Morgan > > > +1 -- I think the ordering if `resource` comes before > > > `action|subaction` will be more clean. > > > > > > > Great idea. This is looking better and better. > -------------- next part -------------- An HTML attachment was scrubbed... URL:
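For reference, a rule registration following the convention that came out of this thread (service type, then singular resource, then action, with action APIs adding a sub-action) might look like the sketch below; the check strings, descriptions, and operations are invented for illustration and are not any project's actual defaults.

```python
from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='compute:server:list',
        check_str='role:reader and project_id:%(project_id)s',
        description='List servers in a project.',
        operations=[{'path': '/servers', 'method': 'GET'}],
    ),
    policy.DocumentedRuleDefault(
        name='compute:server:action:reboot',
        check_str='role:member and project_id:%(project_id)s',
        description='Reboot a server.',
        operations=[{'path': '/servers/{server_id}/action (reboot)',
                     'method': 'POST'}],
    ),
]


def list_rules():
    # Entry-point style used by projects so oslopolicy-sample-generator and
    # similar tooling can discover and render these defaults.
    return rules
```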