From zhengzhenyulixi at gmail.com Fri Feb 1 01:40:57 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Fri, 1 Feb 2019 09:40:57 +0800 Subject: [nova] Per-instance serial number implementation question In-Reply-To: References: <6ac4eac9-18ea-2cca-e2f7-76de8f80835b@gmail.com> Message-ID: > > - Add the 'unique' choice to the [libvirt]/sysinfo_serial config option > and make it the default for new deployments. > - Deprecate the sysinfo_serial config option in Stein and remove it in > Train. This would give at least some window of time for transition > and/or raising a stink if someone thinks we should leave the old > per-host behavior. > - Merge mnaser's patch to expose hostId in the metadata API and config > drive so users still have a way within the guest to determine that > affinity for servers in the same project on the same host. So can I assume this is the decision now? I will update the patch according to this and since I have 3 more days before Chinese new year, I will update again if something new happens On Thu, Jan 31, 2019 at 11:38 PM Mohammed Naser wrote: > On Thu, Jan 31, 2019 at 10:05 AM Matt Riedemann > wrote: > > > > I'm going to top post and try to summarize where we are on this thread > > since I have it on today's nova meeting agenda under the "stuck reviews" > > section. > > > > * The email started as a proposal to change the proposed image property > > and flavor extra spec from a confusing boolean to an enum. > > > > * The question was raised why even have any other option than a unique > > serial number for all instances based on the instance UUID. > > > > * Stephen asked Daniel Berrange (danpb) about the history of the > > [libvirt]/sysinfo_serial configuration option and it sounds like it was > > mostly added as a way to determine guests running on the same host, > > which can already be determined using the hostId parameter in the REST > > API (hostId is the hashed instance.host + instance.project_id so it's > > not exactly the same since it's unique per host and project, not just > > host). However, the hostId is not exposed to the guest in the metadata > > API / config drive - so that could be a regression for applications that > > used this somehow to calculate affinity within the guest based on the > > serial (note that mnaser has a patch to expose hostId in the metadata > > API / config drive [1]). > > > > * danpb said the system.serial we set today should really be > > chassis.serial but that's only available in libvirt >= 4.1.0 and our > > current minimum required version of libvirt is 1.3.1 so setting > > chassis.serial would have to be conditional on the running version of > > libvirt (this is common in that driver). > > > > * Applications that depend on the serial number within the guest were > > not guaranteed it would be unique or not change because migrating the > > guest to another host would change the serial number anyway (that's the > > point of the blueprint - to keep the serial unchanged for each guest), > > so if we just changed to always using unique serial numbers everywhere > > it should probably be OK (and tolerated/expected by guest applications). > > > > * Clearly we would have a release note if we change this behavior but > > keep in mind that end users are not reading release notes, and none of > > this is documented today anyway outside of the [libvirt]/sysinfo_serial > > config option. 
So a release note would really only help an operator or > > support personal if they get a ticket due to the change in behavior > > (which we probably wouldn't hear about upstream for 2+ years given how > > slow openstack deployments upgrade). > > > > So where are we? If we want the minimal amount of behavior change as > > possible then we just add the new image property / flavor extra spec / > > config option choice, but that arguably adds technical debt and > > virt-driver specific behavior to the API (again, that's not uncommon > > though). > > > > If we want to simplify, we don't add the image property / flavor extra > > spec. But what do we do about the existing config option? > > > > Do we add the 'unique' choice, make it the default, and then deprecate > > the option to at least signal the change is coming in Train? > > > > Or do we just deprecate the option in Stein and completely ignore it, > > always setting the unique serial number as the instance.uuid (and set > > the host serial in chassis.serial if libvirt>=4.1.0)? > > > > In addition, do we expose hostId in the metadata API / config drive via > > [1] so there is a true alternative *from within the guest* to determine > > guest affinity on the same host? I'm personally OK with [1] if there is > > some user documentation around it (as noted in the review). > > > > If we are not going to add the new image property / extra spec, my > > personal choice would be to: > > > > - Add the 'unique' choice to the [libvirt]/sysinfo_serial config option > > and make it the default for new deployments. > > - Deprecate the sysinfo_serial config option in Stein and remove it in > > Train. This would give at least some window of time for transition > > and/or raising a stink if someone thinks we should leave the old > > per-host behavior. > > - Merge mnaser's patch to expose hostId in the metadata API and config > > drive so users still have a way within the guest to determine that > > affinity for servers in the same project on the same host. > > I agree with this for a few reasons > > Assuming that a system serial means that it is colocated with another > machine seems just taking advantage of a bug in the first place. That > is not *documented* behaviour and serials should inherently be unique, > it also exposes information about the host which should not be necessary, > Matt has pointed me to an OSSN about this too: > > https://wiki.openstack.org/wiki/OSSN/OSSN-0028 > > I think we should indeed provide a unique serials (only, ideally) to avoid > having the user shooting themselves in the foot by exposing information > they didn't know they were exposing. > > The patch that I supplied was really meant to make that information > available > in a controllable way, it also provides a much more secure way of exposing > that information because hostId is actually hashed with the tenant ID which > means that one VM from one tenant can't know that it's hosted on the same > VM as another one by usnig the hostId (and with all of the recent processor > issues, this is a big plus in security). > > > > What do others think? > > > > [1] https://review.openstack.org/#/c/577933/ > > > > On 1/24/2019 9:09 AM, Matt Riedemann wrote: > > > The proposal from the spec for this feature was to add an image > property > > > (hw_unique_serial), flavor extra spec (hw:unique_serial) and new > > > "unique" choice to the [libvirt]/sysinfo_serial config option. 
The > image > > > property and extra spec would be booleans but really only True values > > > make sense and False would be more or less ignored. There were no plans > > > to enforce strict checking of a boolean value, e.g. if the image > > > property was True but the flavor extra spec was False, we would not > > > raise an exception for incompatible values, we would just use OR logic > > > and take the image property True value. > > > > > > The boolean usage proposed is a bit confusing, as can be seen from > > > comments in the spec [1] and the proposed code change [2]. > > > > > > After thinking about this a bit, I'm now thinking maybe we should just > > > use a single-value enum for the image property and flavor extra spec: > > > > > > image: hw_guest_serial=unique > > > flavor: hw:guest_serial=unique > > > > > > If either are set, then we use a unique serial number for the guest. If > > > neither are set, then the serial number is based on the host > > > configuration as it is today. > > > > > > I think that's more clear usage, do others agree? Alex does. I can't > > > think of any cases where users would want hw_unique_serial=False, so > > > this removes that ability and confusion over whether or not to enforce > > > mismatching booleans. > > > > > > [1] > > > > https://review.openstack.org/#/c/612531/2/specs/stein/approved/per-instance-libvirt-sysinfo-serial.rst at 43 > > > > > > [2] > > > > https://review.openstack.org/#/c/619953/7/nova/virt/libvirt/driver.py at 4894 > > > > > > -- > > > > Thanks, > > > > Matt > > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 1 04:33:49 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 1 Feb 2019 15:33:49 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox Message-ID: <20190201043349.GB6183@thor.bakeyournoodle.com> Hi All, During the Berlin forum the idea of running some kinda of bot on the sandbox [1] repo cam up as another way to onboard/encourage contributors. The general idea is that the bot would: 1. Leave a -1 review on 'qualifying'[2] changes along with a request for some small change 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) on the change Showing new contributors approximately what code review looks like[2], and also reduce the human requirements. The OpenStack Upstream Institute would make use of the bot and we'd also use it as an interactive tutorial from the contributors portal. I think this can be done as a 'normal' CI job with the following considerations: * Because we want this service to be reasonably robust we don't want to code or the job definitions to live in repo so I guess they'd need to live in project-config[4]. The bot itself doesn't need to be stateful as gerrit comments / meta-data would act as the store/state sync. * We'd need a gerrit account we can use to lodge these votes, as using 'proposal-bot' or tonyb would be a bad idea. My initial plan would be to develop the bot locally and then migrate it into the opendev infra once we've proven its utility. So thoughts on the design or considerations or should I just code something up and see what it looks like? Yours Tony. 
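PS: the actual voting step of such a job could be as simple as driving the
standard Gerrit CLI over SSH (very rough sketch -- the bot account name and
review message below are made up, and the real thing would sit in a playbook
with the credentials supplied as a secret):

    # leave the initial -1 asking for a small tweak
    ssh -p 29418 sandbox-bot@review.openstack.org gerrit review \
        --code-review -1 -m '"Welcome! Please fix the small nit noted here and push a new patchset."' \
        ${CHANGE_NUMBER},${PATCHSET}

    # on the follow-up patchset, approve it
    ssh -p 29418 sandbox-bot@review.openstack.org gerrit review \
        --code-review +2 ${CHANGE_NUMBER},${PATCHSET}
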
[1] http://git.openstack.org/cgit/openstack-dev/sandbox [2] The details of what counts as qualifying can be fleshed out later but there needs to be something so that contributors using the sandbox that don't want to be bothered by the bot wont be. [3] So it would a) be faster than typical and b) not all new changes are greeted with a -1 ;P [4] Another repo would be better as project-config is trusted we can't use Depends-On to test changes to the bot itself, but we need to consider the bots access to secrets -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cjeanner at redhat.com Fri Feb 1 06:44:34 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 1 Feb 2019 07:44:34 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> Message-ID: <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> On 1/30/19 6:26 PM, Fox, Kevin M wrote: > k8s's offical way of dealing with logs is to ensure use of the docker json logger, not the journald one. then all the k8s log shippers have a standard way to gather the logs. Docker supports log rotation and other options too. seems to work out pretty well in practice. sending directly to a file looks a good option indeed. Journald and (r)syslog have both some throttle issue, and it might create some issues in case of service restarting and the like. Pushing logs directly from the container engine (podman does actually support the same options) might be the way to go. As long as we have a common, easy way to output the logs, it's all for the best. The only concern I have with the "not-journald" path is the possible lack of "journalctl -f CONTAINER_NAME=foo". But, compared to the risks exposed in this thread about the possible crash if journald isn't available, and throttling, I think it's fine. Also, a small note regarding "log re-shipping": some people might want to push their logs to some elk/kelk/others - pushing the logs directly as json in plain files might help a log for that, as (r)syslog can then read them (and there, no bottleneck with throttle) and send it in the proper format to the remote logging infra. Soooo... yeah. imho the "direct writing as json" might be the way to go :). > > log shipping with other cri drivers such as containerd seems to work well too. Not tested yet, but at least podman has the option (as a work for this engine integration is done). Cheers, C. > > Thanks, > Kevin > ________________________________________ > From: Sean Mooney [smooney at redhat.com] > Sent: Wednesday, January 30, 2019 8:23 AM > To: Emilien Macchi; Juan Antonio Osorio Robles > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [TripleO] containers logging to stdout > > On Wed, 2019-01-30 at 07:37 -0500, Emilien Macchi wrote: >> >> >> On Wed, Jan 30, 2019 at 5:53 AM Juan Antonio Osorio Robles wrote: >>> Hello! >>> >>> >>> In Queens, the a spec to provide the option to make containers log to >>> standard output was proposed [1] [2]. Some work was done on that side, >>> but due to the lack of traction, it wasn't completed. With the Train >>> release coming, I think it would be a good idea to revive this effort, >>> but make logging to stdout the default in that release. 
>>> >>> This would allow several benefits: >>> >>> * All logging from the containers would en up in journald; this would >>> make it easier for us to forward the logs, instead of having to keep >>> track of the different directories in /var/log/containers >>> >>> * The journald driver would add metadata to the logs about the container >>> (we would automatically get what container ID issued the logs). >>> >>> * This wouldo also simplify the stacks (removing the Logging nested >>> stack which is present in several templates). >>> >>> * Finally... if at some point we move towards kubernetes (or something >>> in between), managing our containers, it would work with their logging >>> tooling as well >> >> Also, I would add that it'll be aligned with what we did for Paunch-managed containers (with Podman backend) where >> each ("long life") container has its own SystemD service (+ SystemD timer sometimes); so using journald makes total >> sense to me. > one thing to keep in mind is that journald apparently has rate limiting so if you contaiern are very verbose journald > will actully slowdown the execution of the contaienr application as it slows down the rate at wich it can log. > this came form a downstream conversation on irc were they were recommending that such applciation bypass journald and > log to a file for best performacne. >> -- >> Emilien Macchi > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ignaziocassano at gmail.com Fri Feb 1 06:28:00 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 Feb 2019 07:28:00 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: Message-ID: Thanks Goutham. If there are not mantainers for this driver I will switch on ceph and or netapp. I am already using netapp but I would like to export shares from an openstack installation to another. Since these 2 installations do non share any openstack component and have different openstack database, I would like to know it is possible . Regards Ignazio Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi ha scritto: > Hi Ignazio, > > On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > wrote: > > > > Hello All, > > I installed manila on my queens openstack based on centos 7. > > I configured two servers with glusterfs replocation and ganesha nfs. > > I configured my controllers octavia,conf but when I try to create a share > > the manila scheduler logs reports: > > > > Failed to schedule create_share: No valid host was found. Failed to find > a weighted host, the last executed filter was CapabilitiesFilter.: > NoValidHost: No valid host was found. Failed to find a weighted host, the > last executed filter was CapabilitiesFilter. > > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > [req-241d66b3-8004-410b-b000-c6d2d3536e4a 89f76bc5de5545f381da2c10c7df7f15 > 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > > > The scheduler failure points out that you have a mismatch in > expectations (backend capabilities vs share type extra-specs) and > there was no host to schedule your share to. So a few things to check > here: > > - What is the share type you're using? 
Can you list the share type > extra-specs and confirm that the backend (your GlusterFS storage) > capabilities are appropriate with whatever you've set up as > extra-specs ($ manila pool-list --detail)? > - Is your backend operating correctly? You can list the manila > services ($ manila service-list) and see if the backend is both > 'enabled' and 'up'. If it isn't, there's a good chance there was a > problem with the driver initialization, please enable debug logging, > and look at the log file for the manila-share service, you might see > why and be able to fix it. > > > Please be aware that we're on a look out for a maintainer for the > GlusterFS driver for the past few releases. We're open to bug fixes > and maintenance patches, but there is currently no active maintainer > for this driver. > > > > I did not understand if controllers node must be connected to the > network where shares must be exported for virtual machines, so my glusterfs > are connected on the management network where openstack controllers are > conencted and to the network where virtual machine are connected. > > > > My manila.conf section for glusterfs section is the following > > > > [gluster-manila565] > > driver_handles_share_servers = False > > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > > glusterfs_target = root at 10.102.184.229:/manila565 > > glusterfs_path_to_private_key = /etc/manila/id_rsa > > glusterfs_ganesha_server_username = root > > glusterfs_nfs_server_type = Ganesha > > glusterfs_ganesha_server_ip = 10.102.184.229 > > #glusterfs_servers = root at 10.102.185.19 > > ganesha_config_dir = /etc/ganesha > > > > > > PS > > 10.102.184.0/24 is the network where controlelrs expose endpoint > > > > 10.102.189.0/24 is the shared network inside openstack where virtual > machines are connected. > > > > The gluster servers are connected on both. > > > > > > Any help, please ? > > > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Fri Feb 1 06:58:19 2019 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 1 Feb 2019 08:58:19 +0200 Subject: [TripleO] containers logging to stdout In-Reply-To: <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> Message-ID: On 2/1/19 8:44 AM, Cédric Jeanneret wrote: > On 1/30/19 6:26 PM, Fox, Kevin M wrote: >> k8s's offical way of dealing with logs is to ensure use of the docker json logger, not the journald one. then all the k8s log shippers have a standard way to gather the logs. Docker supports log rotation and other options too. seems to work out pretty well in practice. > sending directly to a file looks a good option indeed. Journald and > (r)syslog have both some throttle issue, and it might create some issues > in case of service restarting and the like. > > Pushing logs directly from the container engine (podman does actually > support the same options) might be the way to go. > > As long as we have a common, easy way to output the logs, it's all for > the best. > The only concern I have with the "not-journald" path is the possible > lack of "journalctl -f CONTAINER_NAME=foo". But, compared to the risks > exposed in this thread about the possible crash if journald isn't > available, and throttling, I think it's fine. 
> > Also, a small note regarding "log re-shipping": some people might want > to push their logs to some elk/kelk/others - pushing the logs directly > as json in plain files might help a log for that, as (r)syslog can then > read them (and there, no bottleneck with throttle) and send it in the > proper format to the remote logging infra. > > Soooo... yeah. imho the "direct writing as json" might be the way to go :). That is just fine IMO. the runtime engine usually allows you to configure the logging driver (docker in CentOS defaults... or used to default, to journald); but if we find out that file is a better choice; that's entirely fine. The whole point is to let the runtime engine do its job, and handle the logging with the driver. > >> log shipping with other cri drivers such as containerd seems to work well too. > Not tested yet, but at least podman has the option (as a work for this > engine integration is done). > > Cheers, > > C. > >> Thanks, >> Kevin >> ________________________________________ >> From: Sean Mooney [smooney at redhat.com] >> Sent: Wednesday, January 30, 2019 8:23 AM >> To: Emilien Macchi; Juan Antonio Osorio Robles >> Cc: openstack-discuss at lists.openstack.org >> Subject: Re: [TripleO] containers logging to stdout >> >> On Wed, 2019-01-30 at 07:37 -0500, Emilien Macchi wrote: >>> >>> On Wed, Jan 30, 2019 at 5:53 AM Juan Antonio Osorio Robles wrote: >>>> Hello! >>>> >>>> >>>> In Queens, the a spec to provide the option to make containers log to >>>> standard output was proposed [1] [2]. Some work was done on that side, >>>> but due to the lack of traction, it wasn't completed. With the Train >>>> release coming, I think it would be a good idea to revive this effort, >>>> but make logging to stdout the default in that release. >>>> >>>> This would allow several benefits: >>>> >>>> * All logging from the containers would en up in journald; this would >>>> make it easier for us to forward the logs, instead of having to keep >>>> track of the different directories in /var/log/containers >>>> >>>> * The journald driver would add metadata to the logs about the container >>>> (we would automatically get what container ID issued the logs). >>>> >>>> * This wouldo also simplify the stacks (removing the Logging nested >>>> stack which is present in several templates). >>>> >>>> * Finally... if at some point we move towards kubernetes (or something >>>> in between), managing our containers, it would work with their logging >>>> tooling as well >>> Also, I would add that it'll be aligned with what we did for Paunch-managed containers (with Podman backend) where >>> each ("long life") container has its own SystemD service (+ SystemD timer sometimes); so using journald makes total >>> sense to me. >> one thing to keep in mind is that journald apparently has rate limiting so if you contaiern are very verbose journald >> will actully slowdown the execution of the contaienr application as it slows down the rate at wich it can log. >> this came form a downstream conversation on irc were they were recommending that such applciation bypass journald and >> log to a file for best performacne. >>> -- >>> Emilien Macchi >> >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From smooney at redhat.com Fri Feb 1 11:25:47 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 11:25:47 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201043349.GB6183@thor.bakeyournoodle.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> Message-ID: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> On Fri, 2019-02-01 at 15:33 +1100, Tony Breeds wrote: > Hi All, > During the Berlin forum the idea of running some kinda of bot on the > sandbox [1] repo cam up as another way to onboard/encourage > contributors. > > The general idea is that the bot would: > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > some small change > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > on the change > > Showing new contributors approximately what code review looks like[2], > and also reduce the human requirements. The OpenStack Upstream > Institute would make use of the bot and we'd also use it as an > interactive tutorial from the contributors portal. > > I think this can be done as a 'normal' CI job with the following > considerations: > > * Because we want this service to be reasonably robust we don't want to > code or the job definitions to live in repo so I guess they'd need to > live in project-config[4]. The bot itself doesn't need to be > stateful as gerrit comments / meta-data would act as the store/state > sync. > * We'd need a gerrit account we can use to lodge these votes, as using > 'proposal-bot' or tonyb would be a bad idea. do you need an actual bot why not just have a job defiend in the sandbox repo itself that runs say pep8 or some simple test like check the commit message for Close-Bug: or somting like that. i noticed that if you are modifying zuul jobs and have a syntax error we actully comment on the patch to say where it is. like this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 so you could just develop a custom job that ran in the a seperate pipline and set the sucess action to Code-Review: +2 an failure to Code-Review: -1 the authour could then add the second +2 and +w to complete the normal workflow. as far as i know the sandbox repo allowas all users to +2 +w correct? > > My initial plan would be to develop the bot locally and then migrate it > into the opendev infra once we've proven its utility. > > So thoughts on the design or considerations or should I just code > something up and see what it looks like? > > Yours Tony. > > [1] http://git.openstack.org/cgit/openstack-dev/sandbox > [2] The details of what counts as qualifying can be fleshed out later > but there needs to be something so that contributors using the sandbox > that don't want to be bothered by the bot wont be. > [3] So it would a) be faster than typical and b) not all new changes are > greeted with a -1 ;P > [4] Another repo would be better as project-config is trusted we can't > use Depends-On to test changes to the bot itself, but we need to > consider the bots access to secrets From thierry at openstack.org Fri Feb 1 11:49:19 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 1 Feb 2019 12:49:19 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> Message-ID: Lance Bragstad wrote: > [..] 
> Outside of having a formal name, do we expect the "pop-up" teams to > include processes that make what we went through easier? Ultimately, we > still had to self-organize and do a bunch of socializing to make progress. I think being listed as a pop-up team would definitely facilitate getting mentioned in TC reports, community newsletters or other high-vsibility community communications. It would help getting space to meet at PTGs, too. None of those things were impossible before... but they were certainly easier to achieve for people with name-recognition or the right connections. It was also easier for things to slip between the cracks. I agree that we should consider adding processes that would facilitate going through the steps you described... But I don't really want this to become a bureaucratic nightmare hindering volunteers stepping up to get things done. So it's a thin line to walk on :) -- Thierry Carrez (ttx) From alfredo.deluca at gmail.com Fri Feb 1 09:20:50 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Fri, 1 Feb 2019 10:20:50 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: thanks Feilong, clemens et all. I going to have a look later on today and see what I can do and see. Just a question: Does the kube master need internet access to download stuff or not? Cheers On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang wrote: > I'm echoing Von's comments. > > From the log of cloud-init-output.log, you should be able to see below > error: > > *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 > +0000. Up 76.51 seconds.* > *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running > /var/lib/cloud/instance/scripts/part-011 [1]* > *+ _prefix=docker.io/openstackmagnum/ * > *+ atomic install --storage ostree --system --system-package no --set > REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name > heat-container-agent > docker.io/openstackmagnum/heat-container-agent:queens-stable > * > *The docker daemon does not appear to be running.* > *+ systemctl start heat-container-agent* > *Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found.* > *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running > /var/lib/cloud/instance/scripts/part-013 [5]* > > Then please go to /var/lib/cloud/instances//scripts to find > the script 011 and 013 to run it manually to get the root cause. And > welcome to pop up into #openstack-containers irc channel. > > > > On 30/01/19 11:43 PM, Clemens Hardewig wrote: > > Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 > part of the config script finishes with error. Check why. > > Von meinem iPhone gesendet > > Am 30.01.2019 um 10:11 schrieb Alfredo De Luca : > > here are also the logs for the cloud init logs from the k8s master.... > > > > On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: > >> >> In the meantime this is my cluster >> template >> >> >> >> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca >> wrote: >> >>> hi Clemens and Ignazio. thanks for your support. >>> it must be network related but I don't do something special apparently >>> to create a simple k8s cluster. >>> I ll post later on configurations and logs as you Clemens suggested. 
>>> >>> >>> Cheers >>> >>> >>> >>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>> wrote: >>> >>>> … an more important: check the other log cloud-init.log for error >>>> messages (not only cloud-init-output.log) >>>> >>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the logs >>>> on the kube master keep saying the following >>>> >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> >>>> Not sure what to do. >>>> My configuration is ... >>>> eth0 - 10.1.8.113 >>>> >>>> But the openstack configration in terms of networkin is the default >>>> from ansible-openstack which is 172.29.236.100/22 >>>> >>>> Maybe that's the problem? >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello Alfredo, >>>>> your external network is using proxy ? >>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>> must setup no proxy for 127.0.0.1 >>>>> Ignazio >>>>> >>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>> clemens.hardewig at crandale.de> ha scritto: >>>>> >>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>> remember-Look into both >>>>>> >>>>>> Br c >>>>>> >>>>>> Von meinem iPhone gesendet >>>>>> >>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> thanks Clemens. >>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>> moment is doing the following.... >>>>>> >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Network ....could be but not sure where to look at >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> wrote: >>>>>> >>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>> So, log files are the first step you could dig into... >>>>>>> Br c >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Hi all. >>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>> >>>>>>> >>>>>>> kube_masters >>>>>>> >>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>> >>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>> in progress....and after around an hour it says...time out. 
k8s master >>>>>>> seems to be up.....at least as VM. >>>>>>> >>>>>>> any idea? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> > > -- > *Alfredo* > > > > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 1 12:34:20 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Feb 2019 12:34:20 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> Message-ID: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > do you need an actual bot > why not just have a job defiend in the sandbox repo itself that > runs say pep8 or some simple test like check the commit message > for Close-Bug: or somting like that. I think that's basically what he was suggesting: a Zuul job which votes on (some) changes to the openstack/sandbox repository. Some challenges there... first, you'd probably want credentials set as Zuul secrets, but in-repository secrets can only be used by jobs in safe "post-review" pipelines (gate, promote, post, release...) to prevent leakage through speculative execution of changes to those job definitions. The workaround would be to place the secrets and any playbooks which use them into a trusted config repository such as openstack-infra/project-config so they can be safely used in "pre-review" pipelines like check. > i noticed that if you are modifying zuul jobs and have a syntax > error we actully comment on the patch to say where it is. like > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > so you could just develop a custom job that ran in the a seperate > pipline and set the sucess action to Code-Review: +2 an failure to > Code-Review: -1 [...] It would be a little weird to have those code review votes showing up for the Zuul account and might further confuse students. Also, what you describe would require a custom pipeline definition as those behaviors apply to pipelines, not to jobs. I think Tony's suggestion of doing this as a job with custom credentials to log into Gerrit and leave code review votes is probably the most workable and least confusing solution, but I also think a bulk of that job definition will end up having to live outside the sandbox repo for logistical reasons described above. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rfolco at redhat.com Fri Feb 1 12:38:40 2019 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 1 Feb 2019 10:38:40 -0200 Subject: [openstack-dev][tripleo] TripleO CI Summary: Sprint 25 Message-ID: Greetings, The TripleO CI team has just completed Sprint 25 / Unified Sprint 4 (Jan 10 thru Jan 30). The following is a summary of completed work during this sprint cycle: - Setup a Fedora-28 promotion pipeline based on the current CentOS-7 pipeline. The Fedora28 pipeline is expected not to work atm, updates from the DF are required and will be pulled in Unified Sprint 5 (Sprint 26). - Completed transition from multinode scenarios (1-4) to standalone across all TripleO projects. Standalone scenarios (1-4) have been fixed with missing services and are now voting jobs. - Continued work on our next-gen upstream TripleO CI job reproducer. Both cloud and libvirt based deployments are working but not fully merged. - Enabled CI on the new openstack-virtual-baremetal repo. The integration CI uses the same standard job as TripleO third party e.g. https://review.openstack.org/#/c/633681/. - Started moving RDO Phase 2 jobs to upstream tripleo by triggering master jobs on tripleo-ci-testing hash. The planned work for the next sprint [1] extends work on previous sprint, which includes: - Add a check job for containers build on Fedora 28 using the new tripleo-build-containers playbook. Update the promotion pipeline jobs to use the same workflow for building containers on CentOS 7 and Fedora 28. - Convert scenarios (9 and 12) from multinode to singlenode standalone. This work will enable upstream TLS CI and testing. - Improve usability of Zuul container reproducer with launcher and user documentation to merge a MVP. - Complete support of additional OVB node in TripleO jobs. - Implement a FreeIPA deployment via CI tooling (tripleo-quickstart / tripleo-quickstart-extras). The Ruck and Rover for this sprint are Felix Quique (quiquell) and Chandan Kumar (chkumar). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Notes are recorded on etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-5 [2] https://review.rdoproject.org/etherpad/p/ruckrover-sprint26 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Feb 1 12:54:32 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 12:54:32 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: On Fri, 2019-02-01 at 12:34 +0000, Jeremy Stanley wrote: > On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > > do you need an actual bot > > why not just have a job defiend in the sandbox repo itself that > > runs say pep8 or some simple test like check the commit message > > for Close-Bug: or somting like that. > > I think that's basically what he was suggesting: a Zuul job which > votes on (some) changes to the openstack/sandbox repository. > > Some challenges there... 
first, you'd probably want credentials set > as Zuul secrets, but in-repository secrets can only be used by jobs > in safe "post-review" pipelines (gate, promote, post, release...) to > prevent leakage through speculative execution of changes to those > job definitions. The workaround would be to place the secrets and > any playbooks which use them into a trusted config repository such > as openstack-infra/project-config so they can be safely used in > "pre-review" pipelines like check. > > > i noticed that if you are modifying zuul jobs and have a syntax > > error we actully comment on the patch to say where it is. like > > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > > > so you could just develop a custom job that ran in the a seperate > > pipline and set the sucess action to Code-Review: +2 an failure to > > Code-Review: -1 > > [...] > > It would be a little weird to have those code review votes showing > up for the Zuul account and might further confuse students. Also, > what you describe would require a custom pipeline definition as > those behaviors apply to pipelines, not to jobs. yes i was suggsting a custom pipeline. > > I think Tony's suggestion of doing this as a job with custom > credentials to log into Gerrit and leave code review votes is > probably the most workable and least confusing solution, but I also > think a bulk of that job definition will end up having to live > outside the sandbox repo for logistical reasons described above. no disagreement that that might be a better path. when i hear both i think some long lived thing like an irc bot that would presumably have to listen to the event queue. so i was just wondering if we could avoid having to wite an acutal "bot" application and just have zuul jobs do it instead. From fungi at yuggoth.org Fri Feb 1 13:11:49 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Feb 2019 13:11:49 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: <20190201131149.th4rqgej2tmwnicp@yuggoth.org> On 2019-02-01 12:54:32 +0000 (+0000), Sean Mooney wrote: [...] > when i hear bot i think some long lived thing like an irc bot that > would presumably have to listen to the event queue. so i was just > wondering if we could avoid having to wite an acutal "bot" > application and just have zuul jobs do it instead. Yes, we have a number of stateless/momentary processes like Zuul jobs and Gerrit hook scripts which get confusingly referred to as "bots," so I've learned to stop making such assumptions where that term is bandied about. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Fri Feb 1 13:13:43 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 01 Feb 2019 14:13:43 +0100 Subject: [dev][keystone] Keystone Team Update - Week of 28 January 2019 Message-ID: <1549026823.2932754.1648592648.39A7D7DD@webmail.messagingengine.com> # Keystone Team Update - Week of 28 January 2019 ## News ### JWS Key Rotation Since JSON Web Tokens are asymmetrically signed and not encrypted, we discussed whether we needed to implement the full rotation procedure that we have for fernet tokens and came to the conclusion that probably not[1][2]. 
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-01-30.log.html#t2019-01-30T14:29:39 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-01-30.log.html#t2019-01-30T22:40:50 ### Alembic Migration Vishakha reminded us that most projects are moving away from sqlalchemy-migrate to Alembic but that we hadn't done so yet[3]. In fact we already have a spec published for it[4] but we need someone to do the work. Now might be a good time to revive our rolling upgrade testing and revisit how we manage upgrades and migrations. [3] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-01-29-16.00.log.html#l-65 [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/backlog/alembic.html ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 17 changes this week. Among these were changes introducing the JWS token functionality. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 75 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 2 new bugs and closed 6. Bugs opened (2) Bug #1813926 (keystone:Undecided) opened by Shrey bhatnagar https://bugs.launchpad.net/keystone/+bug/1813926 Bug #1813739 (keystonemiddleware:Undecided) opened by Yang Youseok https://bugs.launchpad.net/keystonemiddleware/+bug/1813739 Bugs closed (2) Bug #1805817 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1805817 Bug #1813926 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1813926 Bugs fixed (4) Bug #1813085 (keystone:High) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1813085 Bug #1798184 (keystone:Medium) fixed by Corey Bryant https://bugs.launchpad.net/keystone/+bug/1798184 Bug #1804520 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804520 Bug #1798184 (ldappool:Undecided) fixed by no one https://bugs.launchpad.net/ldappool/+bug/1798184 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html This week is the feature proposal freeze, so code implementing specs should be available for review by now. ## Shout-outs Congratulations and thank you to our Outreachy intern Erus for getting CentOS supported in the keystone devstack plugin! Great work! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From openstack at fried.cc Fri Feb 1 14:25:03 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 1 Feb 2019 08:25:03 -0600 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201043349.GB6183@thor.bakeyournoodle.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> Message-ID: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Tony- Thanks for following up on this! > The general idea is that the bot would: > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > some small change As I mentioned in the room, to give a realistic experience the bot should wait two or three weeks before tendering its -1. I kid (in case that wasn't clear). > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) 
> on the change If you're compiling a list of eventual features for the bot, another one that could be neat is, after the second patch set, the bot merges a change that creates a merge conflict on the student's patch, which they then have to go resolve. Also, cross-referencing [1], it might be nice to update that tutorial at some point to use the sandbox repo instead of nova. That could be done once we have bot action so said action could be incorporated into the tutorial flow. > [2] The details of what counts as qualifying can be fleshed out later > but there needs to be something so that contributors using the > sandbox that don't want to be bothered by the bot wont be. Yeah, I had been assuming it would be some tag in the commit message. If we ultimately enact different flows of varying complexity, the tag syntax could be enriched so students in different courses/grades could get different experiences. For example: Bot-Reviewer: or Bot-Reviewer: Level 2 or Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 The possibilities are endless :P -efried [1] https://review.openstack.org/#/c/634333/ From sean.mcginnis at gmx.com Fri Feb 1 14:55:53 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 Feb 2019 08:55:53 -0600 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> Message-ID: <20190201145553.GA5625@sm-workstation> On Fri, Feb 01, 2019 at 12:49:19PM +0100, Thierry Carrez wrote: > Lance Bragstad wrote: > > [..] > > Outside of having a formal name, do we expect the "pop-up" teams to > > include processes that make what we went through easier? Ultimately, we > > still had to self-organize and do a bunch of socializing to make progress. > > I think being listed as a pop-up team would definitely facilitate > getting mentioned in TC reports, community newsletters or other > high-vsibility community communications. It would help getting space to > meet at PTGs, too. > I guess this is the main value I see from this proposal. If it helps with visibility and communications around the effort then it does add some value to give them an official name. I don't think it changes much else. Those working in the group will still need to socialize the changes they would like to make, get buy-in from the project teams affected that the design approach is good, and find enough folks interested in the changes to drive it forward and propose the patches and do the other work needed to get things to happen. We can try looking at processes to help support that. But ultimately, as with most open source projects, I think it comes down to having enough people interested enough to get the work done. Sean From lars at redhat.com Fri Feb 1 15:20:35 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 10:20:35 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> Message-ID: <20190201152035.bfw2bbg27hswqhbd@redhat.com> On Thu, Jan 31, 2019 at 10:58:58AM +0000, Pierre Riteau wrote: > > This would require Ironic to support multi-tenancy first, right? > > Yes, assuming this would be available as per your initial message. > Although technically you could use the Blazar API as a wrapper to > provide the multi-tenancy, it would require duplicating a lot of the > Ironic API into Blazar, so I wouldn't recommend this approach. 
I think that it would be best to implement the multi-tenenacy at a lower level than Blazar. Our thought was to prototype this by putting multi-tenancy and the related access control logic into a proxy service that sits between Ironic and the end user, although that still suffers from the same problem of needing the shim service to be aware of the much of the ironic API. Ultimately it would be great to see Ironic develop native support multi-tenant operation. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From lars at redhat.com Fri Feb 1 15:26:52 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 10:26:52 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> Message-ID: <20190201152652.cnudbniuraiflybj@redhat.com> On Thu, Jan 31, 2019 at 12:09:07PM +0100, Dmitry Tantsur wrote: > Some first steps have been done: > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ownership-field.html. > We need someone to drive the futher design and implementation > though. That spec seems to be for a strictly informational field. Reading through it, I guess it's because doing something like this... openstack baremetal node set --property owner=lars ...leads to sub-optimal performance when trying to filter a large number of hosts. I see that it's merged already, so I guess this is commenting-after-the-fact, but that seems like the wrong path to follow: I can see properties like "the contract id under which this system was purchased" being as or more important than "owner" from a large business perspective, so making it easier to filter by property on the server side would seem to be a better solution. Or implement full multi-tenancy so that "owner" is more than simply informational, of course :). -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From eblock at nde.ag Fri Feb 1 15:27:01 2019 From: eblock at nde.ag (Eugen Block) Date: Fri, 01 Feb 2019 15:27:01 +0000 Subject: [Openstack] [Nova][Glance] Nova imports flat images from base file despite ceph backend In-Reply-To: References: <20180928115051.Horde.ZC_55UzSXeK4hiOjJt6tajA@webmail.nde.ag> <20180928125224.Horde.33aqtdk0B9Ncylg-zxjA5to@webmail.nde.ag> <9F3C86CE-862D-469A-AD79-3F334CD5DB41@enter.eu> <20181004124417.Horde.py2wEG4JmO1oFXbjX5u1uw3@webmail.nde.ag> <20181009080101.Horde.---iO9LIrKkWvTsNJwWk_Mj@webmail.nde.ag> <679352a8-c082-d851-d8a5-ea7b2348b7d3@gmail.com> <20181012215027.Horde.t5xm_KfkoEE4YEnrewHQZPG@webmail.nde.ag> <9df7167b-ea3b-51d6-9fad-7c9298caa7be@gmail.com> <72242CC2-621E-4037-A8F0-8AE56C4A6F36@italy1.com> Message-ID: <20190201152701.Horde.Qt9AVNDrBgrTv9KJLB6WOBX@webmail.nde.ag> Hi, I'd like to share that I found the solution to my problem in [1]. It was the config option "cache_images" in nova that is set to "all" per default. Despite changing the glance image properties of my images to raw it didn't prevent nova from downloading a local copy to /var/lib/nova/instances/_base. Setting "cache_images = none" disables the nova image cache, and after deleting all cache files in _base a new instance is not flat anymore but a copy-on-write clone like it's supposed to be. Sorry for the noise in this thread. :-) Have a nice weekend! 
Eugen [1] https://ask.openstack.org/en/question/79843/prefetched-and-cached-images-in-glance/ Zitat von melanie witt : > On Fri, 12 Oct 2018 20:06:04 -0700, Remo Mattei wrote: >> I do not have it handy now but you can verify that the image is >> indeed raw or qcow2 >> >> As soon as I get home I will dig the command and pass it on. I have >> seen where images have extensions thinking it is raw and it is not. > > You could try 'qemu-img info ' and get output like this, > notice "file format": > > $ qemu-img info test.vmdk > (VMDK) image open: flags=0x2 filename=test.vmdk > image: test.vmdk > file format: vmdk > virtual size: 20M (20971520 bytes) > disk size: 17M > > [1] https://en.wikibooks.org/wiki/QEMU/Images#Getting_information > > -melanie > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From lars at redhat.com Fri Feb 1 17:09:53 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 12:09:53 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190130152604.ik7zi2w7hrpabahd@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: On Wed, Jan 30, 2019 at 10:26:04AM -0500, Lars Kellogg-Stedman wrote: > Howdy. > > I'm working with a group of people who are interested in enabling some > form of baremetal leasing/reservations using Ironic... Hey everyone, Thanks for the feedback! Based on the what I've heard so far, I'm beginning to think our best course of action is: 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a shim service that sits between Ironic and the client. 2. Implement a Blazar plugin that is able to talk to whichever service in (1) is appropriate. 3. Work with Blazar developers to implement any lease logic that we think is necessary. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From smooney at redhat.com Fri Feb 1 18:16:42 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 18:16:42 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: On Fri, 2019-02-01 at 12:09 -0500, Lars Kellogg-Stedman wrote: > On Wed, Jan 30, 2019 at 10:26:04AM -0500, Lars Kellogg-Stedman wrote: > > Howdy. > > > > I'm working with a group of people who are interested in enabling some > > form of baremetal leasing/reservations using Ironic... > > Hey everyone, > > Thanks for the feedback! Based on the what I've heard so far, I'm > beginning to think our best course of action is: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > shim service that sits between Ironic and the client. that shim service could be nova, which already has multi tenancy. > > 2. Implement a Blazar plugin that is able to talk to whichever service > in (1) is appropriate. and nova is supported by blazar > > 3. Work with Blazar developers to implement any lease logic that we > think is necessary. +1 by they im sure there is a reason why you dont want to have blazar drive nova and nova dirve ironic but it seam like all the fucntionality would already be there in that case. 
> > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/m/ | > From ignaziocassano at gmail.com Fri Feb 1 12:21:56 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 Feb 2019 13:21:56 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Yes, it needs Internet access. Ignazio Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca ha scritto: > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. 
I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? >>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... >>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ing.gloriapalmagonzalez at gmail.com Fri Feb 1 17:54:28 2019 From: ing.gloriapalmagonzalez at gmail.com (=?UTF-8?Q?Gloria_Palma_Gonz=C3=A1lez?=) Date: Fri, 1 Feb 2019 11:54:28 -0600 Subject: [openstack-community] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> Message-ID: Done! Thanks! El jue., 31 ene. 2019 a las 12:36, Ashlee Ferguson () escribió: > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, the > volunteer Programming Committee members will receive the presentations to > review and determine the final selections for Summit schedule. While > community votes are meant to help inform the decision, Programming > Committee members are expected to exercise judgment in their area of > expertise and help ensure diversity of sessions and speakers. View full > details of the session selection process here > > . > > In order to vote, you need an OSF community membership. If you do not have > an account, please create one by going to openstack.org/join. If you need > to reset your password, you can do that here > . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, > February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -- Gloria Palma González GloriaPG | @GloriaPalmaGlez -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbingham at godaddy.com Fri Feb 1 18:16:10 2019 From: dbingham at godaddy.com (David G. 
Bingham) Date: Fri, 1 Feb 2019 18:16:10 +0000 Subject: [neutron] Multi-segment per host support for routed networks Message-ID: <7D8DEE81-6D5F-4424-9482-12C80A5C15DA@godaddy.com> Neutron land, Problem: Neutron currently only allows a single network segment per host. This becomes a problem when networking teams want to limit the number of IPs it supports on a segment. This means that at times the number of IPs available to the host is the limiting factor for the number of instances we can deploy on a host. Ref: https://bugs.launchpad.net/neutron/+bug/1764738 Ongoing Work: We are excited in our work add "multi-segment support for routed networks". We currently have a proof of concept here https://review.openstack.org/#/c/623115 that for routed networks effectively: * Removes validation preventing multiple segments. * Injects segment_id into fixed IP records. * Uses the segment_id when creating a bridge (rather than network_id). In effect, it gives each segment its own bridge. It works pretty well for new networks and deployments. For existing routed networks, however, it breaks networking. Please use *caution* if you decide to try it. TODOs: Things TODO before this before it is fully baked: * Need to add code to handle ensuring bridges are also updated/deleted using the segment_id (rather than network_id). * Need to add something (a feature flag?) that prevents this from breaking routed networks when a cloud admin updates to master and is configured for routed networks. * Need to create checker and upgrade migration code that will convert existing bridges from network_id based to segment_id based (ideally live or with little network traffic downtime). Once converted, the feature flag could enable the feature and start using the new code. Need: 1. How does one go about adding a migration tool? Maybe some examples? 2. Will nova need to be notified/upgraded to have bridge related files updated? 3. Is there a way to migrate without (or minimal) downtime? 4. How to repeatably test this migration code? Grenade? Looking for any ideas that can keep this moving :) Thanks a ton, David Bingham (wwriverrat on irc) Kris Lindgren (klindgren on irc) Cloud Engineers at GoDaddy From tony at bakeyournoodle.com Fri Feb 1 23:51:44 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 10:51:44 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> Message-ID: <20190201235143.GC6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 11:25:47AM +0000, Sean Mooney wrote: > On Fri, 2019-02-01 at 15:33 +1100, Tony Breeds wrote: > > Hi All, > > During the Berlin forum the idea of running some kinda of bot on the > > sandbox [1] repo cam up as another way to onboard/encourage > > contributors. > > > > The general idea is that the bot would: > > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > > some small change > > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > > on the change > > > > Showing new contributors approximately what code review looks like[2], > > and also reduce the human requirements. The OpenStack Upstream > > Institute would make use of the bot and we'd also use it as an > > interactive tutorial from the contributors portal. 
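(as a concrete sketch, the voting part of such a job could just be a couple of gerrit ssh calls made with the bot's account -- the account name here is made up and nothing below is wired up yet:

    # first pass: leave a -1 and ask for a small change
    ssh -p 29418 sandbox-bot@review.openstack.org gerrit review \
        --code-review -1 --message '"Please make the small change described above"' 12345,1

    # once a new patchset shows up: approve it
    ssh -p 29418 sandbox-bot@review.openstack.org gerrit review \
        --code-review +2 12345,2

where 12345,N stands for the change number and patchset.)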
> > > > I think this can be done as a 'normal' CI job with the following > > considerations: > > > > * Because we want this service to be reasonably robust we don't want to > > code or the job definitions to live in repo so I guess they'd need to > > live in project-config[4]. The bot itself doesn't need to be > > stateful as gerrit comments / meta-data would act as the store/state > > sync. > > * We'd need a gerrit account we can use to lodge these votes, as using > > 'proposal-bot' or tonyb would be a bad idea. > do you need an actual bot > why not just have a job defiend in the sandbox repo itself that runs say > pep8 or some simple test like check the commit message for Close-Bug: or somting like > that. Yup sorry for using the overloaded term 'Bot' what you describe is what I was trying to suggest. > i noticed that if you are modifying zuul jobs and have a syntax error > we actully comment on the patch to say where it is. > like this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 Yup. > so you could just develop a custom job that ran in the a seperate pipline and > set the sucess action to Code-Review: +2 an failure to Code-Review: -1 > > the authour could then add the second +2 and +w to complete the normal workflow. > as far as i know the sandbox repo allowas all users to +2 +w correct? Correct. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Feb 1 23:55:20 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 10:55:20 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: <20190201235520.GD6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 12:34:20PM +0000, Jeremy Stanley wrote: > On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > > do you need an actual bot > > why not just have a job defiend in the sandbox repo itself that > > runs say pep8 or some simple test like check the commit message > > for Close-Bug: or somting like that. > > I think that's basically what he was suggesting: a Zuul job which > votes on (some) changes to the openstack/sandbox repository. > > Some challenges there... first, you'd probably want credentials set > as Zuul secrets, but in-repository secrets can only be used by jobs > in safe "post-review" pipelines (gate, promote, post, release...) to > prevent leakage through speculative execution of changes to those > job definitions. The workaround would be to place the secrets and > any playbooks which use them into a trusted config repository such > as openstack-infra/project-config so they can be safely used in > "pre-review" pipelines like check. Yup that was my plan. It also means that new contributors can't accidentallt break the bot :) > > > i noticed that if you are modifying zuul jobs and have a syntax > > error we actully comment on the patch to say where it is. like > > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > > > so you could just develop a custom job that ran in the a seperate > > pipline and set the sucess action to Code-Review: +2 an failure to > > Code-Review: -1 > [...] 
> > It would be a little weird to have those code review votes showing > up for the Zuul account and might further confuse students. Also, > what you describe would require a custom pipeline definition as > those behaviors apply to pipelines, not to jobs. > > I think Tony's suggestion of doing this as a job with custom > credentials to log into Gerrit and leave code review votes is > probably the most workable and least confusing solution, but I also > think a bulk of that job definition will end up having to live > outside the sandbox repo for logistical reasons described above. Cool. There clearly isn't a rush on this but it would be really good to have it in place before the Denver summit. Can someone that knows how either create the gerrit user and zuul secrets or point me at how to do it. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Sat Feb 2 00:01:39 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 11:01:39 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: <20190202000139.GE6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 08:25:03AM -0600, Eric Fried wrote: > Yeah, I had been assuming it would be some tag in the commit message. If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 Something like that would work well. A nice thing about it is it begins the process of teaching about other tags we but in commit messages. > The possibilities are endless :P :) Of course it should be Auto-Bot[1] instead of Bot-Reviewer ;P Yours Tony. [1] The bike shed it pink! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From miguel at mlavalle.com Sat Feb 2 01:06:42 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 1 Feb 2019 19:06:42 -0600 Subject: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode In-Reply-To: References: Message-ID: Hi Igor, Please see my comments in-line below On Tue, Jan 29, 2019 at 1:26 AM Duarte Cardoso, Igor < igor.duarte.cardoso at intel.com> wrote: > Hi Neutron, > > > > I've been internally collaborating on the ``dvr_bridge`` L3 agent mode > [1][2][3] work (David Shaughnessy, Xubo Zhang), which allows the L3 agent > to make use of Open vSwitch / OpenFlow to implement ``distributed`` IPv4 > Routers thus bypassing kernel namespaces and iptables and opening the door > for higher performance by keeping packets in OVS for longer. > > > > I want to share a few questions in order to gather feedback from you. I > understand parts of these questions may have been answered in the past > before my involvement, but I believe it's still important to revisit and > clarify them. This can impact how long it's going to take to complete the > work and whether it can make it to stein-3. > > > > 1. Should OVS support also be added to the legacy router? 
> > And if so, would it make more sense to have a new variable (not > ``agent_mode``) to specify what backend to use (OVS or kernel) instead of > creating more combinations? > I would like to see the legacy router also implemented. And yes, we need to specify a new config option. As it has already been pointed out, we need to separate what the agent does in each host from the backend technology implementing the routers. > > > 2. What is expected in terms of CI for this? Regarding testing, what > should this first patch include apart from the unit tests? (since the > l3_agent.ini needs to be configured differently). > I agree with Slawek. We would like to see a scenario job. > > > 3. What problems can be anticipated by having the same agent managing both > kernel and OVS powered routers (depending on whether they were created as > ``distributed``)? > > We are experimenting with different ways of decoupling RouterInfo (mainly > as part of the L3 agent refactor patch) and haven't been able to find the > right balance yet. On one end we have an agent that is still coupled with > kernel-based RouterInfo, and on the other end we have an agent that either > only accepts OVS-based RouterInfos or only kernel-based RouterInfos > depending on the ``agent_mode``. > I also agree with Slawek here. It would a good idea if we can get the two efforts in synch so we can untangle RouterInfo from the agent code > > > We'd also appreciate reviews on the 2 patches [4][5]. The L3 refactor one > should be able to pass Zuul after a recheck. > > > > [1] Spec: > https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr > > [2] RFE: https://bugs.launchpad.net/neutron/+bug/1705536 > > [3] Gerrit topic: > https://review.openstack.org/#/q/topic:dvr_bridge+(status:open+OR+status:merged) > > [4] L3 agent refactor patch: https://review.openstack.org/#/c/528336/29 > > [5] dvr_bridge patch: https://review.openstack.org/#/c/472289/17 > > > > Thank you! > > > > Best regards, > > Igor D.C. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 2 08:37:49 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 2 Feb 2019 09:37:49 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Alfredo, if you configured your template for using floatingip you can connect to the master and check if it can connect to Internet. Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca ha scritto: > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. 
Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? 
>>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... >>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 13:26:02 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 14:26:02 +0100 Subject: Fwd: [openstack-ansible][magnum] References: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: > Anfang der weitergeleiteten Nachricht: > > Von: Clemens > Betreff: Aw: [openstack-ansible][magnum] > Datum: 2. 
Februar 2019 um 14:20:37 MEZ > An: Alfredo De Luca > Kopie: Feilong Wang , openstack-discuss at lists.openstack.org > > Well - it seems that failure of part-013 has its root cause in failure of part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 16:36:12 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:36:12 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: > Am 02.02.2019 um 17:26 schrieb Clemens : > > Hi Alfredo, > > This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? > > BR C > >> Am 02.02.2019 um 17:18 schrieb Alfredo De Luca >: >> >> Hi Clemens. Yes...you are right but not sure why the IPs are not correct >> >> if [ -z "${KUBE_NODE_IP}" ]; then >> KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ) >> fi >> >> sans="IP:${KUBE_NODE_IP}" >> >> if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then >> KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4 ) >> >> I don't have that IP at all. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 16:39:52 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:39:52 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: <03D38DC1-D6BB-492D-96CE-05673E411C26@crandale.de> OK - and your floating ip 172.29.249.112 has access to the internet? > Am 02.02.2019 um 17:33 schrieb Alfredo De Luca : > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4 > 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4 > 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > 172.29.249.112 is the Floating IP... 
which I use to connect to the master > > > > > On Sat, Feb 2, 2019 at 5:26 PM Clemens > wrote: > Hi Alfredo, > > This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? > > BR C -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From maruthi.inukonda at gmail.com Sat Feb 2 16:43:44 2019 From: maruthi.inukonda at gmail.com (Maruthi Inukonda) Date: Sat, 2 Feb 2019 22:13:44 +0530 Subject: Suggestions on OpenStack installer Message-ID: hi All, We are planning to build a private cloud for our department at academic institution. We are a born-in public-cloud institution. Idea is to have a hybrid cloud. Few important requirements: * No vendor lock-in of hardware and software/distribution. * Preferably stable software from openstack.org. * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). * On standard rack servers with remote systems management for the nodes (IPMI based) * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), Bare metal (Ubuntu)]. * Workload will be Software Development/Test (on VMs) and Benchmarking (on baremetal/container). * Smooth upgrades. Could anyone suggest stable Openstack installer for our multi-node setup (initially 40 physical machines, later around 80)? Upgrades should be smooth. Any pointers to reference architecture will also be helpful. PS: I have tried devstack recently. It works. Kolla fails. I tried packstack few years back. Appreciate any help. cheers, Maruthi Inukonda. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 16:47:27 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:47:27 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> One after the other: First of all part-011 needs to run successfully: Did your certificates create successfully? What is in /etc/kubernetes/certs ? Or did you run part-011 already successfully? > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:55:36 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:55:36 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> Message-ID: part-011 run succesfully.... 
+ set -o errexit + set -o nounset + set -o pipefail + '[' True == True ']' + exit 0 But what I think it's wrong is the floating IP . It's not the IP that goes on internet which is the eth0 on my machine that has 10.1.8.113... anyway here is the network image [image: image.png] On Sat, Feb 2, 2019 at 5:47 PM Clemens wrote: > One after the other: First of all part-011 needs to run successfully: Did > your certificates create successfully? What is in /etc/kubernetes/certs ? > Or did you run part-011 already successfully? > > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10192 bytes Desc: not available URL: From dabarren at gmail.com Sat Feb 2 17:19:41 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Sat, 2 Feb 2019 18:19:41 +0100 Subject: Suggestions on OpenStack installer In-Reply-To: References: Message-ID: Hi, could you describe what are the issues faced with kolla so we can fix them? Thanks On Sat, Feb 2, 2019, 5:46 PM Maruthi Inukonda wrote: > hi All, > > We are planning to build a private cloud for our department at academic > institution. We are a born-in public-cloud institution. Idea is to have a > hybrid cloud. > > Few important requirements: > * No vendor lock-in of hardware and software/distribution. > * Preferably stable software from openstack.org. > * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). > * On standard rack servers with remote systems management for the nodes > (IPMI based) > * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), > Bare metal (Ubuntu)]. > * Workload will be Software Development/Test (on VMs) and Benchmarking (on > baremetal/container). > * Smooth upgrades. > > Could anyone suggest stable Openstack installer for our multi-node setup > (initially 40 physical machines, later around 80)? Upgrades should be > smooth. > > Any pointers to reference architecture will also be helpful. > > PS: I have tried devstack recently. It works. Kolla fails. I tried > packstack few years back. > > Appreciate any help. > > cheers, > Maruthi Inukonda. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Sat Feb 2 17:32:37 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sat, 2 Feb 2019 12:32:37 -0500 Subject: Suggestions on OpenStack installer In-Reply-To: References: Message-ID: On Sat, Feb 2, 2019, 12:27 PM Eduardo Gonzalez Hi, could you describe what are the issues faced with kolla so we can fix > them? What he said. I run multiple clusters deployed with Kolla. There's a bit of a learning curve as with anything, but it works great for me. The openstack-kolla IRC channel on freenode is also a good resource if you're having issues. -Erik > Thanks > > On Sat, Feb 2, 2019, 5:46 PM Maruthi Inukonda > wrote: > >> hi All, >> >> We are planning to build a private cloud for our department at academic >> institution. We are a born-in public-cloud institution. Idea is to have a >> hybrid cloud. >> >> Few important requirements: >> * No vendor lock-in of hardware and software/distribution. >> * Preferably stable software from openstack.org. >> * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). 
>> * On standard rack servers with remote systems management for the nodes >> (IPMI based) >> * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), >> Bare metal (Ubuntu)]. >> * Workload will be Software Development/Test (on VMs) and Benchmarking >> (on baremetal/container). >> * Smooth upgrades. >> >> Could anyone suggest stable Openstack installer for our multi-node setup >> (initially 40 physical machines, later around 80)? Upgrades should be >> smooth. >> >> Any pointers to reference architecture will also be helpful. >> >> PS: I have tried devstack recently. It works. Kolla fails. I tried >> packstack few years back. >> >> Appreciate any help. >> >> cheers, >> Maruthi Inukonda. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 18:45:28 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 19:45:28 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> Message-ID: <289BE90C-210E-497E-BEAB-B1EDE380362B@crandale.de> Nope - this looks ok: When a cluster is created, then it creates a private network for you (in your case 10.0.0.0/24), connecting this network via a router to your public network. Floating ip is the assigned to your machine accordingly. So - if now your part-011 runs ok, do you have also now all the Etcd certificates/keys in your /etc/kubernetes/certs > Am 02.02.2019 um 17:55 schrieb Alfredo De Luca : > > part-011 run succesfully.... > + set -o errexit > + set -o nounset > + set -o pipefail > + '[' True == True ']' > + exit 0 > > But what I think it's wrong is the floating IP . It's not the IP that goes on internet which is the eth0 on my machine that has 10.1.8.113... > anyway here is the network image > > > > > On Sat, Feb 2, 2019 at 5:47 PM Clemens > wrote: > One after the other: First of all part-011 needs to run successfully: Did your certificates create successfully? What is in /etc/kubernetes/certs ? Or did you run part-011 already successfully? > >> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca >: >> >> Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 18:53:11 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 19:53:11 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Now to the failure of your part-013: Are you sure that you used the glance image ‚fedora-atomic-latest‘ and not some other fedora image? 
Your error message below suggests that your image does not contain ‚atomic‘ as part of the image … + _prefix=docker.io/openstackmagnum/ + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable ./part-013: line 8: atomic: command not found + systemctl start heat-container-agent Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From mriedemos at gmail.com Sat Feb 2 20:59:07 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 2 Feb 2019 14:59:07 -0600 Subject: [goals][upgrade-checkers] Week R-10 Update Message-ID: <1c440b08-efd5-ac8d-ebd7-9945b0302f6f@gmail.com> There are a few open changes: https://review.openstack.org/#/q/topic:upgrade-checkers+status:open Some of those are getting a bit dusty, specifically: * aodh: https://review.openstack.org/614401 * ceilometer: https://review.openstack.org/614400 * cloudkitty: https://review.openstack.org/613076 It looks like the horizon team is moving forward with adding an upgrade check script and discussing how to enable plugin support: https://review.openstack.org/#/c/631785/ As for mistral https://review.openstack.org/#/c/611513/ and swift https://review.openstack.org/#/c/611634/ those should probably just be abandoned since they don't fit with the project plans. There are no other projects that need the framework added: https://storyboard.openstack.org/#!/story/2003657 So once we complete those mentioned above we will have the basic framework in place for teams to add non-placeholder upgrade checks for Stein, and some projects are already leveraging it. This is also a good time for projects that have completed the framework to be thinking about adding specific checks as we get closer to feature freeze on March 7. -- Thanks, Matt From clemens.hardewig at crandale.de Sat Feb 2 13:20:37 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 14:20:37 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Well - it seems that failure of part-013 has its root cause in failure of part-011: in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > Am 01.02.2019 um 10:20 schrieb Alfredo De Luca : > > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > I'm echoing Von's comments. > > From the log of cloud-init-output.log, you should be able to see below error: > > Cloud-init v. 
0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 +0000. Up 76.51 seconds. > 2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [1] > + _prefix=docker.io/openstackmagnum/ > + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable > The docker daemon does not appear to be running. > + systemctl start heat-container-agent > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > 2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5] > > Then please go to /var/lib/cloud/instances//scripts to find the script 011 and 013 to run it manually to get the root cause. And welcome to pop up into #openstack-containers irc channel. > > > > > > On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 part of the config script finishes with error. Check why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >: >> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca > wrote: >>> hi Clemens and Ignazio. thanks for your support. >>> it must be network related but I don't do something special apparently to create a simple k8s cluster. >>> I ll post later on configurations and logs as you Clemens suggested. >>> >>> >>> Cheers >>> >>> >>> >>> On Tue, Jan 29, 2019 at 9:16 PM Clemens > wrote: >>> … an more important: check the other log cloud-init.log for error messages (not only cloud-init-output.log) >>> >>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca >: >>>> >>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the logs on the kube master keep saying the following >>>> >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> >>>> Not sure what to do. >>>> My configuration is ... >>>> eth0 - 10.1.8.113 >>>> >>>> But the openstack configration in terms of networkin is the default from ansible-openstack which is 172.29.236.100/22 >>>> >>>> Maybe that's the problem? >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano > wrote: >>>> Hello Alfredo, >>>> your external network is using proxy ? 
>>>> If you using a proxy, and yuo configured it in cluster template, you must setup no proxy for 127.0.0.1 >>>> Ignazio >>>> >>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig > ha scritto: >>>> At least on fedora there is a second cloud Init log as far as I remember-Look into both >>>> >>>> Br c >>>> >>>> Von meinem iPhone gesendet >>>> >>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca >: >>>> >>>>> thanks Clemens. >>>>> I looked at the cloud-init-output.log on the master... and at the moment is doing the following.... >>>>> >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> >>>>> Network ....could be but not sure where to look at >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig > wrote: >>>>> Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue or you have selected for your minion nodes a flavor using swap perhaps ... >>>>> So, log files are the first step you could dig into... >>>>> Br c >>>>> Von meinem iPhone gesendet >>>>> >>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca >: >>>>> >>>>>> Hi all. >>>>>> I finally instaledl successufully openstack ansible (queens) but, after creating a cluster template I create k8s cluster, it stuck on >>>>>> >>>>>> >>>>>> kube_masters b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 OS::Heat::ResourceGroup 16 minutes Create In Progress state changed >>>>>> create in progress....and after around an hour it says...time out. k8s master seems to be up.....at least as VM. >>>>>> >>>>>> any idea? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Alfredo >>>>>> >>>>> >>>>> >>>>> -- >>>>> Alfredo >>>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:16:02 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:16:02 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Hi Ignazio. I ve already done that so that's why I can connect to the master. Then I can ping 8.8.8.8 any other IP on internet but not through domainnames..... such as google.com or yahoo.com. Doesn't resolve names. the server doesn't have either dig or nslookup and I can't install them cause the domainname. So I changed the domainname into IP but still the same issue... 
[root at freddo-5oyez3ot5pxi-master-0 ~]# yum repolist Fedora Modular 29 - x86_64 0.0 B/s | 0 B 00:20 Error: Failed to synchronize cache for repo 'fedora-modular' On Sat, Feb 2, 2019 at 9:38 AM Ignazio Cassano wrote: > Alfredo, if you configured your template for using floatingip you can > connect to the master and check if it can connect to Internet. > > Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca > ha scritto: > >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? >> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >> wrote: >> >>> I'm echoing Von's comments. >>> >>> From the log of cloud-init-output.log, you should be able to see below >>> error: >>> >>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>> 08:33:41 +0000. Up 76.51 seconds.* >>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-011 [1]* >>> *+ _prefix=docker.io/openstackmagnum/ >>> * >>> *+ atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> * >>> *The docker daemon does not appear to be running.* >>> *+ systemctl start heat-container-agent* >>> *Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found.* >>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-013 [5]* >>> >>> Then please go to /var/lib/cloud/instances//scripts to >>> find the script 011 and 013 to run it manually to get the root cause. And >>> welcome to pop up into #openstack-containers irc channel. >>> >>> >>> >>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> >>> Read the cloud-Init.log! There you can see that your >>> /var/lib/.../part-011 part of the config script finishes with error. Check >>> why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >> >: >>> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> hi Clemens and Ignazio. thanks for your support. >>>>> it must be network related but I don't do something special apparently >>>>> to create a simple k8s cluster. >>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>> >>>>> >>>>> Cheers >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>> wrote: >>>>> >>>>>> … an more important: check the other log cloud-init.log for error >>>>>> messages (not only cloud-init-output.log) >>>>>> >>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Hi Ignazio and Clemens. 
I haven\t configure the proxy and all the >>>>>> logs on the kube master keep saying the following >>>>>> >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Not sure what to do. >>>>>> My configuration is ... >>>>>> eth0 - 10.1.8.113 >>>>>> >>>>>> But the openstack configration in terms of networkin is the default >>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>> >>>>>> Maybe that's the problem? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello Alfredo, >>>>>>> your external network is using proxy ? >>>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>>> must setup no proxy for 127.0.0.1 >>>>>>> Ignazio >>>>>>> >>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>> >>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>> remember-Look into both >>>>>>>> >>>>>>>> Br c >>>>>>>> >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> thanks Clemens. >>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>> moment is doing the following.... >>>>>>>> >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> Network ....could be but not sure where to look at >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>> Br c >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Hi all. >>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>> >>>>>>>>> >>>>>>>>> kube_masters >>>>>>>>> >>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>> >>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>> >>>>>>>>> any idea? 
>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >>> >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Sat Feb 2 16:18:29 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:18:29 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Clemens. Yes...you are right but not sure why the IPs are not correct if [ -z "${KUBE_NODE_IP}" ]; then KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) fi sans="IP:${KUBE_NODE_IP}" if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4) I don't have that IP at all. On Sat, Feb 2, 2019 at 2:20 PM Clemens wrote: > Well - it seems that failure of part-013 has its root cause in failure of > part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore > the certificates for the access to Etcd are created; this is prerequisite > for any kinda of access authorization maintained by Etcd. The ip address > config items require an appropriate definition as metadata. If there is no > definition of that, then internet access fails and it can also not install > docker in part-013 ... > > Am 01.02.2019 um 10:20 schrieb Alfredo De Luca : > > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. 
And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? >>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... 
>>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 16:26:43 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:26:43 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Alfredo, This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? BR C > Am 02.02.2019 um 17:18 schrieb Alfredo De Luca : > > Hi Clemens. Yes...you are right but not sure why the IPs are not correct > > if [ -z "${KUBE_NODE_IP}" ]; then > KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ) > fi > > sans="IP:${KUBE_NODE_IP}" > > if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then > KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4 ) > > I don't have that IP at all. 
> > > On Sat, Feb 2, 2019 at 2:20 PM Clemens > wrote: > Well - it seems that failure of part-013 has its root cause in failure of part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > >> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca >: >> >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? >> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below error: >> >> Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 +0000. Up 76.51 seconds. >> 2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [1] >> + _prefix=docker.io/openstackmagnum/ >> + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable >> The docker daemon does not appear to be running. >> + systemctl start heat-container-agent >> Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. >> 2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5] >> >> Then please go to /var/lib/cloud/instances//scripts to find the script 011 and 013 to run it manually to get the root cause. And welcome to pop up into #openstack-containers irc channel. >> >> >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 part of the config script finishes with error. Check why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >: >>> >>>> here are also the logs for the cloud init logs from the k8s master.... >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca > wrote: >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens > wrote: >>>> … an more important: check the other log cloud-init.log for error messages (not only cloud-init-output.log) >>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca >: >>>>> >>>>> Hi Ignazio and Clemens. 
I haven\t configure the proxy and all the logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano > wrote: >>>>> Hello Alfredo, >>>>> your external network is using proxy ? >>>>> If you using a proxy, and yuo configured it in cluster template, you must setup no proxy for 127.0.0.1 >>>>> Ignazio >>>>> >>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig > ha scritto: >>>>> At least on fedora there is a second cloud Init log as far as I remember-Look into both >>>>> >>>>> Br c >>>>> >>>>> Von meinem iPhone gesendet >>>>> >>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca >: >>>>> >>>>>> thanks Clemens. >>>>>> I looked at the cloud-init-output.log on the master... and at the moment is doing the following.... >>>>>> >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Network ....could be but not sure where to look at >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig > wrote: >>>>>> Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue or you have selected for your minion nodes a flavor using swap perhaps ... >>>>>> So, log files are the first step you could dig into... >>>>>> Br c >>>>>> Von meinem iPhone gesendet >>>>>> >>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca >: >>>>>> >>>>>>> Hi all. >>>>>>> I finally instaledl successufully openstack ansible (queens) but, after creating a cluster template I create k8s cluster, it stuck on >>>>>>> >>>>>>> >>>>>>> kube_masters b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 OS::Heat::ResourceGroup 16 minutes Create In Progress state changed >>>>>>> create in progress....and after around an hour it says...time out. k8s master seems to be up.....at least as VM. >>>>>>> >>>>>>> any idea? 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Alfredo >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Alfredo >>>>>> >>>>> >>>>> >>>>> -- >>>>> Alfredo >>>>> >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> >> -- >> Alfredo >> > > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:33:36 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:33:36 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# 172.29.249.112 is the Floating IP... which I use to connect to the master On Sat, Feb 2, 2019 at 5:26 PM Clemens wrote: > Hi Alfredo, > > This is basics of Openstack: curl -s > http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the > metadata service with its special IP address 169.254.169.254 > , to obtain the local > ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so > that this information is not answered from metadata service successfully. > What happens if you execute that command manually? > > BR C > > Am 02.02.2019 um 17:18 schrieb Alfredo De Luca : > > Hi Clemens. Yes...you are right but not sure why the IPs are not correct > > if [ -z "${KUBE_NODE_IP}" ]; then > KUBE_NODE_IP=$(curl -s > http://169.254.169.254/latest/meta-data/local-ipv4) > fi > > sans="IP:${KUBE_NODE_IP}" > > if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then > KUBE_NODE_PUBLIC_IP=$(curl -s > http://169.254.169.254/latest/meta-data/public-ipv4) > > I don't have that IP at all. > > > On Sat, Feb 2, 2019 at 2:20 PM Clemens > wrote: > >> Well - it seems that failure of part-013 has its root cause in failure of >> part-011: >> >> in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore >> the certificates for the access to Etcd are created; this is prerequisite >> for any kinda of access authorization maintained by Etcd. The ip address >> config items require an appropriate definition as metadata. If there is no >> definition of that, then internet access fails and it can also not install >> docker in part-013 ... >> >> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca > >: >> >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? 
>> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >> wrote: >> >>> I'm echoing Von's comments. >>> >>> From the log of cloud-init-output.log, you should be able to see below >>> error: >>> >>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>> 08:33:41 +0000. Up 76.51 seconds.* >>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-011 [1]* >>> *+ _prefix=docker.io/openstackmagnum/ >>> * >>> *+ atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> * >>> *The docker daemon does not appear to be running.* >>> *+ systemctl start heat-container-agent* >>> *Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found.* >>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-013 [5]* >>> >>> Then please go to /var/lib/cloud/instances//scripts to >>> find the script 011 and 013 to run it manually to get the root cause. And >>> welcome to pop up into #openstack-containers irc channel. >>> >>> >>> >>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> >>> Read the cloud-Init.log! There you can see that your >>> /var/lib/.../part-011 part of the config script finishes with error. Check >>> why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >> >: >>> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> hi Clemens and Ignazio. thanks for your support. >>>>> it must be network related but I don't do something special apparently >>>>> to create a simple k8s cluster. >>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>> >>>>> >>>>> Cheers >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>> wrote: >>>>> >>>>>> … an more important: check the other log cloud-init.log for error >>>>>> messages (not only cloud-init-output.log) >>>>>> >>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>>> logs on the kube master keep saying the following >>>>>> >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Not sure what to do. >>>>>> My configuration is ... >>>>>> eth0 - 10.1.8.113 >>>>>> >>>>>> But the openstack configration in terms of networkin is the default >>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>> >>>>>> Maybe that's the problem? 
>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello Alfredo, >>>>>>> your external network is using proxy ? >>>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>>> must setup no proxy for 127.0.0.1 >>>>>>> Ignazio >>>>>>> >>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>> >>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>> remember-Look into both >>>>>>>> >>>>>>>> Br c >>>>>>>> >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> thanks Clemens. >>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>> moment is doing the following.... >>>>>>>> >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> Network ....could be but not sure where to look at >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>> Br c >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Hi all. >>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>> >>>>>>>>> >>>>>>>>> kube_masters >>>>>>>>> >>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>> >>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>> >>>>>>>>> any idea? >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >>> >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> >> >> -- >> *Alfredo* >> >> >> > > -- > *Alfredo* > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... 
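Since the metadata service answers here, the next things worth checking are the cloud-init fragments themselves and plain outbound connectivity from the master. A minimal sketch, assuming the default cloud-init layout referenced earlier in the thread (the <uuid> directory and the docker.io check are only examples):

# on the kube master, as root
grep -i warning /var/log/cloud-init*.log      # which part-0xx fragments failed and why
cd /var/lib/cloud/instance/scripts            # symlink to /var/lib/cloud/instances/<uuid>/scripts
bash -x ./part-011                            # re-run the failing fragment with shell tracing
curl -s http://169.254.169.254/latest/meta-data/local-ipv4   # metadata service reachable?
curl -sI https://registry-1.docker.io/v2/ | head -n 1        # rough outbound/DNS check for image pulls

If the last curl hangs or fails, the problem is the network path out of the cluster (router, external gateway, DNS or proxy) rather than the scripts themselves.
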
URL: From alfredo.deluca at gmail.com Sat Feb 2 16:36:37 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:36:37 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: so if I run part-013 I get the following oot at freddo-5oyez3ot5pxi-master-0 scripts]# ./part-013 + _prefix=docker.io/openstackmagnum/ + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable ./part-013: line 8: atomic: command not found + systemctl start heat-container-agent Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. On Sat, Feb 2, 2019 at 5:33 PM Alfredo De Luca wrote: > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s > http://169.254.169.254/latest/meta-data/local-ipv4 > 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s > http://169.254.169.254/latest/meta-data/public-ipv4 > 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > 172.29.249.112 is the Floating IP... which I use to connect to the master > > > > > On Sat, Feb 2, 2019 at 5:26 PM Clemens > wrote: > >> Hi Alfredo, >> >> This is basics of Openstack: curl -s >> http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the >> metadata service with its special IP address 169.254.169.254 >> , to obtain the >> local ip address; the second one to get the public ip address >> It look like from remote that your network is not properly configured so >> that this information is not answered from metadata service successfully. >> What happens if you execute that command manually? >> >> BR C >> >> Am 02.02.2019 um 17:18 schrieb Alfredo De Luca > >: >> >> Hi Clemens. Yes...you are right but not sure why the IPs are not correct >> >> if [ -z "${KUBE_NODE_IP}" ]; then >> KUBE_NODE_IP=$(curl -s >> http://169.254.169.254/latest/meta-data/local-ipv4) >> fi >> >> sans="IP:${KUBE_NODE_IP}" >> >> if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then >> KUBE_NODE_PUBLIC_IP=$(curl -s >> http://169.254.169.254/latest/meta-data/public-ipv4) >> >> I don't have that IP at all. >> >> >> On Sat, Feb 2, 2019 at 2:20 PM Clemens >> wrote: >> >>> Well - it seems that failure of part-013 has its root cause in failure >>> of part-011: >>> >>> in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. >>> Furthermore the certificates for the access to Etcd are created; this is >>> prerequisite for any kinda of access authorization maintained by Etcd. The >>> ip address config items require an appropriate definition as metadata. If >>> there is no definition of that, then internet access fails and it can also >>> not install docker in part-013 ... >>> >>> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca >> >: >>> >>> thanks Feilong, clemens et all. >>> >>> I going to have a look later on today and see what I can do and see. >>> >>> Just a question: >>> Does the kube master need internet access to download stuff or not? >>> >>> Cheers >>> >>> >>> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >>> wrote: >>> >>>> I'm echoing Von's comments. >>>> >>>> From the log of cloud-init-output.log, you should be able to see below >>>> error: >>>> >>>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>>> 08:33:41 +0000. 
Up 76.51 seconds.* >>>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>>> /var/lib/cloud/instance/scripts/part-011 [1]* >>>> *+ _prefix=docker.io/openstackmagnum/ >>>> * >>>> *+ atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> * >>>> *The docker daemon does not appear to be running.* >>>> *+ systemctl start heat-container-agent* >>>> *Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found.* >>>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>>> /var/lib/cloud/instance/scripts/part-013 [5]* >>>> >>>> Then please go to /var/lib/cloud/instances//scripts to >>>> find the script 011 and 013 to run it manually to get the root cause. And >>>> welcome to pop up into #openstack-containers irc channel. >>>> >>>> >>>> >>>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>>> >>>> Read the cloud-Init.log! There you can see that your >>>> /var/lib/.../part-011 part of the config script finishes with error. Check >>>> why. >>>> >>>> Von meinem iPhone gesendet >>>> >>>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> here are also the logs for the cloud init logs from the k8s master.... >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> >>>>> In the meantime this is my cluster >>>>> template >>>>> >>>>> >>>>> >>>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> wrote: >>>>> >>>>>> hi Clemens and Ignazio. thanks for your support. >>>>>> it must be network related but I don't do something special >>>>>> apparently to create a simple k8s cluster. >>>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>>> >>>>>> >>>>>> Cheers >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>>> wrote: >>>>>> >>>>>>> … an more important: check the other log cloud-init.log for error >>>>>>> messages (not only cloud-init-output.log) >>>>>>> >>>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>>>> logs on the kube master keep saying the following >>>>>>> >>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not >>>>>>> finished >>>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>>> healthz check failed' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not >>>>>>> finished >>>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>>> healthz check failed' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Not sure what to do. >>>>>>> My configuration is ... >>>>>>> eth0 - 10.1.8.113 >>>>>>> >>>>>>> But the openstack configration in terms of networkin is the default >>>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>>> >>>>>>> Maybe that's the problem? 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>>> ignaziocassano at gmail.com> wrote: >>>>>>> >>>>>>>> Hello Alfredo, >>>>>>>> your external network is using proxy ? >>>>>>>> If you using a proxy, and yuo configured it in cluster template, >>>>>>>> you must setup no proxy for 127.0.0.1 >>>>>>>> Ignazio >>>>>>>> >>>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>>> >>>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>>> remember-Look into both >>>>>>>>> >>>>>>>>> Br c >>>>>>>>> >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> thanks Clemens. >>>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>>> moment is doing the following.... >>>>>>>>> >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> >>>>>>>>> Network ....could be but not sure where to look at >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>>> >>>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>>> Br c >>>>>>>>>> Von meinem iPhone gesendet >>>>>>>>>> >>>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>>> >>>>>>>>>> Hi all. >>>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> kube_masters >>>>>>>>>> >>>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>>> >>>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>>> >>>>>>>>>> any idea? >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Alfredo* >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> -------------------------------------------------------------------------- >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email: flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>> -------------------------------------------------------------------------- >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >> >> -- >> *Alfredo* >> >> >> > > -- > *Alfredo* > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... 
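One thing worth ruling out before digging further: the Queens fragments above drive the heat-container-agent setup through the 'atomic' command, which is only shipped on Atomic hosts, so "atomic: command not found" usually means the boot image is not actually a Fedora Atomic image, or the cluster template does not point at one. A quick check, with placeholder names for the template and image:

# from a client with the OpenStack CLI and the magnum plugin installed
openstack coe cluster template show k8s-atomic-template -c image_id
openstack image show fedora-atomic -c properties     # expect os_distro='fedora-atomic'
# if the property is missing on an otherwise correct Atomic image:
openstack image set --property os_distro=fedora-atomic fedora-atomic

If the image really is not Fedora Atomic, rebuilding the cluster from a template that references one is simpler than patching the generated scripts by hand.
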
URL: From fungi at yuggoth.org Sun Feb 3 02:30:07 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 3 Feb 2019 02:30:07 +0000 Subject: [all] Two months with openstack-discuss (another progress report) Message-ID: <20190203023007.ysbjvegzbp7rsjop@yuggoth.org> This is just a quick followup to see how things have progressed since we cut the old openstack, openstack-dev, openstack-operators and openstack-sigs mailing lists over to openstack-discuss two months ago, as compared to my previous report[*] from the one-month anniversary. We're still seeing a fair number of posts from non-subscribers landing in the moderation queue (around one or two a day, sometimes more, sometimes less) but most of them are newcomers and many subscribe immediately after receiving the moderation notice. We're now at 830 subscribers to openstack-discuss (up from 708 in the previous report). 75% of the addresses used to send 10 or more messages to the old lists in 2018 are now subscribed to the new one (it was 70% a month ago). While posting volume is up compared to December (unsurprising given the usual end-of-year holiday slump), we only had a total of 958 posts over the month of January; comparing to the 1196 from January 2018 that's a 20% drop which (considering that right at 10% of the messages on the old lists were duplicates from cross-posting), is still less of a drop than was typical on average across the old lists over the previous five Januaries. One change worth mentioning: we noticed a rash of bounce-disabled subscriptions triggered by messages occasionally containing invalid DKIM signatures (inconsistently for some posters, fairly consistently for a few others). We're unsure as of yet whether the messages are arriving with invalid signatures or whether Mailman is modifying them in unanticipated ways prior to forwarding, but have re-enabled all affected subscribers and temporarily turned off the automatic subscription disabling feature while investigation is underway. If you missed receiving some messages which are present in the list archive, that's quite possibly the cause. Apologies for the inconvenience! [*] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001386.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tpb at dyncloud.net Sun Feb 3 10:05:49 2019 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 3 Feb 2019 05:05:49 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: Message-ID: <20190203100549.urtnvf2iatmqm6oy@barron.net> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >Thanks Goutham. >If there are not mantainers for this driver I will switch on ceph and or >netapp. >I am already using netapp but I would like to export shares from an >openstack installation to another. >Since these 2 installations do non share any openstack component and have >different openstack database, I would like to know it is possible . >Regards >Ignazio Hi Ignazio, If by "export shares from an openstack installation to another" you mean removing them from management by manila in installation A and instead managing them by manila in installation B then you can do that while leaving them in place on your Net App back end using the manila "manage-unmanage" administrative commands. Here's some documentation [1] that should be helpful. If on the other hand by "export shares ... 
to another" you mean to leave the shares under management of manila in installation A but consume them from compute instances in installation B it's all about the networking. One can use manila to "allow-access" to consumers of shares anywhere but the consumers must be able to reach the "export locations" for those shares and mount them. Cheers, -- Tom Barron [1] https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi >ha scritto: > >> Hi Ignazio, >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> wrote: >> > >> > Hello All, >> > I installed manila on my queens openstack based on centos 7. >> > I configured two servers with glusterfs replocation and ganesha nfs. >> > I configured my controllers octavia,conf but when I try to create a share >> > the manila scheduler logs reports: >> > >> > Failed to schedule create_share: No valid host was found. Failed to find >> a weighted host, the last executed filter was CapabilitiesFilter.: >> NoValidHost: No valid host was found. Failed to find a weighted host, the >> last executed filter was CapabilitiesFilter. >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a 89f76bc5de5545f381da2c10c7df7f15 >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> The scheduler failure points out that you have a mismatch in >> expectations (backend capabilities vs share type extra-specs) and >> there was no host to schedule your share to. So a few things to check >> here: >> >> - What is the share type you're using? Can you list the share type >> extra-specs and confirm that the backend (your GlusterFS storage) >> capabilities are appropriate with whatever you've set up as >> extra-specs ($ manila pool-list --detail)? >> - Is your backend operating correctly? You can list the manila >> services ($ manila service-list) and see if the backend is both >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> problem with the driver initialization, please enable debug logging, >> and look at the log file for the manila-share service, you might see >> why and be able to fix it. >> >> >> Please be aware that we're on a look out for a maintainer for the >> GlusterFS driver for the past few releases. We're open to bug fixes >> and maintenance patches, but there is currently no active maintainer >> for this driver. >> >> >> > I did not understand if controllers node must be connected to the >> network where shares must be exported for virtual machines, so my glusterfs >> are connected on the management network where openstack controllers are >> conencted and to the network where virtual machine are connected. 
>> > >> > My manila.conf section for glusterfs section is the following >> > >> > [gluster-manila565] >> > driver_handles_share_servers = False >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> > glusterfs_target = root at 10.102.184.229:/manila565 >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> > glusterfs_ganesha_server_username = root >> > glusterfs_nfs_server_type = Ganesha >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> > #glusterfs_servers = root at 10.102.185.19 >> > ganesha_config_dir = /etc/ganesha >> > >> > >> > PS >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> > >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> machines are connected. >> > >> > The gluster servers are connected on both. >> > >> > >> > Any help, please ? >> > >> > Ignazio >> From ignaziocassano at gmail.com Sun Feb 3 11:45:02 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sun, 3 Feb 2019 12:45:02 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190203100549.urtnvf2iatmqm6oy@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: Many Thanks. I will check it [1]. Regards Ignazio Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >Thanks Goutham. > >If there are not mantainers for this driver I will switch on ceph and or > >netapp. > >I am already using netapp but I would like to export shares from an > >openstack installation to another. > >Since these 2 installations do non share any openstack component and have > >different openstack database, I would like to know it is possible . > >Regards > >Ignazio > > Hi Ignazio, > > If by "export shares from an openstack installation to another" you > mean removing them from management by manila in installation A and > instead managing them by manila in installation B then you can do that > while leaving them in place on your Net App back end using the manila > "manage-unmanage" administrative commands. Here's some documentation > [1] that should be helpful. > > If on the other hand by "export shares ... to another" you mean to > leave the shares under management of manila in installation A but > consume them from compute instances in installation B it's all about > the networking. One can use manila to "allow-access" to consumers of > shares anywhere but the consumers must be able to reach the "export > locations" for those shares and mount them. > > Cheers, > > -- Tom Barron > > [1] > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > gouthampravi at gmail.com> > >ha scritto: > > > >> Hi Ignazio, > >> > >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> wrote: > >> > > >> > Hello All, > >> > I installed manila on my queens openstack based on centos 7. > >> > I configured two servers with glusterfs replocation and ganesha nfs. > >> > I configured my controllers octavia,conf but when I try to create a > share > >> > the manila scheduler logs reports: > >> > > >> > Failed to schedule create_share: No valid host was found. Failed to > find > >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> NoValidHost: No valid host was found. Failed to find a weighted host, > the > >> last executed filter was CapabilitiesFilter. 
> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > 89f76bc5de5545f381da2c10c7df7f15 > >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> > >> > >> The scheduler failure points out that you have a mismatch in > >> expectations (backend capabilities vs share type extra-specs) and > >> there was no host to schedule your share to. So a few things to check > >> here: > >> > >> - What is the share type you're using? Can you list the share type > >> extra-specs and confirm that the backend (your GlusterFS storage) > >> capabilities are appropriate with whatever you've set up as > >> extra-specs ($ manila pool-list --detail)? > >> - Is your backend operating correctly? You can list the manila > >> services ($ manila service-list) and see if the backend is both > >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> problem with the driver initialization, please enable debug logging, > >> and look at the log file for the manila-share service, you might see > >> why and be able to fix it. > >> > >> > >> Please be aware that we're on a look out for a maintainer for the > >> GlusterFS driver for the past few releases. We're open to bug fixes > >> and maintenance patches, but there is currently no active maintainer > >> for this driver. > >> > >> > >> > I did not understand if controllers node must be connected to the > >> network where shares must be exported for virtual machines, so my > glusterfs > >> are connected on the management network where openstack controllers are > >> conencted and to the network where virtual machine are connected. > >> > > >> > My manila.conf section for glusterfs section is the following > >> > > >> > [gluster-manila565] > >> > driver_handles_share_servers = False > >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> > glusterfs_ganesha_server_username = root > >> > glusterfs_nfs_server_type = Ganesha > >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> > #glusterfs_servers = root at 10.102.185.19 > >> > ganesha_config_dir = /etc/ganesha > >> > > >> > > >> > PS > >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> > > >> > 10.102.189.0/24 is the shared network inside openstack where virtual > >> machines are connected. > >> > > >> > The gluster servers are connected on both. > >> > > >> > > >> > Any help, please ? > >> > > >> > Ignazio > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at po.ntt-tx.co.jp Mon Feb 4 01:32:54 2019 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 4 Feb 2019 10:32:54 +0900 Subject: [infra][zuul]Run only my 3rd party CI on my environment In-Reply-To: References: <17a356c2-9911-a4e9-43f3-6df04bf18a59@po.ntt-tx.co.jp> Message-ID: <071fc91d-1e35-8838-3046-a237b681d59e@po.ntt-tx.co.jp> On 2019/01/31 21:34, Sean Mooney wrote: > On Thu, 2019-01-31 at 14:27 +0900, Rikimaru Honjo wrote: >> Hello, >> >> I have a question about Zuulv3. >> >> I'm preparing third party CI for openstack/masakari PJ. >> I'd like to run my CI by my Zuulv3 instance on my environment. >> >> In my understand, I should add my pipeline to the project of the following .zuul.yaml for my purpose. 
>> >> https://github.com/openstack/masakari/blob/master/.zuul.yaml >> >> But, as a result, my Zuulv3 instance also run existed pipelines(check & gate). >> I want to run only my pipeline on my environment. >> (And, existed piplines will be run on openstack-infra environment.) >> >> How can I make my Zuulv3 instance ignore other pipeline? > you have two options that i know of. > > first you can simply not define a pipeline called gate and check in your zuul config repo. > since you are already usign it that is not an option for you. > > second if you have your own ci config project that is hosted > seperatly from upstream gerrit you can define in you pipeline that > the gate and check piplines are only for that other souce. > > e.g. if you have two connections defiend in zuul you can use the pipline > triggers to define that the triggers for the gate an check pipeline only work with your > own gerrit instance and not openstacks > > i am similar seting up a personal thridparty ci at present. > i have chosen to create a seperate pipeline with a different name for running > against upstream changes using the git.openstack.org gerrit source > > i have not pushed the patch to trigger form upstream gerrit yet > https://review.seanmooney.info/plugins/gitiles/ci-config/+/master/zuul.d/pipelines.yaml > but you can see that my gate and check piplines only trigger form the gerrit source > which is my own gerrit instacne at review.seanmooney.info > > i will be adding a dedicated pipeline for upstream as unlike my personal gerrit i never > want my ci to submit/merge patches upstream. > > i hope that helps. > > the gerrit trigger docs can be found here > https://zuul-ci.org/docs/zuul/admin/drivers/gerrit.html#trigger-configuration Thanks a lot! My question has been solved completely with your advice. I would choose the second method. > regards > sean >> >> Best regards, > > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From mikal at stillhq.com Mon Feb 4 02:15:54 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 13:15:54 +1100 Subject: [kolla] Debugging with kolla-ansible Message-ID: Heya, I'm chasing a bug at the moment, and have been able to recreate it with a stock kolla-ansible install. The next step is to add more debugging to the OpenStack code to try and chase down what's happening. Before I go off and do something wildly bonkers, does anyone have a nice way of overriding locally the container image that kolla is using for a given container? The best I've come up with at the moment is something like: - copy the contents of the container out to a directory on the host node - delete the docker container - create a new container which mimics the previous container (docker inspect and some muttering) and have that container mount the copied out stuff as a volume I considered just snapshotting the image being used by the current container, but I want a faster edit cycle than edit, snapshot, start provides. Thoughts? Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Feb 4 02:36:33 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sun, 3 Feb 2019 21:36:33 -0500 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Sun, Feb 3, 2019, 9:17 PM Michael Still Heya, > > I'm chasing a bug at the moment, and have been able to recreate it with a > stock kolla-ansible install. 
The next step is to add more debugging to the > OpenStack code to try and chase down what's happening. > > Before I go off and do something wildly bonkers, does anyone have a nice > way of overriding locally the container image that kolla is using for a > given container? > > The best I've come up with at the moment is something like: > > - copy the contents of the container out to a directory on the host node > - delete the docker container > - create a new container which mimics the previous container (docker > inspect and some muttering) and have that container mount the copied out > stuff as a volume > > I considered just snapshotting the image being used by the current > container, but I want a faster edit cycle than edit, snapshot, start > provides. > > Thoughts? > Michael > Easiest way would be to deploy from a local registry. You can pull everything from docker hub and just use kolla-build to build and push the ones you're working on. Then just delete the image from wherever it's running, run a deploy with --tags of the project you're messing with, and it'll deploy the new image, or increment the docker tag when you push it and run upgrade. If I'm missing something and oversimplifying, let me know :). -Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Mon Feb 4 02:38:53 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 13:38:53 +1100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: That sounds interesting... So if I only wanted to redeploy say the ironic_neutron_agent container, how would I do that with a tag? Its not immediately obvious to me where the command line for docker comes from in the ansible. Is that just in ansible/roles/neutron/defaults/main.yml ? If so, I could tweak the container definition for the container I want to hack with the get its code from a volume, and then redeploy just that one container, yes? Thanks for your help! Michael On Mon, Feb 4, 2019 at 1:36 PM Erik McCormick wrote: > > > On Sun, Feb 3, 2019, 9:17 PM Michael Still >> Heya, >> >> I'm chasing a bug at the moment, and have been able to recreate it with a >> stock kolla-ansible install. The next step is to add more debugging to the >> OpenStack code to try and chase down what's happening. >> >> Before I go off and do something wildly bonkers, does anyone have a nice >> way of overriding locally the container image that kolla is using for a >> given container? >> >> The best I've come up with at the moment is something like: >> >> - copy the contents of the container out to a directory on the host node >> - delete the docker container >> - create a new container which mimics the previous container (docker >> inspect and some muttering) and have that container mount the copied out >> stuff as a volume >> >> I considered just snapshotting the image being used by the current >> container, but I want a faster edit cycle than edit, snapshot, start >> provides. >> >> Thoughts? >> Michael >> > > Easiest way would be to deploy from a local registry. You can pull > everything from docker hub and just use kolla-build to build and push the > ones you're working on. > > Then just delete the image from wherever it's running, run a deploy with > --tags of the project you're messing with, and it'll deploy the new image, > or increment the docker tag when you push it and run upgrade. > > If I'm missing something and oversimplifying, let me know :). 
> > -Erik > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Feb 4 05:01:35 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 4 Feb 2019 00:01:35 -0500 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: Sorry for the delay. I didn't want to try and write this on my phone... On Sun, Feb 3, 2019, 9:39 PM Michael Still That sounds interesting... So if I only wanted to redeploy say the > ironic_neutron_agent container, how would I do that with a tag? > To roll it out, update its config, or upgrade to a new docker tag, you'd just do kolla-ansible --tags neutron deploy | reconfigure | upgrade I don't think it's granular enough to do just the agent and I'm not sure you'd want to anyway. > Its not immediately obvious to me where the command line for docker comes > from in the ansible. Is that just > in ansible/roles/neutron/defaults/main.yml ? If so, I could tweak the > container definition for the container I want to hack with the get its code > from a volume, and then redeploy just that one container, yes? > I suppose that's one way to go about quick hacks. You could add a new volume in that main.yml and then modify things in it. I think that would get messy though. There aren't any volume definitions explicitly for that container, so you'd have to add a whole section in there for it and I don't know what other side effects that might have. The slow but safe way to do it would be to point Kolla at your feature branch and rebuild the image each time you want to test a new patch set. In kolla-build.conf do something like: [neutron-base-plugin-networking-baremetal] type = url location = https://github.com/openstack/networking-baremetal.git reference = tonys-hacks then something like kolla-build --config-file /etc/kolla/kolla-build.conf --base centos --type source --push --registry localhost:5000 --logs-dir /tmp ironic-neutron-agent The really dirty but useful way to test small changes would be to just push them into the container with 'docker cp' and restart the container. Note that this will not work for config changes as those files get clobbered at startup, but for hacking the actual python bits, it'll do. Hope that's what you're looking for. If you drop by #openstack-kolla during US daylight hours you might get more suggestions from Eduardo or one of the actual project devs. They probably have fancier methods. Cheers, Erik > > Thanks for your help! > > Michael > > On Mon, Feb 4, 2019 at 1:36 PM Erik McCormick > wrote: > >> >> >> On Sun, Feb 3, 2019, 9:17 PM Michael Still > >>> Heya, >>> >>> I'm chasing a bug at the moment, and have been able to recreate it with >>> a stock kolla-ansible install. The next step is to add more debugging to >>> the OpenStack code to try and chase down what's happening. >>> >>> Before I go off and do something wildly bonkers, does anyone have a nice >>> way of overriding locally the container image that kolla is using for a >>> given container?
>>> >>> The best I've come up with at the moment is something like: >>> >>> - copy the contents of the container out to a directory on the host node >>> - delete the docker container >>> - create a new container which mimics the previous container (docker >>> inspect and some muttering) and have that container mount the copied out >>> stuff as a volume >>> >>> I considered just snapshotting the image being used by the current >>> container, but I want a faster edit cycle than edit, snapshot, start >>> provides. >>> >>> Thoughts? >>> Michael >>> >> >> Easiest way would be to deploy from a local registry. You can pull >> everything from docker hub and just use kolla-build to build and push the >> ones you're working on. >> >> Then just delete the image from wherever it's running, run a deploy with >> --tags of the project you're messing with, and it'll deploy the new image, >> or increment the docker tag when you push it and run upgrade. >> >> If I'm missing something and oversimplifying, let me know :). >> >> -Erik >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Mon Feb 4 06:12:10 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 17:12:10 +1100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Mon, Feb 4, 2019 at 4:01 PM Erik McCormick wrote: [snip detailed helpful stuff] The really dirty but useful way to test small changes would be to just push > them into the container with 'docker cp' and restart the container. Note > that this will not work for config changes as those files get clobbered at > startup, but for hacking the actual python bits, it'll do. > This was news to me to be honest. I had assumed the container filesystem got reset on process restart, but you're right and that's not true. So, editing files in the container works for my current needs. Thanks heaps! Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Mon Feb 4 08:23:23 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 4 Feb 2019 09:23:23 +0100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: Hi Michael, You could use a custom image and change the image definition in ansible, i.e. to define a different image for neutron_server you would add a variable in globals.yml like: neutron_server_image_full: "registry/repo/image_name:mytag" If what you are debugging is OpenStack code, you could use kolla dev mode, where you can change git code locally and mount the code into the python path https://docs.openstack.org/kolla-ansible/latest/contributor/kolla-for-openstack-development.html Regards El lun., 4 feb. 2019 a las 7:16, Michael Still () escribió: > On Mon, Feb 4, 2019 at 4:01 PM Erik McCormick > wrote: > > [snip detailed helpful stuff] > > The really dirty but useful way to test small changes would be to just >> push them into the container with 'docker cp' and restart the container. >> Note that this will not work for config changes as those files get >> clobbered at startup, but for hacking the actual python bits, it'll do. >> > > This was news to me to be honest. I had assumed the container filesystem > got reset on process restart, but you're right and that's not true. So, > editing files in the container works for my current needs. > > Thanks heaps! > > Michael > > > -------------- next part -------------- An HTML attachment was scrubbed...
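As a concrete illustration of the two approaches Eduardo describes above, a minimal globals.yml sketch might look like the lines below. The variable names (kolla_dev_mode, neutron_dev_mode, kolla_dev_repos_directory) and the image naming scheme are assumptions taken from memory of the kolla-ansible dev-mode documentation he links, so double-check that page for the release in use before copying anything:

    # globals.yml (sketch only)
    # Option 1: bind-mount a local git checkout into the neutron containers
    neutron_dev_mode: yes
    kolla_dev_repos_directory: "/opt/stack/"

    # Option 2: point a single service at a locally built and tagged image
    neutron_server_image_full: "localhost:5000/kolla/ubuntu-source-neutron-server:mytag"

Note that the dev-mode bind mount only covers the package's site-packages directory, which is the limitation Mark raises further down in the thread.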
URL: From gmann at ghanshyammann.com Mon Feb 4 08:31:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Feb 2019 17:31:46 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> Message-ID: <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> ---- On Thu, 31 Jan 2019 19:45:25 +0900 Thierry Carrez wrote ---- > Hi everyone, > > The "Help most needed" list[1] was created by the Technical Committee to > clearly describe areas of the OpenStack open source project which were > in the most need of urgent help. This was done partly to facilitate > communications with corporate sponsors and engineering managers, and be > able to point them to an official statement of need from "the project". > > [1] https://governance.openstack.org/tc/reference/help-most-needed.html > > This list encounters two issues. First it's hard to limit entries: a lot > of projects teams, SIGs and other forms of working groups could use > extra help. But more importantly, this list has had a very limited > impact -- new contributors did not exactly magically show up in the > areas we designated as in most need of help. > > When we raised that topic (again) at a Board+TC meeting, a suggestion > was made that we should turn the list more into a "job description" > style that would make it more palatable to the corporate world. I fear > that would not really solve the underlying issue (which is that at our > stage of the hype curve, no organization really has spare contributors > to throw at random hard problems). > > So I wonder if we should not reframe the list and make it less "this > team needs help" and more "I offer peer-mentoring in this team". A list > of contributor internships offers, rather than a call for corporate help > in the dark. I feel like that would be more of a win-win offer, and more > likely to appeal to students, or OpenStack users trying to contribute back. > > Proper 1:1 mentoring takes a lot of time, and I'm not underestimating > that. Only people that are ready to dedicate mentoring time should show > up on this new "list"... which is why it should really list identified > individuals rather than anonymous teams. It should also probably be > one-off offers -- once taken, the offer should probably go off the list. > > Thoughts on that? Do you think reframing help-needed as > mentoring-offered could help? Do you have alternate suggestions? Reframing to "mentoring-offered" is a nice idea and is something that can give good results, provided there are people willing to offer the mentoring time. Having been a mentor a few times, and as an FC SIG member, I agree that it is very hard to get new contributors, especially for the long term. Many times they disappear after a few weeks. Having a peer mentor can attract a few contributors who hesitate to start working on an item for technical reasons. Along with that, we need this list to be a live list, reiterated every cycle with the latest items, priorities and peer-mentor mapping. For example, when a team adds an item as help-wanted, do they provide a peer mentor, or do we ask for a volunteer for peer-mentorship and set the item's priority based on that? If I recall it correctly from Board+TC meeting, TC is looking for a new home for this list ? Or we continue to maintain this in TC itself which should not be much effort I feel. One of the TC members can volunteer for this and keep it up to date every cycle by organizing forum sessions, discussions etc.
Further, we can ask other groups like Outreachy, FC SIG, OUI to publicize this list every time they get a chance to interact with new contributors. -gmann > > -- > Thierry Carrez (ttx) > > From alfredo.deluca at gmail.com Mon Feb 4 08:36:02 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 09:36:02 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Clemens. So the image I downloaded is this https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 which is the latest I think. But you are right...and I noticed that too.... It doesn't have atomic binary the os-release is *NAME=Fedora* *VERSION="29 (Cloud Edition)"* *ID=fedora* *VERSION_ID=29* *PLATFORM_ID="platform:f29"* *PRETTY_NAME="Fedora 29 (Cloud Edition)"* *ANSI_COLOR="0;34"* *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* *HOME_URL="https://fedoraproject.org/ "* *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ "* *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help "* *BUG_REPORT_URL="https://bugzilla.redhat.com/ "* *REDHAT_BUGZILLA_PRODUCT="Fedora"* *REDHAT_BUGZILLA_PRODUCT_VERSION=29* *REDHAT_SUPPORT_PRODUCT="Fedora"* *REDHAT_SUPPORT_PRODUCT_VERSION=29* *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy "* *VARIANT="Cloud Edition"* *VARIANT_ID=cloud* so not sure why I don't have atomic tho On Sat, Feb 2, 2019 at 7:53 PM Clemens wrote: > Now to the failure of your part-013: Are you sure that you used the glance > image ‚fedora-atomic-latest‘ and not some other fedora image? Your error > message below suggests that your image does not contain ‚atomic‘ as part of > the image … > > + _prefix=docker.io/openstackmagnum/ > + atomic install --storage ostree --system --system-package no --set > REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name > heat-container-agent > docker.io/openstackmagnum/heat-container-agent:queens-stable > ./part-013: line 8: atomic: command not found > + systemctl start heat-container-agent > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed...
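For readers hitting the same "atomic: command not found" symptom, a quick generic check that a booted instance really is an Atomic Host image rather than the Cloud edition is sketched below; the exact VARIANT strings are from memory and should be treated as an assumption rather than authoritative output:

    $ grep VARIANT /etc/os-release   # Cloud image shows VARIANT="Cloud Edition", Atomic Host shows an Atomic variant
    $ command -v atomic rpm-ostree   # both ship on Atomic Host and are absent from the Cloud image
    $ rpm-ostree status              # only works on an ostree-based (Atomic) system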
URL: From mark at stackhpc.com Mon Feb 4 09:48:05 2019 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 4 Feb 2019 09:48:05 +0000 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Mon, 4 Feb 2019 at 08:25, Eduardo Gonzalez wrote: > Hi Michael, > > You could use a custom image and change the image definition in ansible, > ie for define a different image for neutron_server you would add a variable > in globals.yml like: > > > neutron_server_image_full: "registry/repo/image_name:mytag:" > > If what you are debugin is openstack code, you could use kolla dev mode, > where you can change git code locally and mount the code into the python > path > https://docs.openstack.org/kolla-ansible/latest/contributor/kolla-for-openstack-development.html > > Regards > Just a warning: I have recently had issues with dev mode because it does not do a pip install, but mounts the source code into the site-packages // directory, if there are new source files these will not be included in the package's file manifest. Also this won't affect any files outside of site-packages//. I just raised a bug [1] on this. What I often do when developing in a tight-ish loop on a single host is something like this: docker exec -it pip install -e git+https://# docker restart You have to be careful, since if the service doesn't start, the container will fail to start, and docker exec won't work. At that point you need to delete the container and redeploy. [1] https://bugs.launchpad.net/kolla-ansible/+bug/1814515 Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 10:45:14 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 11:45:14 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: I used fedora-magnum-27-4 and it works Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > Hi Clemens. > So the image I downloaded is this > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 > which is the latest I think. > But you are right...and I noticed that too.... It doesn't have atomic > binary > the os-release is > > *NAME=Fedora* > *VERSION="29 (Cloud Edition)"* > *ID=fedora* > *VERSION_ID=29* > *PLATFORM_ID="platform:f29"* > *PRETTY_NAME="Fedora 29 (Cloud Edition)"* > *ANSI_COLOR="0;34"* > *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* > *HOME_URL="https://fedoraproject.org/ "* > *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ > "* > *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help > "* > *BUG_REPORT_URL="https://bugzilla.redhat.com/ > "* > *REDHAT_BUGZILLA_PRODUCT="Fedora"* > *REDHAT_BUGZILLA_PRODUCT_VERSION=29* > *REDHAT_SUPPORT_PRODUCT="Fedora"* > *REDHAT_SUPPORT_PRODUCT_VERSION=29* > *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy > "* > *VARIANT="Cloud Edition"* > *VARIANT_ID=cloud* > > > so not sure why I don't have atomic tho > > > On Sat, Feb 2, 2019 at 7:53 PM Clemens > wrote: > >> Now to the failure of your part-013: Are you sure that you used the >> glance image ‚fedora-atomic-latest‘ and not some other fedora image? 
Your >> error message below suggests that your image does not contain ‚atomic‘ as >> part of the image … >> >> + _prefix=docker.io/openstackmagnum/ >> + atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> ./part-013: line 8: atomic: command not found >> + systemctl start heat-container-agent >> Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found. >> >> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca > >: >> >> Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found. >> >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Mon Feb 4 11:39:25 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 12:39:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: thanks ignazio Where can I get it from? On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano wrote: > I used fedora-magnum-27-4 and it works > > Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < > alfredo.deluca at gmail.com> ha scritto: > >> Hi Clemens. >> So the image I downloaded is this >> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >> which is the latest I think. >> But you are right...and I noticed that too.... It doesn't have atomic >> binary >> the os-release is >> >> *NAME=Fedora* >> *VERSION="29 (Cloud Edition)"* >> *ID=fedora* >> *VERSION_ID=29* >> *PLATFORM_ID="platform:f29"* >> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >> *ANSI_COLOR="0;34"* >> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >> *HOME_URL="https://fedoraproject.org/ "* >> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >> "* >> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >> "* >> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >> "* >> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >> *REDHAT_SUPPORT_PRODUCT="Fedora"* >> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >> "* >> *VARIANT="Cloud Edition"* >> *VARIANT_ID=cloud* >> >> >> so not sure why I don't have atomic tho >> >> >> On Sat, Feb 2, 2019 at 7:53 PM Clemens >> wrote: >> >>> Now to the failure of your part-013: Are you sure that you used the >>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>> error message below suggests that your image does not contain ‚atomic‘ as >>> part of the image … >>> >>> + _prefix=docker.io/openstackmagnum/ >>> + atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> ./part-013: line 8: atomic: command not found >>> + systemctl start heat-container-agent >>> Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found. >>> >>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca >> >: >>> >>> Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found. 
>>> >>> >>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 11:55:41 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 12:55:41 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: wget https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180212.2/CloudImages/x86_64/images/Fedora-Atomic-27-20180212.2.x86_64.qcow2 Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ignaziocassano at gmail.com Mon Feb 4 11:57:25 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 12:57:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Then upload it with: openstack image create \ --disk-format=qcow2 \ --container-format=bare \ --file=Fedora-Atomic-27-20180212.2.x86_64.qcow2\ --property os_distro='fedora-atomic' \ fedora-atomic-latest Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... 
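As a follow-up to the upload command above, a couple of standard CLI checks (nothing deployment-specific assumed) can confirm the image landed with the property magnum keys on:

    $ openstack image show fedora-atomic-latest
    $ openstack image list --property os_distro=fedora-atomic

If the second command does not list the image, the os_distro=fedora-atomic property did not make it onto the image and magnum will not accept it for the cluster template.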
URL: From ignaziocassano at gmail.com Mon Feb 4 12:02:25 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 13:02:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: I also suggest to change dns in your external network used by magnum. Using openstack dashboard you can change it to 8.8.8.8 (If I remember fine you wrote that you can ping 8.8.8.8 from kuke baster) Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... 
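The DNS change suggested above can also be made from the CLI instead of the dashboard; this is only a sketch with placeholder names, since the real subnet depends on the network the cluster template uses:

    $ openstack subnet list
    $ openstack subnet show <cluster-subnet> -c dns_nameservers
    $ openstack subnet set --dns-nameserver 8.8.8.8 <cluster-subnet>

Already-running instances may need a DHCP lease refresh (or a reboot) before the new resolver appears in /etc/resolv.conf.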
URL: From bansalnehal26 at gmail.com Mon Feb 4 05:16:14 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Mon, 4 Feb 2019 10:46:14 +0530 Subject: Regarding supporting version of Nova-Docker Driver Message-ID: Hi, I have done a manual installation of OpenStack Queens version and wanted to run docker containers on it using Nova-Docker driver. But the git repository says it is no longer a maintained project. Could you tell me if it supports the Queens release. Thank you. Regards, Nehal Bansal -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Mon Feb 4 13:02:07 2019 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 4 Feb 2019 08:02:07 -0500 Subject: Regarding supporting version of Nova-Docker Driver In-Reply-To: References: Message-ID: Nehal, you found the right info. it is not maintained. please look at alternatives like Zun ( https://docs.openstack.org/zun/latest/ ) Thanks, Dims On Mon, Feb 4, 2019 at 7:45 AM Nehal Bansal wrote: > Hi, > > I have done a manual installation of OpenStack Queens version and wanted > to run docker containers on it using Nova-Docker driver. But the git > repository says it is no longer a maintained project. Could you tell me if > it supports the Queens release. > > Thank you. > > Regards, > Nehal Bansal > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Mon Feb 4 13:25:41 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 14:25:41 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Ignazio. Thanks for the link...... so now at least atomic is present on the system. Also, I already had 8.8.8.8 on the system. So I can connect over the floating IP to the kube master....then I can ping 8.8.8.8, but names don't resolve...so if I ping 8.8.8.8 *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* but if I ping google.com it doesn't resolve. I also can't find dig or nslookup on Fedora to check. resolv.conf has *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* *nameserver 8.8.8.8* It's all so weird. On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano wrote: > I also suggest to change dns in your external network used by magnum. > Using openstack dashboard you can change it to 8.8.8.8 (If I remember > fine you wrote that you can ping 8.8.8.8 from kuke baster) > > Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < > alfredo.deluca at gmail.com> ha scritto: > >> thanks ignazio >> Where can I get it from? >> >> >> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano >> wrote: >> >>> I used fedora-magnum-27-4 and it works >>> >>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>> alfredo.deluca at gmail.com> ha scritto: >>> >>>> Hi Clemens. >>>> So the image I downloaded is this >>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>> which is the latest I think. >>>> But you are right...and I noticed that too....
It doesn't have atomic >>>> binary >>>> the os-release is >>>> >>>> *NAME=Fedora* >>>> *VERSION="29 (Cloud Edition)"* >>>> *ID=fedora* >>>> *VERSION_ID=29* >>>> *PLATFORM_ID="platform:f29"* >>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>> *ANSI_COLOR="0;34"* >>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>> *HOME_URL="https://fedoraproject.org/ "* >>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>> "* >>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>> "* >>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>> "* >>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>> "* >>>> *VARIANT="Cloud Edition"* >>>> *VARIANT_ID=cloud* >>>> >>>> >>>> so not sure why I don't have atomic tho >>>> >>>> >>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>> wrote: >>>> >>>>> Now to the failure of your part-013: Are you sure that you used the >>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>> part of the image … >>>>> >>>>> + _prefix=docker.io/openstackmagnum/ >>>>> + atomic install --storage ostree --system --system-package no --set >>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>> heat-container-agent >>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>> ./part-013: line 8: atomic: command not found >>>>> + systemctl start heat-container-agent >>>>> Failed to start heat-container-agent.service: Unit >>>>> heat-container-agent.service not found. >>>>> >>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Failed to start heat-container-agent.service: Unit >>>>> heat-container-agent.service not found. >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From shakhat at gmail.com Mon Feb 4 13:42:04 2019 From: shakhat at gmail.com (Ilya Shakhat) Date: Mon, 4 Feb 2019 14:42:04 +0100 Subject: OpenStack code and GPL libraries Message-ID: Hi, I am experimenting with automatic verification of code licenses of OpenStack projects and see that one of Rally dependencies has GPL3 license [1]. I'm not a big expert in licenses, but isn't it a violation of GPL? In particular what concerns me is: [2] - " If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under the GPL or a GPL-compatible license? (#IfLibraryIsGPL) Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole must be licensed under the GPL. " and [3] - " This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with ASF's requirement that all Apache software must be distributed under the Apache License 2.0. We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. 
" [1] http://paste.openstack.org/show/744483/ [2] https://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL [3] https://www.apache.org/licenses/GPL-compatibility.html Should this issue be fixed? If yes, should we have a gate job to block adding of such dependencies? Thanks, Ilya -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Feb 4 13:51:04 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 4 Feb 2019 05:51:04 -0800 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: On Thu, Jan 31, 2019 at 8:37 AM Michael Turek wrote: > [trim] > The job is able to clean the node during devstack, successfully deploy > to the node during the tempest run, and is successfully validated via > ssh. The node then moves to clean failed with a network error [1], and > the job subsequently fails. Sometime between the validation and > attempting to clean, the neutron port associated with the ironic port is > deleted and a new port comes into existence. Where I'm having trouble is > finding out what this port is. Based on it's MAC address It's a virtual > port, and its MAC is not the same as the ironic port. I think we landed code around then to address the issue of duplicate mac addresses where a port gets orphaned by external processes, so by default I seem to remember the logic now just resets the MAC if we no longer need the port. What are the network settings your operating the job with? It seems like 'flat' is at least the network_interface based on what your describing. > > We could add an IP to the job to fix it, but I'd rather not do that > needlessly. > From juliaashleykreger at gmail.com Mon Feb 4 14:03:30 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 4 Feb 2019 06:03:30 -0800 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190201152652.cnudbniuraiflybj@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> <20190201152652.cnudbniuraiflybj@redhat.com> Message-ID: On Fri, Feb 1, 2019 at 7:34 AM Lars Kellogg-Stedman wrote: > > On Thu, Jan 31, 2019 at 12:09:07PM +0100, Dmitry Tantsur wrote: > > Some first steps have been done: > > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ownership-field.html. > > We need someone to drive the futher design and implementation > > though. > > That spec seems to be for a strictly informational field. Reading > through it, I guess it's because doing something like this... > > openstack baremetal node set --property owner=lars > > ...leads to sub-optimal performance when trying to filter a large > number of hosts. I see that it's merged already, so I guess this is > commenting-after-the-fact, but that seems like the wrong path to > follow: I can see properties like "the contract id under which this > system was purchased" being as or more important than "owner" from a > large business perspective, so making it easier to filter by property > on the server side would seem to be a better solution. > > Or implement full multi-tenancy so that "owner" is more than simply > informational, of course :). 
> My original thought was more to enable multi-purpose usage, and, should we ever get to a point where we want to offer filtered views, to say a baremetal_user can only see machines whose owner is set by their tenant. Sub-optimal for sure, but in order not to break baremetal_admin level usage we have to have a compromise. The alternative that comes to mind is to build a new permission matrix model that delineates the two, but at some point someone is still the "owner" and is responsible for the hardware. The details we kind of want to keep out of storage and consideration in ironic are the more CMDB-ish details, that would be things like contracts and acquisition dates. The other thing we should consider is the "Give me a physical machine" versus "I have my machines, I need to use them" approaches and how to model that. I suspect this is quickly becoming a Forum worthy session. > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > From km.giuseppesannino at gmail.com Mon Feb 4 14:25:22 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Mon, 4 Feb 2019 15:25:22 +0100 Subject: [kolla] Magnum K8s cluster creation time out due to "Failed to contact endpoint at https ... certificate verify failed" error in magnum-conductor Message-ID: Hi all, this is my first post on this mailing list especially for "kolla" related issues. Hope you can help and hope this is the right channel to request support. I have a problem with Magnum during the creation of a K8S cluster. The request times out. Looking at the magnum-conductor logs I can see: Failed to contact the endpoint at https://:5000 for discovery. Fallback to using that endpoint as the base url.: SSLError: SSL exception connecting to https:// :5000: HTTPSConnectionPool(host=' ', port=5000): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) I had a similar issue with Kuryr. The service is trying to contact keystone over the external IP address without certificates. In kuryr, the workaround was to set the "endpoint_type" for neutron to "internal". In magnum.conf that's already the situation. Any suggestion on how to address this issue? Here you can find some details about the deployment: --------------------------- Host nodes: Baremetal OS: Queens kolla-ansible: 6.1.0 Deployment: multinode (1+1). Kolla installed on the controller host kolla_install_type: source kolla_base_distro: ubuntu External/internal interfaces: separated kolla_enable_tls_external: "yes" Services: enable_cinder: "yes" enable_cinder_backend_lvm: "yes" enable_etcd: "yes" enable_fluentd: "yes" enable_haproxy: "yes" enable_heat: "yes" enable_horizon: "yes" enable_horizon_magnum: "{{ enable_magnum | bool }}" enable_horizon_zun: "{{ enable_zun | bool }}" enable_kuryr: "yes" enable_magnum: "yes" enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}" enable_zun: "yes" glance_backend_file: "yes" nova_compute_virt_type: "qemu" --------------------------- BR and many thanks in advance /Giuseppe -------------- next part -------------- An HTML attachment was scrubbed...
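A generic way to narrow down this kind of "certificate verify failed" error, independent of magnum itself, is to look at what certificate is actually served on the endpoint the conductor is hitting. The host names and file paths below are placeholders, so this is only a debugging sketch, not a confirmed fix for this deployment:

    $ openssl s_client -connect <external_vip>:5000 -showcerts </dev/null
    $ curl -v https://<external_vip>:5000/ --cacert /path/to/external-ca.crt
    $ curl -v http://<internal_vip>:5000/

If the external VIP presents a self-signed or internally signed certificate, the usual options are to make that CA available to the service (keystoneauth sessions accept a cafile option in their auth sections) or to make sure the service really talks to the internal endpoint, as was done for kuryr; which of those applies here depends on the kolla-ansible version and configuration.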
URL: From smooney at redhat.com Mon Feb 4 14:36:29 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 04 Feb 2019 14:36:29 +0000 Subject: OpenStack code and GPL libraries In-Reply-To: References: Message-ID: <6cc948eae81115321508d2d6fa8bcc236012d9d9.camel@redhat.com> On Mon, 2019-02-04 at 14:42 +0100, Ilya Shakhat wrote: > Hi, > > I am experimenting with automatic verification of code licenses of OpenStack projects and see that one of Rally > dependencies has GPL3 license [1]. > I'm not a big expert in licenses, but isn't it a violation of GPL? In particular what concerns me is: > > [2] - " > If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under > the GPL or a GPL-compatible license? (#IfLibraryIsGPL) > > Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. > The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole > must be licensed under the GPL. > " > > and > > [3] - " > This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 > software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with > ASF's requirement that all Apache software must be distributed under the Apache License 2.0. > > We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. > " > > [1] http://paste.openstack.org/show/744483/ > [2] https://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL > [3] https://www.apache.org/licenses/GPL-compatibility.html > > Should this issue be fixed? If yes, should we have a gate job to block adding of such dependencies? it looks like it was added as part of this change https://github.com/openstack/rally/commit/ee2f469d8f347fbf8e0dcd84cf3f52e41eb98090 i have not checked but if it is only used by the optional elasticSearch plugin then i'm not sure there is a licence conflict in the general case. rally can be used entirely without the elastic search exporter plugin so at most the GPL contamination would be confined to that plugin provided the combination of the plugin and rally is not considered a single combined work. the clauses of the GPL only take effect on distribution, as such if you distribute rally without the elastic search plugin or you distribute in such a way as the elastic search plugin is not loaded i think no conflict would exist. i'm not a legal expert so this is just my opinion but from reviewing https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins briefly it is arguable that loading the elastic search plugin would make rally and that plugin a single combined application which looking at https://www.gnu.org/licenses/gpl-faq.en.html#NFUseGPLPlugins would imply that the GPL would have to apply to the entire combination of rally and the elastic search plugin. that would depend on how the plugin was loaded. if the exporter plugin is forked into a separate python interpreter instance instead of imported as a lib and invoked via a function call it would not form a single combined program but i have not looked at how rally uses the plugin. it would likely be good for legal and the rally core team to review. the simplest solution if an issue is determined to exist would be to move the elastic search plugin into its own repo so it is distributed separately from rally.
failing that the code that depends on morph would have to be removed to resolve the conflict. as i said im not a leagl expert so this is just my personal opinion as such take it with a grain of salt. regard sean > > Thanks, > Ilya From fungi at yuggoth.org Mon Feb 4 15:05:15 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 15:05:15 +0000 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: References: Message-ID: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: > I am experimenting with automatic verification of code licenses of > OpenStack projects and see that one of Rally dependencies has GPL3 > license [...] To start off, it looks like the license for morph is already known to the Rally developers, based on the inline comment for it at https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 (so hopefully this is no real surprise). The source of truth for our licensing policies, as far as projects governed by the OpenStack Technical Committee are concerned (which openstack/rally is), can be found here: https://governance.openstack.org/tc/reference/licensing.html It has a carve out for "tools that are run with or on OpenStack projects only during validation or testing phases of development" which "may be licensed under any OSI-approved license" and since the README.rst for Rally states it's a "tool & framework that allows one to write simple plugins and combine them in complex tests scenarios that allows to perform all kinds of testing" it probably meets those criteria. As for concern that a Python application which imports another Python library at runtime inherits its license and so becomes derivative of that work, that has been the subject of much speculation. In particular, whether a Python import counts as "dynamic linking" in GPL 3.0 section 1 is debatable: https://bytes.com/topic/python/answers/41019-python-gpl https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking I'm most definitely not a lawyer, but from what I've been able to piece together it's the combination of rally+morph which potentially becomes GPLv3-licensed when distributed, not the openstack/rally source code itself. This is really more of a topic for the legal-discuss mailing list, however, so I am cross-posting my reply there for completeness. To readers only of the legal-discuss ML, the original post can be found archived here: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 4 15:22:31 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 15:22:31 +0000 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> Message-ID: <20190204152231.qgiryyjn7omu642z@yuggoth.org> On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: [...] > If I recall it correctly from Board+TC meeting, TC is looking for > a new home for this list ? Or we continue to maintain this in TC > itself which should not be much effort I feel. [...] It seems like you might be referring to the in-person TC meeting we held on the Sunday prior to the Stein PTG in Denver (Alan from the OSF BoD was also present). Doug's recap can be found in the old openstack-dev archive here: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html Quoting Doug, "...it wasn't clear that the TC was the best group to manage a list of 'roles' or other more detailed information. We discussed placing that information into team documentation or hosting it somewhere outside of the governance repository where more people could contribute." (If memory serves, this was in response to earlier OSF BoD suggestions that retooling the Help Wanted list to be a set of business-case-focused job descriptions might garner more uptake from the organizations they represent.) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Feb 4 15:40:22 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 10:40:22 -0500 Subject: [karbor][goals][python3] looking for karbor PTL Message-ID: I am trying to reach the Karbor PTL, Pengju Jiao, to ask some questions about the status of python 3 support. My email sent to the address on file in the governance repository has bounced. Does anyone have a current email address? -- Doug From doug at doughellmann.com Mon Feb 4 15:56:01 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 10:56:01 -0500 Subject: [goal][python3] week R-9 update Message-ID: This is the periodic update for the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == Current Status == We still have a fairly large number of projects without a python 3 functional test job running at all: adjutant aodh ceilometer cloudkitty cyborg freezer horizon karbor magnum masakari mistral monasca-agent monasca-ui murano murano-agent neutron-vpnaas qinling rally searchlight storlets swift tricircle watcher zaqar networking-l2gw and several with the job listed as non-voting: designate neutron-fwaas sahara senlin tacker I have contacted the PTLs of all of the affected teams directly to ask for updates. == Ongoing and Completed Work == There are still a handful of open patches to update tox, documentation, and python 3.6 unit tests. 
+-------------------+--------------+---------+----------+---------+------------+-------+---------------+
| Team              | tox defaults | Docs    | 3.6 unit | Failing | Unreviewed | Total | Champion      |
+-------------------+--------------+---------+----------+---------+------------+-------+---------------+
| adjutant          | 1/ 1         | -       | +        | 0       | 1          | 2     | Doug Hellmann |
| barbican          | +            | 1/ 3    | +        | 1       | 1          | 7     | Doug Hellmann |
| heat              | 1/ 8         | +       | 1/ 7     | 0       | 0          | 21    | Doug Hellmann |
| InteropWG         | 2/ 3         | +       | +        | 0       | 0          | 9     | Doug Hellmann |
| ironic            | 1/ 10        | +       | +        | 0       | 0          | 35    | Doug Hellmann |
| magnum            | 1/ 5         | +       | +        | 0       | 0          | 10    |               |
| masakari          | 1/ 4         | +       | -        | 0       | 1          | 5     | Nguyen Hai    |
| monasca           | 1/ 17        | +       | +        | 0       | 1          | 34    | Doug Hellmann |
| neutron           | 2/ 17        | +       | +        | 1       | 1          | 44    | Doug Hellmann |
| OpenStack Charms  | 8/ 73        | -       | -        | 7       | 2          | 73    | Doug Hellmann |
| Quality Assurance | 2/ 10        | +       | +        | 0       | 1          | 31    | Doug Hellmann |
| rally             | 1/ 3         | +       | -        | 1       | 1          | 5     | Nguyen Hai    |
| sahara            | 1/ 6         | +       | +        | 0       | 0          | 13    | Doug Hellmann |
| swift             | 2/ 3         | +       | +        | 1       | 1          | 6     | Nguyen Hai    |
| tacker            | 2/ 4         | +       | +        | 1       | 0          | 9     | Nguyen Hai    |
| Telemetry         | 1/ 7         | +       | +        | 0       | 1          | 19    | Doug Hellmann |
| tripleo           | 1/ 54        | +       | +        | 0       | 1          | 92    | Doug Hellmann |
| trove             | 1/ 5         | +       | +        | 0       | 0          | 11    | Doug Hellmann |
| User Committee    | 3/ 3         | +       | -        | 0       | 2          | 5     | Doug Hellmann |
|                   | 43/ 61       | 56/ 57  | 54/ 55   | 12      | 14         | 1071  |               |
+-------------------+--------------+---------+----------+---------+------------+-------+---------------+

== Next Steps ==
We need to be wrapping up work on this goal by approving or abandoning the patches listed above (assuming they aren't needed) and adding the functional test jobs to the projects that don't have them.

== How can you help? ==
1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+)
2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects.
3. Work on adding functional test jobs that run under Python 3.

== How can you ask for help? ==
If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list.

== Reference Material ==
Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open
Storyboard: https://storyboard.openstack.org/#!/board/104
Zuul migration notes: https://etherpad.openstack.org/p/python3-first
Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586
Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3
-- Doug
From jiaopengju at qq.com Mon Feb 4 16:06:55 2019 From: jiaopengju at qq.com (=?ISO-8859-1?B?amlhb3BlbmdqdQ==?=) Date: Tue, 5 Feb 2019 00:06:55 +0800 Subject: [karbor][goals][python3] looking for karbor PTL References: Message-ID: Hi Doug, This is the email which I am using for subscribing the email list. And my openstack account emails:jiaopengju at cmss.chinamobile.com and pj.jiao at 139.com are still in use. You can choose any one of them to contact me. I am on vacation now, but I will reply your email ASAP.
Thanks, Pengju Jiao ------------------ Original ------------------ From: Doug Hellmann Date: Mon,Feb 4,2019 11:42 PM To: openstack-discuss Subject: Re: [karbor][goals][python3] looking for karbor PTL I am trying to reach the Karbor PTL, Pengju Jiao, to ask some questions about the status of python 3 support. My email sent to the address on file in the governance repository has bounced. Does anyone have a current email address? -- Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Feb 4 16:36:49 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 4 Feb 2019 10:36:49 -0600 Subject: [cinder] Proposed mid-cycle schedule available Message-ID: All, I have put together a proposed schedule for our mid-cycle that starts tomorrow.  You can see the schedule here: https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning I have tried to keep the topics that are of interest to people in Europe/Asia earlier in the day.  If anyone has concerns with the schedule, please add notes in the etherpad. Look forward to meeting with you all tomorrow. Jay From ignaziocassano at gmail.com Mon Feb 4 16:45:49 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 17:45:49 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Alfredo, try to check security group linked to your kubemaster. Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca ha scritto: > Hi Ignazio. Thanks for the link...... so > > Now at least atomic is present on the system. > Also I ve already had 8.8.8.8 on the system. So I can connect on the > floating IP to the kube master....than I can ping 8.8.8.8 but for example > doesn't resolve the names...so if I ping 8.8.8.8 > *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* > *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* > *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* > *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* > > but if I ping google.com doesn't resolve. I can't either find on fedora > dig or nslookup to check > resolv.conf has > *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* > *nameserver 8.8.8.8* > > It\s all so weird. > > > > > On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano > wrote: > >> I also suggest to change dns in your external network used by magnum. >> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >> fine you wrote that you can ping 8.8.8.8 from kuke baster) >> >> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> thanks ignazio >>> Where can I get it from? >>> >>> >>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>> ignaziocassano at gmail.com> wrote: >>> >>>> I used fedora-magnum-27-4 and it works >>>> >>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> Hi Clemens. >>>>> So the image I downloaded is this >>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>> which is the latest I think. >>>>> But you are right...and I noticed that too.... 
It doesn't have atomic >>>>> binary >>>>> the os-release is >>>>> >>>>> *NAME=Fedora* >>>>> *VERSION="29 (Cloud Edition)"* >>>>> *ID=fedora* >>>>> *VERSION_ID=29* >>>>> *PLATFORM_ID="platform:f29"* >>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>> *ANSI_COLOR="0;34"* >>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>> "* >>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>> "* >>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>> "* >>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>> "* >>>>> *VARIANT="Cloud Edition"* >>>>> *VARIANT_ID=cloud* >>>>> >>>>> >>>>> so not sure why I don't have atomic tho >>>>> >>>>> >>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>> wrote: >>>>> >>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>> part of the image … >>>>>> >>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>> + atomic install --storage ostree --system --system-package no --set >>>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>> heat-container-agent >>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>> ./part-013: line 8: atomic: command not found >>>>>> + systemctl start heat-container-agent >>>>>> Failed to start heat-container-agent.service: Unit >>>>>> heat-container-agent.service not found. >>>>>> >>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Failed to start heat-container-agent.service: Unit >>>>>> heat-container-agent.service not found. >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Mon Feb 4 16:52:11 2019 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 4 Feb 2019 11:52:11 -0500 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: <8bc77794-ddc9-6b08-138a-a741729fcd48@linux.vnet.ibm.com> Hey Julia On 2/4/19 8:51 AM, Julia Kreger wrote: > On Thu, Jan 31, 2019 at 8:37 AM Michael Turek > wrote: > [trim] >> The job is able to clean the node during devstack, successfully deploy >> to the node during the tempest run, and is successfully validated via >> ssh. The node then moves to clean failed with a network error [1], and >> the job subsequently fails. Sometime between the validation and >> attempting to clean, the neutron port associated with the ironic port is >> deleted and a new port comes into existence. Where I'm having trouble is >> finding out what this port is. Based on it's MAC address It's a virtual >> port, and its MAC is not the same as the ironic port. > I think we landed code around then to address the issue of duplicate > mac addresses where a port gets orphaned by external processes, so by > default I seem to remember the logic now just resets the MAC if we no > longer need the port. Interesting! 
I'll look for the patch. If you have it handy please share. > What are the network settings your operating the job with? It seems > like 'flat' is at least the network_interface based on what your > describing. We are using a single  flat provider network with two available IPs (one for the DHCP server and one for the server itself) Here is a paste of a bunch of the network resources (censored here and there just in case). http://paste.openstack.org/show/744513/ >> We could add an IP to the job to fix it, but I'd rather not do that >> needlessly. >> From doug at doughellmann.com Mon Feb 4 17:25:36 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 12:25:36 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <20190204152231.qgiryyjn7omu642z@yuggoth.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > [...] >> If I recall it correctly from Board+TC meeting, TC is looking for >> a new home for this list ? Or we continue to maintain this in TC >> itself which should not be much effort I feel. > [...] > > It seems like you might be referring to the in-person TC meeting we > held on the Sunday prior to the Stein PTG in Denver (Alan from the > OSF BoD was also present). Doug's recap can be found in the old > openstack-dev archive here: > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > > Quoting Doug, "...it wasn't clear that the TC was the best group to > manage a list of 'roles' or other more detailed information. We > discussed placing that information into team documentation or > hosting it somewhere outside of the governance repository where more > people could contribute." (If memory serves, this was in response to > earlier OSF BoD suggestions that retooling the Help Wanted list to > be a set of business-case-focused job descriptions might garner more > uptake from the organizations they represent.) > -- > Jeremy Stanley Right, the feedback was basically that we might have more luck convincing companies to provide resources if we were more specific about how they would be used by describing the work in more detail. When we started thinking about how that change might be implemented, it seemed like managing the information a well-defined job in its own right, and our usual pattern is to establish a group of people interested in doing something and delegating responsibility to them. When we talked about it in the TC meeting in Denver we did not have any TC members volunteer to drive the implementation to the next step by starting to recruit a team. During the Train series goal discussion in Berlin we talked about having a goal of ensuring that each team had documentation for bringing new contributors onto the team. Offering specific mentoring resources seems to fit nicely with that goal, and doing it in each team's repository in a consistent way would let us build a central page on docs.openstack.org to link to all of the team contributor docs, like we link to the user and installation documentation, without requiring us to find a separate group of people to manage the information across the entire community. 
So, maybe the next step is to convince someone to champion a goal of improving our contributor documentation, and to have them describe what the documentation should include, covering the usual topics like how to actually submit patches as well as suggestions for how to describe areas where help is needed in a project and offers to mentor contributors. Does anyone want to volunteer to serve as the goal champion for that? -- Doug From andr.kurilin at gmail.com Mon Feb 4 17:57:11 2019 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 4 Feb 2019 19:57:11 +0200 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> References: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> Message-ID: Hi stackers! Thanks for raising this topic. I recently removed morph dependency ( https://review.openstack.org/#/c/634741 ) and I hope to release a new version of Rally as soon as possible. пн, 4 февр. 2019 г. в 17:14, Jeremy Stanley : > On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: > > I am experimenting with automatic verification of code licenses of > > OpenStack projects and see that one of Rally dependencies has GPL3 > > license > [...] > > To start off, it looks like the license for morph is already known > to the Rally developers, based on the inline comment for it at > > https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 > (so hopefully this is no real surprise). > > The source of truth for our licensing policies, as far as projects > governed by the OpenStack Technical Committee are concerned (which > openstack/rally is), can be found here: > > https://governance.openstack.org/tc/reference/licensing.html > > It has a carve out for "tools that are run with or on OpenStack > projects only during validation or testing phases of development" > which "may be licensed under any OSI-approved license" and since > the README.rst for Rally states it's a "tool & framework that allows > one to write simple plugins and combine them in complex tests > scenarios that allows to perform all kinds of testing" it probably > meets those criteria. > > As for concern that a Python application which imports another > Python library at runtime inherits its license and so becomes > derivative of that work, that has been the subject of much > speculation. In particular, whether a Python import counts as > "dynamic linking" in GPL 3.0 section 1 is debatable: > > https://bytes.com/topic/python/answers/41019-python-gpl > > https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi > > https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed > > https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking > > I'm most definitely not a lawyer, but from what I've been able to > piece together it's the combination of rally+morph which potentially > becomes GPLv3-licensed when distributed, not the openstack/rally > source code itself. This is really more of a topic for the > legal-discuss mailing list, however, so I am cross-posting my reply > there for completeness. > > To readers only of the legal-discuss ML, the original post can be > found archived here: > > > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html > > -- > Jeremy Stanley > -- Best regards, Andrey Kurilin. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Mon Feb 4 18:26:57 2019 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 4 Feb 2019 12:26:57 -0600 Subject: [OpenStack Foundation] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> Message-ID: <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> Hi everyone, Just under 12 hours left to vote for the sessions you’d like to see at the Denver Open Infrastructure Summit ! REGISTER Register for the Summit before prices increase in late February! VISA APPLICATION PROCESS Make sure to secure your Visa soon. More information about the Visa application process. TRAVEL SUPPORT PROGRAM February 27 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (February 28 at 7:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org > On Jan 31, 2019, at 12:29 PM, Ashlee Ferguson wrote: > > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is open! > > You can VOTE HERE , but what does that mean? > > Now that the Call for Presentations has closed, all submissions are available for community vote and input. After community voting closes, the volunteer Programming Committee members will receive the presentations to review and determine the final selections for Summit schedule. While community votes are meant to help inform the decision, Programming Committee members are expected to exercise judgment in their area of expertise and help ensure diversity of sessions and speakers. View full details of the session selection process here . > > In order to vote, you need an OSF community membership. If you do not have an account, please create one by going to openstack.org/join . If you need to reset your password, you can do that here . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all Summit-related information. > > REGISTER > Register for the Summit before prices increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From svasudevan at suse.com Mon Feb 4 19:36:03 2019 From: svasudevan at suse.com (Swaminathan Vasudevan) Date: Mon, 04 Feb 2019 12:36:03 -0700 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. Message-ID: <5C589423020000D7000400BA@prv-mh.provo.novell.com> Item Type: Note Date: Monday, 4 Feb 2019 Hi Neutrinos,Here is the summary of the neutron bugs that came in last week ( starting from Jan 29th - Feb 4th). 
https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing Thanks Swaminathan Vasudevan. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: journal.ics Type: text/calendar Size: 947 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 4 19:57:06 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 19:57:06 +0000 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. In-Reply-To: <5C589423020000D7000400BA@prv-mh.provo.novell.com> References: <5C589423020000D7000400BA@prv-mh.provo.novell.com> Message-ID: <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> On 2019-02-04 12:36:03 -0700 (-0700), Swaminathan Vasudevan wrote: > Hi Neutrinos,Here is the summary of the neutron bugs that came in last week ( starting from Jan 29th - Feb 4th). > > https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing If it's just a collaboratively-edited spreadsheet application you need, don't forget we maintain https://ethercalc.openstack.org/ (hopefully soon also reachable as ethercalc.opendev.org) which runs entirely on free software and is usable from parts of the World where Google's services are not (for example, mainland China). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From e0ne at e0ne.info Mon Feb 4 20:28:30 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 4 Feb 2019 22:28:30 +0200 Subject: [horizon][plugins][vitrage][heat][ironic][manila] Integration tests on gates Message-ID: Hi team, A few weeks ago we enabled horizon-integration-tests job[1]. It's a set of selenium-based test cases to verify that Horizon works as expected from the user's perspective. Like any new job, it's added in a non-voting mode for now. During the PTG, I'd got several conversations with project teams that it would be good to have such tests in each plugin to verify that plugin works correctly with a current Horizon version. We've got about 30 plugins in the Plugin Registry [2]. Honestly, without any kind of testing in most of the plugins, we can't be sure that they work well with a current version of Horizon. That's why we decided to implement some kind of smoke tests for plugins based on Horizon integration tests framework. These tests should verify that a plugin is installed and pages could be opened in a browser. We will run these tests on the experimental queue and/or on some schedule on Horizon gates to verify that plugins are maintained and working properly. My idea is to have such a list of 'tested' plugins, so we can add 'Maintained' label to the Plugin Registry. Once these jobs become voting, we can add a label 'Verified'. I think such a schedule looks reasonable: * Stein-Train release cycles - add non-voting jobs for each maintained plugin and introduce "Maintained" label * Train-U release cycles - makes stable jobs voting and introduce "Verified" label in the Horizon Plugin registry I do understand that some teams don't have enough resources to maintain integration tests, so I'm stepping as a volunteer to introduce such tests and jobs for the project. I already published patches for Vitrage and Heat [3] plugins and will do the same for Ironic and Manila dashboards in a short time. Any help or feedback is welcome:). 
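To give a feeling for how small the per-plugin smoke test can stay, here is a rough sketch using plain selenium rather than the real horizon-integration-tests helpers from [1]; the dashboard URL, credentials, element ids and panel path below are all placeholders, not settings from any actual job:

    # Hedged sketch of a "does the plugin panel render" check, plain selenium only.
    # Every URL, credential and element id here is a placeholder.
    from selenium import webdriver

    DASHBOARD_URL = 'http://127.0.0.1/dashboard'      # placeholder
    PLUGIN_PANEL_PATH = '/project/my_plugin_panel/'   # hypothetical plugin panel

    driver = webdriver.Firefox()
    try:
        # Log in through the normal login form.
        driver.get(DASHBOARD_URL + '/auth/login/')
        driver.find_element_by_id('id_username').send_keys('demo')
        driver.find_element_by_id('id_password').send_keys('secret')
        driver.find_element_by_css_selector('button[type="submit"]').click()

        # The actual smoke check: the plugin page opens and we are not
        # bounced back to the login page.
        driver.get(DASHBOARD_URL + PLUGIN_PANEL_PATH)
        assert 'login' not in driver.current_url
    finally:
        driver.quit()

The real jobs would drive the same idea through the integration tests framework so that login and page handling come from the framework itself, but the scope per plugin stays about this small.
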
[1] https://review.openstack.org/#/c/580469/ [2] https://docs.openstack.org/horizon/latest/install/plugin-registry.html [3] https://review.openstack.org/#/q/topic:horizon-integration-tests+(status:open+OR+status:merged) Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Feb 4 21:01:51 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 4 Feb 2019 23:01:51 +0200 Subject: [neutron] CI meeting this week cancelled Message-ID: Hi, I’m traveling this week and I will not be able to run Neutron CI meeting on Tuesday, 5.02. As some other people usually involved in this meeting are also traveling, lets skip it this week. We will have next meeting as usual on Tuesday, 12.02.2019. — Slawek Kaplonski Senior software engineer Red Hat From tpb at dyncloud.net Mon Feb 4 21:38:34 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 4 Feb 2019 16:38:34 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: <20190204213834.reohoqqk6gsxel33@barron.net> On 03/02/19 12:45 +0100, Ignazio Cassano wrote: >Many Thanks. >I will check it [1]. >Regards >Ignazio > And Goutham just gave me a more current doc link: https://netapp-openstack-dev.github.io/openstack-docs/rocky/manila/examples/openstack_command_line/section_manila-cli.html#importing-and-exporting-manila-shares -- Tom >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha >scritto: > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >Thanks Goutham. >> >If there are not mantainers for this driver I will switch on ceph and or >> >netapp. >> >I am already using netapp but I would like to export shares from an >> >openstack installation to another. >> >Since these 2 installations do non share any openstack component and have >> >different openstack database, I would like to know it is possible . >> >Regards >> >Ignazio >> >> Hi Ignazio, >> >> If by "export shares from an openstack installation to another" you >> mean removing them from management by manila in installation A and >> instead managing them by manila in installation B then you can do that >> while leaving them in place on your Net App back end using the manila >> "manage-unmanage" administrative commands. Here's some documentation >> [1] that should be helpful. >> >> If on the other hand by "export shares ... to another" you mean to >> leave the shares under management of manila in installation A but >> consume them from compute instances in installation B it's all about >> the networking. One can use manila to "allow-access" to consumers of >> shares anywhere but the consumers must be able to reach the "export >> locations" for those shares and mount them. >> >> Cheers, >> >> -- Tom Barron >> >> [1] >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> gouthampravi at gmail.com> >> >ha scritto: >> > >> >> Hi Ignazio, >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> wrote: >> >> > >> >> > Hello All, >> >> > I installed manila on my queens openstack based on centos 7. >> >> > I configured two servers with glusterfs replocation and ganesha nfs. >> >> > I configured my controllers octavia,conf but when I try to create a >> share >> >> > the manila scheduler logs reports: >> >> > >> >> > Failed to schedule create_share: No valid host was found. 
Failed to >> find >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> the >> >> last executed filter was CapabilitiesFilter. >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> 89f76bc5de5545f381da2c10c7df7f15 >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> expectations (backend capabilities vs share type extra-specs) and >> >> there was no host to schedule your share to. So a few things to check >> >> here: >> >> >> >> - What is the share type you're using? Can you list the share type >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> capabilities are appropriate with whatever you've set up as >> >> extra-specs ($ manila pool-list --detail)? >> >> - Is your backend operating correctly? You can list the manila >> >> services ($ manila service-list) and see if the backend is both >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> problem with the driver initialization, please enable debug logging, >> >> and look at the log file for the manila-share service, you might see >> >> why and be able to fix it. >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> and maintenance patches, but there is currently no active maintainer >> >> for this driver. >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> network where shares must be exported for virtual machines, so my >> glusterfs >> >> are connected on the management network where openstack controllers are >> >> conencted and to the network where virtual machine are connected. >> >> > >> >> > My manila.conf section for glusterfs section is the following >> >> > >> >> > [gluster-manila565] >> >> > driver_handles_share_servers = False >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> > glusterfs_ganesha_server_username = root >> >> > glusterfs_nfs_server_type = Ganesha >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> > ganesha_config_dir = /etc/ganesha >> >> > >> >> > >> >> > PS >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> > >> >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> >> machines are connected. >> >> > >> >> > The gluster servers are connected on both. >> >> > >> >> > >> >> > Any help, please ? >> >> > >> >> > Ignazio >> >> >> From chris at openstack.org Mon Feb 4 22:45:07 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 4 Feb 2019 14:45:07 -0800 Subject: [baremetal-sig][ironic] Proposing Formation of Bare Metal SIG In-Reply-To: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> Message-ID: <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> Based on the number of folks signed up in the planning etherpad[1], we have a good initial showing and I've gone ahead and sent up a review[2] to formalize the creation of the SIG. 
One thing missing is additional leads to help guide the Bare-metal SIG. If you would like to be added as a co-lead, please respond here or on the review and I can make the necessary update. I'll start looking for UC and TC approval early next week on the patch. In the meantime, I'd like to use this thread to start talking about some of the initial items we can start collaborating on. A few things that I was thinking we could begin on are: * A bare metal white paper, similar to the containers white paper we published last year[3]. * A getting started with Ironic demo, run as a community webinar that would not only be a way to give an easy introduction to Ironic but also get larger feedback on the sort of things the community would like to see the SIG produce. What are some other items that we could get started with, and do we have volunteers to participate in any of the items listed above? [1] https://etherpad.openstack.org/p/bare-metal-sig [2] https://review.openstack.org/#/c/634824/1 [3] https://www.openstack.org/containers -Chris From mikal at stillhq.com Mon Feb 4 22:54:10 2019 From: mikal at stillhq.com (Michael Still) Date: Tue, 5 Feb 2019 09:54:10 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging Message-ID: Hi, I’ve been chasing a bug in ironic’s neutron agent for the last few days and I think its time to ask for some advice. Specifically, I was asked to debug why a set of controllers was using so much RAM, and the answer was that rabbitmq had a queue called ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. This notification queue is used by ironic’s neutron agent to calculate the hash ring. I have been able to duplicate this issue in a stock kolla-ansible install with ironic turned on but no bare metal nodes enrolled in ironic. About 0.6 messages are queued per second. I added some debugging code (hence the thread yesterday about mangling the code kolla deploys), and I can see that the messages in the queue are being read by the ironic neutron agent and acked correctly. However, they are not removed from the queue. You can see your queue size while using kolla with this command: docker exec rabbitmq rabbitmqctl list_queues messages name messages_ready consumers | sort -n | tail -1 My stock install that’s been running for about 12 hours currently has 8,244 messages in that queue. Where I’m a bit stumped is I had assumed that the messages weren’t being acked correctly, which is not the case. Is there something obvious about notification queues like them being persistent that I’ve missed in my general ignorance of the underlying implementation of notifications? Thanks, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 02:52:22 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 03:52:22 +0100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: Message-ID: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > Hi, > > I’ve been chasing a bug in ironic’s neutron agent for the last few > days and I think its time to ask for some advice. > I'm working on the same issue. (In fact there are two issues.) > Specifically, I was asked to debug why a set of controllers was using > so much RAM, and the answer was that rabbitmq had a queue called > ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. 
> This notification queue is used by ironic’s neutron agent to > calculate the hash ring. I have been able to duplicate this issue in > a stock kolla-ansible install with ironic turned on but no bare metal > nodes enrolled in ironic. About 0.6 messages are queued per second. > > I added some debugging code (hence the thread yesterday about > mangling the code kolla deploys), and I can see that the messages in > the queue are being read by the ironic neutron agent and acked > correctly. However, they are not removed from the queue. > > You can see your queue size while using kolla with this command: > > docker exec rabbitmq rabbitmqctl list_queues messages name > messages_ready consumers | sort -n | tail -1 > > My stock install that’s been running for about 12 hours currently has > 8,244 messages in that queue. > > Where I’m a bit stumped is I had assumed that the messages weren’t > being acked correctly, which is not the case. Is there something > obvious about notification queues like them being persistent that > I’ve missed in my general ignorance of the underlying implementation > of notifications? > I opened a oslo.messaging bug[1] yesterday. When using notifications and all consumers use one or more pools. The ironic-neutron-agent does use pools for all listeners in it's hash-ring member manager. And the result is that notifications are published to the 'ironic-neutron- agent-heartbeat.info' queue and they are never consumed. The second issue, each instance of the agent uses it's own pool to ensure all agents are notified about the existance of peer-agents. The pools use a uuid that is generated at startup (and re-generated on restart, stop/start etc). In the case where `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config these uuid queues are not automatically removed. So after a restart of the ironic-neutron-agent the queue with the old UUID is left in the message broker without no consumers, growing ... I intend to push patches to fix both issues. As a workaround (or the permanent solution) will create another listener consuming the notifications without a pool. This should fix the first issue. Second change will set amqp_auto_delete for these specific queues to 'true' no matter. What I'm currently stuck on here is that I need to change the control_exchange for the transport. According to oslo.messaging documentation it should be possible to override the control_exchange in the transport_url[3]. The idea is to set amqp_auto_delete and a ironic-neutron-agent specific exchange on the url when setting up the transport for notifications, but so far I belive the doc string on the control_exchange option is wrong. NOTE: The second issue can be worked around by stopping and starting rabbitmq as a dependency of the ironic-neutron-agent service. This ensure only queues for active agent uuid's are present, and those queues will be consumed. -- Harald Jensås [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 [2] https://storyboard.openstack.org/#!/story/2004933 [3] https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 From mikal at stillhq.com Tue Feb 5 02:56:38 2019 From: mikal at stillhq.com (Michael Still) Date: Tue, 5 Feb 2019 13:56:38 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: Cool thanks for the summary. 
You seem to have this under control so I might bravely run away. I definitely think these are issues that deserve a backport when the time comes. Michael On Tue, Feb 5, 2019 at 1:52 PM Harald Jensås wrote: > On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > > Hi, > > > > I’ve been chasing a bug in ironic’s neutron agent for the last few > > days and I think its time to ask for some advice. > > > > I'm working on the same issue. (In fact there are two issues.) > > > Specifically, I was asked to debug why a set of controllers was using > > so much RAM, and the answer was that rabbitmq had a queue called > > ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. > > This notification queue is used by ironic’s neutron agent to > > calculate the hash ring. I have been able to duplicate this issue in > > a stock kolla-ansible install with ironic turned on but no bare metal > > nodes enrolled in ironic. About 0.6 messages are queued per second. > > > > I added some debugging code (hence the thread yesterday about > > mangling the code kolla deploys), and I can see that the messages in > > the queue are being read by the ironic neutron agent and acked > > correctly. However, they are not removed from the queue. > > > > You can see your queue size while using kolla with this command: > > > > docker exec rabbitmq rabbitmqctl list_queues messages name > > messages_ready consumers | sort -n | tail -1 > > > > My stock install that’s been running for about 12 hours currently has > > 8,244 messages in that queue. > > > > Where I’m a bit stumped is I had assumed that the messages weren’t > > being acked correctly, which is not the case. Is there something > > obvious about notification queues like them being persistent that > > I’ve missed in my general ignorance of the underlying implementation > > of notifications? > > > > I opened a oslo.messaging bug[1] yesterday. When using notifications > and all consumers use one or more pools. The ironic-neutron-agent does > use pools for all listeners in it's hash-ring member manager. And the > result is that notifications are published to the 'ironic-neutron- > agent-heartbeat.info' queue and they are never consumed. > > The second issue, each instance of the agent uses it's own pool to > ensure all agents are notified about the existance of peer-agents. The > pools use a uuid that is generated at startup (and re-generated on > restart, stop/start etc). In the case where > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config > these uuid queues are not automatically removed. So after a restart of > the ironic-neutron-agent the queue with the old UUID is left in the > message broker without no consumers, growing ... > > > I intend to push patches to fix both issues. As a workaround (or the > permanent solution) will create another listener consuming the > notifications without a pool. This should fix the first issue. > > Second change will set amqp_auto_delete for these specific queues to > 'true' no matter. What I'm currently stuck on here is that I need to > change the control_exchange for the transport. According to > oslo.messaging documentation it should be possible to override the > control_exchange in the transport_url[3]. The idea is to set > amqp_auto_delete and a ironic-neutron-agent specific exchange on the > url when setting up the transport for notifications, but so far I > belive the doc string on the control_exchange option is wrong. 
> > > NOTE: The second issue can be worked around by stopping and starting > rabbitmq as a dependency of the ironic-neutron-agent service. This > ensure only queues for active agent uuid's are present, and those > queues will be consumed. > > > -- > Harald Jensås > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > [2] https://storyboard.openstack.org/#!/story/2004933 > [3] > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 04:43:55 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 05:43:55 +0100 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: On Thu, 2019-01-31 at 11:30 -0500, Michael Turek wrote: > Hello all, > > Our ironic job has been broken and it seems to be due to a lack of > IPs. > We allocate two IPs to our job, one for the dhcp server, and one for > the > target node. This had been working for as long as the job has > existed > but recently (since about early December 2018), we've been broken. > > The job is able to clean the node during devstack, successfully > deploy > to the node during the tempest run, and is successfully validated > via > ssh. The node then moves to clean failed with a network error [1], > and > the job subsequently fails. Sometime between the validation and > attempting to clean, the neutron port associated with the ironic port > is > deleted and a new port comes into existence. Where I'm having trouble > is > finding out what this port is. Based on it's MAC address It's a > virtual > port, and its MAC is not the same as the ironic port. > > We could add an IP to the job to fix it, but I'd rather not do that > needlessly. > > Any insight or advice would be appreciated here! > While working on the neutron events I noticed a pattern I thought was a bit strange. (Note, this was with neutron networking.) Create nova baremetal instance: 1. The tenant VIF is created. 2. The provision port is created. 3. Provision port plugged (bound) 4. Provision port un-plugged (deleted) 5. Tenant port plugged (bound) On nova delete of barametal instance: 1. Tenant VIF is un-plugged (unbound) 2. Cleaning port created 3. Cleaning port plugged (bound) 4. Cleaning port un-plugged (deleted) 5. Tenant port deleted I think step 5, deleting the tenant port could happen after step 1. But it looks like it is'nt deleted before after cleaning is done. If this is the case with flat networks as well it could explain why you get the error on cleaning. The "tenant" port still exist, and there are no free IP's in the allocation pool to create a new port for cleaning. -- Harald From chkumar246 at gmail.com Tue Feb 5 05:26:15 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 5 Feb 2019 10:56:15 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update IX - Feb 05, 2019 Message-ID: Hello, Here is the 9 th update (Jan 29 to Feb 05, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: It was a great week, * we unblocked the os_tempest centos gate failure thanks to slaweq (neutron) and jrosser (OSA) to fixing the tempest container vlan issue. 
* TripleO is now using os_tempest for standalone job and os_tempest is also getted with the same job: -> http://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-standalone-os-tempest * Few other improvements: * generate stackviz irrespective of tempest tests failure * Port security is now enabled in tempest.conf * Cirros Image got updated from 3.5 to 3.6 * Use tempest run command with --test-list option Things got merged: os_tempest: * Update all plugin urls to use https rather than git - https://review.openstack.org/625670 * Add an ip address to eth12 in OSA test containers - https://review.openstack.org/633732 * Adds tempest run command with --test-list option - https://review.openstack.org/631351 * Enable port security - https://review.openstack.org/617719 * Use tempest_cloud_name in tempestconf - https://review.openstack.org/631708 * Always generate stackviz irrespective of tests pass or fail - https://review.openstack.org/631967 * Update cirros from 3.5 to 3.6 - https://review.openstack.org/633208 * Disable nova-lxd tempest plugin - https://review.openstack.org/633711 * Only init a workspace if doesn't exists - https://review.openstack.org/633549 * Add tripleo-ci-centos-7-standalone-os-tempest job - https://review.openstack.org/633931 Tripleo: * Enable standalone-full on validate-tempest role - https://review.openstack.org/634644 Things IN-Progress: os_tempest: * Ping router once it is created - https://review.openstack.org/633883 * Improve overview subpage - https://review.openstack.org/633934 * Added tempest.conf for heat_plugin - https://review.openstack.org/632021 * Add telemetry distro plugin install for aodh - https://review.openstack.org/632125 * Use the correct heat tests - https://review.openstack.org/630695 Tripleo: * Reuse the validate-tempest skip list in os_tempest - https://review.openstack.org/634380 Goal of this week: * Finish ongoing patches and reusing of skip list in TripleO from validate-tempest which will allow to move standalone scenario jobs to os_tempest Here is the 8th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002151.html Thanks, Chandan Kumar From cjeanner at redhat.com Tue Feb 5 10:11:22 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Feb 2019 11:11:22 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> Message-ID: <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> Hello there! small thoughts: - we might already push the stdout logging, in parallel of the current existing one - that would already point some weakness and issues, without making the whole thing crash, since there aren't that many logs in stdout for now - that would already allow to check what's the best way to do it, and what's the best format for re-usability (thinking: sending logs to some (k)elk and the like) This would also allow devs to actually test that for their services. And thus going forward on this topic. Any thoughts? Cheers, C. On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: > Hello! > > > In Queens, the a spec to provide the option to make containers log to > standard output was proposed [1] [2]. Some work was done on that side, > but due to the lack of traction, it wasn't completed. 
With the Train > release coming, I think it would be a good idea to revive this effort, > but make logging to stdout the default in that release. > > This would allow several benefits: > > * All logging from the containers would en up in journald; this would > make it easier for us to forward the logs, instead of having to keep > track of the different directories in /var/log/containers > > * The journald driver would add metadata to the logs about the container > (we would automatically get what container ID issued the logs). > > * This wouldo also simplify the stacks (removing the Logging nested > stack which is present in several templates). > > * Finally... if at some point we move towards kubernetes (or something > in between), managing our containers, it would work with their logging > tooling as well. > > > Any thoughts? > > > [1] > https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html > > [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From andr.kurilin at gmail.com Tue Feb 5 10:42:08 2019 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Tue, 5 Feb 2019 12:42:08 +0200 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: References: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> Message-ID: a quick update: the latest release of Rally ( https://pypi.org/project/rally/1.4.0/ ) doesn't include morph dependency пн, 4 февр. 2019 г. в 19:57, Andrey Kurilin : > Hi stackers! > > Thanks for raising this topic. > I recently removed morph dependency ( > https://review.openstack.org/#/c/634741 ) and I hope to release a new > version of Rally as soon as possible. > > пн, 4 февр. 2019 г. в 17:14, Jeremy Stanley : > >> On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: >> > I am experimenting with automatic verification of code licenses of >> > OpenStack projects and see that one of Rally dependencies has GPL3 >> > license >> [...] >> >> To start off, it looks like the license for morph is already known >> to the Rally developers, based on the inline comment for it at >> >> https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 >> (so hopefully this is no real surprise). >> >> The source of truth for our licensing policies, as far as projects >> governed by the OpenStack Technical Committee are concerned (which >> openstack/rally is), can be found here: >> >> https://governance.openstack.org/tc/reference/licensing.html >> >> It has a carve out for "tools that are run with or on OpenStack >> projects only during validation or testing phases of development" >> which "may be licensed under any OSI-approved license" and since >> the README.rst for Rally states it's a "tool & framework that allows >> one to write simple plugins and combine them in complex tests >> scenarios that allows to perform all kinds of testing" it probably >> meets those criteria. >> >> As for concern that a Python application which imports another >> Python library at runtime inherits its license and so becomes >> derivative of that work, that has been the subject of much >> speculation. 
In particular, whether a Python import counts as >> "dynamic linking" in GPL 3.0 section 1 is debatable: >> >> https://bytes.com/topic/python/answers/41019-python-gpl >> >> https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi >> >> https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed >> >> https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking >> >> I'm most definitely not a lawyer, but from what I've been able to >> piece together it's the combination of rally+morph which potentially >> becomes GPLv3-licensed when distributed, not the openstack/rally >> source code itself. This is really more of a topic for the >> legal-discuss mailing list, however, so I am cross-posting my reply >> there for completeness. >> >> To readers only of the legal-discuss ML, the original post can be >> found archived here: >> >> >> http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html >> >> -- >> Jeremy Stanley >> > > > -- > Best regards, > Andrey Kurilin. > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Tue Feb 5 12:11:22 2019 From: aspiers at suse.com (Adam Spiers) Date: Tue, 5 Feb 2019 12:11:22 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190201145553.GA5625@sm-workstation> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> Message-ID: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> Sean McGinnis wrote: >On Fri, Feb 01, 2019 at 12:49:19PM +0100, Thierry Carrez wrote: >>Lance Bragstad wrote: >>>[..] >>>Outside of having a formal name, do we expect the "pop-up" teams to >>>include processes that make what we went through easier? Ultimately, we >>>still had to self-organize and do a bunch of socializing to make progress. >> >>I think being listed as a pop-up team would definitely facilitate >>getting mentioned in TC reports, community newsletters or other >>high-vsibility community communications. It would help getting space to >>meet at PTGs, too. > >I guess this is the main value I see from this proposal. If it helps with >visibility and communications around the effort then it does add some value to >give them an official name. I agree - speaking from SIG experience, visibility and communications is one of the biggest challenges with small initiatives. >I don't think it changes much else. Those working in the group will still need >to socialize the changes they would like to make, get buy-in from the project >teams affected that the design approach is good, and find enough folks >interested in the changes to drive it forward and propose the patches and do >the other work needed to get things to happen. > >We can try looking at processes to help support that. But ultimately, as with >most open source projects, I think it comes down to having enough people >interested enough to get the work done. Sure. I particularly agree with your point about processes; I think the TC (or whoever else volunteers) could definitely help lower the barrier to starting up a pop-up team by creating a cookie-cutter kind of approach which would quickly set up any required infrastructure. 
For example it could be a simple form or CLI-based tool posing questions like the following, where the answers could facilitate the bootstrapping process: - What is the name of your pop-up team? - Please enter a brief description of the purpose of your pop-up team. - If you will use an IRC channel, please state it here. - Do you need regular IRC meetings? - Do you need a new git repository? [If so, ...] - Do you need a new StoryBoard project? [If so, ...] - Do you need a [badge] for use in Subject: headers on openstack-discuss? etc. The outcome of the form could be anything from pointers to specific bits of documentation on how to set up the various bits of infrastructure, all the way through to automation of as much of the setup as is possible. The slicker the process, the more agile the community could become in this respect. From lauren at openstack.org Tue Feb 5 13:23:27 2019 From: lauren at openstack.org (Lauren Sell) Date: Tue, 5 Feb 2019 07:23:27 -0600 Subject: Why COA exam is being retired? In-Reply-To: References: <25c27f7e-80ec-2eb5-6b88-5627bc9f1f01@admin.grnet.gr> <16640d78-1124-a21d-8658-b7d9b2d50509@gmail.com> <5077d9dc-c4af-8736-0db3-2e05cbc1e992@gmail.com> <20190125152713.dxbxgkzoevzw35f2@csail.mit.edu> <1688640cbe0.27a5.eb5fa01e01bf15c6e0d805bdb1ad935e@jbryce.com> Message-ID: <268F8E4B-0DBA-464A-B44C-A4023634EF94@openstack.org> Hi everyone, I had a few direct responses to my email, so I’m scheduling a community call for anyone who wants to discuss the COA and options going forward. Friday, February 15 @ 10:00 am CT / 15:00 UTC Zoom meeting: https://zoom.us/j/361542002 Find your local number: https://zoom.us/u/akLt1CD2H For those who cannot attend, we will take notes in an etherpad and share back with the list. Best, Lauren > On Jan 25, 2019, at 12:34 PM, Lauren Sell wrote: > > Thanks very much for the feedback. When we launched the COA, the commercial market for OpenStack was much more crowded (read: fragmented), and the availability of individuals with OpenStack experience was more scarce. That indicated a need for a vendor neutral certification to test baseline OpenStack proficiency, and to help provide a target for training curriculum being developed by companies in the ecosystem. > > Three years on, the commercial ecosystem has become easier to navigate, and there are a few thousand professionals who have taken the COA and had on-the-job experience. As those conditions have changed, we've been trying to evaluate the best ways to use the Foundation's resources and time to support the current needs for education and certification. The COA in its current form is pretty resource intensive, because it’s a hands-on exam that runs in a virtual OpenStack environment. To maintain the exam (including keeping it current to OpenStack releases) would require a pretty significant investment in terms of time and money this year. From the data and demand we’re seeing, the COA did not seem to be a top priority compared to our investments in programs that push knowledge and training into the ecosystem like Upstream Institute, supporting OpenStack training partners, mentoring, and sponsoring internship programs like Outreachy and Google Summer of Code. > > That said, we’ve honestly been surprised by the response from training partners and the community as plans have been trickling out these past few weeks, and are open to discussing it. If there are people and companies who are willing to invest time and resources into a neutral certification exam, we could investigate alternative paths. 
It's very helpful to hear which education activities you find most valuable, and if you'd like to have a deeper discussion or volunteer to help, let me know and we can schedule a community call next week. > > Regardless of the future of the COA exam, we will of course continue to maintain the training marketplace at openstack.org to promote commercial training partners and certifications. There are also some great books and resources developed by community members listed alongside the community training. > > >> From: Jay Bryant jungleboyj at gmail.com >> Date: January 25, 2019 07:42:55 >> Subject: Re: Why COA exam is being retired? >> To: openstack-discuss at lists.openstack.org >> >>> On 1/25/2019 9:27 AM, Jonathan Proulx wrote: >>>> On Fri, Jan 25, 2019 at 10:09:04AM -0500, Jay Pipes wrote: >>>> :On 01/25/2019 09:09 AM, Erik McCormick wrote: >>>> :> On Fri, Jan 25, 2019, 8:58 AM Jay Bryant >>> >>>> :> That's sad. I really appreciated having a non-vendory, ubiased, >>>> :> community-driven option. >>>> : >>>> :+10 >>>> : >>>> :> If a vendor folds or moves on from Openstack, your certification >>>> :> becomes worthless. Presumably, so long as there is Openstack, there >>>> :> will be the foundation at its core. I hope they might reconsider. >>>> : >>>> :+100 >>>> >>>> So to clarify is the COA certifiaction going away or is the Foundation >>>> just no longer administerign the exam? >>>> >>>> It would be a shame to loose a standard unbiased certification, but if >>>> this is a transition away from directly providing the training and >>>> only providing the exam specification that may be reasonable. >>>> >>>> -Jon >>> >>> When Allison e-mailed me last week they said they were having meetings >>> to figure out how to go forward with the COA. The foundations partners >>> were going to be offering the exam through September and they were >>> working on communicating the status of things to the community. >>> >>> So, probably best to not jump to conclusions and wait for the official >>> word from the community. >>> >>> - Jay >> >> >> > From mihalis68 at gmail.com Tue Feb 5 13:47:34 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 5 Feb 2019 08:47:34 -0500 Subject: [ops] ops meetups team meeting minutes 2019-1-29 Message-ID: minutes from last week's meeting: Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.html 10:32 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.txt 10:32 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.log.html The next meeting is due in about 1h 15 minutes on #openstack-operators We are trying to finalise the evenbrite for the upcoming ops meetup in berlin March 6th,7th and we're collecting session topics here: https://etherpad.openstack.org/p/BER-ops-meetup Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Feb 5 14:35:20 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 5 Feb 2019 08:35:20 -0600 Subject: [nova][qa][cinder] CI job changes Message-ID: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> I'd like to propose some changes primarily to the CI jobs that run on nova changes, but also impact cinder and tempest. 1. 
Drop the nova-multiattach job and move test coverage to other jobs This is actually an old thread [1] and I had started the work but got hung up on a bug that was teased out of one of the tests when running in the multi-node tempest-slow job [2]. For now I've added a conditional skip on that test if running in a multi-node job. The open changes are here [3]. 2. Only run compute.api and scenario tests in nova-next job and run under python3 only The nova-next job is a place to test new or advanced nova features like placement and cells v2 when those were still optional in Newton. It currently runs with a few changes from the normal tempest-full job: * configures service user tokens * configures nova console proxy to use TLS * disables the resource provider association refresh interval * it runs the post_test_hook which runs some commands like archive_delete_rows, purge, and looks for leaked resource allocations [4] Like tempest-full, it runs the non-slow tempest API tests concurrently and then the scenario tests serially. I'm proposing that we: a) change that job to only run tempest compute API tests and scenario tests to cut down on the number of tests to run; since the job is really only about testing nova features, we don't need to spend time running glance/keystone/cinder/neutron tests which don't touch nova. b) run it with python3 [5] which is the direction all jobs are moving anyway 3. Drop the integrated-gate (py2) template jobs (from nova) Nova currently runs with both the integrated-gate and integrated-gate-py3 templates, which adds a set of tempest-full and grenade jobs each to the check and gate pipelines. I don't think we need to be gating on both py2 and py3 at this point when it comes to tempest/grenade changes. Tempest changes are still gating on both so we have coverage there against breaking changes, but I think anything that's py2 specific would be caught in unit and functional tests (which we're running on both py27 and py3*). Who's with me? [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135299.html [2] https://bugs.launchpad.net/tempest/+bug/1807723 [3] https://review.openstack.org/#/q/topic:drop-multiattach-job+(status:open+OR+status:merged) [4] https://github.com/openstack/nova/blob/5283b464b/gate/post_test_hook.sh [5] https://review.openstack.org/#/c/634739/ -- Thanks, Matt From mnaser at vexxhost.com Tue Feb 5 16:22:09 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 5 Feb 2019 11:22:09 -0500 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> Message-ID: Hi everyone, We've discussed this over the ML today and we've decided for it to be next Wednesday (13th of February). Due to the distributed nature of our teams, we'll be aiming to go throughout the day and we'll all be hanging out on #openstack-ansible with a few more high bandwidth way of discussion if that is needed Thanks! Mohammed On Thu, Jan 31, 2019 at 2:35 PM Mohammed Naser wrote: > > On Tue, Jan 29, 2019 at 2:26 PM Frank Kloeker wrote: > > > > Am 2019-01-29 17:09, schrieb Mohammed Naser: > > > Hi team, > > > > > > As you may have noticed, bug triage during our meetings has been > > > something that has kinda killed attendance (really, no one seems to > > > enjoy it, believe it or not!) > > > > > > I wanted to propose for us to take a day to go through as much bugs as > > > possible, triaging and fixing as much as we can. 
It'd be a fun day > > > and we can also hop on a more higher bandwidth way to talk about this > > > stuff while we grind through it all. > > > > > > Is this something that people are interested in, if so, is there any > > > times/days that work better in the week to organize? > > > > Interesting. Something in EU timezone would be nice. Or what about: Bug > > around the clock? > > So 24 hours of bug triage :) > > I'd be up for that too, we have a pretty distributed team so that > would be awesome, > I'm still wondering if there are enough resources or folks available > to be doing this, > as we haven't had a response yet on a timeline that might work or > availabilities yet. > > > kind regards > > > > Frank > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From kgiusti at gmail.com Tue Feb 5 16:43:09 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 5 Feb 2019 11:43:09 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: On 2/4/19, Harald Jensås wrote: > On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >> Hi, >> >> I’ve been chasing a bug in ironic’s neutron agent for the last few >> days and I think its time to ask for some advice. >> > > I'm working on the same issue. (In fact there are two issues.) > >> Specifically, I was asked to debug why a set of controllers was using >> so much RAM, and the answer was that rabbitmq had a queue called >> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >> This notification queue is used by ironic’s neutron agent to >> calculate the hash ring. I have been able to duplicate this issue in >> a stock kolla-ansible install with ironic turned on but no bare metal >> nodes enrolled in ironic. About 0.6 messages are queued per second. >> >> I added some debugging code (hence the thread yesterday about >> mangling the code kolla deploys), and I can see that the messages in >> the queue are being read by the ironic neutron agent and acked >> correctly. However, they are not removed from the queue. >> >> You can see your queue size while using kolla with this command: >> >> docker exec rabbitmq rabbitmqctl list_queues messages name >> messages_ready consumers | sort -n | tail -1 >> >> My stock install that’s been running for about 12 hours currently has >> 8,244 messages in that queue. >> >> Where I’m a bit stumped is I had assumed that the messages weren’t >> being acked correctly, which is not the case. Is there something >> obvious about notification queues like them being persistent that >> I’ve missed in my general ignorance of the underlying implementation >> of notifications? >> > > I opened a oslo.messaging bug[1] yesterday. When using notifications > and all consumers use one or more pools. The ironic-neutron-agent does > use pools for all listeners in it's hash-ring member manager. And the > result is that notifications are published to the 'ironic-neutron- > agent-heartbeat.info' queue and they are never consumed. > This is an issue with the design of the notification pool feature. 
The Notification service is designed so notification events can be sent even though there may currently be no consumers. It supports the ability for events to be queued until a consumer(s) is ready to process them. So when a notifier issues an event and there are no consumers subscribed, a queue must be provisioned to hold that event until consumers appear. For notification pools the pool identifier is supplied by the notification listener when it subscribes. The value of any pool id is not known beforehand by the notifier, which is important because pool ids can be dynamically created by the listeners. And in many cases pool ids are not even used. So notifications are always published to a non-pooled queue. If there are pooled subscriptions we rely on the broker to do the fanout. This means that the application should always have at least one non-pooled listener for the topic, since any events that may be published _before_ the listeners are established will be stored on a non-pooled queue. The documentation doesn't make that clear AFAIKT - that needs to be fixed. > The second issue, each instance of the agent uses it's own pool to > ensure all agents are notified about the existance of peer-agents. The > pools use a uuid that is generated at startup (and re-generated on > restart, stop/start etc). In the case where > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config > these uuid queues are not automatically removed. So after a restart of > the ironic-neutron-agent the queue with the old UUID is left in the > message broker without no consumers, growing ... > > > I intend to push patches to fix both issues. As a workaround (or the > permanent solution) will create another listener consuming the > notifications without a pool. This should fix the first issue. > > Second change will set amqp_auto_delete for these specific queues to > 'true' no matter. What I'm currently stuck on here is that I need to > change the control_exchange for the transport. According to > oslo.messaging documentation it should be possible to override the > control_exchange in the transport_url[3]. The idea is to set > amqp_auto_delete and a ironic-neutron-agent specific exchange on the > url when setting up the transport for notifications, but so far I > belive the doc string on the control_exchange option is wrong. > Yes the doc string is wrong - you can override the default control_exchange via the Target's exchange field: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 At least that's the intent... ... however the Notifier API does not take a Target, it takes a list of topic _strings_: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 Which seems wrong, especially since the notification Listener subscribes to a list of Targets: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 I've opened a bug for this and will provide a patch for review shortly: https://bugs.launchpad.net/oslo.messaging/+bug/1814797 > > NOTE: The second issue can be worked around by stopping and starting > rabbitmq as a dependency of the ironic-neutron-agent service. This > ensure only queues for active agent uuid's are present, and those > queues will be consumed. 
> > > -- > Harald Jensås > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > [2] https://storyboard.openstack.org/#!/story/2004933 > [3] > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > > -- Ken Giusti (kgiusti at gmail.com) From mriedemos at gmail.com Tue Feb 5 17:00:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 5 Feb 2019 11:00:41 -0600 Subject: [publiccloud] New Contributor Joining In-Reply-To: References: Message-ID: On 1/27/2019 4:34 PM, Sindisiwe Chuma wrote: > Hi All, > > I am Sindi, a new member. I am interested in participating in the Pubic > Cloud Operators Working Group. Are there current projects or initiatives > running and documentation available to familiarize myself with the work > done and currently being done? > > Could you please refer me to resources containing information. Welcome Sindi. Here is some information that can maybe get you started: * The wiki is here: https://wiki.openstack.org/wiki/PublicCloudWorkingGroup but I'm not sure how up to date it is. * The IRC channel is #openstack-publiccloud. * Meeting information can be found here: http://eavesdrop.openstack.org/#Public_Cloud_Working_Group * Public cloud requirements / RFEs are tracked in launchpad: https://bugs.launchpad.net/openstack-publiccloud-wg The IRC channel may not be very active given the different time zones that people are operating in, so the best time to try and discuss anything in IRC is during the meeting, otherwise feel free to post to the #openstack-discuss mailing list and tag your subject with "[ops]" so it is filtered properly. -- Thanks, Matt From eumel at arcor.de Tue Feb 5 18:04:56 2019 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 05 Feb 2019 19:04:56 +0100 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> Message-ID: <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> Hi Mohammed, will there be an extra invitation or an etherpad for logistic? many thanks Frank Am 2019-02-05 17:22, schrieb Mohammed Naser: > Hi everyone, > > We've discussed this over the ML today and we've decided for it to be > next Wednesday (13th of February). Due to the distributed nature of > our teams, we'll be aiming to go throughout the day and we'll all be > hanging out on #openstack-ansible with a few more high bandwidth way > of discussion if that is needed > > Thanks! > Mohammed > > On Thu, Jan 31, 2019 at 2:35 PM Mohammed Naser > wrote: >> >> On Tue, Jan 29, 2019 at 2:26 PM Frank Kloeker wrote: >> > >> > Am 2019-01-29 17:09, schrieb Mohammed Naser: >> > > Hi team, >> > > >> > > As you may have noticed, bug triage during our meetings has been >> > > something that has kinda killed attendance (really, no one seems to >> > > enjoy it, believe it or not!) >> > > >> > > I wanted to propose for us to take a day to go through as much bugs as >> > > possible, triaging and fixing as much as we can. It'd be a fun day >> > > and we can also hop on a more higher bandwidth way to talk about this >> > > stuff while we grind through it all. >> > > >> > > Is this something that people are interested in, if so, is there any >> > > times/days that work better in the week to organize? >> > >> > Interesting. Something in EU timezone would be nice. Or what about: Bug >> > around the clock? 
>> > So 24 hours of bug triage :) >> >> I'd be up for that too, we have a pretty distributed team so that >> would be awesome, >> I'm still wondering if there are enough resources or folks available >> to be doing this, >> as we haven't had a response yet on a timeline that might work or >> availabilities yet. >> >> > kind regards >> > >> > Frank >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com From martin.chlumsky at gmail.com Tue Feb 5 18:16:07 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Tue, 5 Feb 2019 13:16:07 -0500 Subject: [Cinder][driver][ScaleIO] Message-ID: Hello, We are using EMC ScaleIO as our backend to cinder. When we delete VMs that have attached volumes and then try deleting said volumes, the volumes will sometimes end in state error_deleting. The state is reached because for some reason the volumes are still mapped (in the ScaleIO sense of the word) to the hypervisor despite the VM being deleted. We fixed the issue by setting the following option to True in cinder.conf: # Unmap volume before deletion. (boolean value) sio_unmap_volume_before_deletion=False What is the reasoning behind this option? Why would we ever set this to False and why is it False by default? It seems you would always want to unmap the volume from the hypervisor before deleting it. Thank you, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 19:08:35 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 20:08:35 +0100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> On Tue, 2019-02-05 at 11:43 -0500, Ken Giusti wrote: > On 2/4/19, Harald Jensås wrote: > > > > I opened a oslo.messaging bug[1] yesterday. When using > > notifications > > and all consumers use one or more pools. The ironic-neutron-agent > > does > > use pools for all listeners in it's hash-ring member manager. And > > the > > result is that notifications are published to the 'ironic-neutron- > > agent-heartbeat.info' queue and they are never consumed. > > > > This is an issue with the design of the notification pool feature. > > The Notification service is designed so notification events can be > sent even though there may currently be no consumers. It supports > the > ability for events to be queued until a consumer(s) is ready to > process them. So when a notifier issues an event and there are no > consumers subscribed, a queue must be provisioned to hold that event > until consumers appear. > > For notification pools the pool identifier is supplied by the > notification listener when it subscribes. The value of any pool id > is > not known beforehand by the notifier, which is important because pool > ids can be dynamically created by the listeners. And in many cases > pool ids are not even used. > > So notifications are always published to a non-pooled queue. If > there > are pooled subscriptions we rely on the broker to do the fanout. > This means that the application should always have at least one > non-pooled listener for the topic, since any events that may be > published _before_ the listeners are established will be stored on a > non-pooled queue. 
> >From what I observer any message published _before_ or _after_ pool listeners are established are stored on the non-pooled queue. > The documentation doesn't make that clear AFAIKT - that needs to be > fixed. > I agree with your conclusion here. This is not clear in the documentation. And it should be updated to reflect the requirement of at least one non-pool listener to consume the non-pooled queue. > > The second issue, each instance of the agent uses it's own pool to > > ensure all agents are notified about the existance of peer-agents. > > The > > pools use a uuid that is generated at startup (and re-generated on > > restart, stop/start etc). In the case where > > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron > > config > > these uuid queues are not automatically removed. So after a restart > > of > > the ironic-neutron-agent the queue with the old UUID is left in the > > message broker without no consumers, growing ... > > > > > > I intend to push patches to fix both issues. As a workaround (or > > the > > permanent solution) will create another listener consuming the > > notifications without a pool. This should fix the first issue. > > > > Second change will set amqp_auto_delete for these specific queues > > to > > 'true' no matter. What I'm currently stuck on here is that I need > > to > > change the control_exchange for the transport. According to > > oslo.messaging documentation it should be possible to override the > > control_exchange in the transport_url[3]. The idea is to set > > amqp_auto_delete and a ironic-neutron-agent specific exchange on > > the > > url when setting up the transport for notifications, but so far I > > belive the doc string on the control_exchange option is wrong. > > > > Yes the doc string is wrong - you can override the default > control_exchange via the Target's exchange field: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 > > At least that's the intent... > > ... however the Notifier API does not take a Target, it takes a list > of topic _strings_: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 > > Which seems wrong, especially since the notification Listener > subscribes to a list of Targets: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 > > I've opened a bug for this and will provide a patch for review > shortly: > > https://bugs.launchpad.net/oslo.messaging/+bug/1814797 > > Thanks, this makes sense. One question, in target I can see that there is the 'fanout' parameter. https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n62 """ Clients may request that a copy of the message be delivered to all servers listening on a topic by setting fanout to ``True``, rather than just one of them. """ In my usecase I actually want exactly that. So once your patch lands I can drop the use of pools and just set fanout=true on the target instead? > > > > > > > NOTE: The second issue can be worked around by stopping and > > starting > > rabbitmq as a dependency of the ironic-neutron-agent service. This > > ensure only queues for active agent uuid's are present, and those > > queues will be consumed. 
> > > > > > -- > > Harald Jensås > > > > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > > [2] https://storyboard.openstack.org/#!/story/2004933 > > [3] > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > > > > > > > From mvanwinkle at salesforce.com Tue Feb 5 19:18:24 2019 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Tue, 5 Feb 2019 13:18:24 -0600 Subject: User Committee Elections - call for candidates Message-ID: Hello all, It's that time again! The candidacy period for the upcoming UC election is open. Three seats are up for voting. If you are an AUC, and are interested in running for one of them, now is the time to announce it. Here are the important dates: February 04 - February 17, 05:59 UTC: Open candidacy for UC positions February 18 - February 24, 11:59 UTC: UC elections (voting) Special thanks to our election officials - Mohamed Elsakhawy and Jonathan Prolux! You can find al the info for the election here: https://governance.openstack.org/uc/reference/uc-election-feb2019.html Note: there are a couple of typos on the page that have an older date for the items above. That is being sorted in a patch today, but we wanted to go and get the notification out. The dates above and at the top of the linked page are correct. Thanks! VW -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce Mobile: 210-445-4183 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shokoofa.hosseini at gmail.com Tue Feb 5 11:35:05 2019 From: shokoofa.hosseini at gmail.com (shokoofa Hosseini) Date: Tue, 5 Feb 2019 15:05:05 +0330 Subject: Rally verify issue Message-ID: Dear Sir / Madam, I recently install Rally version: 1.3.0 with Installed Plugins: rally-openstack :1.3.0 by python 34. on centos 7 It work property. I benchmark my openstack environment correctly with scenarios of rally. But I have some issue with rally verification create, according to the link bellow: https://docs.openstack.org/developer/rally/quick_start/tutorial/step_10_verifying_cloud_via_tempest_verifier.html I run the command: " rally verify create-verifier --type tempest --name tempest-verifier " but I get the bellow error message: "TypeError: startswith first arg must be bytes or a tuple of bytes, not str" What should I do? I will appreciate if you could help me your sincerely shokoofa Attachments area -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rally-verify-ERROR.png Type: image/png Size: 190484 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Tue Feb 5 16:15:02 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 5 Feb 2019 16:15:02 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5C378410.6050603@openstack.org> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> Message-ID: <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> Team, With the new stackalytics how can I see current (Train release) data? 
Thanks, Arkady From: Jimmy McArthur Sent: Thursday, January 10, 2019 11:43 AM To: Kanevsky, Arkady Cc: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Absolutely. When we get there, I'll send an announcement to the MLs and ping you :) I don't currently have a timeline, but given the Stackalytics changes, this might speed it up a bit. Arkady.Kanevsky at dell.com January 10, 2019 at 11:38 AM Thanks Jimmy. Since I am responsible for updating marketplace per release I just need to know what mechanism to use and which file I need to patch. Thanks, Arkady From: Jimmy McArthur Sent: Thursday, January 10, 2019 11:35 AM To: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack Map [1]. Cheers, Jimmy [1] https://git.openstack.org/cgit/openstack/openstack-map/ Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Jimmy McArthur January 10, 2019 at 11:34 AM Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. 
That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack Map [1]. Cheers, Jimmy [1] https://git.openstack.org/cgit/openstack/openstack-map/ Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. 
Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at gmail.com Tue Feb 5 19:57:27 2019 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 6 Feb 2019 08:57:27 +1300 Subject: [scientific-sig] IRC meeting today 2100 UTC (in one hour): Continued HPC container discussion, Open Infra Summit Lightning Talks Message-ID: Hi all, Probably just a quick meeting today. Keen to collect HPC container war stories and looking for interest from lightning talk presenters for the SIG BoF at the Summit... Cheers, b1airo -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Tue Feb 5 20:11:10 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 5 Feb 2019 14:11:10 -0600 Subject: [openstack-dev] [Neutron] Propose Liu Yulong for Neutron core In-Reply-To: References: Message-ID: Hi everybody, It has been a week since I sent out this nomination and I have only received positive feedback from the community. As a consequence, Liu Yulong has been added as a member of the Neutron core team. Congratulations and keep up all the great contributions! Best regards Miguel On Thu, Jan 31, 2019 at 2:45 AM Qin, Kailun wrote: > Big +1 J Congrats Yulong, well-deserved! > > > > BR, > > Kailun > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Wednesday, January 30, 2019 7:19 AM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [openstack-dev] [Neutron] Propose Liu Yulong for Neutron core > > > > Hi Stackers, > > > > I want to nominate Liu Yulong (irc: liuyulong) as a member of the Neutron > core team. Liu started contributing to Neutron back in Mitaka, fixing bugs > in HA routers. Since then, he has specialized in L3 networking, developing > a deep knowledge of DVR. More recently, he single handedly implemented QoS > for floating IPs with this series of patches: > https://review.openstack.org/#/q/topic:bp/floating-ip-rate-limit+(status:open+OR+status:merged). > He has also been very busy helping to improve the implementation of port > forwardings and adding QoS to them. He also works for a large operator in > China, which allows him to bring an important operational perspective from > that part of the world to our project. The quality and number of his code > reviews during the Stein cycle is on par with the leading members of the > core team: https://www.stackalytics.com/?module=neutron-group. > > > > I will keep this nomination open for a week as customary. > > > > Best regards > > > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Tue Feb 5 20:18:33 2019 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 05 Feb 2019 21:18:33 +0100 Subject: [I18n] Meeting on Demand Message-ID: <38930a466b50140a2cbb05e2f2370b66@arcor.de> Hello Stackers, in the past we changed often the format of our team meeting to find out the right requirements and the highest comfort for all participants. We cover different time zones, tried Office Hours and joint the docs team meeting as well, so we have both meeting behind each other. At the end there are no participants and from I18n perspective also not so much topics to discuss outside the translation period. For that reason I want to change to a "Meeting on Demand" format. 
Feel free to add your topics on the wiki page [1] for the upcoming meeting slot (as usually Thursday [2]) or raise the topic on the mailing list with the proposal of a regular meeting. We will then arrange the next meeting. many thanks kind regards Frank [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting [2] http://eavesdrop.openstack.org/#I18N_Team_Meeting From smooney at redhat.com Tue Feb 5 20:18:43 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Feb 2019 20:18:43 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> Message-ID: <0ae39e2c1f285345f554f6205bdbc53d80db62eb.camel@redhat.com> On Tue, 2019-02-05 at 16:15 +0000, Arkady.Kanevsky at dell.com wrote: > Team, > With the new stackalytics how can I see current (Train release) data? the current devlopment cycle is stein and the current released version is Rocky Train is the name of the next developemnt version that will be starting later this year. > Thanks, > Arkady > > From: Jimmy McArthur > Sent: Thursday, January 10, 2019 11:43 AM > To: Kanevsky, Arkady > Cc: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > [EXTERNAL EMAIL] > Absolutely. When we get there, I'll send an announcement to the MLs and ping you :) I don't currently have a > timeline, but given the Stackalytics changes, this might speed it up a bit. > > > > Arkady.Kanevsky at dell.com > > January 10, 2019 at 11:38 AM > > Thanks Jimmy. > > Since I am responsible for updating marketplace per release I just need to know what mechanism to use and which file > > I need to patch. > > Thanks, > > Arkady > > > > From: Jimmy McArthur > > Sent: Thursday, January 10, 2019 11:35 AM > > To: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). 
Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Jimmy McArthur > > January 10, 2019 at 11:34 AM > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. 
Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Arkady.Kanevsky at dell.com > > January 9, 2019 at 9:20 AM > > Thanks Boris. > > Do we still use DriverLog for marketplace driver status updates? > > Thanks, > > Arkady > > > > From: Boris Renski > > Sent: Tuesday, January 8, 2019 11:11 AM > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris > > Boris Renski > > January 8, 2019 at 11:10 AM > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris
From igor.duarte.cardoso at intel.com Tue Feb 5 20:25:23 2019 From: igor.duarte.cardoso at intel.com (Duarte Cardoso, Igor) Date: Tue, 5 Feb 2019 20:25:23 +0000 Subject: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode In-Reply-To: References: Message-ID: Thank you Slawek, Seán, Ryan, Miguel. We'll get to work on this new refactoring, legacy router implementation and the missing unit/functional tests. We're setting lower priority to the scenario job but hopefully it can be done in stein-3 as well. Best regards, Igor D.C. From: Miguel Lavalle Sent: Friday, February 1, 2019 5:07 PM To: openstack-discuss at lists.openstack.org Subject: Re: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode Hi Igor, Please see my comments in-line below On Tue, Jan 29, 2019 at 1:26 AM Duarte Cardoso, Igor > wrote: Hi Neutron, I've been internally collaborating on the ``dvr_bridge`` L3 agent mode [1][2][3] work (David Shaughnessy, Xubo Zhang), which allows the L3 agent to make use of Open vSwitch / OpenFlow to implement ``distributed`` IPv4 Routers thus bypassing kernel namespaces and iptables and opening the door for higher performance by keeping packets in OVS for longer. I want to share a few questions in order to gather feedback from you. I understand parts of these questions may have been answered in the past before my involvement, but I believe it's still important to revisit and clarify them. This can impact how long it's going to take to complete the work and whether it can make it to stein-3. 1. Should OVS support also be added to the legacy router? And if so, would it make more sense to have a new variable (not ``agent_mode``) to specify what backend to use (OVS or kernel) instead of creating more combinations? I would like to see the legacy router also implemented. And yes, we need to specify a new config option. As it has already been pointed out, we need to separate what the agent does in each host from the backend technology implementing the routers. 2. What is expected in terms of CI for this? Regarding testing, what should this first patch include apart from the unit tests? (since the l3_agent.ini needs to be configured differently). I agree with Slawek. We would like to see a scenario job. 3. What problems can be anticipated by having the same agent managing both kernel and OVS powered routers (depending on whether they were created as ``distributed``)? We are experimenting with different ways of decoupling RouterInfo (mainly as part of the L3 agent refactor patch) and haven't been able to find the right balance yet. On one end we have an agent that is still coupled with kernel-based RouterInfo, and on the other end we have an agent that either only accepts OVS-based RouterInfos or only kernel-based RouterInfos depending on the ``agent_mode``. I also agree with Slawek here.
It would a good idea if we can get the two efforts in synch so we can untangle RouterInfo from the agent code We'd also appreciate reviews on the 2 patches [4][5]. The L3 refactor one should be able to pass Zuul after a recheck. [1] Spec: https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr [2] RFE: https://bugs.launchpad.net/neutron/+bug/1705536 [3] Gerrit topic: https://review.openstack.org/#/q/topic:dvr_bridge+(status:open+OR+status:merged) [4] L3 agent refactor patch: https://review.openstack.org/#/c/528336/29 [5] dvr_bridge patch: https://review.openstack.org/#/c/472289/17 Thank you! Best regards, Igor D.C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgrosso at redhat.com Tue Feb 5 20:37:29 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Tue, 5 Feb 2019 15:37:29 -0500 Subject: Manila Upstream Bugs Message-ID: Hello All, This is an email to the OpenStack manila upstream community but anyone can chime in would be great to get some input from other projects and how they organize their upstream defects and what tools they use... My goal here is to make the upstream manila bug process easier, cleaner, and more effective. My thoughts to accomplish this are by establishing a process that we can all agree upon. I have the following points/questions that I wanted to address to help create a more effective process: - Can we as a group go through some of the manila bugs so we can drive the visible bug count down? - How often as a group do you have bug scrubs? - Might be beneficial if we had bug scrubs every few months possibly? - It might be a good idea to go through the current upstream bugs and weed out one that can be closed or invalid. - When a new bug is logged how to we normally process this bug - How do we handle the importance? - When a manila bugs comes into launchpad I am assuming one of the people on this email will set the importance? - "Assigned" I will also assume it just picked by the person on this email list. - I am seeing some bugs "fixed committed" with no assignment. How do we know who was working on it? - What is the criteria for setting the importance. Do we have a standard understanding of what is CRITICAL or HIGH? - If there is a critical or high bug what is the response turn-around? Days or weeks? - I see some defect with HIGH that have not been assigned or looked at in a year? - I understand OpenStack has some long releases but how long do we normally keep defects around? - Do we have a way to archive bugs that are not looked at? I was told we can possibly set the status of a defect to “Invalid” or “Opinion” or “Won’t Fix” or “Expired" - Status needs to be something other than "NEW" after the first week - How can we have a defect over a year that is NEW? - Who is possible for see if there is enough information and if the bug is invalid or incomplete and if incomplete ask for relevant information. Do we randomly look at the list daily , weekly, or monthly to see if new info is needed? I started to create a google sheet [1] to see if it is easier to track some of the defect vs the manila-triage pad[2] . I have added both links here. I know a lot will not have access to this page I am working on transitioning to OpenStack ether cal. 
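For the scrub days themselves, a rough launchpadlib sketch along these lines could pull candidate bugs out of Launchpad automatically; the consumer name, the one-year cutoff and the status/importance filters are only illustrative choices, not an agreed policy:

from datetime import datetime, timedelta, timezone

from launchpadlib.launchpad import Launchpad

# Anonymous, read-only access is enough for triage reports.
lp = Launchpad.login_anonymously('manila-bug-triage', 'production',
                                 version='devel')
manila = lp.projects['manila']
stale = datetime.now(timezone.utc) - timedelta(days=365)

# Bugs still sitting in New: candidates for triage, Invalid or Expired.
for task in manila.searchTasks(status=['New']):
    bug = task.bug
    if bug.date_last_updated < stale:
        print('LP#%s %-8s last touched %s %s' %
              (bug.id, task.importance, bug.date_last_updated.date(),
               bug.title))

# High/Critical bugs that nobody owns.
for task in manila.searchTasks(importance=['High', 'Critical'],
                               status=['New', 'Confirmed', 'Triaged']):
    if task.assignee is None:
        print('Unassigned %s: LP#%s %s'
              % (task.importance, task.bug.id, task.bug.title))

Output from something like that could feed the spreadsheet/ethercalc linked below instead of copying bugs over by hand.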
[1] https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 [2] https://etherpad.openstack.org/p/manila-bug-triage-pad *[3]* https://ethercalc.openstack.org/uc8b4567fpf4 I would also like to hear from all of you on what your issues are with the current process for upstream manila bugs using launchpad. I have not had the time to look at storyboard https://storyboard.openstack.org/ but I have heard that the OpenStack community is pushing toward using Storyboard, so I will be looking at that shortly. Any input would be greatly appreciated... Thanks All, Jason Grosso Senior Quality Engineer - Cloud Red Hat OpenStack Manila jgrosso at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Tue Feb 5 20:38:47 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 5 Feb 2019 15:38:47 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> Message-ID: On 2/5/19, Harald Jensås wrote: > On Tue, 2019-02-05 at 11:43 -0500, Ken Giusti wrote: >> On 2/4/19, Harald Jensås wrote: >> > >> > I opened a oslo.messaging bug[1] yesterday. When using >> > notifications >> > and all consumers use one or more pools. The ironic-neutron-agent >> > does >> > use pools for all listeners in it's hash-ring member manager. And >> > the >> > result is that notifications are published to the 'ironic-neutron- >> > agent-heartbeat.info' queue and they are never consumed. >> > >> >> This is an issue with the design of the notification pool feature. >> >> The Notification service is designed so notification events can be >> sent even though there may currently be no consumers. It supports >> the >> ability for events to be queued until a consumer(s) is ready to >> process them. So when a notifier issues an event and there are no >> consumers subscribed, a queue must be provisioned to hold that event >> until consumers appear. >> >> For notification pools the pool identifier is supplied by the >> notification listener when it subscribes. The value of any pool id >> is >> not known beforehand by the notifier, which is important because pool >> ids can be dynamically created by the listeners. And in many cases >> pool ids are not even used. >> >> So notifications are always published to a non-pooled queue. If >> there >> are pooled subscriptions we rely on the broker to do the fanout. >> This means that the application should always have at least one >> non-pooled listener for the topic, since any events that may be >> published _before_ the listeners are established will be stored on a >> non-pooled queue. >> > > From what I observer any message published _before_ or _after_ pool > listeners are established are stored on the non-pooled queue. > True that. Even if listeners are established before a notification is issued the notifier still doesn't know that and blindly creates a non pooled queue just in case there aren't any listeners. Not intuitive I agree. >> The documentation doesn't make that clear AFAIKT - that needs to be >> fixed. >> > > I agree with your conclusion here. This is not clear in the > documentation. And it should be updated to reflect the requirement of > at least one non-pool listener to consume the non-pooled queue. > +1 I can do that. 
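To make that concrete, here is a rough, untested sketch of the pattern the docs would describe: pooled listeners for the per-agent fanout plus one plain listener to drain the default queue. This is not the actual ironic-neutron-agent code; the topic, endpoint class, event type and payload are illustrative only, and it assumes transport_url is already set in the loaded oslo.config configuration.

import uuid

from oslo_config import cfg
import oslo_messaging


class HeartbeatEndpoint(object):
    # The method name maps to the notification priority ('info'), so this
    # consumes events published with notifier.info(...).
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # react to the heartbeat / rebuild the hash ring here
        return oslo_messaging.NotificationResult.HANDLED


conf = cfg.CONF  # assumes transport_url is already configured
transport = oslo_messaging.get_notification_transport(conf)
targets = [oslo_messaging.Target(topic='ironic-neutron-agent-heartbeat')]
endpoints = [HeartbeatEndpoint()]

# Pooled listener: each agent uses its own pool id, so every agent gets
# its own copy of each heartbeat (the broker fans out between pools).
pooled = oslo_messaging.get_notification_listener(
    transport, targets, endpoints, executor='threading',
    pool='ironic-neutron-agent-%s' % uuid.uuid4().hex)

# Non-pooled listener: drains the default
# 'ironic-neutron-agent-heartbeat.info' queue that notifiers always
# publish to; without at least one of these that queue only grows.
default = oslo_messaging.get_notification_listener(
    transport, targets, endpoints, executor='threading')

pooled.start()
default.start()

# Publisher side of the heartbeat.
notifier = oslo_messaging.Notifier(
    transport, publisher_id='ironic-neutron-agent', driver='messaging',
    topics=['ironic-neutron-agent-heartbeat'])
notifier.info({}, 'ironic_neutron_agent.heartbeat', {'host': 'agent-1'})

With something along those lines each pooled listener still receives its own copy of every heartbeat, while the plain listener keeps the default queue from growing without bound.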
> >> > The second issue, each instance of the agent uses it's own pool to >> > ensure all agents are notified about the existance of peer-agents. >> > The >> > pools use a uuid that is generated at startup (and re-generated on >> > restart, stop/start etc). In the case where >> > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron >> > config >> > these uuid queues are not automatically removed. So after a restart >> > of >> > the ironic-neutron-agent the queue with the old UUID is left in the >> > message broker without no consumers, growing ... >> > >> > >> > I intend to push patches to fix both issues. As a workaround (or >> > the >> > permanent solution) will create another listener consuming the >> > notifications without a pool. This should fix the first issue. >> > >> > Second change will set amqp_auto_delete for these specific queues >> > to >> > 'true' no matter. What I'm currently stuck on here is that I need >> > to >> > change the control_exchange for the transport. According to >> > oslo.messaging documentation it should be possible to override the >> > control_exchange in the transport_url[3]. The idea is to set >> > amqp_auto_delete and a ironic-neutron-agent specific exchange on >> > the >> > url when setting up the transport for notifications, but so far I >> > belive the doc string on the control_exchange option is wrong. >> > >> >> Yes the doc string is wrong - you can override the default >> control_exchange via the Target's exchange field: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 >> >> At least that's the intent... >> >> ... however the Notifier API does not take a Target, it takes a list >> of topic _strings_: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 >> >> Which seems wrong, especially since the notification Listener >> subscribes to a list of Targets: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 >> >> I've opened a bug for this and will provide a patch for review >> shortly: >> >> https://bugs.launchpad.net/oslo.messaging/+bug/1814797 >> >> > > Thanks, this makes sense. > I've hacked in the ability to override the default exchange for notifiers, but I don't think it would help in your case. In rabbitmq exchange and queue names are scoped independently. This means that if you have an exchange named "openstack' and another named 'my-exchange' but use the same topic (say 'foo') you end up with a single instance of queue 'foo' bound to both exchanges. IOW declaring one listener on exchange=openstack and topic=foo, and another listener on exchange=my-exchange and topic=foo they will compete for messages because they are consuming from the same queue (foo). So if your intent is to partition notification traffic you'd still need unique topics as well. > > One question, in target I can see that there is the 'fanout' parameter. > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n62 > > """ Clients may request that a copy of the message be delivered to all > servers listening on a topic by setting fanout to ``True``, rather than > just one of them. """ > > In my usecase I actually want exactly that. So once your patch lands I > can drop the use of pools and just set fanout=true on the target > instead? > The 'fanout' attribute is only used with RPC messaging, not Notifications. Can you use RPC fanout instead of Notifications? 
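Roughly, that would mean each agent doing a fanout cast, along these
lines (a sketch only; the topic, method name and argument are made up):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_rpc_transport(cfg.CONF)

    # fanout=True asks the broker to deliver a copy to every server
    # currently listening on this (made-up) topic.
    target = oslo_messaging.Target(topic='ironic-neutron-agent-heartbeat',
                                   fanout=True)
    client = oslo_messaging.RPCClient(transport, target)

    # cast() returns immediately and no reply is expected.
    client.cast({}, 'report_state', member_id='agent-1')

Each agent would also run an RPC server on the same topic with a matching
'report_state' endpoint method to receive its peers' casts.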
RPC fanout ('cast' as the API calls it) is different from 'normal' RPC in that no reply is returned to the caller. So it's a lot like Notifications in that regard. However RPC fanout is different from Notifications in two important ways: 1) RPC fanout messages are sent 'least effort', meaning they can be silently discarded, and 2) RPC fanout messages are not stored - they are only delivered to active subscribers (listeners). I've always felt that notification pools are an attempt to implement a Publish/Subscribe messaging pattern on top of an event queuing service. That's hard to do since event queuing has strict delivery guarantees (avoid dropping) which Pub/Sub doesn't (drop if no consumers). >> >> >> >> > >> > NOTE: The second issue can be worked around by stopping and >> > starting >> > rabbitmq as a dependency of the ironic-neutron-agent service. This >> > ensure only queues for active agent uuid's are present, and those >> > queues will be consumed. >> > >> > >> > -- >> > Harald Jensås >> > >> > >> > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 >> > [2] https://storyboard.openstack.org/#!/story/2004933 >> > [3] >> > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 >> > >> > >> > >> >> > > -- Ken Giusti (kgiusti at gmail.com) From doug at doughellmann.com Tue Feb 5 21:35:04 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Feb 2019 16:35:04 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: Ken Giusti writes: > On 2/4/19, Harald Jensås wrote: >> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >>> Hi, >>> >>> I’ve been chasing a bug in ironic’s neutron agent for the last few >>> days and I think its time to ask for some advice. >>> >> >> I'm working on the same issue. (In fact there are two issues.) >> >>> Specifically, I was asked to debug why a set of controllers was using >>> so much RAM, and the answer was that rabbitmq had a queue called >>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >>> This notification queue is used by ironic’s neutron agent to >>> calculate the hash ring. I have been able to duplicate this issue in >>> a stock kolla-ansible install with ironic turned on but no bare metal >>> nodes enrolled in ironic. About 0.6 messages are queued per second. >>> >>> I added some debugging code (hence the thread yesterday about >>> mangling the code kolla deploys), and I can see that the messages in >>> the queue are being read by the ironic neutron agent and acked >>> correctly. However, they are not removed from the queue. >>> >>> You can see your queue size while using kolla with this command: >>> >>> docker exec rabbitmq rabbitmqctl list_queues messages name >>> messages_ready consumers | sort -n | tail -1 >>> >>> My stock install that’s been running for about 12 hours currently has >>> 8,244 messages in that queue. >>> >>> Where I’m a bit stumped is I had assumed that the messages weren’t >>> being acked correctly, which is not the case. Is there something >>> obvious about notification queues like them being persistent that >>> I’ve missed in my general ignorance of the underlying implementation >>> of notifications? >>> >> >> I opened a oslo.messaging bug[1] yesterday. When using notifications >> and all consumers use one or more pools. The ironic-neutron-agent does >> use pools for all listeners in it's hash-ring member manager. 
And the >> result is that notifications are published to the 'ironic-neutron- >> agent-heartbeat.info' queue and they are never consumed. >> > > This is an issue with the design of the notification pool feature. > > The Notification service is designed so notification events can be > sent even though there may currently be no consumers. It supports the > ability for events to be queued until a consumer(s) is ready to > process them. So when a notifier issues an event and there are no > consumers subscribed, a queue must be provisioned to hold that event > until consumers appear. This has come up several times over the last few years, and it's always a surprise to whoever it has bitten. I wonder if we should change the default behavior to not create the consumer queue in the publisher? -- Doug From mikal at stillhq.com Tue Feb 5 22:07:29 2019 From: mikal at stillhq.com (Michael Still) Date: Wed, 6 Feb 2019 09:07:29 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: I'm also interested in how we catch future instances of this. Is there something we can do in CI or in a runtime warning to let people know? I am sure there are plenty of ironic deployments out there consuming heaps more RAM than is required for this queue. Michael On Wed, Feb 6, 2019 at 8:41 AM Doug Hellmann wrote: > Ken Giusti writes: > > > On 2/4/19, Harald Jensås wrote: > >> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > >>> Hi, > >>> > >>> I’ve been chasing a bug in ironic’s neutron agent for the last few > >>> days and I think its time to ask for some advice. > >>> > >> > >> I'm working on the same issue. (In fact there are two issues.) > >> > >>> Specifically, I was asked to debug why a set of controllers was using > >>> so much RAM, and the answer was that rabbitmq had a queue called > >>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. > >>> This notification queue is used by ironic’s neutron agent to > >>> calculate the hash ring. I have been able to duplicate this issue in > >>> a stock kolla-ansible install with ironic turned on but no bare metal > >>> nodes enrolled in ironic. About 0.6 messages are queued per second. > >>> > >>> I added some debugging code (hence the thread yesterday about > >>> mangling the code kolla deploys), and I can see that the messages in > >>> the queue are being read by the ironic neutron agent and acked > >>> correctly. However, they are not removed from the queue. > >>> > >>> You can see your queue size while using kolla with this command: > >>> > >>> docker exec rabbitmq rabbitmqctl list_queues messages name > >>> messages_ready consumers | sort -n | tail -1 > >>> > >>> My stock install that’s been running for about 12 hours currently has > >>> 8,244 messages in that queue. > >>> > >>> Where I’m a bit stumped is I had assumed that the messages weren’t > >>> being acked correctly, which is not the case. Is there something > >>> obvious about notification queues like them being persistent that > >>> I’ve missed in my general ignorance of the underlying implementation > >>> of notifications? > >>> > >> > >> I opened a oslo.messaging bug[1] yesterday. When using notifications > >> and all consumers use one or more pools. The ironic-neutron-agent does > >> use pools for all listeners in it's hash-ring member manager. 
And the > >> result is that notifications are published to the 'ironic-neutron- > >> agent-heartbeat.info' queue and they are never consumed. > >> > > > > This is an issue with the design of the notification pool feature. > > > > The Notification service is designed so notification events can be > > sent even though there may currently be no consumers. It supports the > > ability for events to be queued until a consumer(s) is ready to > > process them. So when a notifier issues an event and there are no > > consumers subscribed, a queue must be provisioned to hold that event > > until consumers appear. > > This has come up several times over the last few years, and it's always > a surprise to whoever it has bitten. I wonder if we should change the > default behavior to not create the consumer queue in the publisher? > > -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Tue Feb 5 22:45:27 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 5 Feb 2019 22:45:27 +0000 Subject: virt-install error while trying to create a new image Message-ID: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community, I am trying to create a new image for Ironic. I followed the documentation but got an error with virt-install. Please note: The OS has been reinstalled The host is a physical machine BIOS has virtualization enabled I changed /etc/libvirt/qemu.conf group from root to kvm following some linux forum instructions about this error but the issue persists # virt-install --virt-type kvm --name centos --ram 1024 --disk /tmp/centos.qcow2,format=qcow2 --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=centos7.0 --location=/root/CentOS-7-x86_64-NetInstall-1810.iso Starting install... Retrieving file .treeinfo... | 0 B 00:00:00 Retrieving file content... | 0 B 00:00:00 Retrieving file vmlinuz... | 6.3 MB 00:00:00 Retrieving file initrd.img... | 50 MB 00:00:00 ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor Domain installation does not appear to have been successful. If it was, you can restart your domain by running: virsh --connect qemu:///system start centos otherwise, please restart your installation. Any thoughts? Thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jungleboyj at gmail.com Tue Feb 5 23:29:41 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 5 Feb 2019 17:29:41 -0600 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: Message-ID: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Arkady.Kanevsky at dell.com Wed Feb 6 04:24:28 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 6 Feb 2019 04:24:28 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> Message-ID: <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] [EXTERNAL EMAIL] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Arkady.Kanevsky at dell.com Wed Feb 6 04:25:54 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 6 Feb 2019 04:25:54 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5C378410.6050603@openstack.org> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> Message-ID: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> How does Stackalytics shows statistics for current Train release work? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rico.lin.guanyu at gmail.com Wed Feb 6 05:01:25 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 6 Feb 2019 13:01:25 +0800 Subject: [heat]No meeting today Message-ID: Hi all, since it's current Chinese New Year time for me, I will not be able to host the meeting today. Also, I believe Zane is not available for today's meeting too, so let's run our meeting next week. Here's something we need feedback on since heat-agents still broken, I still need feedback on [1] and [2]. Two features that we can use some reviews on [3], [4] and [5], so please help us if you can. [1] https://review.openstack.org/#/c/634383/ [2] https://review.openstack.org/#/c/634563/ [3] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/multiple-cloud-support [4] https://storyboard.openstack.org/#!/story/2003579 [5] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/heat-plugin-blazar -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Feb 6 05:09:47 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Feb 2019 14:09:47 +0900 Subject: [nova][qa][cinder] CI job changes In-Reply-To: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> References: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> Message-ID: <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> ---- On Tue, 05 Feb 2019 23:35:20 +0900 Matt Riedemann wrote ---- > I'd like to propose some changes primarily to the CI jobs that run on > nova changes, but also impact cinder and tempest. > > 1. Drop the nova-multiattach job and move test coverage to other jobs > > This is actually an old thread [1] and I had started the work but got > hung up on a bug that was teased out of one of the tests when running in > the multi-node tempest-slow job [2]. For now I've added a conditional > skip on that test if running in a multi-node job. The open changes are > here [3]. +1. The only question I commented on review - this test is skipped in all jobs now. If that all ok as of now then I am +2. > > 2. Only run compute.api and scenario tests in nova-next job and run > under python3 only > > The nova-next job is a place to test new or advanced nova features like > placement and cells v2 when those were still optional in Newton. It > currently runs with a few changes from the normal tempest-full job: > > * configures service user tokens > * configures nova console proxy to use TLS > * disables the resource provider association refresh interval > * it runs the post_test_hook which runs some commands like > archive_delete_rows, purge, and looks for leaked resource allocations [4] > > Like tempest-full, it runs the non-slow tempest API tests concurrently > and then the scenario tests serially. I'm proposing that we: > > a) change that job to only run tempest compute API tests and scenario > tests to cut down on the number of tests to run; since the job is really > only about testing nova features, we don't need to spend time running > glance/keystone/cinder/neutron tests which don't touch nova. > > b) run it with python3 [5] which is the direction all jobs are moving anyway +1. It make sense to run only compute test in this job. > > 3. 
Drop the integrated-gate (py2) template jobs (from nova) > > Nova currently runs with both the integrated-gate and > integrated-gate-py3 templates, which adds a set of tempest-full and > grenade jobs each to the check and gate pipelines. I don't think we need > to be gating on both py2 and py3 at this point when it comes to > tempest/grenade changes. Tempest changes are still gating on both so we > have coverage there against breaking changes, but I think anything > that's py2 specific would be caught in unit and functional tests (which > we're running on both py27 and py3*). > IMO, we should keep running integrated-gate py2 templates on the project gate also along with Tempest. Jobs in integrated-gate-* templates cover a large amount of code so running that for both versions make sure we keep our code running on py2 also. Rest other job like tempest-slow, nova-next etc are good to run only py3 on project side (Tempest gate keep running py2 version also). I am not sure if unit/functional jobs cover all code coverage and it is safe to ignore the py version consideration from integration CI. As per TC resolution, python2 can be dropped during begning of U cycle [1]. You have good point of having the integrated-gate py2 coverage on Tempest gate only is enough but it has risk of merging the py2 breaking code on project side which will block the Tempest gate. I agree that such chances are rare but still it can happen. Other point is that we need integrated-gate template running when Stein and Train become stable branch (means on stable/stein and stable/train gate). Otherwise there are chance when py2 broken code from U (because we will test only py3 in U) is backported to stable/Train or stable/stein. My opinion on this proposal is to wait till we officially drop py2 which is starting of U. [1] https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html -gmann > Who's with me? > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135299.html > [2] https://bugs.launchpad.net/tempest/+bug/1807723 > [3] > https://review.openstack.org/#/q/topic:drop-multiattach-job+(status:open+OR+status:merged) > [4] https://github.com/openstack/nova/blob/5283b464b/gate/post_test_hook.sh > [5] https://review.openstack.org/#/c/634739/ > > -- > > Thanks, > > Matt > > From alfredo.deluca at gmail.com Wed Feb 6 08:00:07 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Wed, 6 Feb 2019 09:00:07 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Ignazio. sorry for late reply. security group is fine. It\s not blocking the network traffic. Not sure why but, with this fedora release I can finally find atomic but there is no yum,nslookup,dig,host command..... why is so different from another version (latest) which had yum but not atomic. It's all weird Cheers On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano wrote: > Alfredo, try to check security group linked to your kubemaster. > > Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca > ha scritto: > >> Hi Ignazio. Thanks for the link...... so >> >> Now at least atomic is present on the system. >> Also I ve already had 8.8.8.8 on the system. 
So I can connect on the >> floating IP to the kube master....than I can ping 8.8.8.8 but for example >> doesn't resolve the names...so if I ping 8.8.8.8 >> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* >> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* >> >> but if I ping google.com doesn't resolve. I can't either find on fedora >> dig or nslookup to check >> resolv.conf has >> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >> *nameserver 8.8.8.8* >> >> It\s all so weird. >> >> >> >> >> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano >> wrote: >> >>> I also suggest to change dns in your external network used by magnum. >>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>> >>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>> alfredo.deluca at gmail.com> ha scritto: >>> >>>> thanks ignazio >>>> Where can I get it from? >>>> >>>> >>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> I used fedora-magnum-27-4 and it works >>>>> >>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> ha scritto: >>>>> >>>>>> Hi Clemens. >>>>>> So the image I downloaded is this >>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>> which is the latest I think. >>>>>> But you are right...and I noticed that too.... It doesn't have atomic >>>>>> binary >>>>>> the os-release is >>>>>> >>>>>> *NAME=Fedora* >>>>>> *VERSION="29 (Cloud Edition)"* >>>>>> *ID=fedora* >>>>>> *VERSION_ID=29* >>>>>> *PLATFORM_ID="platform:f29"* >>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>> *ANSI_COLOR="0;34"* >>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>> "* >>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>> "* >>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>> "* >>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>> "* >>>>>> *VARIANT="Cloud Edition"* >>>>>> *VARIANT_ID=cloud* >>>>>> >>>>>> >>>>>> so not sure why I don't have atomic tho >>>>>> >>>>>> >>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>>> wrote: >>>>>> >>>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>>> part of the image … >>>>>>> >>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>> + atomic install --storage ostree --system --system-package no --set >>>>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>> heat-container-agent >>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>> ./part-013: line 8: atomic: command not found >>>>>>> + systemctl start heat-container-agent >>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>> heat-container-agent.service not found. 
>>>>>>> >>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>> heat-container-agent.service not found. >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Feb 6 08:18:35 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 6 Feb 2019 17:18:35 +0900 Subject: [TC][Searchlight] Project health evaluation Message-ID: Hi TC members and Searchlight team, As we discussed at the beginning of the Stein cycle, Searchlight would go through a propagation period to consider whether to let it continue to operate under the OpenStack foundation's umbrella [1]. For the last two milestones, we have achieved some results [2] [3] and designed a sustainable future for Searchlight with a vision [4]. As we're reaching the Stein-3 milestone [5] and preparing for the Denver summit. We, as a team, would like have a formal project health evaluation in several aspects such as active contributors / team, planning, bug fixes, features, etc. We would love to have some voice from the TC team and anyone from the community who follows our effort during the Stein cycle. We then would want to update the information at [6] and [7] to avoid any confusion that may stop potential contributors or users to come to Searchlight. [1] https://review.openstack.org/#/c/588644/ [2] https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html [3] https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html [4] https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision [5] https://releases.openstack.org/stein/schedule.html [6] https://governance.openstack.org/election/results/stein/ptl.html [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker Many thanks, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Feb 6 09:32:20 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:32:20 +0100 Subject: [openstack-ansible] bug squash day! In-Reply-To: <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> Message-ID: <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: > Hi Mohammed, > > will there be an extra invitation or an etherpad for logistic? > > many thanks > > Frank > > Am 2019-02-05 17:22, schrieb Mohammed Naser: > > Hi everyone, > > > > We've discussed this over the ML today and we've decided for it to > > be > > next Wednesday (13th of February). Due to the distributed nature > > of > > our teams, we'll be aiming to go throughout the day and we'll all > > be > > hanging out on #openstack-ansible with a few more high bandwidth > > way > > of discussion if that is needed > > > > Thanks! > > Mohammed What I did in the past was to prepare an etherpad of the most urgent ones, but wasn't the most successful bug squash we had. I also took the other approach, BYO bug, list it in the etherpad, so we can track the bug squashers. 
And in both cases, I brought belgian cookies/chocolates to the most successful bug squasher (please note you should ponderate with the task criticality level, else people might solve the simplest bugs to get the chocolates :p) This was my informal motivational, but I didn't have to do that. I justliked doing so :) Regards, JP. From jean-philippe at evrard.me Wed Feb 6 09:36:37 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:36:37 +0100 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. In-Reply-To: <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> References: <5C589423020000D7000400BA@prv-mh.provo.novell.com> <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> Message-ID: On Mon, 2019-02-04 at 19:57 +0000, Jeremy Stanley wrote: > On 2019-02-04 12:36:03 -0700 (-0700), Swaminathan Vasudevan wrote: > > Hi Neutrinos,Here is the summary of the neutron bugs that came in > > last week ( starting from Jan 29th - Feb 4th). > > > > https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing > > If it's just a collaboratively-edited spreadsheet application you > need, don't forget we maintain https://ethercalc.openstack.org/ > (hopefully soon also reachable as ethercalc.opendev.org) which runs > entirely on free software and is usable from parts of the World > where Google's services are not (for example, mainland China). I agree with Jeremy here. Let's make use of infra as much as we can. From jean-philippe at evrard.me Wed Feb 6 09:45:03 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:45:03 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: > So, maybe the next step is to convince someone to champion a goal of > improving our contributor documentation, and to have them describe > what > the documentation should include, covering the usual topics like how > to > actually submit patches as well as suggestions for how to describe > areas > where help is needed in a project and offers to mentor contributors. > > Does anyone want to volunteer to serve as the goal champion for that? > This doesn't get visibility yet, as this thread is under [tc] only. Lance and I will raise this in our next update (which should be tomorrow) if we don't have a volunteer here. JP. From jean-philippe at evrard.me Wed Feb 6 10:00:08 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:00:08 +0100 Subject: [horizon] Horizon slowing down proportionally to the amount of instances (was: Horizon extremely slow with 400 instances) In-Reply-To: References: Message-ID: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> On Wed, 2019-01-30 at 21:10 -0500, Satish Patel wrote: > folks, > > we have mid size openstack cloud running 400 instances, and day by > day > its getting slower, i can understand it render every single machine > during loading instance page but it seems it's design issue, why not > it load page from MySQL instead of running bunch of API calls behind > then page? > > is this just me or someone else also having this issue? i am > surprised > why there is no good and robust Web GUI for very popular openstack? > > I am curious how people running openstack in large environment using > Horizon. 
> > I have tired all kind of setting and tuning like memcache etc.. > > ~S > Hello, I took the liberty to change the mailing list and topic name: FYI, the openstack-discuss ML will help you reach more people (developers/operators). When you prefix your mail with [horizon], it will even pass filters for some people:) Anyway... I would say horizon performance depends on many aspects of your deployment, including keystone and caching, it's hard to know what's going on with your environment with so little data. I hope you're figure it out :) Regards, JP From jean-philippe at evrard.me Wed Feb 6 10:11:42 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:11:42 +0100 Subject: [tc][all] Project deletion community goal for Train cycle In-Reply-To: <1689d71d0ef.ef1d5f8d185664.5395252099905607931@ghanshyammann.com> References: <8d25cbc43d4fc43f8a98de37992d5531c8662cdc.camel@evrard.me> <47F67A8C-8C89-4B0A-BCF3-7F3100D2A1B7@leafe.com> <86ed4afc-056e-602a-e30c-08a51c2a2080@catalyst.net.nz> <1689d71d0ef.ef1d5f8d185664.5395252099905607931@ghanshyammann.com> Message-ID: On Wed, 2019-01-30 at 15:28 +0900, Ghanshyam Mann wrote: > ---- On Wed, 23 Jan 2019 08:21:27 +0900 Adrian Turjak < > adriant at catalyst.net.nz> wrote ---- > > Thanks for the input! I'm willing to bet there are many people > excited > > about this goal, or will be when they realise it exists! > > > > The 'dirty' state I think would be solved with a report API in > each > > service (tell me everything a given project has resource wise). > Such an > > API would be useful without needing to query each resource list, > and > > potentially could be an easy thing to implement to help a purge > library > > figure out what to delete. I know right now our method for > checking if a > > project is 'dirty' is part of our quota checking scripts, and it > has to > > query a lot of APIs per service to build an idea of what a project > has. > > > > As for using existing code, OSPurge could well be a starting > point, but > > the major part of this goal has to be that each OpenStack service > (that > > creates resources owned by a project) takes ownership of their > own > > deletion logic. This is why a top level library for cross project > logic, > > with per service plugin libraries is possibly the best approach. > Each > > library would follow the same template and abstraction layers (as > > inherited from the top level library), but how each service > implements > > their own deletion is up to them. I would also push for them using > the > > SDK only as their point of interaction with the APIs (lets set > some hard > > requirements and standards!), because that is the python library > we > > should be using going forward. In addition such an approach could > mean > > that anyone can write a plugin for the top level library (e.g. > internal > > company only services) which will automatically get picked up if > installed. > > +100 for not making keystone as Actor. Leaving purge responsibility > to service > side is the best way without any doubt. > > Instead of accepting Purge APIs from each service, I am thinking > we should consider another approach also which can be the plugin-able > approach. > Ewe can expose the plugin interface from purge library/tool. Each > service implements > the interface with purge functionality(script or command etc). > On discovery of each service's purge plugin, purge library/tool will > start the deletion > in required order etc. > > This can give 2 simple benefits > 1. 
No need to detect the service availability before requesting them > to purge the resources. > I am not sure OSpurge check the availability of services or not. But > in plugin approach case, > that will not be required. For example, if Congress is not installed > in my env then, > congress's purge plugin will not be discovered so no need to check > Congress service availability. > > 2. purge all resources interface will not be exposed to anyone except > the Purge library/tool. > In case of API, we are exposing the interface to user(admin/system > scopped etc) which can > delete all the resources of that service which is little security > issue may be. This can be argued > with existing delete API but those are per resource not all. Other > side we can say those can be > taken care by RBAC but still IMO exposing anything to even > permissiable user(especially human) > which can destruct the env is not a good idea where only right usage > of that interface is something > else (Purge library/tool in this case). > > Plugin-able can also have its cons but Let's first discuss all those > possibilities. > > -gmann Wasn't it what was proposed in the etherpad? I am a little confused there. From jean-philippe at evrard.me Wed Feb 6 10:13:51 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:13:51 +0100 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> Message-ID: <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: > How can I specify a helm override to configure nova PCI alias when > there are multiple aliases? > I haven't been able to come up with a YAML compliant specification > for this. > > Are there other alternatives to be able to specify this as an > override? I assume that a nova Chart change would be required to > support this custom one-alias-entry-per-line formatting. > > Any insights on how to achieve this in helm are welcomed. > > Background: > There is a limitation in the nova.conf specification of PCI alias in > that it does not allow multiple PCI aliases as a list. The code says > "Supports multiple aliases by repeating the option (not by specifying > a list value)". Basically nova currently only supports one-alias- > entry-per-line format. > > Ideally I would specify global pci alias in a format similar to what > can be achieved with PCI passthrough_whitelist, which can takes JSON > list of dictionaries. > > This is what I am trying to specify in nova.conf (i.e., for nova-api- > osapi and nova-compute): > [pci] > alias = {dict 1} > alias = {dict 2} > . . . > > The following nova configuration format is desired, but not as yet > supported by nova: > [pci] > alias = [{dict 1}, {dict 2}] > > The following snippet of YAML works for PCI passthrough_whitelist, > where the value encoded is a JSON string: > > conf: > nova: > overrides: > nova_compute: > hosts: > - conf: > nova: > pci: > passthrough_whitelist: '[{"class_id": "030000", > "address": "0000:00:02.0"}]' > > Jim Gauld Could the '?' symbol (for complex keys) help here? I don't know, but I would love to see an answer, and I can't verify that now. Regards, JP From eumel at arcor.de Wed Feb 6 11:10:20 2019 From: eumel at arcor.de (Frank Kloeker) Date: Wed, 06 Feb 2019 12:10:20 +0100 Subject: [openstack-ansible] bug squash day! 
In-Reply-To: <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> Message-ID: Am 2019-02-06 10:32, schrieb Jean-Philippe Evrard: > On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: >> Hi Mohammed, >> >> will there be an extra invitation or an etherpad for logistic? >> >> many thanks >> >> Frank >> >> Am 2019-02-05 17:22, schrieb Mohammed Naser: >> > Hi everyone, >> > >> > We've discussed this over the ML today and we've decided for it to >> > be >> > next Wednesday (13th of February). Due to the distributed nature >> > of >> > our teams, we'll be aiming to go throughout the day and we'll all >> > be >> > hanging out on #openstack-ansible with a few more high bandwidth >> > way >> > of discussion if that is needed >> > >> > Thanks! >> > Mohammed > > What I did in the past was to prepare an etherpad of the most urgent > ones, but wasn't the most successful bug squash we had. > > I also took the other approach, BYO bug, list it in the etherpad, so we > can track the bug squashers. > > And in both cases, I brought belgian cookies/chocolates to the most > successful bug squasher (please note you should ponderate with the task > criticality level, else people might solve the simplest bugs to get the > chocolates :p) > This was my informal motivational, but I didn't have to do that. I > justliked doing so :) Very generous, we appreciate that. Would it be possible to expand the list with Belgian beer? :) kind regards Frank From cdent+os at anticdent.org Wed Feb 6 12:14:12 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 6 Feb 2019 12:14:12 +0000 (GMT) Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today Message-ID: A reminder that as discussed at the last placement extraction checkin meeting [1] we've got another one today at 1700 UTC. Join the #openstack-placement IRC channel around then if you are interested, and a google hangout url will be provided. In the thread with the notes, there was a question that didn't get answered [2] in email and remains open as far as I know. There's an etherpad [3] with pending extraction related tasks. If you've done some of the work on there, please make sure it is up to date. From that, it appears that the main pending things are deployment (with upgrade) and the vgpu reshaper work (which is close). Note that the main question we're trying to answer here is "when can we delete the nova code?", which is closely tied to the unanswered question [2] mentioned above. We are already using the extracted code in the integrate gate and we are not testing the unextracted code anywhere. 
[1] notes from the last meeting are at http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001789.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001805.html [3] https://etherpad.openstack.org/p/placement-extract-stein-5 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Wed Feb 6 13:32:36 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Feb 2019 07:32:36 -0600 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> Message-ID: <20190206133235.GA28569@sm-workstation> On Wed, Feb 06, 2019 at 04:25:54AM +0000, Arkady.Kanevsky at dell.com wrote: > How does Stackalytics shows statistics for current Train release work? As mentioned yesterday, we are currently on the Stein release. So there is no Train work yet. From sean.mcginnis at gmx.com Wed Feb 6 13:36:48 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Feb 2019 07:36:48 -0600 Subject: [TC][Searchlight] Project health evaluation In-Reply-To: References: Message-ID: <20190206133648.GB28569@sm-workstation> > > As we're reaching the Stein-3 milestone [5] and preparing for the Denver > summit. We, as a team, would like have a formal project health evaluation > in several aspects such as active contributors / team, planning, bug fixes, > features, etc. We would love to have some voice from the TC team and anyone > from the community who follows our effort during the Stein cycle. We then > would want to update the information at [6] and [7] to avoid any confusion > that may stop potential contributors or users to come to Searchlight. > > [1] https://review.openstack.org/#/c/588644/ > [2] > https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html > [3] https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html > [4] > https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision > [5] https://releases.openstack.org/stein/schedule.html > [6] https://governance.openstack.org/election/results/stein/ptl.html > [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker > It really looks like great progress with Searchlight over this release. Nice work Trinh and all that have been involved in that. [6] is a historical record of what happened with the PTL election. What would you want to update there? The best path forward, in my opinion, is to make sure there is a clear PTL candidate for the Train release. [7] is a periodic update of notes between TC members and the projects. If you would like to get more information added there, I would recommend working with the two TC members assigned to Searchlight to get an update. That appears to be Chris Dent and Dims. 
Sean From kchamart at redhat.com Wed Feb 6 13:40:54 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 6 Feb 2019 14:40:54 +0100 Subject: virt-install error while trying to create a new image In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <20190206134054.GV5349@paraplu.home> On Tue, Feb 05, 2019 at 10:45:27PM +0000, Manuel Sopena Ballesteros wrote: > Dear Openstack community, > > I am trying to create a new image for Ironic. I followed the > documentation but got an error with virt-install. [...] > Please note: > > The OS has been reinstalled The host is a physical machine BIOS has > virtualization enabled I changed /etc/libvirt/qemu.conf group from > root to kvm following some linux forum instructions about this error > but the issue persists That's fine. Please also post your host kernel, QEMU and libvirt versions. > # virt-install --virt-type kvm --name centos --ram 1024 --disk > /tmp/centos.qcow2,format=qcow2 --network network=default > --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux > --os-variant=centos7.0 > --location=/root/CentOS-7-x86_64-NetInstall-1810.iso > > > Starting install... > > Retrieving file .treeinfo... > | 0 B 00:00:00 Retrieving file content... > | 0 B 00:00:00 Retrieving file vmlinuz... > | 6.3 MB 00:00:00 Retrieving file initrd.img... > | 50 MB 00:00:00 ERROR unsupported configuration: CPU mode > 'custom' for x86_64 kvm domain on x86_64 host is not supported by > hypervisor Domain installation does not appear to have been > successful. That error means a low-level QEMU command (that queries for what vCPUs QEMU supports) has failed for "some reason". To debug this, we need /var/log/libvirt/libvirtd.log with log filters. (a) Remove this directory and its contents (this step is specific to this problem; it's not always required): $ rm /var/cache/libvirt/qemu/ (b) Set the following in your /etc/libvirt/libvirtd.conf: log_filters="1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util 1:cpu" log_outputs="1:file:/var/log/libvirt/libvirtd.log" (c) Restart libvirtd: `systemctl restart libvirtd` (d) Repeat the test; and post the /var/log/libvirt/libvirtd.log somewhere. [...] BTW, I would highly recommend the `virt-builder` approach to create disk images for various operating systems and importing it to libvirt. (1) Download a CentOS 7.6 template (with latest updates) 20G of disk: $ sudo dnf install libguestfs-tools-c $ virt-builder centos-7.6 --update -o centos-vm1.qcow2 \ --selinux-relabel --size 20G (2) Import the downloaded disk image into libvirt: $ virt-install \ --name centosvm1 --ram 2048 \ --disk path=centos.img,format=qcow2 \ --os-variant centos7.0 \ --import Note-1: Although the command is called `virt-install`, we aren't _installing_ anything in this case. Note-2: The '--os-variant' can be whatever the nearest possible variant that's available on your host. To find the list of variants for your current Fedora release, run: `osinfo-query os | grep centos`. (The `osinfo-query` tool comes with the 'libosinfo' package.) The `virt-builder` tool is also available in Debian and Ubuntu. [...] 
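A generic sanity check that is often worth doing for this particular
error (not specific to your report, and assuming a stock libvirt/QEMU
install) is to confirm that KVM itself is usable on the host, since
"CPU mode 'custom' ... is not supported by hypervisor" frequently just
means libvirt could not probe a working KVM-capable QEMU:

    # Are the KVM modules loaded and is the device node present?
    $ lsmod | grep kvm
    $ ls -l /dev/kvm

    # libvirt's own host validation (checks /dev/kvm, cgroups, IOMMU, ...)
    $ virt-host-validate qemu

    # What CPU configuration does libvirt report for KVM guests?
    $ virsh domcapabilities --virttype kvm --arch x86_64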
-- /kashyap From jaypipes at gmail.com Wed Feb 6 13:57:32 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 6 Feb 2019 08:57:32 -0500 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> Message-ID: <2d401bcf-abda-222c-710a-8f5ee7162072@gmail.com> On 02/05/2019 11:25 PM, Arkady.Kanevsky at dell.com wrote: > How does Stackalytics shows statistics for current Train release work? The current release is Stein, not Train. https://releases.openstack.org/ Best, -jay From lyarwood at redhat.com Wed Feb 6 14:12:38 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 6 Feb 2019 14:12:38 +0000 Subject: [nova] [placement] [packaging] placement extraction check in meeting In-Reply-To: <5c80b99e-e7b3-bc65-9556-c80608de0347@gmail.com> References: <5c80b99e-e7b3-bc65-9556-c80608de0347@gmail.com> Message-ID: <20190206141238.aavcywhimevxnerd@lyarwood.usersys.redhat.com> On 17-01-19 09:09:24, Matt Riedemann wrote: > On 1/17/2019 6:07 AM, Chris Dent wrote: > > > Deployment tools: > > > > > > * Lee is working on TripleO support for extracted placement and > > > estimates 3 more weeks for just deploy (base install) support to be > > > done, and at least 3 more weeks for upgrade support after that. Read > > > Lee's status update for details [2]. > > > * If nova were to go ahead and drop placement code and require > > > extracted placement before TripleO is ready, they would have to pin > > > nova to a git SHA before that which would delay their Stein release. > > > * Having the extraction span release boundaries would ease the > > > upgrade pain for TripleO. > > > > Can you (or Dan?) clarify if spanning the release boundaries is > > usefully specifically for tooling that chooses to upgrade everything > > at once and thus is forced to run Stein nova with Stein placement? > > > > And if someone were able/willing to run Rocky nova with Stein > > placement (briefly) the challenges are less of a concern? > > > > I'm not asking because I disagree with the assertion, I just want to > > be sure I understand (and by proxy our adoring readers do as well) > > what "ease" really means in this context as the above bullet doesn't > > really explain it. > > I didn't go into details on that point because honestly I also could use > some written words explaining the differences for TripleO in doing the > upgrade and migration in-step with the Stein upgrade versus upgrading to > Stein and then upgrading to Train, and how the migration with that is any > less painful. AFAIK it wouldn't make the migration itself any less painful but having an overlap release would provide additional development and validation time. Time that is currently lacking given the very late breaking way upgrades are developed by TripleO, often only stabilising after the official upstream release is out. Anyway, I think this was Dan's point here but I'm happy to be corrected. > I know Dan talked about it on the call, but I can't say I followed it > all well enough to be able to summarize the pros/cons (which is why I > didn't in my summary email). This might already be something I know > about, but the lights just aren't turning on right now. 
Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From ignaziocassano at gmail.com Wed Feb 6 14:34:08 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 15:34:08 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190203100549.urtnvf2iatmqm6oy@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: Hello Tom, I think cases you suggested do not meet my needs. I have an openstack installation A with a fas netapp A. I have another openstack installation B with fas netapp B. I would like to use manila replication dr. If I replicate manila volumes from A to B the manila db on B does not knows anything about the replicated volume but only the backends on netapp B. Can I discover replicated volumes on openstack B? Or I must modify the manila db on B? Regards Ignazio Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >Thanks Goutham. > >If there are not mantainers for this driver I will switch on ceph and or > >netapp. > >I am already using netapp but I would like to export shares from an > >openstack installation to another. > >Since these 2 installations do non share any openstack component and have > >different openstack database, I would like to know it is possible . > >Regards > >Ignazio > > Hi Ignazio, > > If by "export shares from an openstack installation to another" you > mean removing them from management by manila in installation A and > instead managing them by manila in installation B then you can do that > while leaving them in place on your Net App back end using the manila > "manage-unmanage" administrative commands. Here's some documentation > [1] that should be helpful. > > If on the other hand by "export shares ... to another" you mean to > leave the shares under management of manila in installation A but > consume them from compute instances in installation B it's all about > the networking. One can use manila to "allow-access" to consumers of > shares anywhere but the consumers must be able to reach the "export > locations" for those shares and mount them. > > Cheers, > > -- Tom Barron > > [1] > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > gouthampravi at gmail.com> > >ha scritto: > > > >> Hi Ignazio, > >> > >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> wrote: > >> > > >> > Hello All, > >> > I installed manila on my queens openstack based on centos 7. > >> > I configured two servers with glusterfs replocation and ganesha nfs. > >> > I configured my controllers octavia,conf but when I try to create a > share > >> > the manila scheduler logs reports: > >> > > >> > Failed to schedule create_share: No valid host was found. Failed to > find > >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> NoValidHost: No valid host was found. Failed to find a weighted host, > the > >> last executed filter was CapabilitiesFilter. 
> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > 89f76bc5de5545f381da2c10c7df7f15 > >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> > >> > >> The scheduler failure points out that you have a mismatch in > >> expectations (backend capabilities vs share type extra-specs) and > >> there was no host to schedule your share to. So a few things to check > >> here: > >> > >> - What is the share type you're using? Can you list the share type > >> extra-specs and confirm that the backend (your GlusterFS storage) > >> capabilities are appropriate with whatever you've set up as > >> extra-specs ($ manila pool-list --detail)? > >> - Is your backend operating correctly? You can list the manila > >> services ($ manila service-list) and see if the backend is both > >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> problem with the driver initialization, please enable debug logging, > >> and look at the log file for the manila-share service, you might see > >> why and be able to fix it. > >> > >> > >> Please be aware that we're on a look out for a maintainer for the > >> GlusterFS driver for the past few releases. We're open to bug fixes > >> and maintenance patches, but there is currently no active maintainer > >> for this driver. > >> > >> > >> > I did not understand if controllers node must be connected to the > >> network where shares must be exported for virtual machines, so my > glusterfs > >> are connected on the management network where openstack controllers are > >> conencted and to the network where virtual machine are connected. > >> > > >> > My manila.conf section for glusterfs section is the following > >> > > >> > [gluster-manila565] > >> > driver_handles_share_servers = False > >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> > glusterfs_ganesha_server_username = root > >> > glusterfs_nfs_server_type = Ganesha > >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> > #glusterfs_servers = root at 10.102.185.19 > >> > ganesha_config_dir = /etc/ganesha > >> > > >> > > >> > PS > >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> > > >> > 10.102.189.0/24 is the shared network inside openstack where virtual > >> machines are connected. > >> > > >> > The gluster servers are connected on both. > >> > > >> > > >> > Any help, please ? > >> > > >> > Ignazio > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Feb 6 14:39:43 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 15:39:43 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve names. I think atomic command uses names for finishing master installation. Curl is installed on master.... Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca ha scritto: > Hi Ignazio. sorry for late reply. security group is fine. It\s not > blocking the network traffic. > > Not sure why but, with this fedora release I can finally find atomic but > there is no yum,nslookup,dig,host command..... 
why is so different from > another version (latest) which had yum but not atomic. > > It's all weird > > > Cheers > > > > > On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano > wrote: > >> Alfredo, try to check security group linked to your kubemaster. >> >> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca >> ha scritto: >> >>> Hi Ignazio. Thanks for the link...... so >>> >>> Now at least atomic is present on the system. >>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>> doesn't resolve the names...so if I ping 8.8.8.8 >>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* >>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* >>> >>> but if I ping google.com doesn't resolve. I can't either find on fedora >>> dig or nslookup to check >>> resolv.conf has >>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>> *nameserver 8.8.8.8* >>> >>> It\s all so weird. >>> >>> >>> >>> >>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano >>> wrote: >>> >>>> I also suggest to change dns in your external network used by magnum. >>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>> >>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> thanks ignazio >>>>> Where can I get it from? >>>>> >>>>> >>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> I used fedora-magnum-27-4 and it works >>>>>> >>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>> >>>>>>> Hi Clemens. >>>>>>> So the image I downloaded is this >>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>> which is the latest I think. >>>>>>> But you are right...and I noticed that too.... It doesn't have >>>>>>> atomic binary >>>>>>> the os-release is >>>>>>> >>>>>>> *NAME=Fedora* >>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>> *ID=fedora* >>>>>>> *VERSION_ID=29* >>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>> *ANSI_COLOR="0;34"* >>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>> "* >>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>> "* >>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>> "* >>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>> "* >>>>>>> *VARIANT="Cloud Edition"* >>>>>>> *VARIANT_ID=cloud* >>>>>>> >>>>>>> >>>>>>> so not sure why I don't have atomic tho >>>>>>> >>>>>>> >>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>>>> wrote: >>>>>>> >>>>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? 
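For anyone trying to reproduce this, a quick way to confirm which image the cluster template actually points at, and that it is the Atomic variant magnum expects (the image needs the os_distro=fedora-atomic property), is something like the following; the template and image names are only examples:

    openstack coe cluster template show k8s-atomic-template
    openstack image show Fedora-AtomicHost-29 -f value -c properties

And because dig/nslookup are not shipped in the Atomic host image, name resolution inside the master can still be checked with glibc's getent:

    getent hosts google.com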
Your >>>>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>>>> part of the image … >>>>>>>> >>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>> heat-container-agent >>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>> + systemctl start heat-container-agent >>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>> heat-container-agent.service not found. >>>>>>>> >>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>> heat-container-agent.service not found. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Wed Feb 6 15:00:08 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Wed, 6 Feb 2019 10:00:08 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: On 2/5/19, Doug Hellmann wrote: > Ken Giusti writes: > >> On 2/4/19, Harald Jensås wrote: >>> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >>>> Hi, >>>> >>>> I’ve been chasing a bug in ironic’s neutron agent for the last few >>>> days and I think its time to ask for some advice. >>>> >>> >>> I'm working on the same issue. (In fact there are two issues.) >>> >>>> Specifically, I was asked to debug why a set of controllers was using >>>> so much RAM, and the answer was that rabbitmq had a queue called >>>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >>>> This notification queue is used by ironic’s neutron agent to >>>> calculate the hash ring. I have been able to duplicate this issue in >>>> a stock kolla-ansible install with ironic turned on but no bare metal >>>> nodes enrolled in ironic. About 0.6 messages are queued per second. >>>> >>>> I added some debugging code (hence the thread yesterday about >>>> mangling the code kolla deploys), and I can see that the messages in >>>> the queue are being read by the ironic neutron agent and acked >>>> correctly. However, they are not removed from the queue. >>>> >>>> You can see your queue size while using kolla with this command: >>>> >>>> docker exec rabbitmq rabbitmqctl list_queues messages name >>>> messages_ready consumers | sort -n | tail -1 >>>> >>>> My stock install that’s been running for about 12 hours currently has >>>> 8,244 messages in that queue. >>>> >>>> Where I’m a bit stumped is I had assumed that the messages weren’t >>>> being acked correctly, which is not the case. Is there something >>>> obvious about notification queues like them being persistent that >>>> I’ve missed in my general ignorance of the underlying implementation >>>> of notifications? >>>> >>> >>> I opened a oslo.messaging bug[1] yesterday. When using notifications >>> and all consumers use one or more pools. The ironic-neutron-agent does >>> use pools for all listeners in it's hash-ring member manager. 
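For readers who have not used notification pools before, the pattern is roughly the sketch below; this is not the agent's real code, and the topic, pool and event names are only illustrative:

    import oslo_messaging
    from oslo_config import cfg

    # Transport settings come from the usual oslo.config setup.
    transport = oslo_messaging.get_notification_transport(cfg.CONF)

    # Publisher side: events land in a <topic>.<priority> queue,
    # e.g. ironic-neutron-agent-heartbeat.info for info notifications.
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='ironic-neutron-agent',
        driver='messaging',
        topics=['ironic-neutron-agent-heartbeat'])
    notifier.info({}, 'agent.heartbeat', {'host': 'agent-host-1'})

    # Consumer side: every listener joins a named pool, so the pool
    # gets its own queue which the pool members share.
    class HeartbeatEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            return oslo_messaging.NotificationResult.HANDLED

    listener = oslo_messaging.get_notification_listener(
        transport,
        [oslo_messaging.Target(topic='ironic-neutron-agent-heartbeat')],
        [HeartbeatEndpoint()],
        pool='ironic-neutron-agent-hash-ring')
    listener.start()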
And the >>> result is that notifications are published to the 'ironic-neutron- >>> agent-heartbeat.info' queue and they are never consumed. >>> >> >> This is an issue with the design of the notification pool feature. >> >> The Notification service is designed so notification events can be >> sent even though there may currently be no consumers. It supports the >> ability for events to be queued until a consumer(s) is ready to >> process them. So when a notifier issues an event and there are no >> consumers subscribed, a queue must be provisioned to hold that event >> until consumers appear. > > This has come up several times over the last few years, and it's always > a surprise to whoever it has bitten. I wonder if we should change the > default behavior to not create the consumer queue in the publisher? > +1 One possibility is to provide options on the Notifier constructor allowing the app to control the queue creation behavior. Something like "create_queue=True/False". We can document this as a 'dead letter' queue feature for events published w/o active listeners. > -- > Doug > -- Ken Giusti (kgiusti at gmail.com) From mnaser at vexxhost.com Wed Feb 6 15:15:21 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 6 Feb 2019 10:15:21 -0500 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> Message-ID: Hi all: We're likely going to have an etherpad and we'll be coordinating in IRC. Bring your own bug is probably the best avenue! Thanks all! Regards, Mohammed On Wed, Feb 6, 2019 at 6:10 AM Frank Kloeker wrote: > > Am 2019-02-06 10:32, schrieb Jean-Philippe Evrard: > > On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: > >> Hi Mohammed, > >> > >> will there be an extra invitation or an etherpad for logistic? > >> > >> many thanks > >> > >> Frank > >> > >> Am 2019-02-05 17:22, schrieb Mohammed Naser: > >> > Hi everyone, > >> > > >> > We've discussed this over the ML today and we've decided for it to > >> > be > >> > next Wednesday (13th of February). Due to the distributed nature > >> > of > >> > our teams, we'll be aiming to go throughout the day and we'll all > >> > be > >> > hanging out on #openstack-ansible with a few more high bandwidth > >> > way > >> > of discussion if that is needed > >> > > >> > Thanks! > >> > Mohammed > > > > What I did in the past was to prepare an etherpad of the most urgent > > ones, but wasn't the most successful bug squash we had. > > > > I also took the other approach, BYO bug, list it in the etherpad, so we > > can track the bug squashers. > > > > And in both cases, I brought belgian cookies/chocolates to the most > > successful bug squasher (please note you should ponderate with the task > > criticality level, else people might solve the simplest bugs to get the > > chocolates :p) > > This was my informal motivational, but I didn't have to do that. I > > justliked doing so :) > > Very generous, we appreciate that. Would it be possible to expand the > list with Belgian beer? :) > > kind regards > > Frank -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From martin.chlumsky at gmail.com Wed Feb 6 15:19:53 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Wed, 6 Feb 2019 10:19:53 -0500 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Yury, Thank you for the clarification. So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? I will open a bug report in this case. Cheers, Martin On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury wrote: > Hi Martin, > > Martin wrote: > > It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > If you remove or shelve instance from hypervisor host, then nova will > trigger ScaleIO to unmap volume from that host. > No issues should happen during deletion at this point, because volume is > already unmapped(unmounted). > No need to change sio_unmap_volume_before_deletion default value here. > > Martin wrote: > > What is the reasoning behind this option? > Setting sio_unmap_volume_before_deletion option to True means that cinder > driver will force unmount volume from ALL ScaleIO client nodes (not only > Openstack nodes) during volume deletion. > Enabling this option can be useful if you periodically detect compute > nodes with unmanaged ScaleIO volume mappings(volume mappings that not > managed by Openstack) in your environment. You can get such unmanaged > mappings in some cases, for example if there was hypervisor node power > failure. If during that power failure instances with mapped volumes were > moved to another host, than unmanaged mappings may appear on failed node > after its recovery. > > Martin wrote: > >Why would we ever set this > > to False and why is it False by default? > Force unmounting volumes from ALL ScaleIO clients is additional overhead. > It doesn't required in most environments. > > > Best regards, > Yury > > -----Original Message----- > From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM > To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; > Walsh, Helen; Belogrudov, Vladislav > Subject: RE: [Cinder][driver][ScaleIO] > > Adding Vlad who is the right person for ScaleIO driver. > > -----Original Message----- > From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM > To: openstack-discuss at lists.openstack.org; Walsh, Helen > Subject: Re: [Cinder][driver][ScaleIO] > > Adding Helen Walsh to this as she may be able to provide insight. > > Jay > > On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > > Hello, > > > > We are using EMC ScaleIO as our backend to cinder. > > When we delete VMs that have attached volumes and then try deleting > > said volumes, the volumes will sometimes end in state error_deleting. > > The state is reached because for some reason the volumes are still > > mapped (in the ScaleIO sense of the word) to the hypervisor despite > > the VM being deleted. > > We fixed the issue by setting the following option to True in > cinder.conf: > > > > # Unmap volume before deletion. (boolean value) > > sio_unmap_volume_before_deletion=False > > > > > > What is the reasoning behind this option? Why would we ever set this > > to False and why is it False by default? It seems you would always > > want to unmap the volume from the hypervisor before deleting it. 
> > > > Thank you, > > > > Martin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Wed Feb 6 15:32:19 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 6 Feb 2019 10:32:19 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: <20190206153219.yyir5m5tyw7bvrj7@barron.net> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >Hello Tom, I think cases you suggested do not meet my needs. >I have an openstack installation A with a fas netapp A. >I have another openstack installation B with fas netapp B. >I would like to use manila replication dr. >If I replicate manila volumes from A to B the manila db on B does not >knows anything about the replicated volume but only the backends on netapp >B. Can I discover replicated volumes on openstack B? >Or I must modify the manila db on B? >Regards >Ignazio I guess I don't understand your use case. Do Openstack installation A and Openstack installation B know *anything* about one another? For example, are their keystone and neutron databases somehow synced? Are they going to be operative for the same set of manila shares at the same time, or are you contemplating a migration of the shares from installation A to installation B? Probably it would be helpful to have a statement of the problem that you intend to solve before we consider the potential mechanisms for solving it. Cheers, -- Tom > > >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >Thanks Goutham. >> >If there are not mantainers for this driver I will switch on ceph and or >> >netapp. >> >I am already using netapp but I would like to export shares from an >> >openstack installation to another. >> >Since these 2 installations do non share any openstack component and have >> >different openstack database, I would like to know it is possible . >> >Regards >> >Ignazio >> >> Hi Ignazio, >> >> If by "export shares from an openstack installation to another" you >> mean removing them from management by manila in installation A and >> instead managing them by manila in installation B then you can do that >> while leaving them in place on your Net App back end using the manila >> "manage-unmanage" administrative commands. Here's some documentation >> [1] that should be helpful. >> >> If on the other hand by "export shares ... to another" you mean to >> leave the shares under management of manila in installation A but >> consume them from compute instances in installation B it's all about >> the networking. One can use manila to "allow-access" to consumers of >> shares anywhere but the consumers must be able to reach the "export >> locations" for those shares and mount them. >> >> Cheers, >> >> -- Tom Barron >> >> [1] >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> gouthampravi at gmail.com> >> >ha scritto: >> > >> >> Hi Ignazio, >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> wrote: >> >> > >> >> > Hello All, >> >> > I installed manila on my queens openstack based on centos 7. >> >> > I configured two servers with glusterfs replocation and ganesha nfs. >> >> > I configured my controllers octavia,conf but when I try to create a >> share >> >> > the manila scheduler logs reports: >> >> > >> >> > Failed to schedule create_share: No valid host was found. 
Failed to >> find >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> the >> >> last executed filter was CapabilitiesFilter. >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> 89f76bc5de5545f381da2c10c7df7f15 >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> expectations (backend capabilities vs share type extra-specs) and >> >> there was no host to schedule your share to. So a few things to check >> >> here: >> >> >> >> - What is the share type you're using? Can you list the share type >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> capabilities are appropriate with whatever you've set up as >> >> extra-specs ($ manila pool-list --detail)? >> >> - Is your backend operating correctly? You can list the manila >> >> services ($ manila service-list) and see if the backend is both >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> problem with the driver initialization, please enable debug logging, >> >> and look at the log file for the manila-share service, you might see >> >> why and be able to fix it. >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> and maintenance patches, but there is currently no active maintainer >> >> for this driver. >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> network where shares must be exported for virtual machines, so my >> glusterfs >> >> are connected on the management network where openstack controllers are >> >> conencted and to the network where virtual machine are connected. >> >> > >> >> > My manila.conf section for glusterfs section is the following >> >> > >> >> > [gluster-manila565] >> >> > driver_handles_share_servers = False >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> > glusterfs_ganesha_server_username = root >> >> > glusterfs_nfs_server_type = Ganesha >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> > ganesha_config_dir = /etc/ganesha >> >> > >> >> > >> >> > PS >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> > >> >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> >> machines are connected. >> >> > >> >> > The gluster servers are connected on both. >> >> > >> >> > >> >> > Any help, please ? >> >> > >> >> > Ignazio >> >> >> From lars at redhat.com Wed Feb 6 15:41:38 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 6 Feb 2019 10:41:38 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > shim service that sits between Ironic and the client. > that shim service could be nova, which already has multi tenancy. > > > > 2. 
Implement a Blazar plugin that is able to talk to whichever service > > in (1) is appropriate. > and nova is supported by blazar > > > > 3. Work with Blazar developers to implement any lease logic that we > > think is necessary. > +1 > by they im sure there is a reason why you dont want to have blazar drive > nova and nova dirve ironic but it seam like all the fucntionality would > already be there in that case. Sean, Being able to use Nova is a really attractive idea. I'm a little fuzzy on some of the details, though, starting with how to handle node discovery. A key goal is being able to parametrically request systems ("I want a system with a GPU and >= 40GB of memory"). With Nova, would this require effectively creating a flavor for every unique hardware configuration? Conceptually, I want "... create server --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not clear to me if that's something we could do now or if that would require changes to the way Nova handles baremetal scheduling. Additionally, we also want the ability to acquire a node without provisioning it, so that a consumer can use their own provisioning tool. From Nova's perspective, I guess this would be like requesting a system without specifying an image. Is that possible right now? I'm sure I'll have other questions, but these are the first few that crop up. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From mihalis68 at gmail.com Wed Feb 6 15:42:19 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 6 Feb 2019 10:42:19 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th Message-ID: Dear All, The Evenbrite for the next ops meetup is now open, see https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. Chris on behalf of the ops meetups team -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Wed Feb 6 15:45:23 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 09:45:23 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> Message-ID: Folks- On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >> How can I specify a helm override to configure nova PCI alias when >> there are multiple aliases? >> I haven't been able to come up with a YAML compliant specification >> for this. >> >> Are there other alternatives to be able to specify this as an >> override? I assume that a nova Chart change would be required to >> support this custom one-alias-entry-per-line formatting. >> >> Any insights on how to achieve this in helm are welcomed. 
>> The following nova configuration format is desired, but not as yet >> supported by nova: >> [pci] >> alias = [{dict 1}, {dict 2}] >> >> The following snippet of YAML works for PCI passthrough_whitelist, >> where the value encoded is a JSON string: >> >> conf: >> nova: >> overrides: >> nova_compute: >> hosts: >> - conf: >> nova: >> pci: >> passthrough_whitelist: '[{"class_id": "030000", >> "address": "0000:00:02.0"}]' I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. Do you have a way to pull it down and try it? (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) -efried [1] https://review.openstack.org/#/c/635191/ From thierry at openstack.org Wed Feb 6 15:55:49 2019 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 6 Feb 2019 16:55:49 +0100 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org Message-ID: I started a thread on the Foundation mailing-list about the OSF becoming an OSI affiliate org: http://lists.openstack.org/pipermail/foundation/2019-February/002680.html Please follow-up there is you have any concerns or questions. -- Thierry Carrez (ttx) From Tim.Bell at cern.ch Wed Feb 6 16:00:40 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 6 Feb 2019 16:00:40 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: A few years ago, there was a discussion in one of the summit forums where users wanted to be able to come along to a generic OpenStack cloud and say "give me the flavor that has at least X GB RAM and Y GB disk space". At the time, the thoughts were that this could be done by doing a flavour list and then finding the smallest one which matched the requirements. Would that be an option or would it require some more Nova internals? For reserving, you could install the machine with a simple image and then let the user rebuild with their choice? Not sure if these meet what you'd like but it may allow a proof-of-concept without needing too many code changes. Tim -----Original Message----- From: Lars Kellogg-Stedman Date: Wednesday, 6 February 2019 at 16:44 To: Sean Mooney Cc: "Ansari, Mohhamad Naved" , Julia Kreger , Ian Ballou , Kristi Nikolla , "openstack-discuss at lists.openstack.org" , Tzu-Mainn Chen Subject: Re: [ironic] Hardware leasing with Ironic On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > shim service that sits between Ironic and the client. > that shim service could be nova, which already has multi tenancy. > > > > 2. Implement a Blazar plugin that is able to talk to whichever service > > in (1) is appropriate. > and nova is supported by blazar > > > > 3. Work with Blazar developers to implement any lease logic that we > > think is necessary. > +1 > by they im sure there is a reason why you dont want to have blazar drive > nova and nova dirve ironic but it seam like all the fucntionality would > already be there in that case. 
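To make Tim's "do a flavour list and pick the smallest match" idea above concrete, a rough client-side sketch with openstacksdk could look like the following; the cloud name and thresholds are made up, and expressing the GPU requirement would still need an agreed-on flavor extra spec or trait, which is part of what is being asked here:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumes a clouds.yaml entry

    # Keep only flavors that meet the minimum requirements.
    candidates = [f for f in conn.compute.flavors()
                  if f.ram >= 40 * 1024 and f.disk >= 100]

    # "Smallest" matching flavor: least RAM, then fewest vCPUs, then disk.
    best = min(candidates, key=lambda f: (f.ram, f.vcpus, f.disk))
    print("smallest matching flavor:", best.name)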
Sean, Being able to use Nova is a really attractive idea. I'm a little fuzzy on some of the details, though, starting with how to handle node discovery. A key goal is being able to parametrically request systems ("I want a system with a GPU and >= 40GB of memory"). With Nova, would this require effectively creating a flavor for every unique hardware configuration? Conceptually, I want "... create server --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not clear to me if that's something we could do now or if that would require changes to the way Nova handles baremetal scheduling. Additionally, we also want the ability to acquire a node without provisioning it, so that a consumer can use their own provisioning tool. From Nova's perspective, I guess this would be like requesting a system without specifying an image. Is that possible right now? I'm sure I'll have other questions, but these are the first few that crop up. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From martin.chlumsky at gmail.com Wed Feb 6 16:24:16 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Wed, 6 Feb 2019 11:24:16 -0500 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Thanks! Martin On Wed, Feb 6, 2019 at 11:20 AM Kulazhenkov, Yury wrote: > Hi Martin, > > > > Martin wrote: > > > So if we get volumes that are still mapped to hypervisors after deleting > the attached instances with sio_unmap_volume_before_deletion set to False, > there's a good chance it's a bug? > > Yes, volumes should be detached from host even without set > sio_unmap_volume_before_deletion = True. > > > > Yury > > > > *From:* Martin Chlumsky > *Sent:* Wednesday, February 6, 2019 6:20 PM > *To:* Kulazhenkov, Yury > *Cc:* Kanevsky, Arkady; jsbryant at electronicjungle.net; > openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav > *Subject:* Re: [Cinder][driver][ScaleIO] > > > > [EXTERNAL EMAIL] > > Hi Yury, > > Thank you for the clarification. > So if we get volumes that are still mapped to hypervisors after deleting > the attached instances with sio_unmap_volume_before_deletion set to False, > there's a good chance it's a bug? I will open a bug report in this case. > > Cheers, > > Martin > > > > On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury < > Yury.Kulazhenkov at dell.com> wrote: > > Hi Martin, > > Martin wrote: > > It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > If you remove or shelve instance from hypervisor host, then nova will > trigger ScaleIO to unmap volume from that host. > No issues should happen during deletion at this point, because volume is > already unmapped(unmounted). > No need to change sio_unmap_volume_before_deletion default value here. > > Martin wrote: > > What is the reasoning behind this option? > Setting sio_unmap_volume_before_deletion option to True means that cinder > driver will force unmount volume from ALL ScaleIO client nodes (not only > Openstack nodes) during volume deletion. > Enabling this option can be useful if you periodically detect compute > nodes with unmanaged ScaleIO volume mappings(volume mappings that not > managed by Openstack) in your environment. You can get such unmanaged > mappings in some cases, for example if there was hypervisor node power > failure. 
If during that power failure instances with mapped volumes were > moved to another host, than unmanaged mappings may appear on failed node > after its recovery. > > Martin wrote: > >Why would we ever set this > > to False and why is it False by default? > Force unmounting volumes from ALL ScaleIO clients is additional overhead. > It doesn't required in most environments. > > > Best regards, > Yury > > -----Original Message----- > From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM > To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; > Walsh, Helen; Belogrudov, Vladislav > Subject: RE: [Cinder][driver][ScaleIO] > > Adding Vlad who is the right person for ScaleIO driver. > > -----Original Message----- > From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM > To: openstack-discuss at lists.openstack.org; Walsh, Helen > Subject: Re: [Cinder][driver][ScaleIO] > > Adding Helen Walsh to this as she may be able to provide insight. > > Jay > > On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > > Hello, > > > > We are using EMC ScaleIO as our backend to cinder. > > When we delete VMs that have attached volumes and then try deleting > > said volumes, the volumes will sometimes end in state error_deleting. > > The state is reached because for some reason the volumes are still > > mapped (in the ScaleIO sense of the word) to the hypervisor despite > > the VM being deleted. > > We fixed the issue by setting the following option to True in > cinder.conf: > > > > # Unmap volume before deletion. (boolean value) > > sio_unmap_volume_before_deletion=False > > > > > > What is the reasoning behind this option? Why would we ever set this > > to False and why is it False by default? It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > > > > Thank you, > > > > Martin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Feb 6 16:48:39 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 17:48:39 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190206153219.yyir5m5tyw7bvrj7@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> Message-ID: The 2 openstack Installations do not share anything. The manila on each one works on different netapp storage, but the 2 netapp can be synchronized. Site A with an openstack instalkation and netapp A. Site B with an openstack with netapp B. Netapp A and netapp B can be synchronized via network. Ignazio Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > >Hello Tom, I think cases you suggested do not meet my needs. > >I have an openstack installation A with a fas netapp A. > >I have another openstack installation B with fas netapp B. > >I would like to use manila replication dr. > >If I replicate manila volumes from A to B the manila db on B does not > >knows anything about the replicated volume but only the backends on > netapp > >B. Can I discover replicated volumes on openstack B? > >Or I must modify the manila db on B? > >Regards > >Ignazio > > I guess I don't understand your use case. Do Openstack installation A > and Openstack installation B know *anything* about one another? For > example, are their keystone and neutron databases somehow synced? 
Are > they going to be operative for the same set of manila shares at the > same time, or are you contemplating a migration of the shares from > installation A to installation B? > > Probably it would be helpful to have a statement of the problem that > you intend to solve before we consider the potential mechanisms for > solving it. > > Cheers, > > -- Tom > > > > > > >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > > > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >> >Thanks Goutham. > >> >If there are not mantainers for this driver I will switch on ceph and > or > >> >netapp. > >> >I am already using netapp but I would like to export shares from an > >> >openstack installation to another. > >> >Since these 2 installations do non share any openstack component and > have > >> >different openstack database, I would like to know it is possible . > >> >Regards > >> >Ignazio > >> > >> Hi Ignazio, > >> > >> If by "export shares from an openstack installation to another" you > >> mean removing them from management by manila in installation A and > >> instead managing them by manila in installation B then you can do that > >> while leaving them in place on your Net App back end using the manila > >> "manage-unmanage" administrative commands. Here's some documentation > >> [1] that should be helpful. > >> > >> If on the other hand by "export shares ... to another" you mean to > >> leave the shares under management of manila in installation A but > >> consume them from compute instances in installation B it's all about > >> the networking. One can use manila to "allow-access" to consumers of > >> shares anywhere but the consumers must be able to reach the "export > >> locations" for those shares and mount them. > >> > >> Cheers, > >> > >> -- Tom Barron > >> > >> [1] > >> > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >> > > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > >> gouthampravi at gmail.com> > >> >ha scritto: > >> > > >> >> Hi Ignazio, > >> >> > >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> >> wrote: > >> >> > > >> >> > Hello All, > >> >> > I installed manila on my queens openstack based on centos 7. > >> >> > I configured two servers with glusterfs replocation and ganesha > nfs. > >> >> > I configured my controllers octavia,conf but when I try to create a > >> share > >> >> > the manila scheduler logs reports: > >> >> > > >> >> > Failed to schedule create_share: No valid host was found. Failed to > >> find > >> >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> >> NoValidHost: No valid host was found. Failed to find a weighted host, > >> the > >> >> last executed filter was CapabilitiesFilter. > >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> 89f76bc5de5545f381da2c10c7df7f15 > >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> > >> >> > >> >> The scheduler failure points out that you have a mismatch in > >> >> expectations (backend capabilities vs share type extra-specs) and > >> >> there was no host to schedule your share to. So a few things to check > >> >> here: > >> >> > >> >> - What is the share type you're using? 
Can you list the share type > >> >> extra-specs and confirm that the backend (your GlusterFS storage) > >> >> capabilities are appropriate with whatever you've set up as > >> >> extra-specs ($ manila pool-list --detail)? > >> >> - Is your backend operating correctly? You can list the manila > >> >> services ($ manila service-list) and see if the backend is both > >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> >> problem with the driver initialization, please enable debug logging, > >> >> and look at the log file for the manila-share service, you might see > >> >> why and be able to fix it. > >> >> > >> >> > >> >> Please be aware that we're on a look out for a maintainer for the > >> >> GlusterFS driver for the past few releases. We're open to bug fixes > >> >> and maintenance patches, but there is currently no active maintainer > >> >> for this driver. > >> >> > >> >> > >> >> > I did not understand if controllers node must be connected to the > >> >> network where shares must be exported for virtual machines, so my > >> glusterfs > >> >> are connected on the management network where openstack controllers > are > >> >> conencted and to the network where virtual machine are connected. > >> >> > > >> >> > My manila.conf section for glusterfs section is the following > >> >> > > >> >> > [gluster-manila565] > >> >> > driver_handles_share_servers = False > >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> >> > glusterfs_ganesha_server_username = root > >> >> > glusterfs_nfs_server_type = Ganesha > >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> >> > #glusterfs_servers = root at 10.102.185.19 > >> >> > ganesha_config_dir = /etc/ganesha > >> >> > > >> >> > > >> >> > PS > >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> >> > > >> >> > 10.102.189.0/24 is the shared network inside openstack where > virtual > >> >> machines are connected. > >> >> > > >> >> > The gluster servers are connected on both. > >> >> > > >> >> > > >> >> > Any help, please ? > >> >> > > >> >> > Ignazio > >> >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Feb 6 17:18:37 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 6 Feb 2019 12:18:37 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: I'm all signed up. See you in Berlin! On Wed, Feb 6, 2019, 10:43 AM Chris Morgan Dear All, > The Evenbrite for the next ops meetup is now open, see > > > https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > > Thanks for Allison Price from the foundation for making this for us. We'll > be sharing more details on the event soon. > > Chris > on behalf of the ops meetups team > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Wed Feb 6 06:12:08 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Wed, 6 Feb 2019 06:12:08 +0000 Subject: [Heat] Reg accessing variables of resource group heat api Message-ID: Hi , We are developing heat templates for our vnf deployment .It includes multiple resources .We want to repeat the resource and hence used the api RESOURCE GROUP . 
Attached are the templates which we used Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has the resource group api with count . We want to access the variables of resource in set1.yaml while repeating it with count .Eg . port name ,port fixed ip address we want to change in each set . Please let us know how we can have a variable with each repeated resource . Thanks, A.Nanthini -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: set1.yaml.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: setrepeat.yaml.txt URL: From linus.nilsson at it.uu.se Wed Feb 6 12:04:57 2019 From: linus.nilsson at it.uu.se (Linus Nilsson) Date: Wed, 6 Feb 2019 13:04:57 +0100 Subject: Rocky and older Ceph compatibility Message-ID: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Hi all, I'm working on upgrading our cloud, which consists of a block storage system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As part of the upgrade plan we are discussing first going to Rocky while keeping the block system at the "Kraken" release. It would be helpful to know if anyone has attempted to run the Rocky Cinder/Glance drivers with Ceph Kraken or older? References or documentation is welcomed. I fail to find much information online, but perhaps I'm looking in the wrong places or I'm asking a question with an obvious answer. Thanks! Best regards, Linus UPPMAX När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy From Yury.Kulazhenkov at dell.com Wed Feb 6 14:35:11 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Wed, 6 Feb 2019 14:35:11 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Martin, Martin wrote: > It seems you would always > want to unmap the volume from the hypervisor before deleting it. If you remove or shelve instance from hypervisor host, then nova will trigger ScaleIO to unmap volume from that host. No issues should happen during deletion at this point, because volume is already unmapped(unmounted). No need to change sio_unmap_volume_before_deletion default value here. Martin wrote: > What is the reasoning behind this option? Setting sio_unmap_volume_before_deletion option to True means that cinder driver will force unmount volume from ALL ScaleIO client nodes (not only Openstack nodes) during volume deletion. Enabling this option can be useful if you periodically detect compute nodes with unmanaged ScaleIO volume mappings(volume mappings that not managed by Openstack) in your environment. You can get such unmanaged mappings in some cases, for example if there was hypervisor node power failure. If during that power failure instances with mapped volumes were moved to another host, than unmanaged mappings may appear on failed node after its recovery. 
Martin wrote: >Why would we ever set this > to False and why is it False by default? Force unmounting volumes from ALL ScaleIO clients is additional overhead. It doesn't required in most environments. Best regards, Yury -----Original Message----- From: Arkady.Kanevsky at dell.com Sent: Wednesday, February 6, 2019 7:24 AM To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: RE: [Cinder][driver][ScaleIO] Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Yury.Kulazhenkov at dell.com Wed Feb 6 16:19:15 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Wed, 6 Feb 2019 16:19:15 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Martin, Martin wrote: > So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? Yes, volumes should be detached from host even without set sio_unmap_volume_before_deletion = True. Yury From: Martin Chlumsky Sent: Wednesday, February 6, 2019 6:20 PM To: Kulazhenkov, Yury Cc: Kanevsky, Arkady; jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: Re: [Cinder][driver][ScaleIO] [EXTERNAL EMAIL] Hi Yury, Thank you for the clarification. So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? I will open a bug report in this case. Cheers, Martin On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury > wrote: Hi Martin, Martin wrote: > It seems you would always > want to unmap the volume from the hypervisor before deleting it. If you remove or shelve instance from hypervisor host, then nova will trigger ScaleIO to unmap volume from that host. No issues should happen during deletion at this point, because volume is already unmapped(unmounted). No need to change sio_unmap_volume_before_deletion default value here. Martin wrote: > What is the reasoning behind this option? Setting sio_unmap_volume_before_deletion option to True means that cinder driver will force unmount volume from ALL ScaleIO client nodes (not only Openstack nodes) during volume deletion. 
Enabling this option can be useful if you periodically detect compute nodes with unmanaged ScaleIO volume mappings(volume mappings that not managed by Openstack) in your environment. You can get such unmanaged mappings in some cases, for example if there was hypervisor node power failure. If during that power failure instances with mapped volumes were moved to another host, than unmanaged mappings may appear on failed node after its recovery. Martin wrote: >Why would we ever set this > to False and why is it False by default? Force unmounting volumes from ALL ScaleIO clients is additional overhead. It doesn't required in most environments. Best regards, Yury -----Original Message----- From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: RE: [Cinder][driver][ScaleIO] Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed Feb 6 17:37:29 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 6 Feb 2019 12:37:29 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: See you there! On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > I'm all signed up. See you in Berlin! > > On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >> Dear All, >> The Evenbrite for the next ops meetup is now open, see >> >> >> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 >> >> Thanks for Allison Price from the foundation for making this for us. >> We'll be sharing more details on the event soon. >> >> Chris >> on behalf of the ops meetups team >> >> -- >> Chris Morgan >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Feb 6 17:55:17 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 6 Feb 2019 12:55:17 -0500 Subject: Rocky and older Ceph compatibility In-Reply-To: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: > > Hi all, > > I'm working on upgrading our cloud, which consists of a block storage > system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA > Newton. 
We want to migrate to Ceph Mimic and OSA Rocky respectively. As > part of the upgrade plan we are discussing first going to Rocky while > keeping the block system at the "Kraken" release. > For the most part it comes down to your client libraries. Personally, I would upgrade Ceph first, leaving Openstack running older client libraries. I did this with Jewel clients talking to a Luminous cluster, so you should be fine with K->M. Then, when you upgrade Openstack, your client libraries can get updated along with it. If you do Openstack first, you'll need to come back around and update your clients, and that will require you to restart everything a second time. . > It would be helpful to know if anyone has attempted to run the Rocky > Cinder/Glance drivers with Ceph Kraken or older? > I haven't done this specific combination, but I have mixed and matched Openstack and Ceph versions without any issues. I have MItaka, Queens, and Rocky all talking to Luminous without incident. -Erik > References or documentation is welcomed. I fail to find much information > online, but perhaps I'm looking in the wrong places or I'm asking a > question with an obvious answer. > > Thanks! > > Best regards, > Linus > UPPMAX > > > > > > > > > När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ > > E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy > From bharat at stackhpc.com Wed Feb 6 18:04:50 2019 From: bharat at stackhpc.com (Bharat Kunwar) Date: Wed, 6 Feb 2019 18:04:50 +0000 Subject: [magnum][kayobe][kolla-ansible] heat-container-agent reports that `publicURL endpoint for orchestration service in null region not found` and `Source [heat] Unavailable.` Message-ID: I have a Magnum deployment using stable/queens which appears to be successful in every way when you look at `/var/log/cloud-init.log` and `/var/log/cloud-init-output.log`. They look something like this: http://paste.openstack.org/show/744620/ http://paste.openstack.org/show/744621/ However, `heat-container-agent` log reports this on repeat: Feb 06 17:56:38 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: Source [heat] Unavailable. Feb 06 17:56:38 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: /var/lib/os-collect-config/local-data not found. Skipping Feb 06 17:56:54 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: publicURL endpoint for orchestration service in null region not found These are the parts of the heat stack that stay in CREATE_IN_PROGRESS for a long time before eventually failing: http://paste.openstack.org/show/744638/ As a result, the workers never get created. Here is what my magnum.conf looks like: http://paste.openstack.org/show/744625 . And this is what the heat params looks like: http://paste.openstack.org/show/744639/ I have tried setting `send_cluster_metrics=False` and also tried adding `region_name` to [keystone*] inside `magnum.conf`. What else should I try? Best Bharat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 6 18:17:53 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 Feb 2019 19:17:53 +0100 Subject: [all] Denver Open Infrastructure Summit Community Contributor Awards! Message-ID: Hello Everyone! 
As we approach the Summit (still a ways away thankfully), its time to kick off the Community Contributor Award nominations[1]! For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in our communities that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. As always, participation is voluntary :) Nominations will close on April 14th at 7:00 UTC and recipients will be announced at the Open Infrastructure Summit in Denver[2]. Recipients will be selected by a panel of top-level OSF project representatives who wish to participate. Finally, congrats again to recipients in Berlin[3]! -Kendall Nelson (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/train_cca_nominations [2]https://www.openstack.org/summit/denver-2019/ [3] http://superuser.openstack.org/articles/openstack-community-contributor-awards-berlin-summit-edition/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Feb 6 18:52:10 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 6 Feb 2019 18:52:10 +0000 (GMT) Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: On Wed, 6 Feb 2019, Chris Dent wrote: > A reminder that as discussed at the last placement extraction > checkin meeting [1] we've got another one today at 1700 UTC. Join > the #openstack-placement IRC channel around then if you are > interested, and a google hangout url will be provided. We did this. What follows are some notes. TL;DR: We're going to keep the placement code in nova, but freeze it. The extracted code is unfrozen and open for API changes. We did some warning up in IRC throughout the day, which may be useful context for readers: * update on tripleo situation: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T14:19:59 * update on osa situation, especially testing: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T15:30:46 * leaving the placement code in nova: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T16:43:19 On the call we used the extraction etherpad to take notes and be an agenda: https://etherpad.openstack.org/p/placement-extract-stein-5 The main points to summarize are: * Progress is going well in TripleO with regard to being able to deploy (by lyarwood) extracted placement but upgrades, especially CI of those upgrades is going to be challenging if not impossible in the near term. This is a relatively new development, resulting from a change in procedure within tripleo. Not deleting the code from nova will help when the time comes, later, to test those upgrades. * In OSA there have been some resourcing gaps on the work, but mnaser is currently iterating on making things go. cdent is going to help add some placement-only live tests (to avoid deploying nova) to the code mnaser is working (by mnaser, cdent). As with tripleo, upgrade testing can be eased by leaving the placement code in nova. 
* The nested VGPU reshaper work is undergoing some light refactoring but confidence is high that it is ready (by bauzas). Functional testing is ready to go too. A manual test with real hardware was done some months ago, but not on the extracted code. It was decided that doing this again but we took it off the requirements because nobody has easy access to the right hardware[1]. * Based on the various issues above, and a general sense that it was the right thing to do, we're not going to delete the placement code from nova. This will allow upgrade testing Both OSA and TripleO are currently able to test with not-extracted placement, and will continue to do so. A patch will be made to nova to add a job using OSA (by mnaser). Other avenues are being explored to make sure the kept-in-nova placement code is tested. The previous functional and unit tests were already deleted and devstack and grenade use extracted. **The code still in nova is now considered frozen and nova's use of placement will be frozen (that is, it will assume microversion 1.30 or less) for the rest of Stein [2].** * The documentation changes which are currently stacked behind the change to delete the placement code from nova will be pulled out (by cdent). * The API presented by the extracted placement is now allowed to change. There are a few pending specs that we can make progress on if people would like to do so: https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree https://blueprints.launchpad.net/nova/+spec/any-traits-in-allocation-candidates-query https://blueprints.launchpad.net/nova/+spec/mixing-required-traits-with-any-traits https://blueprints.launchpad.net/nova/+spec/negative-aggregate-membership Nobody has yet made any commitment to do this stuff, and there's a general sense that people are busy, but if there is time and interest we should talk about it. We did not schedule a next check in meeting. When one needs to happen, which it will, we'll figure that out and make an announcement. Thanks for your attention. If I made any errors above, or left something out, please followup. If you have questions, please ask them. [1] Our employers should really be ashamed of themselves. This happens over and over and over again, across OpenStack, and is a huge drag on velocity. [2] While technically it would be possible to do version discovery or version-guarded changes, to remove ambiguity and because nova is already overbooked and moving slowly, easier to just say "no" and leave it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tpb at dyncloud.net Wed Feb 6 20:16:19 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 6 Feb 2019 15:16:19 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> Message-ID: <20190206201619.o6turxaps6iv65p7@barron.net> On 06/02/19 17:48 +0100, Ignazio Cassano wrote: >The 2 openstack Installations do not share anything. The manila on each one >works on different netapp storage, but the 2 netapp can be synchronized. >Site A with an openstack instalkation and netapp A. >Site B with an openstack with netapp B. >Netapp A and netapp B can be synchronized via network. >Ignazio OK, thanks. 
You can likely get the share data and its netapp metadata to show up on B via replication and (gouthamr may explain details) but you will lose all the Openstack/manila information about the share unless Openstack database info (more than just manila tables) is imported. That may be OK foryour use case. -- Tom > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >> >Hello Tom, I think cases you suggested do not meet my needs. >> >I have an openstack installation A with a fas netapp A. >> >I have another openstack installation B with fas netapp B. >> >I would like to use manila replication dr. >> >If I replicate manila volumes from A to B the manila db on B does not >> >knows anything about the replicated volume but only the backends on >> netapp >> >B. Can I discover replicated volumes on openstack B? >> >Or I must modify the manila db on B? >> >Regards >> >Ignazio >> >> I guess I don't understand your use case. Do Openstack installation A >> and Openstack installation B know *anything* about one another? For >> example, are their keystone and neutron databases somehow synced? Are >> they going to be operative for the same set of manila shares at the >> same time, or are you contemplating a migration of the shares from >> installation A to installation B? >> >> Probably it would be helpful to have a statement of the problem that >> you intend to solve before we consider the potential mechanisms for >> solving it. >> >> Cheers, >> >> -- Tom >> >> > >> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: >> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >> >Thanks Goutham. >> >> >If there are not mantainers for this driver I will switch on ceph and >> or >> >> >netapp. >> >> >I am already using netapp but I would like to export shares from an >> >> >openstack installation to another. >> >> >Since these 2 installations do non share any openstack component and >> have >> >> >different openstack database, I would like to know it is possible . >> >> >Regards >> >> >Ignazio >> >> >> >> Hi Ignazio, >> >> >> >> If by "export shares from an openstack installation to another" you >> >> mean removing them from management by manila in installation A and >> >> instead managing them by manila in installation B then you can do that >> >> while leaving them in place on your Net App back end using the manila >> >> "manage-unmanage" administrative commands. Here's some documentation >> >> [1] that should be helpful. >> >> >> >> If on the other hand by "export shares ... to another" you mean to >> >> leave the shares under management of manila in installation A but >> >> consume them from compute instances in installation B it's all about >> >> the networking. One can use manila to "allow-access" to consumers of >> >> shares anywhere but the consumers must be able to reach the "export >> >> locations" for those shares and mount them. >> >> >> >> Cheers, >> >> >> >> -- Tom Barron >> >> >> >> [1] >> >> >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> >> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> >> gouthampravi at gmail.com> >> >> >ha scritto: >> >> > >> >> >> Hi Ignazio, >> >> >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> >> wrote: >> >> >> > >> >> >> > Hello All, >> >> >> > I installed manila on my queens openstack based on centos 7. >> >> >> > I configured two servers with glusterfs replocation and ganesha >> nfs. 
>> >> >> > I configured my controllers octavia,conf but when I try to create a >> >> share >> >> >> > the manila scheduler logs reports: >> >> >> > >> >> >> > Failed to schedule create_share: No valid host was found. Failed to >> >> find >> >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> >> the >> >> >> last executed filter was CapabilitiesFilter. >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> 89f76bc5de5545f381da2c10c7df7f15 >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> >> expectations (backend capabilities vs share type extra-specs) and >> >> >> there was no host to schedule your share to. So a few things to check >> >> >> here: >> >> >> >> >> >> - What is the share type you're using? Can you list the share type >> >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> >> capabilities are appropriate with whatever you've set up as >> >> >> extra-specs ($ manila pool-list --detail)? >> >> >> - Is your backend operating correctly? You can list the manila >> >> >> services ($ manila service-list) and see if the backend is both >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> >> problem with the driver initialization, please enable debug logging, >> >> >> and look at the log file for the manila-share service, you might see >> >> >> why and be able to fix it. >> >> >> >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> >> and maintenance patches, but there is currently no active maintainer >> >> >> for this driver. >> >> >> >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> >> network where shares must be exported for virtual machines, so my >> >> glusterfs >> >> >> are connected on the management network where openstack controllers >> are >> >> >> conencted and to the network where virtual machine are connected. >> >> >> > >> >> >> > My manila.conf section for glusterfs section is the following >> >> >> > >> >> >> > [gluster-manila565] >> >> >> > driver_handles_share_servers = False >> >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> >> > glusterfs_ganesha_server_username = root >> >> >> > glusterfs_nfs_server_type = Ganesha >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> >> > ganesha_config_dir = /etc/ganesha >> >> >> > >> >> >> > >> >> >> > PS >> >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> >> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where >> virtual >> >> >> machines are connected. >> >> >> > >> >> >> > The gluster servers are connected on both. >> >> >> > >> >> >> > >> >> >> > Any help, please ? 
>> >> >> > >> >> >> > Ignazio >> >> >> >> >> >> From gouthampravi at gmail.com Wed Feb 6 20:26:18 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 6 Feb 2019 12:26:18 -0800 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190206201619.o6turxaps6iv65p7@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: > > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: > >The 2 openstack Installations do not share anything. The manila on each one > >works on different netapp storage, but the 2 netapp can be synchronized. > >Site A with an openstack instalkation and netapp A. > >Site B with an openstack with netapp B. > >Netapp A and netapp B can be synchronized via network. > >Ignazio > > OK, thanks. > > You can likely get the share data and its netapp metadata to show up > on B via replication and (gouthamr may explain details) but you will > lose all the Openstack/manila information about the share unless > Openstack database info (more than just manila tables) is imported. > That may be OK foryour use case. > > -- Tom Checking if I understand your request correctly, you have setup manila's "dr" replication in OpenStack A and now want to move your shares from OpenStack A to OpenStack B's manila. Is this correct? If yes, you must: * Promote your replicas - this will make the mirrored shares available. This action does not delete the old "primary" shares though, you need to clean them up yourself, because manila will attempt to reverse the replication relationships if the primary shares are still accessible * Note the export locations and Unmanage your shares from OpenStack A's manila * Manage your shares in OpenStack B's manila with the export locations you noted. > > > > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > > > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > >> >Hello Tom, I think cases you suggested do not meet my needs. > >> >I have an openstack installation A with a fas netapp A. > >> >I have another openstack installation B with fas netapp B. > >> >I would like to use manila replication dr. > >> >If I replicate manila volumes from A to B the manila db on B does not > >> >knows anything about the replicated volume but only the backends on > >> netapp > >> >B. Can I discover replicated volumes on openstack B? > >> >Or I must modify the manila db on B? > >> >Regards > >> >Ignazio > >> > >> I guess I don't understand your use case. Do Openstack installation A > >> and Openstack installation B know *anything* about one another? For > >> example, are their keystone and neutron databases somehow synced? Are > >> they going to be operative for the same set of manila shares at the > >> same time, or are you contemplating a migration of the shares from > >> installation A to installation B? > >> > >> Probably it would be helpful to have a statement of the problem that > >> you intend to solve before we consider the potential mechanisms for > >> solving it. > >> > >> Cheers, > >> > >> -- Tom > >> > >> > > >> > > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > >> > > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >> >> >Thanks Goutham. > >> >> >If there are not mantainers for this driver I will switch on ceph and > >> or > >> >> >netapp. > >> >> >I am already using netapp but I would like to export shares from an > >> >> >openstack installation to another. 
> >> >> >Since these 2 installations do non share any openstack component and > >> have > >> >> >different openstack database, I would like to know it is possible . > >> >> >Regards > >> >> >Ignazio > >> >> > >> >> Hi Ignazio, > >> >> > >> >> If by "export shares from an openstack installation to another" you > >> >> mean removing them from management by manila in installation A and > >> >> instead managing them by manila in installation B then you can do that > >> >> while leaving them in place on your Net App back end using the manila > >> >> "manage-unmanage" administrative commands. Here's some documentation > >> >> [1] that should be helpful. > >> >> > >> >> If on the other hand by "export shares ... to another" you mean to > >> >> leave the shares under management of manila in installation A but > >> >> consume them from compute instances in installation B it's all about > >> >> the networking. One can use manila to "allow-access" to consumers of > >> >> shares anywhere but the consumers must be able to reach the "export > >> >> locations" for those shares and mount them. > >> >> > >> >> Cheers, > >> >> > >> >> -- Tom Barron > >> >> > >> >> [1] > >> >> > >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >> >> > > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > >> >> gouthampravi at gmail.com> > >> >> >ha scritto: > >> >> > > >> >> >> Hi Ignazio, > >> >> >> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> >> >> wrote: > >> >> >> > > >> >> >> > Hello All, > >> >> >> > I installed manila on my queens openstack based on centos 7. > >> >> >> > I configured two servers with glusterfs replocation and ganesha > >> nfs. > >> >> >> > I configured my controllers octavia,conf but when I try to create a > >> >> share > >> >> >> > the manila scheduler logs reports: > >> >> >> > > >> >> >> > Failed to schedule create_share: No valid host was found. Failed to > >> >> find > >> >> >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> >> >> NoValidHost: No valid host was found. Failed to find a weighted host, > >> >> the > >> >> >> last executed filter was CapabilitiesFilter. > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> 89f76bc5de5545f381da2c10c7df7f15 > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> >> > >> >> >> > >> >> >> The scheduler failure points out that you have a mismatch in > >> >> >> expectations (backend capabilities vs share type extra-specs) and > >> >> >> there was no host to schedule your share to. So a few things to check > >> >> >> here: > >> >> >> > >> >> >> - What is the share type you're using? Can you list the share type > >> >> >> extra-specs and confirm that the backend (your GlusterFS storage) > >> >> >> capabilities are appropriate with whatever you've set up as > >> >> >> extra-specs ($ manila pool-list --detail)? > >> >> >> - Is your backend operating correctly? You can list the manila > >> >> >> services ($ manila service-list) and see if the backend is both > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> >> >> problem with the driver initialization, please enable debug logging, > >> >> >> and look at the log file for the manila-share service, you might see > >> >> >> why and be able to fix it. 
> >> >> >> > >> >> >> > >> >> >> Please be aware that we're on a look out for a maintainer for the > >> >> >> GlusterFS driver for the past few releases. We're open to bug fixes > >> >> >> and maintenance patches, but there is currently no active maintainer > >> >> >> for this driver. > >> >> >> > >> >> >> > >> >> >> > I did not understand if controllers node must be connected to the > >> >> >> network where shares must be exported for virtual machines, so my > >> >> glusterfs > >> >> >> are connected on the management network where openstack controllers > >> are > >> >> >> conencted and to the network where virtual machine are connected. > >> >> >> > > >> >> >> > My manila.conf section for glusterfs section is the following > >> >> >> > > >> >> >> > [gluster-manila565] > >> >> >> > driver_handles_share_servers = False > >> >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> >> >> > glusterfs_ganesha_server_username = root > >> >> >> > glusterfs_nfs_server_type = Ganesha > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> >> >> > #glusterfs_servers = root at 10.102.185.19 > >> >> >> > ganesha_config_dir = /etc/ganesha > >> >> >> > > >> >> >> > > >> >> >> > PS > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> >> >> > > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where > >> virtual > >> >> >> machines are connected. > >> >> >> > > >> >> >> > The gluster servers are connected on both. > >> >> >> > > >> >> >> > > >> >> >> > Any help, please ? > >> >> >> > > >> >> >> > Ignazio > >> >> >> > >> >> > >> From James.Gauld at windriver.com Wed Feb 6 21:24:39 2019 From: James.Gauld at windriver.com (Gauld, James) Date: Wed, 6 Feb 2019 21:24:39 +0000 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> Message-ID: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Eric, I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. The nova solution WIP you coded would work. It requires slight documentation change to remove the one-line-per-entry input limitation. The following helm multistring method works for OSLO.conf compatible with oslo_config.MultiStringOpt(). I get correct nova.conf output if I individually JSON encode each string in the list of values (eg, for PCI alias, PCI passthrough whitelist). Here is sample YAML for multistring : conf: nova: pci: alias: type: multistring values: - '{"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}' - '{"class_id": "030000", "name": "gpu"}' Here is the resultant nova.conf : [pci] alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"} alias = {"class_id": "030000", "name": "gpu"} This solution does not require a change on nova side, or a change to nova helm chart. IMO, I did not find the multistring example obvious when I was looking for documentation. 
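For anyone else who trips over this, the behaviour appears to be plain oslo.config MultiStrOpt handling: each repeated key becomes one element of a list, which the consuming code then JSON-decodes. A minimal standalone sketch (not nova code; the test.conf file name and the values are only illustrative, taken from the snippet above):

from oslo_config import cfg
import json

# Register a repeatable string option, the same kind of option that
# [pci]/alias appears to be in nova.
opts = [cfg.MultiStrOpt('alias', default=[],
                        help='Repeatable; each occurrence is one JSON-encoded dict.')]
conf = cfg.ConfigOpts()
conf.register_opts(opts, group='pci')

# Assumes a local test.conf containing a [pci] section with the two
# "alias =" lines from the nova.conf output shown above.
conf(args=[], default_config_files=['test.conf'])

for entry in conf.pci.alias:
    print(json.loads(entry))  # each repeated line decodes to one alias dict

Running that against the nova.conf snippet above should print the two alias dicts, which is the list the driver ends up iterating over.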
-Jim Gauld -----Original Message----- From: Eric Fried [mailto:openstack at fried.cc] Sent: February-06-19 10:45 AM To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-helm] How to specify nova override for multiple pci alias Folks- On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >> How can I specify a helm override to configure nova PCI alias when >> there are multiple aliases? >> I haven't been able to come up with a YAML compliant specification >> for this. >> >> Are there other alternatives to be able to specify this as an >> override? I assume that a nova Chart change would be required to >> support this custom one-alias-entry-per-line formatting. >> >> Any insights on how to achieve this in helm are welcomed. >> The following nova configuration format is desired, but not as yet >> supported by nova: >> [pci] >> alias = [{dict 1}, {dict 2}] >> >> The following snippet of YAML works for PCI passthrough_whitelist, >> where the value encoded is a JSON string: >> >> conf: >> nova: >> overrides: >> nova_compute: >> hosts: >> - conf: >> nova: >> pci: >> passthrough_whitelist: '[{"class_id": "030000", >> "address": "0000:00:02.0"}]' I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. Do you have a way to pull it down and try it? (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) -efried [1] https://review.openstack.org/#/c/635191/ From lars at redhat.com Wed Feb 6 21:32:22 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 6 Feb 2019 16:32:22 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: <20190206213222.43nin24mkbqhsrw7@redhat.com> On Wed, Feb 06, 2019 at 04:00:40PM +0000, Tim Bell wrote: > > A few years ago, there was a discussion in one of the summit forums > where users wanted to be able to come along to a generic OpenStack > cloud and say "give me the flavor that has at least X GB RAM and Y > GB disk space". At the time, the thoughts were that this could be > done by doing a flavour list and then finding the smallest one which > matched the requirements. The problem is that "flavor list" part: that implies that every time someone adds a new hardware configuration to the environment (maybe they add a new group of machines, or maybe they simply upgrade RAM/disk/etc in some existing nodes), they need to manually create corresponding flavors. That also implies that you could quickly end up with an egregious number of flavors to represent different types of available hardware. Really, what we want is the ability to select hardware based on Ironic introspection data, without any manual steps in between. I'm still not clear on whether there's any way to make this work with existing tools, or if it makes sense to figure out to make Nova do this or if we need something else sitting in front of Ironic. > For reserving, you could install the machine with a simple image and then let the user rebuild with their choice? 
That's probably a fine workaround for now. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From openstack at fried.cc Wed Feb 6 21:34:50 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 15:34:50 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Message-ID: On 2/6/19 3:24 PM, Gauld, James wrote: > Eric, > I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. > This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. > > The nova solution WIP you coded would work. It requires slight documentation change to remove the one-line-per-entry input limitation. > > The following helm multistring method works for OSLO.conf compatible with oslo_config.MultiStringOpt(). > I get correct nova.conf output if I individually JSON encode each string in the list of values (eg, for PCI alias, PCI passthrough whitelist). > > Here is sample YAML for multistring : > conf: > nova: > pci: > alias: > type: multistring > values: > - '{"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}' > - '{"class_id": "030000", "name": "gpu"}' > > Here is the resultant nova.conf : > [pci] > alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"} > alias = {"class_id": "030000", "name": "gpu"} > > This solution does not require a change on nova side, or a change to nova helm chart. > IMO, I did not find the multistring example obvious when I was looking for documentation. > > -Jim Gauld > > -----Original Message----- > From: Eric Fried [mailto:openstack at fried.cc] > Sent: February-06-19 10:45 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-helm] How to specify nova override for multiple pci alias > > Folks- > > On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: >> On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >>> How can I specify a helm override to configure nova PCI alias when >>> there are multiple aliases? >>> I haven't been able to come up with a YAML compliant specification >>> for this. >>> >>> Are there other alternatives to be able to specify this as an >>> override? I assume that a nova Chart change would be required to >>> support this custom one-alias-entry-per-line formatting. >>> >>> Any insights on how to achieve this in helm are welcomed. > > > >>> The following nova configuration format is desired, but not as yet >>> supported by nova: >>> [pci] >>> alias = [{dict 1}, {dict 2}] >>> >>> The following snippet of YAML works for PCI passthrough_whitelist, >>> where the value encoded is a JSON string: >>> >>> conf: >>> nova: >>> overrides: >>> nova_compute: >>> hosts: >>> - conf: >>> nova: >>> pci: >>> passthrough_whitelist: '[{"class_id": "030000", >>> "address": "0000:00:02.0"}]' > > > > I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. 
Do you have a way to pull it down and try it? > > (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) > > -efried > > [1] https://review.openstack.org/#/c/635191/ > From openstack at fried.cc Wed Feb 6 21:38:06 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 15:38:06 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Message-ID: <42206c34-dd17-75ef-ba02-2b5f8f905e21@fried.cc> James- On 2/6/19 3:24 PM, Gauld, James wrote: > Eric, > I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. > This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. Ah, I'm glad you got it figured out. I'll abandon my change. > IMO, I did not find the multistring example obvious when I was looking for documentation. I don't know if you're referring to the nova docs or something else, but I couldn't agree more. The syntax of both this and passthrough_whitelist is very confusing. We're working to come up with nicer ways to talk about device passthrough. It's been a multi-release effort, but we're getting closer all the time. Stay tuned. -efried From mriedemos at gmail.com Wed Feb 6 22:00:44 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 6 Feb 2019 16:00:44 -0600 Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: On 2/6/2019 12:52 PM, Chris Dent wrote: > Thanks for your attention. If I made any errors above, or left > something out, please followup. If you have questions, please ask > them. Thanks for the summary, it matches with my recollection and notes in the etherpad. -- Thanks, Matt From gouthampravi at gmail.com Wed Feb 6 22:42:37 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 6 Feb 2019 14:42:37 -0800 Subject: Manila Upstream Bugs In-Reply-To: References: Message-ID: First off, thank you so much for starting this effort Jason! Responses inline: On Tue, Feb 5, 2019 at 12:37 PM Jason Grosso wrote: > Hello All, > > > This is an email to the OpenStack manila upstream community but anyone can chime in would be great to get some input from other projects and how they organize their upstream defects and what tools they use... > > > > My goal here is to make the upstream manila bug process easier, cleaner, and more effective. > > > My thoughts to accomplish this are by establishing a process that we can all agree upon. > > > > I have the following points/questions that I wanted to address to help create a more effective process: > > > > Can we as a group go through some of the manila bugs so we can drive the visible bug count down? > > How often as a group do you have bug scrubs? > > Might be beneficial if we had bug scrubs every few months possibly? IMHO, we could start doing these biweekly with a synchronized meeting. I feel once we bring down the number of bugs to a manageable number, we can go back to using our IRC meeting slot to triage new bugs as they come and gather progress on existing bugs. 
> It might be a good idea to go through the current upstream bugs and weed out one that can be closed or invalid. > > > When a new bug is logged how to we normally process this bug > > How do we handle the importance? > When a manila bugs comes into launchpad I am assuming one of the people on this email will set the importance? > "Assigned" I will also assume it just picked by the person on this email list. > I am seeing some bugs "fixed committed" with no assignment. How do we know who was working on it? If a fix has been committed, an appropriate Gerrit review patch should be noted, unless our automation fails (which happens sometimes) > What is the criteria for setting the importance. Do we have a standard understanding of what is CRITICAL or HIGH? > If there is a critical or high bug what is the response turn-around? Days or weeks? > I see some defect with HIGH that have not been assigned or looked at in a year? This has been informal so far, and our bug supervisors group (https://launchpad.net/~manila-bug-supervisors) on Launchpad is a small subset of our contributors and maintainers; if a bug causes a security issue, or data loss it is marked CRITICAL and scheduled to be fixed right away. If a bug affects manila's API and core internals, it is marked between HIGH and LOW depending on whether we can live with a bug not being fixed right away. Typically vendor driver bugs are marked LOW unless the driver is badly broken. We usually reduce the bug importance if it is HIGH, but goes un-fixed for a long time. We don't have stats around turnaround time, gathering those would be an interesting exercise. > I understand OpenStack has some long releases but how long do we normally keep defects around? > Do we have a way to archive bugs that are not looked at? I was told we can possibly set the status of a defect to “Invalid” or “Opinion” or “Won’t Fix” or “Expired" > Status needs to be something other than "NEW" after the first week > How can we have a defect over a year that is NEW? Great point, it feels like we shouldn't. If we start triaging new bugs as they come, they should not be "NEW" for too long. > Who is possible for see if there is enough information and if the bug is invalid or incomplete and if incomplete ask for relevant information. Do we randomly look at the list daily , weekly, or monthly to see if new info is needed? > > > > > I started to create a google sheet [1] to see if it is easier to track some of the defect vs the manila-triage pad[2] . I have added both links here. I know a lot will not have access to this page I am working on transitioning to OpenStack ether cal. Great stuff :) The sheet [1] requires permissions, so your thought of using ethercalc.openstack.org [3] may be the way to go! We can start referring to this in our meetings. > [1] https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 > > [2] https://etherpad.openstack.org/p/manila-bug-triage-pad > > [3] https://ethercalc.openstack.org/uc8b4567fpf4 > > > > > I would also like to hear from all of you on what your issues are with the current process for upstream manila bugs using launchpad. I have not had the time to look at storyboard https://storyboard.openstack.org/ but I have heard that the OpenStack community is pushing toward using Storyboard, so I will be looking at that shortly. > > > Any input would be greatly appreciated... 
> > > Thanks All, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com From juliaashleykreger at gmail.com Wed Feb 6 23:04:06 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 6 Feb 2019 15:04:06 -0800 Subject: [ironic] cisco-ucs-managed and cisco-ucs-standalone drivers - Python3 and CI Message-ID: Greetings fellow OpenStack humans, AIs, and unknown entities! At present, ironic has two hardware types, or drivers, that are presently in the code base to support Cisco UCS hardware. These are "cisco-ucs-managed" and "cisco-ucs-standalone". At present they utilize an underlying library which is not python3 compatible and has been deprecated by the vendor. In their current state the drivers will need to be removed from ironic when python2 support is removed. While work was started[1][2] to convert these drivers, the patch author seems to have stopped working on updating these drivers. Repeated attempts to contact prior ironic contributors from Cisco and the aforementioned patch author have gone unanswered. To further complicate matters, it appears the last time Cisco CI [3] last voted was on January 30th [4] of this year and the the log server [5] appears to be unreachable. Ironic's requirement is that a vendor driver has to have third-party CI to remain in-tree. At present it appears ironic will have no choice but to deprecate the "cisco-ucs-managed" and "cisco-ucs-standalone" hardware types and remove them in the Train cycle. If nobody steps forward to maintain the drivers and CI does not return, the drivers will be marked deprecated during the Stein cycle, and ironic shall proceed to remove them during Train. Please let me know if there are any questions or concerns. Thanks, -Julia [1]: https://review.openstack.org/#/c/607732 [2]: https://review.openstack.org/#/c/598194 [3]: https://review.openstack.org/#/q/reviewedby:%22Cisco+CI+%253Cml2.ci%2540cisco.com%253E%22+project:openstack/ironic [4]: https://review.openstack.org/#/c/620376/ [5]: http://3ci-logs.ciscolabs.net/76/620376/4/check/dsvm-tempest-ironic-cimc-job/a984082/ From pierre at stackhpc.com Wed Feb 6 23:17:45 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Feb 2019 23:17:45 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: On Wed, 6 Feb 2019 at 15:47, Lars Kellogg-Stedman wrote: > > On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > > shim service that sits between Ironic and the client. > > that shim service could be nova, which already has multi tenancy. > > > > > > 2. Implement a Blazar plugin that is able to talk to whichever service > > > in (1) is appropriate. > > and nova is supported by blazar > > > > > > 3. Work with Blazar developers to implement any lease logic that we > > > think is necessary. > > +1 > > by they im sure there is a reason why you dont want to have blazar drive > > nova and nova dirve ironic but it seam like all the fucntionality would > > already be there in that case. > > Sean, > > Being able to use Nova is a really attractive idea. I'm a little > fuzzy on some of the details, though, starting with how to handle node > discovery. A key goal is being able to parametrically request systems > ("I want a system with a GPU and >= 40GB of memory"). 
With Nova, > would this require effectively creating a flavor for every unique > hardware configuration? Conceptually, I want "... create server > --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not > clear to me if that's something we could do now or if that would > require changes to the way Nova handles baremetal scheduling. Such node selection is something you can already do with Blazar using the parameters "hypervisor_properties" (which are hypervisor details automatically imported from Nova) and "resource_properties" (extra key/value pairs that can be tagged on the resource, which could be has_gpu=true) when creating reservations: https://developer.openstack.org/api-ref/reservation/v1/index.html?expanded=create-lease-detail#id3 I believe you can also do such filtering with the ComputeCapabilitiesFilter directly with Nova. It was supposed to be deprecated (https://review.openstack.org/#/c/603102/) but it looks like it's staying around for now. In either case, using Nova still requires a flavor to be selected, but you could have a single "baremetal" flavor associated with a single resource class for the whole baremetal cloud. From pierre at stackhpc.com Wed Feb 6 23:26:57 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Feb 2019 23:26:57 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: On Wed, 6 Feb 2019 at 23:17, Pierre Riteau wrote: > I believe you can also do such filtering with the > ComputeCapabilitiesFilter directly with Nova. It was supposed to be > deprecated (https://review.openstack.org/#/c/603102/) but it looks > like it's staying around for now. Sorry, I was actually thinking about JsonFilter rather than ComputeCapabilitiesFilter. The former allows users to pass a query via scheduler hints, while the latter filters based on flavors. From gmann at ghanshyammann.com Thu Feb 7 02:33:32 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Feb 2019 11:33:32 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: <168c5cd9aac.103a6ed0c31827.3131004736809589089@ghanshyammann.com> ---- On Wed, 06 Feb 2019 18:45:03 +0900 Jean-Philippe Evrard wrote ---- > > So, maybe the next step is to convince someone to champion a goal of > > improving our contributor documentation, and to have them describe > > what > > the documentation should include, covering the usual topics like how > > to > > actually submit patches as well as suggestions for how to describe > > areas > > where help is needed in a project and offers to mentor contributors. If I am not wrong, you are saying to have help-wanted-list owned by each project side which is nothing but a part of contributor documentation? As you mentioned that complete doc can be linked as a central page on docs.openstack.org. If so then, it looks perfect to me. Further, that list can be updated/maintained with mentor mapping by the project team on every cycle which is nothing but what projects present in onboarding sessions etc. > > > > Does anyone want to volunteer to serve as the goal champion for that? > > > > This doesn't get visibility yet, as this thread is under [tc] only. 
> > Lance and I will raise this in our next update (which should be > tomorrow) if we don't have a volunteer here. I was waiting in case anyone shows up for that but looks like no. I can take this goal. -gmann > > JP. > > > From ignaziocassano at gmail.com Thu Feb 7 05:11:47 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 06:11:47 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: Many thanks. I'll check today. Ignazio Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi ha scritto: > On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: > > > > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: > > >The 2 openstack Installations do not share anything. The manila on each > one > > >works on different netapp storage, but the 2 netapp can be > synchronized. > > >Site A with an openstack instalkation and netapp A. > > >Site B with an openstack with netapp B. > > >Netapp A and netapp B can be synchronized via network. > > >Ignazio > > > > OK, thanks. > > > > You can likely get the share data and its netapp metadata to show up > > on B via replication and (gouthamr may explain details) but you will > > lose all the Openstack/manila information about the share unless > > Openstack database info (more than just manila tables) is imported. > > That may be OK foryour use case. > > > > -- Tom > > > Checking if I understand your request correctly, you have setup > manila's "dr" replication in OpenStack A and now want to move your > shares from OpenStack A to OpenStack B's manila. Is this correct? > > If yes, you must: > * Promote your replicas > - this will make the mirrored shares available. This action does > not delete the old "primary" shares though, you need to clean them up > yourself, because manila will attempt to reverse the replication > relationships if the primary shares are still accessible > * Note the export locations and Unmanage your shares from OpenStack A's > manila > * Manage your shares in OpenStack B's manila with the export locations > you noted. > > > > > > > > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha > scritto: > > > > > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > > >> >Hello Tom, I think cases you suggested do not meet my needs. > > >> >I have an openstack installation A with a fas netapp A. > > >> >I have another openstack installation B with fas netapp B. > > >> >I would like to use manila replication dr. > > >> >If I replicate manila volumes from A to B the manila db on B does > not > > >> >knows anything about the replicated volume but only the backends on > > >> netapp > > >> >B. Can I discover replicated volumes on openstack B? > > >> >Or I must modify the manila db on B? > > >> >Regards > > >> >Ignazio > > >> > > >> I guess I don't understand your use case. Do Openstack installation A > > >> and Openstack installation B know *anything* about one another? For > > >> example, are their keystone and neutron databases somehow synced? Are > > >> they going to be operative for the same set of manila shares at the > > >> same time, or are you contemplating a migration of the shares from > > >> installation A to installation B? > > >> > > >> Probably it would be helpful to have a statement of the problem that > > >> you intend to solve before we consider the potential mechanisms for > > >> solving it. 
> > >> > > >> Cheers, > > >> > > >> -- Tom > > >> > > >> > > > >> > > > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha > scritto: > > >> > > > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > > >> >> >Thanks Goutham. > > >> >> >If there are not mantainers for this driver I will switch on ceph > and > > >> or > > >> >> >netapp. > > >> >> >I am already using netapp but I would like to export shares from > an > > >> >> >openstack installation to another. > > >> >> >Since these 2 installations do non share any openstack component > and > > >> have > > >> >> >different openstack database, I would like to know it is possible > . > > >> >> >Regards > > >> >> >Ignazio > > >> >> > > >> >> Hi Ignazio, > > >> >> > > >> >> If by "export shares from an openstack installation to another" you > > >> >> mean removing them from management by manila in installation A and > > >> >> instead managing them by manila in installation B then you can do > that > > >> >> while leaving them in place on your Net App back end using the > manila > > >> >> "manage-unmanage" administrative commands. Here's some > documentation > > >> >> [1] that should be helpful. > > >> >> > > >> >> If on the other hand by "export shares ... to another" you mean to > > >> >> leave the shares under management of manila in installation A but > > >> >> consume them from compute instances in installation B it's all > about > > >> >> the networking. One can use manila to "allow-access" to consumers > of > > >> >> shares anywhere but the consumers must be able to reach the "export > > >> >> locations" for those shares and mount them. > > >> >> > > >> >> Cheers, > > >> >> > > >> >> -- Tom Barron > > >> >> > > >> >> [1] > > >> >> > > >> > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > >> >> > > > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > > >> >> gouthampravi at gmail.com> > > >> >> >ha scritto: > > >> >> > > > >> >> >> Hi Ignazio, > > >> >> >> > > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > > >> >> >> wrote: > > >> >> >> > > > >> >> >> > Hello All, > > >> >> >> > I installed manila on my queens openstack based on centos 7. > > >> >> >> > I configured two servers with glusterfs replocation and > ganesha > > >> nfs. > > >> >> >> > I configured my controllers octavia,conf but when I try to > create a > > >> >> share > > >> >> >> > the manila scheduler logs reports: > > >> >> >> > > > >> >> >> > Failed to schedule create_share: No valid host was found. > Failed to > > >> >> find > > >> >> >> a weighted host, the last executed filter was > CapabilitiesFilter.: > > >> >> >> NoValidHost: No valid host was found. Failed to find a weighted > host, > > >> >> the > > >> >> >> last executed filter was CapabilitiesFilter. > > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > > >> >> 89f76bc5de5545f381da2c10c7df7f15 > > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record > for > > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > > >> >> >> > > >> >> >> > > >> >> >> The scheduler failure points out that you have a mismatch in > > >> >> >> expectations (backend capabilities vs share type extra-specs) > and > > >> >> >> there was no host to schedule your share to. So a few things to > check > > >> >> >> here: > > >> >> >> > > >> >> >> - What is the share type you're using? 
Can you list the share > type > > >> >> >> extra-specs and confirm that the backend (your GlusterFS > storage) > > >> >> >> capabilities are appropriate with whatever you've set up as > > >> >> >> extra-specs ($ manila pool-list --detail)? > > >> >> >> - Is your backend operating correctly? You can list the manila > > >> >> >> services ($ manila service-list) and see if the backend is both > > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there > was a > > >> >> >> problem with the driver initialization, please enable debug > logging, > > >> >> >> and look at the log file for the manila-share service, you > might see > > >> >> >> why and be able to fix it. > > >> >> >> > > >> >> >> > > >> >> >> Please be aware that we're on a look out for a maintainer for > the > > >> >> >> GlusterFS driver for the past few releases. We're open to bug > fixes > > >> >> >> and maintenance patches, but there is currently no active > maintainer > > >> >> >> for this driver. > > >> >> >> > > >> >> >> > > >> >> >> > I did not understand if controllers node must be connected to > the > > >> >> >> network where shares must be exported for virtual machines, so > my > > >> >> glusterfs > > >> >> >> are connected on the management network where openstack > controllers > > >> are > > >> >> >> conencted and to the network where virtual machine are > connected. > > >> >> >> > > > >> >> >> > My manila.conf section for glusterfs section is the following > > >> >> >> > > > >> >> >> > [gluster-manila565] > > >> >> >> > driver_handles_share_servers = False > > >> >> >> > share_driver = > manila.share.drivers.glusterfs.GlusterfsShareDriver > > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > > >> >> >> > glusterfs_ganesha_server_username = root > > >> >> >> > glusterfs_nfs_server_type = Ganesha > > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > > >> >> >> > #glusterfs_servers = root at 10.102.185.19 > > >> >> >> > ganesha_config_dir = /etc/ganesha > > >> >> >> > > > >> >> >> > > > >> >> >> > PS > > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose > endpoint > > >> >> >> > > > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where > > >> virtual > > >> >> >> machines are connected. > > >> >> >> > > > >> >> >> > The gluster servers are connected on both. > > >> >> >> > > > >> >> >> > > > >> >> >> > Any help, please ? > > >> >> >> > > > >> >> >> > Ignazio > > >> >> >> > > >> >> > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Feb 7 06:11:23 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 7 Feb 2019 07:11:23 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> Message-ID: <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> Hello, I'm currently testing things, related to this LP: https://bugs.launchpad.net/tripleo/+bug/1814897 We might hit some issues: - With docker, json-file log driver doesn't support any "path" options, and it outputs the files inside the container namespace (/var/lib/docker/container/ID/ID-json.log) - With podman, we actually have a "path" option, and it works nice. But the json-file isn't a JSON at all. 
- Docker supports journald and some other outputs - Podman doesn't support anything else than json-file Apparently, Docker seems to support a failing "journald" backend. So we might end with two ways of logging, if we're to keep docker in place. Cheers, C. On 2/5/19 11:11 AM, Cédric Jeanneret wrote: > Hello there! > > small thoughts: > - we might already push the stdout logging, in parallel of the current > existing one > > - that would already point some weakness and issues, without making the > whole thing crash, since there aren't that many logs in stdout for now > > - that would already allow to check what's the best way to do it, and > what's the best format for re-usability (thinking: sending logs to some > (k)elk and the like) > > This would also allow devs to actually test that for their services. And > thus going forward on this topic. > > Any thoughts? > > Cheers, > > C. > > On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >> Hello! >> >> >> In Queens, the a spec to provide the option to make containers log to >> standard output was proposed [1] [2]. Some work was done on that side, >> but due to the lack of traction, it wasn't completed. With the Train >> release coming, I think it would be a good idea to revive this effort, >> but make logging to stdout the default in that release. >> >> This would allow several benefits: >> >> * All logging from the containers would en up in journald; this would >> make it easier for us to forward the logs, instead of having to keep >> track of the different directories in /var/log/containers >> >> * The journald driver would add metadata to the logs about the container >> (we would automatically get what container ID issued the logs). >> >> * This wouldo also simplify the stacks (removing the Logging nested >> stack which is present in several templates). >> >> * Finally... if at some point we move towards kubernetes (or something >> in between), managing our containers, it would work with their logging >> tooling as well. >> >> >> Any thoughts? >> >> >> [1] >> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >> >> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >> >> >> > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From iwienand at redhat.com Thu Feb 7 06:39:40 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 7 Feb 2019 17:39:40 +1100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues Message-ID: <20190207063940.GA1754@fedora19.localdomain> Hello, I'm trying to diagnose what has gone wrong with Fedora 29 in our gate devstack test; it seems there is a problem with the iscsi setup and consequently the volume based tempest tests all fail. AFAICS we end up with nova hitting parsing errors inside os_brick's iscsi querying routines; so it seems whatever error path we've hit is outside the usual as it's made it pretty far down the stack. I have a rather haphazard bug report going on at https://bugs.launchpad.net/os-brick/+bug/1814849 as I've tried to trace it down. At this point, it's exceeding the abilities of my cinder/nova/lvm/iscsi/how-this-all-hangs-together knowledge. The final comment there has a link the devstack logs and a few bits and pieces of gleaned off the host (which I have on hold and can examine) which is hopefully useful to someone skilled in the art. 
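For anyone who wants to poke at the held node, a rough sketch of where I'd start looking (assuming the usual open-iscsi/LVM tooling that devstack sets up; unit and volume-group names may differ):

sudo iscsiadm -m session -P 3   # active sessions and the block devices behind them
sudo iscsiadm -m node           # configured targets and portals
sudo targetcli ls               # LIO target side, if the lioadm helper is in use
sudo lsblk; sudo vgs; sudo lvs  # block devices and the devstack volume group
sudo journalctl -u iscsid --since today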
I'm hoping ultimately it's a rather simple case of a missing package or config option; I would greatly appreciate any input so we can get this test stable. Thanks, -i From kennelson11 at gmail.com Thu Feb 7 06:59:45 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 Feb 2019 22:59:45 -0800 Subject: Manila Upstream Bugs In-Reply-To: References: Message-ID: Hello :) Another thing to consider is what process might look like and how you want to organize things after migrating to StoryBoard. While there isn't a set date yet, it should be kept in mind :) If you have any questions, please let us (the storyboard team) know by pinging us in #storyboard or by using the [storyboard] tag to the openstack-discuss list. -Kendall (diablo_rojo) On Tue, Feb 5, 2019 at 12:38 PM Jason Grosso wrote: > Hello All, > > > This is an email to the OpenStack manila upstream community but anyone can > chime in would be great to get some input from other projects and how they > organize their upstream defects and what tools they use... > > > > My goal here is to make the upstream manila bug process easier, cleaner, > and more effective. > > My thoughts to accomplish this are by establishing a process that we can > all agree upon. > > > I have the following points/questions that I wanted to address to help > create a more effective process: > > > > - > > Can we as a group go through some of the manila bugs so we can drive > the visible bug count down? > > > - > > How often as a group do you have bug scrubs? > > > - > > Might be beneficial if we had bug scrubs every few months possibly? > - > > It might be a good idea to go through the current upstream bugs and > weed out one that can be closed or invalid. > > > > - > > When a new bug is logged how to we normally process this bug > > > - > > How do we handle the importance? > > > - > > When a manila bugs comes into launchpad I am assuming one of the > people on this email will set the importance? > > > - > > "Assigned" I will also assume it just picked by the person on this > email list. > > > - > > I am seeing some bugs "fixed committed" with no assignment. How do we > know who was working on it? > > > - > > What is the criteria for setting the importance. Do we have a standard > understanding of what is CRITICAL or HIGH? > > > - > > If there is a critical or high bug what is the response turn-around? > Days or weeks? > > > - > > I see some defect with HIGH that have not been assigned or looked at > in a year? > > > - > > I understand OpenStack has some long releases but how long do we > normally keep defects around? > > > - > > Do we have a way to archive bugs that are not looked at? I was told we > can possibly set the status of a defect to “Invalid” or “Opinion” or > “Won’t Fix” or “Expired" > > > - > > Status needs to be something other than "NEW" after the first week > > > - > > How can we have a defect over a year that is NEW? > > > - > > Who is possible for see if there is enough information and if the bug > is invalid or incomplete and if incomplete ask for relevant information. Do > we randomly look at the list daily , weekly, or monthly to see if new > info is needed? > > > > > I started to create a google sheet [1] to see if it is easier to track > some of the defect vs the manila-triage pad[2] . I have added both links > here. I know a lot will not have access to this page I am working on > transitioning to OpenStack ether cal. 
> > [1] > https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 > > [2] https://etherpad.openstack.org/p/manila-bug-triage-pad > > *[3]* https://ethercalc.openstack.org/uc8b4567fpf4 > > > > > I would also like to hear from all of you on what your issues are with the > current process for upstream manila bugs using launchpad. I have not had > the time to look at storyboard https://storyboard.openstack.org/ but I > have heard that the OpenStack community is pushing toward using Storyboard, > so I will be looking at that shortly. > > > Any input would be greatly appreciated... > > > Thanks All, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 07:29:28 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 08:29:28 +0100 Subject: [nova[metadata] queens issues Message-ID: Hello All, I am facing an issue with nova metadata or probably something is missed by design. If I create an instance from an image with os_require_quiesce='yes' and hw_qemu_guest_agent='yes' and the image contain the package quemu-guest-agent, when the instance boots the service quemu.gust-agent starts fine because all needed device are created in kvm for the instance. If I destory the instance and boot another instance from the volume used by the previous instance, metadata are missed and qemu-guest agent does not start, I think this is a problem from backupping and restoring instances. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Thu Feb 7 07:31:59 2019 From: lujinluo at gmail.com (Lujin Luo) Date: Wed, 6 Feb 2019 23:31:59 -0800 Subject: [neutron] [upgrade] No meeting on Feb. 7th Message-ID: Hi team, I will not be able to chair the meeting tomorrow. Let's skip it and resume next week! Sorry for any inconvenience caused. Best regards, Lujin From Yury.Kulazhenkov at dell.com Thu Feb 7 08:02:31 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Thu, 7 Feb 2019 08:02:31 +0000 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: Hi all, Some time ago Dell EMC software-defined storage ScaleIO was renamed to VxFlex OS. I am currently working on renaming ScaleIO to VxFlex OS in Openstack code to prevent confusion with storage documentation from vendor. This changes require patches at least for cinder, nova and os-brick repos. I already submitted patches for cinder(634397) and nova(634866), but for now code in these patches relies on os-brick initiator with name SCALEIO. Now I'm looking for right way to rename os-brick initiator. Renaming initiator in os-brick library and then make required changes in nova and cinder is quiet easy, but os-brick is library and those changes can break someone else code. Is some sort of policy for updates with breaking changes exist for os-brick? One possible solution is to rename initiator to new name and create alias with deprecation warning for old initiator name(should this alias be preserved more than one release?). What do you think about it? 
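A minimal sketch of that alias-plus-deprecation-warning idea, purely for illustration -- the constant and helper names here are hypothetical and do not reflect the actual os-brick lookup tables:

    import warnings

    # New canonical name, with the legacy spelling kept as an alias for a
    # deprecation period (e.g. at least one release).
    VXFLEXOS = "VXFLEXOS"
    SCALEIO = VXFLEXOS  # deprecated alias

    # Hypothetical mapping of deprecated names to replacements; the real
    # os-brick connector tables may be structured differently.
    _DEPRECATED_INITIATORS = {"SCALEIO": "VXFLEXOS"}

    def normalize_initiator(name):
        """Map a possibly-deprecated initiator name to its canonical form."""
        if name in _DEPRECATED_INITIATORS:
            replacement = _DEPRECATED_INITIATORS[name]
            warnings.warn(
                "Initiator '%s' is deprecated, use '%s'; the alias will be "
                "removed in a future release." % (name, replacement),
                DeprecationWarning)
            return replacement
        return name

With something along these lines, cinder and nova can switch to the new name right away, while anything still passing the old string keeps working and sees a warning until the alias is eventually dropped.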
Thanks, Yury From alfredo.deluca at gmail.com Thu Feb 7 08:17:27 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Thu, 7 Feb 2019 09:17:27 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: hi Ignazio. Unfortunately doesn't resolve either with ping or curl .... but what is strange also it doesn't have yum or dnf o any installer ....unless it use only atomic..... I think at the end it\s the issue with the network as I found out my all-in-one deployment doesn't have the br-ex which it supposed to be the external network interface. I installed OS with ansible-openstack Cheers On Wed, Feb 6, 2019 at 3:39 PM Ignazio Cassano wrote: > Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve > names. I think atomic command uses names for finishing master installation. > Curl is installed on master.... > > > Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca > ha scritto: > >> Hi Ignazio. sorry for late reply. security group is fine. It\s not >> blocking the network traffic. >> >> Not sure why but, with this fedora release I can finally find atomic but >> there is no yum,nslookup,dig,host command..... why is so different from >> another version (latest) which had yum but not atomic. >> >> It's all weird >> >> >> Cheers >> >> >> >> >> On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano >> wrote: >> >>> Alfredo, try to check security group linked to your kubemaster. >>> >>> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca >>> ha scritto: >>> >>>> Hi Ignazio. Thanks for the link...... so >>>> >>>> Now at least atomic is present on the system. >>>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>>> doesn't resolve the names...so if I ping 8.8.8.8 >>>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 >>>> ms* >>>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 >>>> ms* >>>> >>>> but if I ping google.com doesn't resolve. I can't either find on >>>> fedora dig or nslookup to check >>>> resolv.conf has >>>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>>> *nameserver 8.8.8.8* >>>> >>>> It\s all so weird. >>>> >>>> >>>> >>>> >>>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> I also suggest to change dns in your external network used by magnum. >>>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>>> >>>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> ha scritto: >>>>> >>>>>> thanks ignazio >>>>>> Where can I get it from? >>>>>> >>>>>> >>>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> I used fedora-magnum-27-4 and it works >>>>>>> >>>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>>> >>>>>>>> Hi Clemens. >>>>>>>> So the image I downloaded is this >>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>>> which is the latest I think. 
>>>>>>>> But you are right...and I noticed that too.... It doesn't have >>>>>>>> atomic binary >>>>>>>> the os-release is >>>>>>>> >>>>>>>> *NAME=Fedora* >>>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>>> *ID=fedora* >>>>>>>> *VERSION_ID=29* >>>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>>> *ANSI_COLOR="0;34"* >>>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>>> "* >>>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>>> "* >>>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>>> "* >>>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>>> "* >>>>>>>> *VARIANT="Cloud Edition"* >>>>>>>> *VARIANT_ID=cloud* >>>>>>>> >>>>>>>> >>>>>>>> so not sure why I don't have atomic tho >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Now to the failure of your part-013: Are you sure that you used >>>>>>>>> the glance image ‚fedora-atomic-latest‘ and not some other fedora image? >>>>>>>>> Your error message below suggests that your image does not contain ‚atomic‘ >>>>>>>>> as part of the image … >>>>>>>>> >>>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>>> heat-container-agent >>>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>>> + systemctl start heat-container-agent >>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>> heat-container-agent.service not found. >>>>>>>>> >>>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>> heat-container-agent.service not found. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 09:07:33 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 10:07:33 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Alfredo, I know some utilities are not installed on the fedora image but on my installation it is not a problem. As you wrote there are some issues on networking. I've never used openstack-ansible, so I cannot help you. I am sorry Ignazio Il giorno gio 7 feb 2019 alle ore 09:17 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > hi Ignazio. Unfortunately doesn't resolve either with ping or curl .... > but what is strange also it doesn't have yum or dnf o any installer > ....unless it use only atomic..... 
> > I think at the end it\s the issue with the network as I found out my > all-in-one deployment doesn't have the br-ex which it supposed to be the > external network interface. > > I installed OS with ansible-openstack > > > Cheers > > > On Wed, Feb 6, 2019 at 3:39 PM Ignazio Cassano > wrote: > >> Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve >> names. I think atomic command uses names for finishing master installation. >> Curl is installed on master.... >> >> >> Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca >> ha scritto: >> >>> Hi Ignazio. sorry for late reply. security group is fine. It\s not >>> blocking the network traffic. >>> >>> Not sure why but, with this fedora release I can finally find atomic but >>> there is no yum,nslookup,dig,host command..... why is so different from >>> another version (latest) which had yum but not atomic. >>> >>> It's all weird >>> >>> >>> Cheers >>> >>> >>> >>> >>> On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano >>> wrote: >>> >>>> Alfredo, try to check security group linked to your kubemaster. >>>> >>>> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> Hi Ignazio. Thanks for the link...... so >>>>> >>>>> Now at least atomic is present on the system. >>>>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>>>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>>>> doesn't resolve the names...so if I ping 8.8.8.8 >>>>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>>>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>>>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 >>>>> ms* >>>>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 >>>>> ms* >>>>> >>>>> but if I ping google.com doesn't resolve. I can't either find on >>>>> fedora dig or nslookup to check >>>>> resolv.conf has >>>>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>>>> *nameserver 8.8.8.8* >>>>> >>>>> It\s all so weird. >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> I also suggest to change dns in your external network used by magnum. >>>>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>>>> >>>>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>> >>>>>>> thanks ignazio >>>>>>> Where can I get it from? >>>>>>> >>>>>>> >>>>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>>>> ignaziocassano at gmail.com> wrote: >>>>>>> >>>>>>>> I used fedora-magnum-27-4 and it works >>>>>>>> >>>>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>>>> >>>>>>>>> Hi Clemens. >>>>>>>>> So the image I downloaded is this >>>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>>>> which is the latest I think. >>>>>>>>> But you are right...and I noticed that too.... 
It doesn't have >>>>>>>>> atomic binary >>>>>>>>> the os-release is >>>>>>>>> >>>>>>>>> *NAME=Fedora* >>>>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>>>> *ID=fedora* >>>>>>>>> *VERSION_ID=29* >>>>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>>>> *ANSI_COLOR="0;34"* >>>>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>>>> *HOME_URL="https://fedoraproject.org/ >>>>>>>>> "* >>>>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>>>> "* >>>>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>>>> "* >>>>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>>>> "* >>>>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>>>> "* >>>>>>>>> *VARIANT="Cloud Edition"* >>>>>>>>> *VARIANT_ID=cloud* >>>>>>>>> >>>>>>>>> >>>>>>>>> so not sure why I don't have atomic tho >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens < >>>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>>> >>>>>>>>>> Now to the failure of your part-013: Are you sure that you used >>>>>>>>>> the glance image ‚fedora-atomic-latest‘ and not some other fedora image? >>>>>>>>>> Your error message below suggests that your image does not contain ‚atomic‘ >>>>>>>>>> as part of the image … >>>>>>>>>> >>>>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>>>> heat-container-agent >>>>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>>>> + systemctl start heat-container-agent >>>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>>> heat-container-agent.service not found. >>>>>>>>>> >>>>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>>> >>>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>>> heat-container-agent.service not found. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Thu Feb 7 10:08:04 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 7 Feb 2019 10:08:04 +0000 Subject: Rocky and older Ceph compatibility In-Reply-To: References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: Linus, We've basically upgraded Ceph and OpenStack independently over the past years (now on Luminous/Rocky). One thing to keep in mind after upgrading Ceph is to not enable new Ceph tunables that older clients may not know about. FWIU, upgrading alone will not enable new tunables, though. HTH, Arne > On 6 Feb 2019, at 18:55, Erik McCormick wrote: > > On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: >> >> Hi all, >> >> I'm working on upgrading our cloud, which consists of a block storage >> system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA >> Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. 
As >> part of the upgrade plan we are discussing first going to Rocky while >> keeping the block system at the "Kraken" release. >> > > For the most part it comes down to your client libraries. Personally, > I would upgrade Ceph first, leaving Openstack running older client > libraries. I did this with Jewel clients talking to a Luminous > cluster, so you should be fine with K->M. Then, when you upgrade > Openstack, your client libraries can get updated along with it. If you > do Openstack first, you'll need to come back around and update your > clients, and that will require you to restart everything a second > time. > . >> It would be helpful to know if anyone has attempted to run the Rocky >> Cinder/Glance drivers with Ceph Kraken or older? >> > I haven't done this specific combination, but I have mixed and matched > Openstack and Ceph versions without any issues. I have MItaka, Queens, > and Rocky all talking to Luminous without incident. > > -Erik >> References or documentation is welcomed. I fail to find much information >> online, but perhaps I'm looking in the wrong places or I'm asking a >> question with an obvious answer. >> >> Thanks! >> >> Best regards, >> Linus >> UPPMAX >> >> >> >> >> >> >> >> >> När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ >> >> E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy >> > -- Arne Wiebalck CERN IT From cdent+os at anticdent.org Thu Feb 7 10:34:12 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 7 Feb 2019 10:34:12 +0000 (GMT) Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206213222.43nin24mkbqhsrw7@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> <20190206213222.43nin24mkbqhsrw7@redhat.com> Message-ID: On Wed, 6 Feb 2019, Lars Kellogg-Stedman wrote: > I'm still not clear on whether there's any way to make this work with > existing tools, or if it makes sense to figure out to make Nova do > this or if we need something else sitting in front of Ironic. If I recall the early conversations correctly, one of the thoughts/frustrations that brought placement into existence was the way in which there needed to be a pile of flavors, constantly managed to reflect the variety of resources in the "cloud"; wouldn't it be nice to simply reflect those resources, ask for the things you wanted, not need to translate that into a flavor, and not need to create a new flavor every time some new thing came along? It wouldn't be super complicated for Ironic to interact directly with placement to report hardware inventory at regular intervals and to get a list of machines that meet the "at least X GB RAM and Y GB disk space" requirements when somebody wants to boot (or otherwise select, perhaps for later use) a machine, circumventing nova and concepts like flavors. As noted elsewhere in the thread you lose concepts of tenancy, affinity and other orchestration concepts that nova provides. But if those don't matter, or if the shape of those things doesn't fit, it might (might!) be a simple matter of programming... I seem to recall there have been several efforts in this direction over the years, but not any that take advantage of placement. 
One thing to keep in mind is the reasons behind the creation of custom resource classes like CUSTOM_BAREMETAL_GOLD for reporting ironic inventory (instead of the actual available hardware): A job on baremetal consumes all of it. If Ironic is reporting granular inventory, when it claims a big machine if the initial request was for a smaller machine, the claim would either need to be for all the stuff (to not leave inventory something else might like to claim) or some other kind of inventory manipulation (such as adjusting reserved) might be required. One option might be to have all inventoried machines to have classes of resource for hardware and then something like a PHYSICAL_MACHINE class with a value of 1. When a request is made (including the PHSYICAL_MACHINE=1), the returned resources are sorted by "best fit" and an allocation is made. PHYSICAL_MACHINE goes to 0, taking that resource provider out of service, but leaving the usage an accurate representation of reality. I think it might be worth exploring, and so it's clear I'm not talking from my armchair here, I've been doing some experiments/hacks with launching VMs with just placement, etcd and a bit of python that have proven quite elegant and may help to demonstrate how simple an initial POC that talked with ironic instead could be: https://github.com/cdent/etcd-compute -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Thu Feb 7 11:06:46 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 7 Feb 2019 12:06:46 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Doug Hellmann wrote: > [...] > During the Train series goal discussion in Berlin we talked about having > a goal of ensuring that each team had documentation for bringing new > contributors onto the team. Offering specific mentoring resources seems > to fit nicely with that goal, and doing it in each team's repository in > a consistent way would let us build a central page on docs.openstack.org > to link to all of the team contributor docs, like we link to the user > and installation documentation, without requiring us to find a separate > group of people to manage the information across the entire community. I'm a bit skeptical of that approach. Proper peer mentoring takes a lot of time, so I expect there will be a limited number of "I'll spend significant time helping you if you help us" offers. I don't envision potential contributors to browse dozens of project-specific "on-boarding doc" to find them. I would rather consolidate those offers on a single page. So.. either some magic consolidation job that takes input from all of those project-specific repos to build a nice rendered list... Or just a wiki page ? -- Thierry Carrez (ttx) From dangtrinhnt at gmail.com Thu Feb 7 11:20:46 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 7 Feb 2019 20:20:46 +0900 Subject: [TC][Searchlight] Project health evaluation In-Reply-To: <20190206133648.GB28569@sm-workstation> References: <20190206133648.GB28569@sm-workstation> Message-ID: Thank Sean for your comments. [6] I thought it would be the indication of the current PTLs. [7] So I will just communicate with the responding TC members to update. 
Thanks again, On Wed, Feb 6, 2019 at 10:36 PM Sean McGinnis wrote: > > > > As we're reaching the Stein-3 milestone [5] and preparing for the Denver > > summit. We, as a team, would like have a formal project health evaluation > > in several aspects such as active contributors / team, planning, bug > fixes, > > features, etc. We would love to have some voice from the TC team and > anyone > > from the community who follows our effort during the Stein cycle. We then > > would want to update the information at [6] and [7] to avoid any > confusion > > that may stop potential contributors or users to come to Searchlight. > > > > [1] https://review.openstack.org/#/c/588644/ > > [2] > > > https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html > > [3] > https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html > > [4] > > > https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision > > [5] https://releases.openstack.org/stein/schedule.html > > [6] https://governance.openstack.org/election/results/stein/ptl.html > > [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > > > It really looks like great progress with Searchlight over this release. > Nice > work Trinh and all that have been involved in that. > > [6] is a historical record of what happened with the PTL election. What > would > you want to update there? The best path forward, in my opinion, is to make > sure > there is a clear PTL candidate for the Train release. > > [7] is a periodic update of notes between TC members and the projects. If > you > would like to get more information added there, I would recommend working > with > the two TC members assigned to Searchlight to get an update. That appears > to be > Chris Dent and Dims. > > Sean > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 11:22:26 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 12:22:26 +0100 Subject: [nova][queens] qeumu-guest-agent Message-ID: Hello, is it possible to force metadata for instances like the following ? hw_qemu_guest_agent='yes' os_require_quiesce='yes' I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes instances can start from volumes ( for example after a cnder backup). Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 7 11:23:54 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 7 Feb 2019 12:23:54 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> Message-ID: <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> Adam Spiers wrote: > [...] > Sure.  I particularly agree with your point about processes; I think the > TC (or whoever else volunteers) could definitely help lower the barrier > to starting up a pop-up team by creating a cookie-cutter kind of > approach which would quickly set up any required infrastructure. For > example it could be a simple form or CLI-based tool posing questions > like the following, where the answers could facilitate the bootstrapping > process: > - What is the name of your pop-up team? 
> - Please enter a brief description of the purpose of your pop-up team. > - If you will use an IRC channel, please state it here. > - Do you need regular IRC meetings? > - Do you need a new git repository?  [If so, ...] > - Do you need a new StoryBoard project?  [If so, ...] > - Do you need a [badge] for use in Subject: headers on openstack-discuss? > etc. > > The outcome of the form could be anything from pointers to specific bits > of documentation on how to set up the various bits of infrastructure, > all the way through to automation of as much of the setup as is > possible.  The slicker the process, the more agile the community could > become in this respect. That's a great idea -- if the pop-up team concept takes on we could definitely automate stuff. In the mean time I feel like the next step is to document what we mean by pop-up team, list them, and give pointers to the type of resources you can have access to (and how to ask for them). In terms of "blessing" do you think pop-up teams should be ultimately approved by the TC ? On one hand that adds bureaucracy / steps to the process, but on the other having some kind of official recognition can help them... So maybe some after-the-fact recognition would work ? Let pop-up teams freely form and be listed, then have the TC declaring some of them (if not all of them) to be of public interest ? -- Thierry Carrez (ttx) From kchamart at redhat.com Thu Feb 7 11:29:59 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 7 Feb 2019 12:29:59 +0100 Subject: [nova] Floppy drive support =?utf-8?B?4oCU?= =?utf-8?Q?_does?= anyone rely on it? Message-ID: <20190207112959.GF5349@paraplu.home> Question for operators: Do anyone rely on floppy disk support in Nova? Background ---------- The "VENOM" vulnerability (CVE-2015-3456)[1] was caused due to a Floppy Disk Controller (FDC) being initialized for all x86 guests, regardless of their configuration — so even if a guest does not explicitly have a virtual floppy disk configured and attached, this issue was exploitable. As a result of that, upstream QEMU has suppressed the FDC for modern machine types (e.g. 'q35') by default — commit ea96bc629cb; from QEMU v2.4.0 onwards. From the commit message: "It is Very annoying to carry forward an outdatEd coNtroller with a mOdern Machine type." QEMU users can still get floppy devices, but they have to ask for them explicitly on the command-line. * * * Given that, and the use of floppy drives is generally not recommended in 2019, any objection to go ahead and remove support for floppy drives? Currently Nova allows the use of the floppy drive via these two disk image metadata properties: - hw_floppy_bus=fd - hw_rescue_device=floppy Filed this blueprint[2] to track this. * * * [1] https://access.redhat.com/articles/1444903 [2] https://blueprints.launchpad.net/nova/+spec/remove-support-for-floppy-disks -- /kashyap From smooney at redhat.com Thu Feb 7 11:53:51 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Feb 2019 11:53:51 +0000 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > Hello, is it possible to force metadata for instances like the following ? > hw_qemu_guest_agent='yes' > os_require_quiesce='yes' > > > I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes > instances can start from volumes ( for example after a cnder backup). 
if you are using a recent version of cinder/nova the image metadata is copied to the volume. i don't know if that is in queens or not but the only other way to force it that i can think of would be via the flavor. unfortunately the guest agnet is not supported in the flavor https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > Regards > Ignazio From ignaziocassano at gmail.com Thu Feb 7 12:16:39 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 13:16:39 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: Thanks. In queens hw_qemu_guest_agent is not considerated for v olumes because it belongs to "libvirt driver option for images". It is a problem for starting instance from backupped volumes :-( Ignazio Il giorno Gio 7 Feb 2019 12:53 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > Hello, is it possible to force metadata for instances like the following > ? > > hw_qemu_guest_agent='yes' > > os_require_quiesce='yes' > > > > > > I know if an instance is created from an image with the above metadata > the quemu-guest-agent works, but sometimes > > instances can start from volumes ( for example after a cnder backup). > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > i don't know if that is in queens or not but the only other way to force > it that i can think of would be via the flavor. > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > Regards > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 7 12:42:53 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 07:42:53 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: Thierry Carrez writes: > Doug Hellmann wrote: >> [...] >> During the Train series goal discussion in Berlin we talked about having >> a goal of ensuring that each team had documentation for bringing new >> contributors onto the team. Offering specific mentoring resources seems >> to fit nicely with that goal, and doing it in each team's repository in >> a consistent way would let us build a central page on docs.openstack.org >> to link to all of the team contributor docs, like we link to the user >> and installation documentation, without requiring us to find a separate >> group of people to manage the information across the entire community. > > I'm a bit skeptical of that approach. > > Proper peer mentoring takes a lot of time, so I expect there will be a > limited number of "I'll spend significant time helping you if you help > us" offers. I don't envision potential contributors to browse dozens of > project-specific "on-boarding doc" to find them. I would rather > consolidate those offers on a single page. > > So.. either some magic consolidation job that takes input from all of > those project-specific repos to build a nice rendered list... Or just a > wiki page ? 
> > -- > Thierry Carrez (ttx) > A wiki page would be nicely lightweight, so that approach makes some sense. Maybe if the only maintenance is to review the page periodically, we can convince one of the existing mentorship groups or the first contact SIG to do that. -- Doug From ignaziocassano at gmail.com Thu Feb 7 13:16:21 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 14:16:21 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: Hello, I also tryed to unprotect "libvirt driver options for images" and "instance config data" metadata definitions Then I associated them to flavors. With the above configuration I can define . os_require_quiesce='yes' and hw_qemu_guest_agent='yes' in a flavor bu starting an instance from a volume using that flavor did not solved the issue: qemu-guest agent did not work. It works only when an instance is created from an image with the above metadata. Ignazio Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > Hello, is it possible to force metadata for instances like the following > ? > > hw_qemu_guest_agent='yes' > > os_require_quiesce='yes' > > > > > > I know if an instance is created from an image with the above metadata > the quemu-guest-agent works, but sometimes > > instances can start from volumes ( for example after a cnder backup). > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > i don't know if that is in queens or not but the only other way to force > it that i can think of would be via the flavor. > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > Regards > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Feb 7 13:47:31 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Feb 2019 13:47:31 +0000 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> On Thu, 2019-02-07 at 14:16 +0100, Ignazio Cassano wrote: > Hello, > I also tryed to unprotect "libvirt driver options for images" and "instance config data" metadata definitions Then I > associated them to flavors. > With the above configuration I can define . os_require_quiesce='yes' and hw_qemu_guest_agent='yes' in a flavor bu > starting an instance from a volume using that flavor did not solved the issue: you can add the key to the flavor extra_spces but nova does not support that this is only supported in the image metadata. if you boot form volume unless you have this set in the image_metatada section of the volume nova will not use it. it may not use it even in that case. > qemu-guest agent did not work. > It works only when an instance is created from an image with the above metadata. > > Ignazio > > Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney ha scritto: > > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > > Hello, is it possible to force metadata for instances like the following ? > > > hw_qemu_guest_agent='yes' > > > os_require_quiesce='yes' > > > > > > > > > I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes > > > instances can start from volumes ( for example after a cnder backup). 
> > if you are using a recent version of cinder/nova the image metadata is copied to the volume. > > i don't know if that is in queens or not but the only other way to force it that i can think of would be via the > > flavor. > > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > > > > Regards > > > Ignazio > > From ignaziocassano at gmail.com Thu Feb 7 13:55:23 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 14:55:23 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> References: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> Message-ID: Hello, do you mean there is not any workaround? Ignazio Il giorno Gio 7 Feb 2019 14:47 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 14:16 +0100, Ignazio Cassano wrote: > > Hello, > > I also tryed to unprotect "libvirt driver options for images" and > "instance config data" metadata definitions Then I > > associated them to flavors. > > With the above configuration I can define . os_require_quiesce='yes' and > hw_qemu_guest_agent='yes' in a flavor bu > > starting an instance from a volume using that flavor did not solved the > issue: > you can add the key to the flavor extra_spces but nova does not support > that > this is only supported in the image metadata. > if you boot form volume unless you have this set in the image_metatada > section of the volume > nova will not use it. it may not use it even in that case. > > > qemu-guest agent did not work. > > It works only when an instance is created from an image with the above > metadata. > > > > Ignazio > > > > Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney > ha scritto: > > > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > > > Hello, is it possible to force metadata for instances like the > following ? > > > > hw_qemu_guest_agent='yes' > > > > os_require_quiesce='yes' > > > > > > > > > > > > I know if an instance is created from an image with the above > metadata the quemu-guest-agent works, but sometimes > > > > instances can start from volumes ( for example after a cnder backup). > > > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > > > i don't know if that is in queens or not but the only other way to > force it that i can think of would be via the > > > flavor. > > > unfortunately the guest agnet is not supported in the flavor > > > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > > > > > > > Regards > > > > Ignazio > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Feb 7 14:01:40 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Feb 2019 23:01:40 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > Thierry Carrez writes: > > > Doug Hellmann wrote: > >> [...] 
> >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > I'm a bit skeptical of that approach. > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > limited number of "I'll spend significant time helping you if you help > > us" offers. I don't envision potential contributors to browse dozens of > > project-specific "on-boarding doc" to find them. I would rather > > consolidate those offers on a single page. > > > > So.. either some magic consolidation job that takes input from all of > > those project-specific repos to build a nice rendered list... Or just a > > wiki page ? > > > > -- > > Thierry Carrez (ttx) > > > > A wiki page would be nicely lightweight, so that approach makes some > sense. Maybe if the only maintenance is to review the page periodically, > we can convince one of the existing mentorship groups or the first > contact SIG to do that. Same can be achieved If we have a single link on doc.openstack.org or contributor guide with top section "Help-wanted" with subsection of each project specific help-wanted. project help wanted subsection can be build from help wanted section from project contributor doc. That way it is easy for the project team to maintain their help wanted list. Wiki page can have the challenge of prioritizing and maintain the list. -gmann > > -- > Doug > > From frode.nordahl at canonical.com Thu Feb 7 14:04:54 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Thu, 7 Feb 2019 15:04:54 +0100 Subject: [charms][zuul] State of external GitHub dependencies Message-ID: Hello all, The Charms projects are increasingly heavy users of external GitHub dependencies, and we are facing intermittent issues with this at the gate. Does anyone have ideas as to how we should handle this from the point of view of Charm teams? Anyone from Zuul have ideas/pointers on how we could help improve the external GitHub dependency support? As many of you know the OpenStack Charms project is in the process of replacing the framework for performing functional deployment testing of Charms with ``Zaza`` [0]. Two of the key features of the Zaza framework is reusability of tests simply by referencing already written tests with a Python module path in a test definition in a YAML file, and general applicability across other Charms, not just OpenStack specific ones. Because of this the Zaza project, which also contains the individual functional test modules, is hosted on GitHub and not on the OpenStack Infrastructure. Whenever a change is proposed to a charm that require new or changes to existing functional tests, we need a effective way for the individual contributor to have their Charm change (which is proposed on OpenStack Infrastructure) tested with code from their Zaza change (which is proposed as a PR on GitHub). We have had some success with adding ``Depends-On:`` and the full URL to the GitHub PR in the commit message. 
There is experimental support for using that as a gate check in Zuul, and Canonical's third party Charm CI is configured to pull the correct version of Zaza based on Depends-On referencing GitHub PRs. However, we often have to go through extra hoops to land things as the gate code appears to not always successfully handle GitHub PR references in Depends-On. For reference, after discussion in #openstack-infra I got a log excerpt [1] and a reference to a WIP PR [2] that might be relevant. 0: https://zaza.readthedocs.io/en/latest/ 1: http://paste.openstack.org/show/744664/ 2: https://review.openstack.org/#/c/613143/ -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Feb 7 14:16:46 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Feb 2019 14:16:46 -0500 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: <9bec296b-3bcf-1f06-1927-62f71395e03d@gmail.com> On 02/07/2019 03:02 AM, Kulazhenkov, Yury wrote: > One possible solution is to rename initiator to new name and create alias with deprecation warning for > old initiator name(should this alias be preserved more than one release?).
> What do you think about it? That's exactly what I would suggest. Best, -jay From lyarwood at redhat.com Thu Feb 7 14:32:31 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 7 Feb 2019 14:32:31 +0000 Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: <20190207143231.gtxpdounr3neleig@lyarwood.usersys.redhat.com> On 06-02-19 18:52:10, Chris Dent wrote: > We did not schedule a next check in meeting. When one needs to happen, > which it will, we'll figure that out and make an announcement. I had assumed this would be at the PTG so we could agree on a date early in T for the deletion of the code from Nova. I'm not sure that we need to meet anytime before that. > Thanks for your attention. If I made any errors above, or left > something out, please followup. If you have questions, please ask > them. This all looks present and correct, thanks for writing this up Chris! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From jaypipes at gmail.com Thu Feb 7 14:41:19 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Feb 2019 09:41:19 -0500 Subject: =?UTF-8?Q?Re:_[nova]_Floppy_drive_support_=e2=80=94_does_anyone_rel?= =?UTF-8?Q?y_on_it=3f?= In-Reply-To: <20190207112959.GF5349@paraplu.home> References: <20190207112959.GF5349@paraplu.home> Message-ID: On 02/07/2019 06:29 AM, Kashyap Chamarthy wrote: > Given that, and the use of floppy drives is generally not recommended in > 2019, any objection to go ahead and remove support for floppy drives? No objections from me. -jay From aspiers at suse.com Thu Feb 7 14:42:27 2019 From: aspiers at suse.com (Adam Spiers) Date: Thu, 7 Feb 2019 14:42:27 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> Message-ID: <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Thierry Carrez wrote: >Adam Spiers wrote: >>[...] >>Sure.  I particularly agree with your point about processes; I think >>the TC (or whoever else volunteers) could definitely help lower the >>barrier to starting up a pop-up team by creating a cookie-cutter >>kind of approach which would quickly set up any required >>infrastructure. For example it could be a simple form or CLI-based >>tool posing questions like the following, where the answers could >>facilitate the bootstrapping process: >>- What is the name of your pop-up team? >>- Please enter a brief description of the purpose of your pop-up team. >>- If you will use an IRC channel, please state it here. >>- Do you need regular IRC meetings? >>- Do you need a new git repository?  [If so, ...] >>- Do you need a new StoryBoard project?  [If so, ...] >>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>etc. >> >>The outcome of the form could be anything from pointers to specific >>bits of documentation on how to set up the various bits of >>infrastructure, all the way through to automation of as much of the >>setup as is possible.  The slicker the process, the more agile the >>community could become in this respect. 
> >That's a great idea -- if the pop-up team concept takes on we could >definitely automate stuff. In the mean time I feel like the next step >is to document what we mean by pop-up team, list them, and give >pointers to the type of resources you can have access to (and how to >ask for them). Agreed - a quickstart document would be a great first step. >In terms of "blessing" do you think pop-up teams should be ultimately >approved by the TC ? On one hand that adds bureaucracy / steps to the >process, but on the other having some kind of official recognition can >help them... > >So maybe some after-the-fact recognition would work ? Let pop-up teams >freely form and be listed, then have the TC declaring some of them (if >not all of them) to be of public interest ? Yeah, good questions. The official recognition is definitely beneficial; OTOH I agree that requiring steps up-front might deter some teams from materialising. Automating these as much as possible would reduce the risk of that. One challenge I see facing an after-the-fact approach is that any requests for infrastructure (IRC channel / meetings / git repo / Storyboard project etc.) would still need to be approved in advance, and presumably a coordinated approach to approval might be more effective than one where some of these requests could be approved and others denied. I'm not sure what the best approach is - sorry ;-) From doug at doughellmann.com Thu Feb 7 15:04:14 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 10:04:14 -0500 Subject: [tc] agenda for upcoming TC meeting on 7 Feb In-Reply-To: References: Message-ID: Doug Hellmann writes: > TC Members, > > Our next meeting will be on Thursday, 7 Feb at 1400 UTC in > #openstack-tc. This email contains the agenda for the meeting. > > If you will not be able to attend, please include your name in the > "Apologies for Absence" section of the wiki page [0]. > > * corrections to TC member election section of bylaws are completed > (fungi, dhellmann) > > * status update for project team evaluations based on technical vision > (cdent, TheJulia) > > * defining the role of the TC (cdent, ttx) > > * keeping up with python 3 releases (dhellmann, gmann) > > * status update of Train cycle goals selection update (lbragstad, > evrardjp) > > * TC governance resolution voting procedures (dhellmann) > > * upcoming TC election (dhellmann) > > * review proposed OIP acceptance criteria (dhellmann, wendar) > > * TC goals for Stein (dhellmann) > > [0] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee > > -- > Doug > The minutes and logs for the meeting are available on the eavesdrop server: Minutes: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html Log: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.log.html -- Doug From mriedemos at gmail.com Thu Feb 7 15:11:10 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Feb 2019 09:11:10 -0600 Subject: [nova][qa][cinder] CI job changes In-Reply-To: <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> References: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> Message-ID: On 2/5/2019 11:09 PM, Ghanshyam Mann wrote: > > 3. Drop the integrated-gate (py2) template jobs (from nova) > > > > Nova currently runs with both the integrated-gate and > > integrated-gate-py3 templates, which adds a set of tempest-full and > > grenade jobs each to the check and gate pipelines. 
I don't think we need > > to be gating on both py2 and py3 at this point when it comes to > > tempest/grenade changes. Tempest changes are still gating on both so we > > have coverage there against breaking changes, but I think anything > > that's py2 specific would be caught in unit and functional tests (which > > we're running on both py27 and py3*). > > > > IMO, we should keep running integrated-gate py2 templates on the project gate also > along with Tempest. Jobs in integrated-gate-* templates cover a large amount of code so > running that for both versions make sure we keep our code running on py2 also. Rest other > job like tempest-slow, nova-next etc are good to run only py3 on project side (Tempest gate > keep running py2 version also). > > I am not sure if unit/functional jobs cover all code coverage and it is safe to ignore the py version > consideration from integration CI. As per TC resolution, python2 can be dropped during begning of > U cycle [1]. > > You have good point of having the integrated-gate py2 coverage on Tempest gate only is enough > but it has risk of merging the py2 breaking code on project side which will block the Tempest gate. > I agree that such chances are rare but still it can happen. > > Other point is that we need integrated-gate template running when Stein and Train become > stable branch (means on stable/stein and stable/train gate). Otherwise there are chance when > py2 broken code from U (because we will test only py3 in U) is backported to stable/Train or > stable/stein. > > My opinion on this proposal is to wait till we officially drop py2 which is starting of U. > > [1]https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html > > -gmann We talked about this during the nova meeting today [1]. My main concern right now is efficiency and avoid running redundant test coverage when there is otherwise not much of a difference in the configured environment, which is what we have between the py2 and py3 integrated-gate templates. This is also driving my push to drop the nova-multiattach job and fold those tests into the integrated gate and slim down the number of tests we run in the nova-next job. I understand the concern of dropping the integrated-gate template from nova is a risk to break something in those jobs unknowingly. However, I assume that most py2-specific issues in nova will be caught in unit and functional test jobs which we continue to run. Also, nova is also running a few integration jobs that run on py27 (devstack-plugin-ceph-tempest and neutron-grenade-multinode), so we still have py2 test coverage. We're not dropping py27 support and we're still testing it, but it's a lower priority with everything moving to python3 and I think our test coverage should reflect that. I think we should try this [2] and if it does become a major issue we can revisit adding the integrated-gate py2 template jobs in nova until the U release. 
[1] http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-02-07-14.00.log.html#l-113 [2] https://review.openstack.org/#/c/634949/ -- Thanks, Matt From doug at doughellmann.com Thu Feb 7 15:58:58 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 10:58:58 -0500 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Message-ID: Adam Spiers writes: > Thierry Carrez wrote: >>Adam Spiers wrote: >>>[...] >>>Sure.  I particularly agree with your point about processes; I think >>>the TC (or whoever else volunteers) could definitely help lower the >>>barrier to starting up a pop-up team by creating a cookie-cutter >>>kind of approach which would quickly set up any required >>>infrastructure. For example it could be a simple form or CLI-based >>>tool posing questions like the following, where the answers could >>>facilitate the bootstrapping process: >>>- What is the name of your pop-up team? >>>- Please enter a brief description of the purpose of your pop-up team. >>>- If you will use an IRC channel, please state it here. >>>- Do you need regular IRC meetings? >>>- Do you need a new git repository?  [If so, ...] >>>- Do you need a new StoryBoard project?  [If so, ...] >>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>etc. >>> >>>The outcome of the form could be anything from pointers to specific >>>bits of documentation on how to set up the various bits of >>>infrastructure, all the way through to automation of as much of the >>>setup as is possible.  The slicker the process, the more agile the >>>community could become in this respect. >> >>That's a great idea -- if the pop-up team concept takes on we could >>definitely automate stuff. In the mean time I feel like the next step >>is to document what we mean by pop-up team, list them, and give >>pointers to the type of resources you can have access to (and how to >>ask for them). > > Agreed - a quickstart document would be a great first step. > >>In terms of "blessing" do you think pop-up teams should be ultimately >>approved by the TC ? On one hand that adds bureaucracy / steps to the >>process, but on the other having some kind of official recognition can >>help them... >> >>So maybe some after-the-fact recognition would work ? Let pop-up teams >>freely form and be listed, then have the TC declaring some of them (if >>not all of them) to be of public interest ? > > Yeah, good questions. The official recognition is definitely > beneficial; OTOH I agree that requiring steps up-front might deter > some teams from materialising. Automating these as much as possible > would reduce the risk of that. What benefit do you perceive to having official recognition? > > One challenge I see facing an after-the-fact approach is that any > requests for infrastructure (IRC channel / meetings / git repo / > Storyboard project etc.) would still need to be approved in advance, > and presumably a coordinated approach to approval might be more > effective than one where some of these requests could be approved and > others denied. Isn't the point of these teams that they would be coordinating work within other existing projects? 
So I wouldn't expect them to need git repositories or new IRC channels. Meeting times, yes. > > I'm not sure what the best approach is - sorry ;-) > -- Doug From doug at doughellmann.com Thu Feb 7 16:07:33 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 11:07:33 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> Message-ID: Ghanshyam Mann writes: > ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > > Thierry Carrez writes: > > > > > Doug Hellmann wrote: > > >> [...] > > >> During the Train series goal discussion in Berlin we talked about having > > >> a goal of ensuring that each team had documentation for bringing new > > >> contributors onto the team. Offering specific mentoring resources seems > > >> to fit nicely with that goal, and doing it in each team's repository in > > >> a consistent way would let us build a central page on docs.openstack.org > > >> to link to all of the team contributor docs, like we link to the user > > >> and installation documentation, without requiring us to find a separate > > >> group of people to manage the information across the entire community. > > > > > > I'm a bit skeptical of that approach. > > > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > > limited number of "I'll spend significant time helping you if you help > > > us" offers. I don't envision potential contributors to browse dozens of > > > project-specific "on-boarding doc" to find them. I would rather > > > consolidate those offers on a single page. > > > > > > So.. either some magic consolidation job that takes input from all of > > > those project-specific repos to build a nice rendered list... Or just a > > > wiki page ? > > > > > > -- > > > Thierry Carrez (ttx) > > > > > > > A wiki page would be nicely lightweight, so that approach makes some > > sense. Maybe if the only maintenance is to review the page periodically, > > we can convince one of the existing mentorship groups or the first > > contact SIG to do that. > > Same can be achieved If we have a single link on doc.openstack.org or contributor guide with > top section "Help-wanted" with subsection of each project specific help-wanted. project help > wanted subsection can be build from help wanted section from project contributor doc. > > That way it is easy for the project team to maintain their help wanted list. Wiki page can > have the challenge of prioritizing and maintain the list. > > -gmann > > > > > -- > > Doug Another benefit of using the wiki is that SIGs and pop-up teams can add their own items. We don't have a good way for those groups to be integrated with docs.openstack.org right now. 
-- Doug From Kevin.Fox at pnnl.gov Thu Feb 7 16:19:03 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 16:19:03 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C293652@EX10MBOX03.pnnl.gov> Currently cross project work is very hard due to contributors not having enough political capital (review capital) in each project to get attention/priority. By the TC putting its weight behind a popupgroup, the projects can know, that this is important, even though I haven't seen that contributor much before. They may not need git repo's but new IRC channels do make sense I think. Sometimes you need to coordinate work between projects and trying to do that in one of the project channels might not facilitate that. Thanks, Kevin ________________________________________ From: Doug Hellmann [doug at doughellmann.com] Sent: Thursday, February 07, 2019 7:58 AM To: Adam Spiers; Thierry Carrez Cc: Sean McGinnis; openstack-discuss at lists.openstack.org Subject: Re: [all][tc] Formalizing cross-project pop-up teams Adam Spiers writes: > Thierry Carrez wrote: >>Adam Spiers wrote: >>>[...] >>>Sure. I particularly agree with your point about processes; I think >>>the TC (or whoever else volunteers) could definitely help lower the >>>barrier to starting up a pop-up team by creating a cookie-cutter >>>kind of approach which would quickly set up any required >>>infrastructure. For example it could be a simple form or CLI-based >>>tool posing questions like the following, where the answers could >>>facilitate the bootstrapping process: >>>- What is the name of your pop-up team? >>>- Please enter a brief description of the purpose of your pop-up team. >>>- If you will use an IRC channel, please state it here. >>>- Do you need regular IRC meetings? >>>- Do you need a new git repository? [If so, ...] >>>- Do you need a new StoryBoard project? [If so, ...] >>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>etc. >>> >>>The outcome of the form could be anything from pointers to specific >>>bits of documentation on how to set up the various bits of >>>infrastructure, all the way through to automation of as much of the >>>setup as is possible. The slicker the process, the more agile the >>>community could become in this respect. >> >>That's a great idea -- if the pop-up team concept takes on we could >>definitely automate stuff. In the mean time I feel like the next step >>is to document what we mean by pop-up team, list them, and give >>pointers to the type of resources you can have access to (and how to >>ask for them). > > Agreed - a quickstart document would be a great first step. > >>In terms of "blessing" do you think pop-up teams should be ultimately >>approved by the TC ? On one hand that adds bureaucracy / steps to the >>process, but on the other having some kind of official recognition can >>help them... >> >>So maybe some after-the-fact recognition would work ? Let pop-up teams >>freely form and be listed, then have the TC declaring some of them (if >>not all of them) to be of public interest ? > > Yeah, good questions. 
The official recognition is definitely > beneficial; OTOH I agree that requiring steps up-front might deter > some teams from materialising. Automating these as much as possible > would reduce the risk of that. What benefit do you perceive to having official recognition? > > One challenge I see facing an after-the-fact approach is that any > requests for infrastructure (IRC channel / meetings / git repo / > Storyboard project etc.) would still need to be approved in advance, > and presumably a coordinated approach to approval might be more > effective than one where some of these requests could be approved and > others denied. Isn't the point of these teams that they would be coordinating work within other existing projects? So I wouldn't expect them to need git repositories or new IRC channels. Meeting times, yes. > > I'm not sure what the best approach is - sorry ;-) > -- Doug From juliaashleykreger at gmail.com Thu Feb 7 16:21:45 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 7 Feb 2019 08:21:45 -0800 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> <20190206213222.43nin24mkbqhsrw7@redhat.com> Message-ID: An awesome email Chris, thanks! Various thoughts below. On Thu, Feb 7, 2019 at 2:40 AM Chris Dent wrote: > > On Wed, 6 Feb 2019, Lars Kellogg-Stedman wrote: > > > I'm still not clear on whether there's any way to make this work with > > existing tools, or if it makes sense to figure out to make Nova do > > this or if we need something else sitting in front of Ironic. The community is not going to disagree with supporting a different model for access. For some time we've had a consensus that there is a need, it is just getting there and understanding the full of extent of the needs that is the conundrum. Today, a user doesn't need nova to deploy a baremetal machine, they just need baremetal_admin access rights and to have chosen which machine they want. I kind of feel like if there are specific access patterns and usage rights, then it would be good to write those down because the ironic api has always been geared for admin usage or usage via nova. While not perfect, each API endpoint is ultimately represent a pool of hardware resources to be managed. Different patterns do have different needs, and some of that may be filtering the view of hardware from a user, or only showing a user what they have rights to access. For example, with some of the discussion, there would conceivably be a need to expose or point to bmc credentials for machines checked out. That seems like a huge conundrum and would require access rights and an entire workflow, that is outside of a fully trusted or single tenant admin trusted environment. Ultimately I think some of this is going to require discussion in a specification document to hammer out exactly what is needed from ironic. > > If I recall the early conversations correctly, one of the > thoughts/frustrations that brought placement into existence was the > way in which there needed to be a pile of flavors, constantly > managed to reflect the variety of resources in the "cloud"; wouldn't > it be nice to simply reflect those resources, ask for the things you > wanted, not need to translate that into a flavor, and not need to > create a new flavor every time some new thing came along? 
> I feel like this is also why we started heading in the direction of traits and why we now have the capability to have traits described about a specific node. Granted, traits doesn't solve it all, and operators kind of agreed (In the Sydney Forum) that they couldn't really agree on common trait names for additional baremetal traits. > It wouldn't be super complicated for Ironic to interact directly > with placement to report hardware inventory at regular intervals > and to get a list of machines that meet the "at least X > GB RAM and Y GB disk space" requirements when somebody wants to boot > (or otherwise select, perhaps for later use) a machine, circumventing > nova and concepts like flavors. As noted elsewhere in the thread you > lose concepts of tenancy, affinity and other orchestration concepts > that nova provides. But if those don't matter, or if the shape of > those things doesn't fit, it might (might!) be a simple matter of > programming... I seem to recall there have been several efforts in > this direction over the years, but not any that take advantage of > placement. > I know myself and others in the ironic community would be interested to see a proof of concept and to support this behavior. Admittedly I don't know enough about placement and I suspect the bulk of our primary contributors are in a similar boat as myself with multiple commitments that would really prevent spending time on an experiment such as this. > One thing to keep in mind is the reasons behind the creation of > custom resource classes like CUSTOM_BAREMETAL_GOLD for reporting > ironic inventory (instead of the actual available hardware): A job > on baremetal consumes all of it. If Ironic is reporting granular > inventory, when it claims a big machine if the initial request was > for a smaller machine, the claim would either need to be for all the > stuff (to not leave inventory something else might like to claim) or > some other kind of inventory manipulation (such as adjusting > reserved) might be required. I think some of this logic and some of the conundrums we've hit with nova interaction in the past is also one of the items that might seem as too much to take on, then again I guess it should end up being kind of simpler... I think. > > One option might be to have all inventoried machines to have classes > of resource for hardware and then something like a PHYSICAL_MACHINE > class with a value of 1. When a request is made (including the > PHSYICAL_MACHINE=1), the returned resources are sorted by "best fit" > and an allocation is made. PHYSICAL_MACHINE goes to 0, taking that > resource provider out of service, but leaving the usage an accurate > representation of reality. > I feel like this was kind of already the next discussion direction, but I suspect I'm going to need to see a data model to picture it in my head. :( > I think it might be worth exploring, and so it's clear I'm not > talking from my armchair here, I've been doing some > experiments/hacks with launching VMs with just placement, etcd and a > bit of python that have proven quite elegant and may help to > demonstrate how simple an initial POC that talked with ironic > instead could be: > > https://github.com/cdent/etcd-compute Awesome, I'll add it to my list of things to check out! 
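To put a very rough shape on that "ironic talks to placement directly" loop, here is a sketch using nothing but raw HTTP calls against the placement API. Treat the endpoint, token handling and microversion below as assumptions, and note that resource provider generations, retries and error handling are all hand-waved:

#!/usr/bin/env python3
"""Back-of-the-envelope sketch: report a node's inventory to placement and
ask for candidates that satisfy "at least X MB RAM and Y GB disk".

Assumptions: a token in $OS_TOKEN, the endpoint in $PLACEMENT_URL, and a
microversion recent enough for /allocation_candidates (1.10 or later).
"""
import os
import uuid

import requests

PLACEMENT = os.environ.get("PLACEMENT_URL", "http://localhost/placement")
HEADERS = {
    "X-Auth-Token": os.environ.get("OS_TOKEN", ""),
    "OpenStack-API-Version": "placement 1.10",
    "Accept": "application/json",
}


def report_node(node_uuid, name, memory_mb, disk_gb, vcpus):
    # Create a resource provider for the node; a 409 here just means it
    # already exists, which a real loop would handle properly.
    requests.post(PLACEMENT + "/resource_providers", headers=HEADERS,
                  json={"uuid": node_uuid, "name": name})
    inventory = {
        # Generation 0 is only valid for a freshly created provider; a
        # real implementation would GET the provider and retry on conflict.
        "resource_provider_generation": 0,
        "inventories": {
            "MEMORY_MB": {"total": memory_mb},
            "DISK_GB": {"total": disk_gb},
            "VCPU": {"total": vcpus},
        },
    }
    resp = requests.put(
        "{}/resource_providers/{}/inventories".format(PLACEMENT, node_uuid),
        headers=HEADERS, json=inventory)
    resp.raise_for_status()


def find_candidates(memory_mb, disk_gb):
    # "At least X RAM and Y disk" becomes a resources query string.
    resp = requests.get(
        PLACEMENT + "/allocation_candidates", headers=HEADERS,
        params={"resources": "MEMORY_MB:{},DISK_GB:{}".format(memory_mb, disk_gb)})
    resp.raise_for_status()
    return resp.json()["provider_summaries"]


if __name__ == "__main__":
    node = str(uuid.uuid4())
    report_node(node, "rack1-node1", memory_mb=131072, disk_gb=1000, vcpus=32)
    print(find_candidates(memory_mb=65536, disk_gb=500))

A real implementation would presumably live behind the conductor's existing periodic tasks and reuse keystoneauth sessions, but the amount of API surface involved really is small.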
> > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent From Kevin.Fox at pnnl.gov Thu Feb 7 16:32:23 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 16:32:23 +0000 Subject: [TripleO] containers logging to stdout In-Reply-To: <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com>, <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> k8s only supports the json driver too. So if its the end goal, sticking to that might be easier. Thanks, Kevin ________________________________________ From: Cédric Jeanneret [cjeanner at redhat.com] Sent: Wednesday, February 06, 2019 10:11 PM To: openstack-discuss at lists.openstack.org Subject: Re: [TripleO] containers logging to stdout Hello, I'm currently testing things, related to this LP: https://bugs.launchpad.net/tripleo/+bug/1814897 We might hit some issues: - With docker, json-file log driver doesn't support any "path" options, and it outputs the files inside the container namespace (/var/lib/docker/container/ID/ID-json.log) - With podman, we actually have a "path" option, and it works nice. But the json-file isn't a JSON at all. - Docker supports journald and some other outputs - Podman doesn't support anything else than json-file Apparently, Docker seems to support a failing "journald" backend. So we might end with two ways of logging, if we're to keep docker in place. Cheers, C. On 2/5/19 11:11 AM, Cédric Jeanneret wrote: > Hello there! > > small thoughts: > - we might already push the stdout logging, in parallel of the current > existing one > > - that would already point some weakness and issues, without making the > whole thing crash, since there aren't that many logs in stdout for now > > - that would already allow to check what's the best way to do it, and > what's the best format for re-usability (thinking: sending logs to some > (k)elk and the like) > > This would also allow devs to actually test that for their services. And > thus going forward on this topic. > > Any thoughts? > > Cheers, > > C. > > On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >> Hello! >> >> >> In Queens, the a spec to provide the option to make containers log to >> standard output was proposed [1] [2]. Some work was done on that side, >> but due to the lack of traction, it wasn't completed. With the Train >> release coming, I think it would be a good idea to revive this effort, >> but make logging to stdout the default in that release. >> >> This would allow several benefits: >> >> * All logging from the containers would en up in journald; this would >> make it easier for us to forward the logs, instead of having to keep >> track of the different directories in /var/log/containers >> >> * The journald driver would add metadata to the logs about the container >> (we would automatically get what container ID issued the logs). >> >> * This wouldo also simplify the stacks (removing the Logging nested >> stack which is present in several templates). >> >> * Finally... if at some point we move towards kubernetes (or something >> in between), managing our containers, it would work with their logging >> tooling as well. >> >> >> Any thoughts? 
>> >> >> [1] >> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >> >> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >> >> >> > -- Cédric Jeanneret Software Engineer DFG:DF From allison at openstack.org Thu Feb 7 17:31:52 2019 From: allison at openstack.org (Allison Price) Date: Thu, 7 Feb 2019 11:31:52 -0600 Subject: OpenStack Foundation 2018 Annual Report Message-ID: <74148057-916E-4953-9E17-2193B269333A@openstack.org> Hi everyone, Today, we have published the OpenStack Foundation 2018 Annual Report [1], a yearly report highlighting the incredible work and advancements being achieved by the community. Thank you to all of the community contributors who helped pull the report together. Read the latest on: The Foundation’s latest initiatives to support Open Infrastructure Project updates from the OpenStack, Airship, Kata Containers, StarlingX, and Zuul communities Highlights from OpenStack Workings Groups and SIGs Community programs including OpenStack Upstream Institute, the Travel Support Program, Outreachy Internship Programs, and Contributor recognition OpenStack Foundation events including PTGs, Forums, OpenStack / OpenInfra Days, and the OpenStack Summit With almost 100,000 individual members, our community accomplished a lot last year. If you would like to continue to stay updated in the latest Foundation and project news, subscribe to the bi-weekly Open Infrastructure newsletter [2]. We look forward to another successful year in 2019! Cheers, Allison [1] https://www.openstack.org/foundation/2018-openstack-foundation-annual-report [2] https://www.openstack.org/community/email-signup Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Feb 7 17:58:50 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 7 Feb 2019 09:58:50 -0800 Subject: [nova][dev] project self-evaluation against TC technical vision Message-ID: <7176c3c4-52a3-50e9-2d6c-c4f546428c4b@gmail.com> Howdy everyone, About a month ago, the TC sent out a mail [1] asking projects to complete a self-evaluation exercise against the technical vision for OpenStack clouds, published by the TC [2]. The self-evaluation is to be added to our in-tree docs as a living document to be updated over time as things change. To paraphrase from [1], the intent of the exercise is to help projects identify areas they can work on to improve alignment with the rest of OpenStack. The doc should be a concise, easily consumable list of things that interested contributors can work on. Here are examples of vision reflection documents: * openstack/ironic: https://review.openstack.org/629060 * openstack/placement: https://review.openstack.org/630216 I have created an etherpad for us to use to fill in ideas for our vision reflection document: https://etherpad.openstack.org/p/nova-tc-vision-self-eval I'd like to invite everyone in the nova community including operators, users, and developers to join the etherpad and share their thoughts on how nova can improve its alignment to the technical vision for OpenStack clouds. Feel free to add or modify sections as you like. And once we've collected ideas for the doc, I (or anyone) can propose a doc patch to openstack/nova, to be included with our in-tree documentation. 
Cheers, -melanie [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] https://governance.openstack.org/tc/reference/technical-vision.html From sean.mcginnis at gmx.com Thu Feb 7 18:20:34 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 7 Feb 2019 12:20:34 -0600 Subject: [release] Release countdown for week R-8, February 11-15 Message-ID: <20190207182034.GA4139@sm-workstation> Your long awaited countdown email... Development Focus ----------------- It's probably a good time for teams to take stock of their library and client work that needs to be completed yet. The non-client library freeze is coming up, followed closely by the client lib freeze. Please plan accordingly so avoid any last minute rushes to get key functionality in. General Information ------------------- We have a few deadlines coming up as we get closer to the end of the cycle: * Non-client libraries (generally, any library that is not python-${PROJECT}client) must have a final release by February 28. Only critical bugfixes will be allowed past this point. Please make sure any important feature works has required library changes by this time. * Client libraries must have a final release by March 7. We will be proposing a few patches to switch some cycle-with-intermediary deliverables over to cycle-with-rc if they are not actually doing intermediary releases. PTLs and release liaisons, please watch for being added to any of those reviews. If the switch is not desired, this is a good time to do an intermediary release if you have been putting it off. More information can be found in our post to the mailing list back in December: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000465.html It is also a good time to start planning what highlights you want for your project team in the cycle highlights: Background on cycle-highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html Project Team Guide, Cycle-Highlights: https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights knelson [at] openstack.org/diablo_rojo on IRC is available if you need help selecting or writing your highlights Upcoming Deadlines & Dates -------------------------- Non-client library freeze: February 28 Stein-3 milestone: March 7 -- Sean McGinnis (smcginnis) From jgrosso at redhat.com Thu Feb 7 18:59:48 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Thu, 7 Feb 2019 13:59:48 -0500 Subject: [storyboard] sandbox to play with Message-ID: Hello Storyboard, Is there a sandbox where I can test some of the functionality compared to launchpad? Any help would be appreciated! Thanks, Jason Grosso Senior Quality Engineer - Cloud Red Hat OpenStack Manila jgrosso at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:27:00 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:27:00 -0800 Subject: [storyboard] sandbox to play with In-Reply-To: References: Message-ID: Yes there is! [1] Let us know if you have any other questions! -Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/ On Thu, Feb 7, 2019 at 11:01 AM Jason Grosso wrote: > Hello Storyboard, > > Is there a sandbox where I can test some of the functionality compared to > launchpad? > > Any help would be appreciated! 
> > Thanks, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:45:21 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:45:21 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann wrote: > Jeremy Stanley writes: > > > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > > [...] > >> If I recall it correctly from Board+TC meeting, TC is looking for > >> a new home for this list ? Or we continue to maintain this in TC > >> itself which should not be much effort I feel. > > [...] > > > > It seems like you might be referring to the in-person TC meeting we > > held on the Sunday prior to the Stein PTG in Denver (Alan from the > > OSF BoD was also present). Doug's recap can be found in the old > > openstack-dev archive here: > > > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > > > > Quoting Doug, "...it wasn't clear that the TC was the best group to > > manage a list of 'roles' or other more detailed information. We > > discussed placing that information into team documentation or > > hosting it somewhere outside of the governance repository where more > > people could contribute." (If memory serves, this was in response to > > earlier OSF BoD suggestions that retooling the Help Wanted list to > > be a set of business-case-focused job descriptions might garner more > > uptake from the organizations they represent.) > > -- > > Jeremy Stanley > > Right, the feedback was basically that we might have more luck > convincing companies to provide resources if we were more specific about > how they would be used by describing the work in more detail. When we > started thinking about how that change might be implemented, it seemed > like managing the information a well-defined job in its own right, and > our usual pattern is to establish a group of people interested in doing > something and delegating responsibility to them. When we talked about it > in the TC meeting in Denver we did not have any TC members volunteer to > drive the implementation to the next step by starting to recruit a team. > > During the Train series goal discussion in Berlin we talked about having > a goal of ensuring that each team had documentation for bringing new > contributors onto the team. This was something I thought the docs team was working on pushing with all of the individual projects, but I am happy to help if they need extra hands. I think this is suuuuuper important. Each Upstream Institute we teach all the general info we can, but we always mention that there are project specific ways of handling things and project specific processes. If we want to lower the barrier for new contributors, good per project documentation is vital. 
> Offering specific mentoring resources seems > to fit nicely with that goal, and doing it in each team's repository in > a consistent way would let us build a central page on docs.openstack.org > to link to all of the team contributor docs, like we link to the user > and installation documentation, without requiring us to find a separate > group of people to manage the information across the entire community. I think maintaining the project liaison list[1] that the First Contact SIG has kind of does this? Between that list and the mentoring cohort program that lives under the D&I WG, I think we have things covered. Its more a matter of publicizing those than starting something new I think? > > So, maybe the next step is to convince someone to champion a goal of > improving our contributor documentation, and to have them describe what > the documentation should include, covering the usual topics like how to > actually submit patches as well as suggestions for how to describe areas > where help is needed in a project and offers to mentor contributors. Does anyone want to volunteer to serve as the goal champion for that? > > I can probably draft a rough outline of places where I see projects diverge and make a template, but where should we have that live? /me imagines a template similar to the infra spec template > -- > Doug > > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:52:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:52:15 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: On Thu, Feb 7, 2019 at 4:45 AM Doug Hellmann wrote: > Thierry Carrez writes: > > > Doug Hellmann wrote: > >> [...] > >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on > docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > I'm a bit skeptical of that approach. > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > limited number of "I'll spend significant time helping you if you help > > us" offers. I don't envision potential contributors to browse dozens of > > project-specific "on-boarding doc" to find them. I would rather > > consolidate those offers on a single page. > > > > So.. either some magic consolidation job that takes input from all of > > those project-specific repos to build a nice rendered list... Or just a > > wiki page ? > > > > -- > > Thierry Carrez (ttx) > > > > A wiki page would be nicely lightweight, so that approach makes some > sense. Maybe if the only maintenance is to review the page periodically, > we can convince one of the existing mentorship groups or the first > contact SIG to do that. 
> So I think that the First Contact SIG project liaison list kind of fits this. Its already maintained in a wiki and its already a list of people willing to be contacted for helping people get started. It probably just needs more attention and refreshing. When it was first set up we (the FC SIG) kind of went around begging for volunteers and then once we maxxed out on them, we said those projects without volunteers will have the role defaulted to the PTL unless they delegate (similar to how other liaison roles work). Long story short, I think we have the sort of mentoring things covered. And to back up an earlier email, project specific onboarding would be a good help too. In my mind I see the help most wanted list as being useful if we want to point people at specific projects that need more hands than others, but I think that the problem is that its hard to quanitfy/keep up to date and the TC was put in charge thinking that they had a better lay of the overall landscape? I think it could go away as documentation maintained by the TC. If we wanted to try to keep a like.. top 5 projects that need friends list... that could live in the FC SIG wiki as well I think. > -- > Doug > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 20:02:48 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 12:02:48 -0800 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: On Fri, Feb 1, 2019 at 6:26 AM Eric Fried wrote: > Tony- > > Thanks for following up on this! > > > The general idea is that the bot would: > > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > > some small change > > As I mentioned in the room, to give a realistic experience the bot > should wait two or three weeks before tendering its -1. > > I kid (in case that wasn't clear). > > > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > > on the change > > If you're compiling a list of eventual features for the bot, another one > that could be neat is, after the second patch set, the bot merges a > change that creates a merge conflict on the student's patch, which they > then have to go resolve. > Another, other eventual feature I talked about with Jimmy MacArthur a few weeks ago was if we could have the bot ask the new contributors how it was they got to this point in their contributions? Was it self driven? Was it a part of OUI, was it from other documentation? Would be interesting to see how our new contributors are making their way in so that we can better help them/fix where the system is falling down. Would also be really interesting data :) And who doesn't live data? > > Also, cross-referencing [1], it might be nice to update that tutorial at > some point to use the sandbox repo instead of nova. That could be done > once we have bot action so said action could be incorporated into the > tutorial flow. > > > [2] The details of what counts as qualifying can be fleshed out later > > but there needs to be something so that contributors using the > > sandbox that don't want to be bothered by the bot wont be. > > Yeah, I had been assuming it would be some tag in the commit message. 
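Parsing-wise that should be cheap. A quick sketch of how the bot might read such a tag follows; the "Bot-Reviewer" footer name and the option syntax are only what has been floated in this thread, nothing settled:

#!/usr/bin/env python3
"""Sketch of how the sandbox bot could read its opt-in tag."""
import re

TAG_RE = re.compile(r"^Bot-Reviewer:\s*(?P<options>.*)$", re.IGNORECASE)


def parse_bot_tag(commit_message):
    """Return a dict of bot options, or None when the change doesn't opt in.

    Examples:
        'Bot-Reviewer:'                         -> {}
        'Bot-Reviewer: Level 2'                 -> {'level 2': True}
        'Bot-Reviewer: merge-conflict, series-depth=3'
            -> {'merge-conflict': True, 'series-depth': '3'}
    """
    for line in commit_message.splitlines():
        match = TAG_RE.match(line.strip())
        if not match:
            continue
        options = {}
        for chunk in match.group("options").split(","):
            chunk = chunk.strip()
            if not chunk:
                continue
            key, _, value = chunk.partition("=")
            options[key.strip().lower()] = value.strip() or True
        return options
    return None  # no tag at all: the bot leaves this change alone


if __name__ == "__main__":
    msg = "Fix the thing\n\nBot-Reviewer: initial-downvote, series-depth=3\n"
    print(parse_bot_tag(msg))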
If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 > > The possibilities are endless :P > > -efried > > [1] https://review.openstack.org/#/c/634333/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Thu Feb 7 20:21:03 2019 From: aspiers at suse.com (Adam Spiers) Date: Thu, 7 Feb 2019 20:21:03 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Message-ID: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Doug Hellmann wrote: >Adam Spiers writes: >>Thierry Carrez wrote: >>>Adam Spiers wrote: >>>>[...] >>>>Sure.  I particularly agree with your point about processes; I think >>>>the TC (or whoever else volunteers) could definitely help lower the >>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>kind of approach which would quickly set up any required >>>>infrastructure. For example it could be a simple form or CLI-based >>>>tool posing questions like the following, where the answers could >>>>facilitate the bootstrapping process: >>>>- What is the name of your pop-up team? >>>>- Please enter a brief description of the purpose of your pop-up team. >>>>- If you will use an IRC channel, please state it here. >>>>- Do you need regular IRC meetings? >>>>- Do you need a new git repository?  [If so, ...] >>>>- Do you need a new StoryBoard project?  [If so, ...] >>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>etc. >>>> >>>>The outcome of the form could be anything from pointers to specific >>>>bits of documentation on how to set up the various bits of >>>>infrastructure, all the way through to automation of as much of the >>>>setup as is possible.  The slicker the process, the more agile the >>>>community could become in this respect. >>> >>>That's a great idea -- if the pop-up team concept takes on we could >>>definitely automate stuff. In the mean time I feel like the next step >>>is to document what we mean by pop-up team, list them, and give >>>pointers to the type of resources you can have access to (and how to >>>ask for them). >> >>Agreed - a quickstart document would be a great first step. >> >>>In terms of "blessing" do you think pop-up teams should be ultimately >>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>process, but on the other having some kind of official recognition can >>>help them... >>> >>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>freely form and be listed, then have the TC declaring some of them (if >>>not all of them) to be of public interest ? >> >>Yeah, good questions. The official recognition is definitely >>beneficial; OTOH I agree that requiring steps up-front might deter >>some teams from materialising. Automating these as much as possible >>would reduce the risk of that. > >What benefit do you perceive to having official recognition? Difficult to quantify a cultural impact ... 
Maybe it's not a big deal, but I'm pretty sure it makes a difference in that news of "official" things seems to propagate along the various grapevines better than skunkworks initiatives. One possibility is that the TC is the mother of all other grapevines ;-) So if the TC is aware of something then (perhaps naively) I expect that the ensuing discussion will accelerate spreading of awareness amongst rest of the community. And of course there are other official communication channels which could have a similar effect. >>One challenge I see facing an after-the-fact approach is that any >>requests for infrastructure (IRC channel / meetings / git repo / >>Storyboard project etc.) would still need to be approved in advance, >>and presumably a coordinated approach to approval might be more >>effective than one where some of these requests could be approved and >>others denied. > >Isn't the point of these teams that they would be coordinating work >within other existing projects? Yes. >So I wouldn't expect them to need git repositories or new IRC >channels. Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. From doug at doughellmann.com Thu Feb 7 20:27:39 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:27:39 -0500 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: Adam Spiers writes: > Doug Hellmann wrote: >>Adam Spiers writes: >>>Thierry Carrez wrote: >>>>Adam Spiers wrote: >>>>>[...] >>>>>Sure.  I particularly agree with your point about processes; I think >>>>>the TC (or whoever else volunteers) could definitely help lower the >>>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>>kind of approach which would quickly set up any required >>>>>infrastructure. For example it could be a simple form or CLI-based >>>>>tool posing questions like the following, where the answers could >>>>>facilitate the bootstrapping process: >>>>>- What is the name of your pop-up team? >>>>>- Please enter a brief description of the purpose of your pop-up team. >>>>>- If you will use an IRC channel, please state it here. >>>>>- Do you need regular IRC meetings? >>>>>- Do you need a new git repository?  [If so, ...] >>>>>- Do you need a new StoryBoard project?  [If so, ...] >>>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>>etc. 
>>>>> >>>>>The outcome of the form could be anything from pointers to specific >>>>>bits of documentation on how to set up the various bits of >>>>>infrastructure, all the way through to automation of as much of the >>>>>setup as is possible.  The slicker the process, the more agile the >>>>>community could become in this respect. >>>> >>>>That's a great idea -- if the pop-up team concept takes on we could >>>>definitely automate stuff. In the mean time I feel like the next step >>>>is to document what we mean by pop-up team, list them, and give >>>>pointers to the type of resources you can have access to (and how to >>>>ask for them). >>> >>>Agreed - a quickstart document would be a great first step. >>> >>>>In terms of "blessing" do you think pop-up teams should be ultimately >>>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>>process, but on the other having some kind of official recognition can >>>>help them... >>>> >>>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>>freely form and be listed, then have the TC declaring some of them (if >>>>not all of them) to be of public interest ? >>> >>>Yeah, good questions. The official recognition is definitely >>>beneficial; OTOH I agree that requiring steps up-front might deter >>>some teams from materialising. Automating these as much as possible >>>would reduce the risk of that. >> >>What benefit do you perceive to having official recognition? > > Difficult to quantify a cultural impact ... Maybe it's not a big > deal, but I'm pretty sure it makes a difference in that news of > "official" things seems to propagate along the various grapevines > better than skunkworks initiatives. One possibility is that the TC is > the mother of all other grapevines ;-) So if the TC is aware of > something then (perhaps naively) I expect that the ensuing discussion > will accelerate spreading of awareness amongst rest of the community. > And of course there are other official communication channels which > could have a similar effect. > >>>One challenge I see facing an after-the-fact approach is that any >>>requests for infrastructure (IRC channel / meetings / git repo / >>>Storyboard project etc.) would still need to be approved in advance, >>>and presumably a coordinated approach to approval might be more >>>effective than one where some of these requests could be approved and >>>others denied. >> >>Isn't the point of these teams that they would be coordinating work >>within other existing projects? > > Yes. > >>So I wouldn't expect them to need git repositories or new IRC >>channels. > > Never? Code and documentation doesn't always naturally belong in a > single project, especially when it relates to cross-project work. > Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel > in which to collaborate on a specific topic, it seems fairly clear > that none of #openstack-{monasca,vitrage,heat} are optimal choices. What's wrong with #openstack-dev? > The self-healing SIG has both a dedicated git repository (for docs, > code, and in order to be able to use StoryBoard) and a dedicated IRC > channel. We find both useful. > > Of course SIGs are more heavy-weight and long-lived so I'm not > suggesting that all or even necessarily the majority of popup teams > would need git/IRC. But I imagine it's possible in some cases, at > least. Right, SIGs are not designed to disappear after a task is done in the way that popup teams are. 
If a popup team is going to create code, it needs to end up in a repository that is owned and maintained by someone over the long term. If that requires a new repo, and one of the existing teams isn't a natural home, then I think a new regular team is likely a better fit for the task than a popup team. -- Doug From Kevin.Fox at pnnl.gov Thu Feb 7 20:29:04 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 20:29:04 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> , <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: <1A3C52DFCD06494D8528644858247BF01C29DE58@EX10MBOX03.pnnl.gov> yeah, I don't think k8s working groups have repos, just sigs. as working groups are short lived. Popup Groups should be similar to working groups I think. Thanks, Kevin ________________________________________ From: Adam Spiers [aspiers at suse.com] Sent: Thursday, February 07, 2019 12:21 PM To: Doug Hellmann Cc: Thierry Carrez; Sean McGinnis; openstack-discuss at lists.openstack.org Subject: Re: [all][tc] Formalizing cross-project pop-up teams Doug Hellmann wrote: >Adam Spiers writes: >>Thierry Carrez wrote: >>>Adam Spiers wrote: >>>>[...] >>>>Sure. I particularly agree with your point about processes; I think >>>>the TC (or whoever else volunteers) could definitely help lower the >>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>kind of approach which would quickly set up any required >>>>infrastructure. For example it could be a simple form or CLI-based >>>>tool posing questions like the following, where the answers could >>>>facilitate the bootstrapping process: >>>>- What is the name of your pop-up team? >>>>- Please enter a brief description of the purpose of your pop-up team. >>>>- If you will use an IRC channel, please state it here. >>>>- Do you need regular IRC meetings? >>>>- Do you need a new git repository? [If so, ...] >>>>- Do you need a new StoryBoard project? [If so, ...] >>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>etc. >>>> >>>>The outcome of the form could be anything from pointers to specific >>>>bits of documentation on how to set up the various bits of >>>>infrastructure, all the way through to automation of as much of the >>>>setup as is possible. The slicker the process, the more agile the >>>>community could become in this respect. >>> >>>That's a great idea -- if the pop-up team concept takes on we could >>>definitely automate stuff. In the mean time I feel like the next step >>>is to document what we mean by pop-up team, list them, and give >>>pointers to the type of resources you can have access to (and how to >>>ask for them). >> >>Agreed - a quickstart document would be a great first step. >> >>>In terms of "blessing" do you think pop-up teams should be ultimately >>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>process, but on the other having some kind of official recognition can >>>help them... >>> >>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>freely form and be listed, then have the TC declaring some of them (if >>>not all of them) to be of public interest ? 
>> >>Yeah, good questions. The official recognition is definitely >>beneficial; OTOH I agree that requiring steps up-front might deter >>some teams from materialising. Automating these as much as possible >>would reduce the risk of that. > >What benefit do you perceive to having official recognition? Difficult to quantify a cultural impact ... Maybe it's not a big deal, but I'm pretty sure it makes a difference in that news of "official" things seems to propagate along the various grapevines better than skunkworks initiatives. One possibility is that the TC is the mother of all other grapevines ;-) So if the TC is aware of something then (perhaps naively) I expect that the ensuing discussion will accelerate spreading of awareness amongst rest of the community. And of course there are other official communication channels which could have a similar effect. >>One challenge I see facing an after-the-fact approach is that any >>requests for infrastructure (IRC channel / meetings / git repo / >>Storyboard project etc.) would still need to be approved in advance, >>and presumably a coordinated approach to approval might be more >>effective than one where some of these requests could be approved and >>others denied. > >Isn't the point of these teams that they would be coordinating work >within other existing projects? Yes. >So I wouldn't expect them to need git repositories or new IRC >channels. Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. From doug at doughellmann.com Thu Feb 7 20:29:22 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:29:22 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Kendall Nelson writes: > On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann wrote: > >> Jeremy Stanley writes: >> >> > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: >> > [...] >> >> If I recall it correctly from Board+TC meeting, TC is looking for >> >> a new home for this list ? Or we continue to maintain this in TC >> >> itself which should not be much effort I feel. >> > [...] >> > >> > It seems like you might be referring to the in-person TC meeting we >> > held on the Sunday prior to the Stein PTG in Denver (Alan from the >> > OSF BoD was also present). Doug's recap can be found in the old >> > openstack-dev archive here: >> > >> > >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html >> > >> > Quoting Doug, "...it wasn't clear that the TC was the best group to >> > manage a list of 'roles' or other more detailed information. 
We >> > discussed placing that information into team documentation or >> > hosting it somewhere outside of the governance repository where more >> > people could contribute." (If memory serves, this was in response to >> > earlier OSF BoD suggestions that retooling the Help Wanted list to >> > be a set of business-case-focused job descriptions might garner more >> > uptake from the organizations they represent.) >> > -- >> > Jeremy Stanley >> >> Right, the feedback was basically that we might have more luck >> convincing companies to provide resources if we were more specific about >> how they would be used by describing the work in more detail. When we >> started thinking about how that change might be implemented, it seemed >> like managing the information a well-defined job in its own right, and >> our usual pattern is to establish a group of people interested in doing >> something and delegating responsibility to them. When we talked about it >> in the TC meeting in Denver we did not have any TC members volunteer to >> drive the implementation to the next step by starting to recruit a team. >> >> During the Train series goal discussion in Berlin we talked about having >> a goal of ensuring that each team had documentation for bringing new >> contributors onto the team. > > > This was something I thought the docs team was working on pushing with all > of the individual projects, but I am happy to help if they need extra > hands. I think this is suuuuuper important. Each Upstream Institute we > teach all the general info we can, but we always mention that there are > project specific ways of handling things and project specific processes. If > we want to lower the barrier for new contributors, good per project > documentation is vital. > > >> Offering specific mentoring resources seems >> to fit nicely with that goal, and doing it in each team's repository in >> a consistent way would let us build a central page on docs.openstack.org >> to link to all of the team contributor docs, like we link to the user >> and installation documentation, without requiring us to find a separate >> group of people to manage the information across the entire community. > > > I think maintaining the project liaison list[1] that the First Contact SIG > has kind of does this? Between that list and the mentoring cohort program > that lives under the D&I WG, I think we have things covered. Its more a > matter of publicizing those than starting something new I think? > > >> >> So, maybe the next step is to convince someone to champion a goal of >> improving our contributor documentation, and to have them describe what >> the documentation should include, covering the usual topics like how to >> actually submit patches as well as suggestions for how to describe areas >> where help is needed in a project and offers to mentor contributors. > >> Does anyone want to volunteer to serve as the goal champion for that? >> >> > I can probably draft a rough outline of places where I see projects diverge > and make a template, but where should we have that live? > > /me imagines a template similar to the infra spec template Could we put it in the project team guide? 
> > >> -- >> Doug >> >> > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -- Doug From doug at doughellmann.com Thu Feb 7 20:32:26 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:32:26 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: Kendall Nelson writes: > On Thu, Feb 7, 2019 at 4:45 AM Doug Hellmann wrote: > >> Thierry Carrez writes: >> >> > Doug Hellmann wrote: >> >> [...] >> >> During the Train series goal discussion in Berlin we talked about having >> >> a goal of ensuring that each team had documentation for bringing new >> >> contributors onto the team. Offering specific mentoring resources seems >> >> to fit nicely with that goal, and doing it in each team's repository in >> >> a consistent way would let us build a central page on >> docs.openstack.org >> >> to link to all of the team contributor docs, like we link to the user >> >> and installation documentation, without requiring us to find a separate >> >> group of people to manage the information across the entire community. >> > >> > I'm a bit skeptical of that approach. >> > >> > Proper peer mentoring takes a lot of time, so I expect there will be a >> > limited number of "I'll spend significant time helping you if you help >> > us" offers. I don't envision potential contributors to browse dozens of >> > project-specific "on-boarding doc" to find them. I would rather >> > consolidate those offers on a single page. >> > >> > So.. either some magic consolidation job that takes input from all of >> > those project-specific repos to build a nice rendered list... Or just a >> > wiki page ? >> > >> > -- >> > Thierry Carrez (ttx) >> > >> >> A wiki page would be nicely lightweight, so that approach makes some >> sense. Maybe if the only maintenance is to review the page periodically, >> we can convince one of the existing mentorship groups or the first >> contact SIG to do that. >> > > So I think that the First Contact SIG project liaison list kind of fits > this. Its already maintained in a wiki and its already a list of people > willing to be contacted for helping people get started. It probably just > needs more attention and refreshing. When it was first set up we (the FC > SIG) kind of went around begging for volunteers and then once we maxxed out > on them, we said those projects without volunteers will have the role > defaulted to the PTL unless they delegate (similar to how other liaison > roles work). > > Long story short, I think we have the sort of mentoring things covered. And > to back up an earlier email, project specific onboarding would be a good > help too. OK, that does sound pretty similar. I guess the piece that's missing is a description of the sort of help the team is interested in receiving. > In my mind I see the help most wanted list as being useful if we want to > point people at specific projects that need more hands than others, but I > think that the problem is that its hard to quanitfy/keep up to date and the > TC was put in charge thinking that they had a better lay of the overall > landscape? I think it could go away as documentation maintained by the TC. > If we wanted to try to keep a like.. top 5 projects that need friends > list... 
that could live in the FC SIG wiki as well I think. When we started the current list we had a pretty small set of very high priority gaps to fill. The list is growing, the priorities are changes, and the previous list wasn't especially effective. All of which is driving this desire to have a new list of some sort. -- Doug From jimmy at openstack.org Thu Feb 7 21:25:01 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 07 Feb 2019 15:25:01 -0600 Subject: [TC] [UC] Volunteers for Forum Selection Committee Message-ID: <5C5CA22D.8010202@openstack.org> Hello! We need 2 volunteers from the TC and 2 from the UC for the Denver Forum Selection Committee. For more information, please see: https://wiki.openstack.org/wiki/Forum Please reach out to myself or knelson at openstack.org if you're interested. Volunteers should respond before Feb 15, 2019. Note: volunteers are required to be currently serving on either the UC or the TC. Cheers, Jimmy From tony at bakeyournoodle.com Thu Feb 7 22:07:57 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 8 Feb 2019 09:07:57 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: <20190207220756.GA12795@thor.bakeyournoodle.com> On Thu, Feb 07, 2019 at 12:02:48PM -0800, Kendall Nelson wrote: > Another, other eventual feature I talked about with Jimmy MacArthur a few > weeks ago was if we could have the bot ask the new contributors how it was > they got to this point in their contributions? Was it self driven? Was it a > part of OUI, was it from other documentation? Would be interesting to see > how our new contributors are making their way in so that we can better help > them/fix where the system is falling down. > > Would also be really interesting data :) And who doesn't live data? We could do that. Do you think it should block the 'approval' of the sandbox change or would it be a purely optional question/response? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jgrosso at redhat.com Thu Feb 7 22:13:23 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Thu, 7 Feb 2019 17:13:23 -0500 Subject: [storyboard] sandbox to play with In-Reply-To: References: Message-ID: Awesome, thanks! On Thu, Feb 7, 2019 at 2:27 PM Kendall Nelson wrote: > Yes there is! [1] > > Let us know if you have any other questions! > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/ > > > On Thu, Feb 7, 2019 at 11:01 AM Jason Grosso wrote: > >> Hello Storyboard, >> >> Is there a sandbox where I can test some of the functionality compared to >> launchpad? >> >> Any help would be appreciated! >> >> Thanks, >> >> Jason Grosso >> >> Senior Quality Engineer - Cloud >> >> Red Hat OpenStack Manila >> >> jgrosso at redhat.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Feb 7 23:06:11 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 7 Feb 2019 15:06:11 -0800 Subject: [nova][dev] 4 weeks until feature freeze Message-ID: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> Hey all, We've 4 weeks left until feature freeze milestone s-3 on March 7. 
I've updated the blueprint status tracking etherpad: https://etherpad.openstack.org/p/nova-stein-blueprint-status For our Cycle Themes: Multi-cell operational enhancements: We have good progress going on handling of down cells and cross-cell resize. Counting quota usage from placement is still a WIP and I will be pushing updates this week. Compute nodes able to upgrade and exist with nested resource providers for multiple vGPU types: This effort has stalled during the cycle but the libvirt driver reshaper patch has updates coming soon. The xenapi driver reshaper patch has a -1 from Nov 28 and has not been updated yet in response. Help is needed here. The patches for multiple vGPU types (libvirt and xenapi) are stale since Rocky (as they depend on the reshapers). Volume-backed user experience and API improvement: The ability to specify volume type during server create is complete since 2018-10-16. However, the patches for being able to detach a boot volume and volume-backed server rebuild are in merge conflict/stale. Help is needed here. If you are the owner of an approved blueprint, please: * Add the blueprint if I've missed it * Update the status if it is not accurate * If your blueprint is in the "Wayward changes" section, please upload and update patches as soon as you can, to allow maximum time for review * If your patches are noted as Merge Conflict or WIP or needing an update, please update them and update the status on the etherpad * Add a note under your blueprint if you're no longer able to work on it this cycle Let us know if you have any questions or need assistance with your blueprint. Cheers, -melanie From miguel at mlavalle.com Thu Feb 7 23:47:58 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 7 Feb 2019 17:47:58 -0600 Subject: [openstack-dev] [neutron] Cancelling Drivers meeting on February 8th Message-ID: Hi Neutron Drivers, We don't have RFEs ready to be discussed during our weekly meeting. On top of that, some of you are traveling. So let's cancel this week's meeting. We will resume on the 15th. Best regards and safe travels for those of you returning home Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Fri Feb 8 00:20:37 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 8 Feb 2019 13:20:37 +1300 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Sorry for bringing this thread back to the top again. But I am wondering if there are people who have already deployed Trove in production? If yes, are you using service tenant model(create the database vm and related resources in the admin project) or using the flatten mode that the end user has access to the database vm and the control plane network as well? I am asking because we are going to deploy Trove in a private cloud, and we want to take more granular control of the resources created, e.g for every database vm, we will create the vm in the admin tenant, plug a port to the control plane(`CONF.default_neutron_networks`) and the other ports to the network given by the users, we also need to specify different security groups to different types of neutron ports for security reasons, etc. There are something missing in trove in order to achieve the above, I'm working on that, but I'd like to hear more suggestions. My irc name is lxkong in #openstack-trove, please ping me if you have something to share. 
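To make that a bit more concrete, the kind of per-instance wiring I'm describing looks roughly like the following. This is only a minimal openstacksdk sketch, not actual Trove code; the cloud name, the network names, the "db-mgmt-sg" security group and the flavor/image IDs are all placeholders:

    # Illustrative only: boot a database VM in the admin project with one
    # port on the control plane network and one on the user's network.
    import openstack

    conn = openstack.connect(cloud='admin')  # service/admin credentials

    # the network CONF.default_neutron_networks points at (placeholder name)
    mgmt_net = conn.network.find_network('trove-mgmt')
    user_net = conn.network.find_network('tenant-net')
    mgmt_sg = conn.network.find_security_group('db-mgmt-sg')

    # management port, locked down with its own security group
    mgmt_port = conn.network.create_port(
        network_id=mgmt_net.id,
        security_group_ids=[mgmt_sg.id])

    # user-facing port on the network given by the user
    user_port = conn.network.create_port(network_id=user_net.id)

    server = conn.compute.create_server(
        name='db-instance-1',
        flavor_id='FLAVOR_UUID',
        image_id='GUEST_IMAGE_UUID',
        networks=[{'port': mgmt_port.id}, {'port': user_port.id}])

The point is that each port carries its own security group, so the control plane side can be locked down independently of whatever the user network needs.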
Cheers, Lingxian Kong On Wed, Jan 23, 2019 at 7:35 PM Darek Król wrote: > On Wed, Jan 23, 2019 at 9:27 AM Fox, Kevin M > > wrote: > > > > I'd recommend at this point to maybe just run kubernetes across the > vms and push the guest agents/workload to them. > > > This sounds like an overkill to me. Currently, different projects in > openstack are solving this issue > in different ways, e.g. Octavia is using > two-way SSL authentication API between the controller service and > amphora(which is the vm running HTTP server inside), Magnum is using > heat-container-agent that is communicating with Heat via API, etc. However, > Trove chooses another option which has brought a lot of discussions over a > long time. > > > In the current situation, I don't think it's doable for each project > heading to one common solution, but Trove can learn from other projects to > solve its own problem. > > Cheers, > > Lingxian Kong > > The Octavia way of communication was discussed by Trove several times > in the context of secuirty. However, the security threat has been > eliminated by encryption. > I'm wondering if the Octavia way prevents DDOS attacks also ? > > Implementation of two-way SSL authentication API could be included in > the Trove priority list IMHO if it solves all issues with > security/DDOS attacks. This could also creates some share code between > both projects and help other services as well. > > Best, > Darek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Feb 8 01:26:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Feb 2019 19:26:36 -0600 Subject: [nova][dev] 4 weeks until feature freeze In-Reply-To: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> References: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> Message-ID: <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> On 2/7/2019 5:06 PM, melanie witt wrote: > The ability to specify volume type during server create is complete > since 2018-10-16. However, the patches for being able to detach a boot > volume and volume-backed server rebuild are in merge conflict/stale. > Help is needed here. It's Chinese New Year / Spring Festival this week so the developers that own these changes are on holiday. Kevin told me last week that once he's back he's going to complete the detach/attach root volume work. The spec was amended [1] and needs another spec core to approve (probably would be good to have Dan do that since was involved in the initial spec review). As for the volume-backed rebuild change, I asked Jie Li on the review if he needed someone to help push it forward and he said he did. It sounds like Kevin and/or Yikun might be able to help there. Yikun already has the Cinder side API changes all done and there is a patch for the python-cinderclient change, but the Cinder API change is blocked until we have an end-to-end working scenario in Tempest for the volume-backed rebuild flow in nova. I can help with the Tempest change when the time comes since that should be pretty straightforward. [1] https://review.openstack.org/#/c/619161/ -- Thanks, Matt From sam47priya at gmail.com Fri Feb 8 02:06:49 2019 From: sam47priya at gmail.com (Sam P) Date: Fri, 8 Feb 2019 11:06:49 +0900 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Chris, I need an invitation letter to get my German visa. Please let me know who to contact. --- Regards, Sampath On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > See you there! 
> > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: >> >> I'm all signed up. See you in Berlin! >> >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >> >>> Dear All, >>> The Evenbrite for the next ops meetup is now open, see >>> >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 >>> >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. >>> >>> Chris >>> on behalf of the ops meetups team >>> >>> -- >>> Chris Morgan > > > > -- > Chris Morgan From cjeanner at redhat.com Fri Feb 8 08:40:22 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 8 Feb 2019 09:40:22 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> Message-ID: On 2/7/19 5:32 PM, Fox, Kevin M wrote: > k8s only supports the json driver too. So if its the end goal, sticking to that might be easier. Cool then - the only big difference being the path, it shouldn't be that hard: docker outputs its json directly in a container-related path, while podman needs a parameter for it (in tripleo world, I've set it to /var/lib/containers/stdouts - we can change it if needed). Oh, not to mention the format - podman doesn't output a proper JSON, it's more a "kubernetes-like-ish" format iiuc[1]... A first patch has been merged by the way: https://review.openstack.org/635437 A second is waiting for reviews: https://review.openstack.org/635438 And a third will hit tripleo-heat-templates once we get the new paunch, in order to inject "--container-log-path /var/log/containers/stdouts". I suppose it would be best to push a parameter in heat (ContainerLogPath for example), I'll check how to do that and reflect its value in docker-puppet.py. Cheers, C. [1] https://github.com/containers/libpod/issues/2265#issuecomment-461060541 > > Thanks, > Kevin > ________________________________________ > From: Cédric Jeanneret [cjeanner at redhat.com] > Sent: Wednesday, February 06, 2019 10:11 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: [TripleO] containers logging to stdout > > Hello, > > I'm currently testing things, related to this LP: > https://bugs.launchpad.net/tripleo/+bug/1814897 > > We might hit some issues: > - With docker, json-file log driver doesn't support any "path" options, > and it outputs the files inside the container namespace > (/var/lib/docker/container/ID/ID-json.log) > > - With podman, we actually have a "path" option, and it works nice. But > the json-file isn't a JSON at all. > > - Docker supports journald and some other outputs > > - Podman doesn't support anything else than json-file > > Apparently, Docker seems to support a failing "journald" backend. So we > might end with two ways of logging, if we're to keep docker in place. > > Cheers, > > C. > > On 2/5/19 11:11 AM, Cédric Jeanneret wrote: >> Hello there! 
>> >> small thoughts: >> - we might already push the stdout logging, in parallel of the current >> existing one >> >> - that would already point some weakness and issues, without making the >> whole thing crash, since there aren't that many logs in stdout for now >> >> - that would already allow to check what's the best way to do it, and >> what's the best format for re-usability (thinking: sending logs to some >> (k)elk and the like) >> >> This would also allow devs to actually test that for their services. And >> thus going forward on this topic. >> >> Any thoughts? >> >> Cheers, >> >> C. >> >> On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >>> Hello! >>> >>> >>> In Queens, the a spec to provide the option to make containers log to >>> standard output was proposed [1] [2]. Some work was done on that side, >>> but due to the lack of traction, it wasn't completed. With the Train >>> release coming, I think it would be a good idea to revive this effort, >>> but make logging to stdout the default in that release. >>> >>> This would allow several benefits: >>> >>> * All logging from the containers would en up in journald; this would >>> make it easier for us to forward the logs, instead of having to keep >>> track of the different directories in /var/log/containers >>> >>> * The journald driver would add metadata to the logs about the container >>> (we would automatically get what container ID issued the logs). >>> >>> * This wouldo also simplify the stacks (removing the Logging nested >>> stack which is present in several templates). >>> >>> * Finally... if at some point we move towards kubernetes (or something >>> in between), managing our containers, it would work with their logging >>> tooling as well. >>> >>> >>> Any thoughts? >>> >>> >>> [1] >>> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >>> >>> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >>> >>> >>> >> > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aspiers at suse.com Fri Feb 8 09:18:29 2019 From: aspiers at suse.com (Adam Spiers) Date: Fri, 8 Feb 2019 09:18:29 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> Doug Hellmann wrote: >Adam Spiers writes: >>Doug Hellmann wrote: >>>Isn't the point of these teams that they would be coordinating work >>>within other existing projects? >> >>Yes. >> >>>So I wouldn't expect them to need git repositories or new IRC >>>channels. >> >>Never? Code and documentation doesn't always naturally belong in a >>single project, especially when it relates to cross-project work. >>Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel >>in which to collaborate on a specific topic, it seems fairly clear >>that none of #openstack-{monasca,vitrage,heat} are optimal choices. > >What's wrong with #openstack-dev? 
Maybe nothing, or maybe it's too noisy - I dunno ;-) Maybe the latter could be solved by setting up #openstack-breakout{1..10} for impromptu meetings where meetbot and channel logging are provided. >>The self-healing SIG has both a dedicated git repository (for docs, >>code, and in order to be able to use StoryBoard) and a dedicated IRC >>channel. We find both useful. >> >>Of course SIGs are more heavy-weight and long-lived so I'm not >>suggesting that all or even necessarily the majority of popup teams >>would need git/IRC. But I imagine it's possible in some cases, at >>least. > >Right, SIGs are not designed to disappear after a task is done in the >way that popup teams are. If a popup team is going to create code, it >needs to end up in a repository that is owned and maintained by someone >over the long term. If that requires a new repo, and one of the existing >teams isn't a natural home, then I think a new regular team is likely a >better fit for the task than a popup team. True. And for temporary docs / notes / brainstorming there's the wiki and etherpad. So yeah, in terms of infrastructure maybe IRC meetings in one of the communal meeting channels is the only thing needed. We'd still need to take care of ensuring that popups are easily discoverable by anyone, however. And this ties in with the "should we require official approval" debate - maybe a halfway house is the right balance between red tape and agility? For example, set up a table on a page like https://wiki.openstack.org/wiki/Popup_teams and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: - Team name - One-line summary of team purpose - Expected life span (optional) - Link to team wiki page or etherpad - Link to IRC meeting schedule (if any) - Other comments Or if that's too much of a free-for-all, it could be a slightly more formal process of submitting a review to add a row to a page: https://governance.openstack.org/popup-teams/ which would be similar in spirit to: https://governance.openstack.org/sigs/ Either this or a wiki page would ensure that anyone can easily discover what teams are currently in existence, or have been in the past (since historical information is often useful too). Just thinking out aloud ... From cdent+os at anticdent.org Fri Feb 8 12:34:18 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 8 Feb 2019 12:34:18 +0000 (GMT) Subject: [tc] cdent non-nomination for TC Message-ID: Next week sees the start of election season for the TC [1]. People often worry that incumbents always get re-elected so it is considered good form to announce if you are an incumbent and do not intend to run. I do not intend to run. I've done two years and that's enough. When I was first elected I had no intention of doing any more than one year but at the end of the first term I had not accomplished much of what I hoped, so stayed on. Now, at the end of the second term I still haven't accomplished much of what I hoped, so I think it is time to focus my energy in the places where I've been able to get some traction and give someone else—someone with a different approach—a chance. If you're interested in being on the TC, I encourage you to run. If you have questions about it, please feel free to ask me, but also ask others so you get plenty of opinions. And do your due diligence: Make sure you're clear with yourself about what the TC has been, is now, what you would like it to be, and what it can be. 
Elections are fairly far in advance of the end of term this time around. I'll continue in my TC responsibilities until the end of term, which is some time in April. I'm not leaving the community or anything like that, I'm simply narrowing my focus. Over the past several months I've been stripping things back so I can be sure that I'm not ineffectively over-committing myself to OpenStack but am instead focusing where I can be most useful and make the most progress. Stepping away from the TC is just one more part of that. Thanks very much for the experiences and for the past votes. [1] https://governance.openstack.org/election/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Fri Feb 8 13:15:42 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 8 Feb 2019 13:15:42 +0000 (GMT) Subject: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision Message-ID: Yesterday at the TC meeting [1] we decided that the in-progress task to make sure the technical vision document [2] has been fully evaluated by project teams needs a bit more time, so this message is being produced as a reminder. Back in January Julia produced a message [3] suggesting that each project consider producing a document where they compare their current state with an idealized state if they were in complete alignment with the vision. There were two hoped for outcomes: * A useful in-project document that could help guide future development. * Patches to the vision document to clarify or correct the vision where it is discovered to be not quite right. A few projects have started that process (see, for example, melwitt's recent message for some links [4]) resulting in some good plans as well as some improvements to the vision document [5]. In the future the TC would like to use the vision document to help evaluate projects applying to be "official" as well as determining if projects are "healthy". As such it is important that the document be constantly evolving toward whatever "correct" means. The process described in Julia's message [3] is a useful to make it so. Please check it out. Thanks. [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html [2] https://governance.openstack.org/tc/reference/technical-vision.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002501.html [5] https://review.openstack.org/#/q/project:openstack/governance+file:reference/technical-vision.rst -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Fri Feb 8 13:52:32 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 07:52:32 -0600 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: <20190208135231.GA8848@sm-workstation> On Fri, Feb 08, 2019 at 12:34:18PM +0000, Chris Dent wrote: > > Next week sees the start of election season for the TC [1]. People > often worry that incumbents always get re-elected so it is > considered good form to announce if you are an incumbent and do > not intend to run. > Thanks for all you've done on the TC Chris! 
From sean.mcginnis at gmx.com Fri Feb 8 14:00:51 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 08:00:51 -0600 Subject: [tc] smcginnis non-nomination for TC Message-ID: <20190208140051.GB8848@sm-workstation> As Chris said, it is probably good for incumbents to make it known if they are not running. This is my second term on the TC. It's been great being part of this group and trying to contribute whatever I can. But I do feel it is important to make room for new folks to regularly join and help shape things. So with that in mind, along with the need to focus on some other areas for a bit, I do not plan to run in the upcoming TC election. I would highly encourage anyone interested to run for the TC. If you have any questions about it, feel free to ping me for any thoughts/advice/feedback. Thanks for the last two years. I think I've learned a lot since joining the TC, and hopefully I have been able to contribute some positive things over the years. I will still be around, so hopefully I will see folks in Denver. Sean From lbragstad at gmail.com Fri Feb 8 14:39:32 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 8 Feb 2019 08:39:32 -0600 Subject: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <4b835a4c-a2a7-d6ac-f8e9-ad61a591dd46@gmail.com> On 2/8/19 7:15 AM, Chris Dent wrote: > > Yesterday at the TC meeting [1] we decided that the in-progress task > to make sure the technical vision document [2] has been fully > evaluated by project teams needs a bit more time, so this message is > being produced as a reminder. > > Back in January Julia produced a message [3] suggesting that each > project consider producing a document where they compare their > current state with an idealized state if they were in complete > alignment with the vision. There were two hoped for outcomes: > > * A useful in-project document that could help guide future >   development. > * Patches to the vision document to clarify or correct the vision >   where it is discovered to be not quite right. > > A few projects have started that process (see, for example, > melwitt's recent message for some links [4]) resulting in some good > plans as well as some improvements to the vision document [5]. Is it worth knowing which projects have this underway? If so, do we want to track that somewhere? Colleen started generating notes for keystone [0] and there is a plan to get it proposed for review to our contributor guide sometime before the the Summit [1]. [0] https://etherpad.openstack.org/p/keystone-technical-vision-notes [1] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-02-05-16.01.log.html#l-13 > > In the future the TC would like to use the vision document to help > evaluate projects applying to be "official" as well as determining > if projects are "healthy". As such it is important that the document > be constantly evolving toward whatever "correct" means. The process > described in Julia's message [3] is a useful to make it so. Please > check it out. > > Thanks. 
> > [1] > http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html > [2] https://governance.openstack.org/tc/reference/technical-vision.html > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html > [4] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002501.html > [5] > https://review.openstack.org/#/q/project:openstack/governance+file:reference/technical-vision.rst > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lars at redhat.com Fri Feb 8 15:38:17 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 8 Feb 2019 10:38:17 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" Message-ID: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Our "openstack tripleo deploy" is failing during "step 1" while trying to configure swift. It looks like the error comes from puppet apply. Looking at the ansible output, the command is: /usr/bin/puppet apply --summarize --detailed-exitcodes \ --color=false --logdest syslog --logdest console \ --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules \ --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server \ /etc/config.pp And the error is: "cannot load such file -- json" We're running recent delorean packages: so, python-tripleoclient @ 034edf0, and puppet-swift @ bc8dc51. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From emilien at redhat.com Fri Feb 8 16:51:46 2019 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 8 Feb 2019 11:51:46 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" In-Reply-To: <20190208153817.5kjcpu6ebrs35sop@redhat.com> References: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Message-ID: Hey Lars, I wish I could help but I suspect we'll need more infos. Please file a bug in Launchpad, explain how to reproduce, and provide more logs, like /var/log/messages maybe. Once the bug filed, we'll take a look and hopefully help you. Thank you, On Fri, Feb 8, 2019 at 10:44 AM Lars Kellogg-Stedman wrote: > Our "openstack tripleo deploy" is failing during "step 1" while trying > to configure swift. It looks like the error comes from puppet apply. > Looking at the ansible output, the command is: > > /usr/bin/puppet apply --summarize --detailed-exitcodes \ > --color=false --logdest syslog --logdest console \ > --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules \ > --tags > file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server > \ > /etc/config.pp > > And the error is: > > "cannot load such file -- json" > > We're running recent delorean packages: so, python-tripleoclient @ > 034edf0, and puppet-swift @ bc8dc51. > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emccormick at cirrusseven.com Fri Feb 8 17:30:38 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 8 Feb 2019 12:30:38 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Sam, On Thu, Feb 7, 2019 at 9:07 PM Sam P wrote: > > Hi Chris, > > I need an invitation letter to get my German visa. Please let me know > who to contact. > You can contact Ashlee at the foundation and she will be able to assist you. Her email is ashlee at openstack.org. See you in Berlin! > --- Regards, > Sampath > > > On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > > > See you there! > > > > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > >> > >> I'm all signed up. See you in Berlin! > >> > >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >>> > >>> Dear All, > >>> The Evenbrite for the next ops meetup is now open, see > >>> > >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > >>> > >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. > >>> > >>> Chris > >>> on behalf of the ops meetups team > >>> > >>> -- > >>> Chris Morgan > > > > > > > > -- > > Chris Morgan From linus.nilsson at it.uu.se Thu Feb 7 09:44:34 2019 From: linus.nilsson at it.uu.se (Linus Nilsson) Date: Thu, 7 Feb 2019 10:44:34 +0100 Subject: Rocky and older Ceph compatibility In-Reply-To: References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: <4ff504fe-23dd-2763-aa08-bc98952db5be@it.uu.se> On 2/6/19 6:55 PM, Erik McCormick wrote: > On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: >> Hi all, >> >> I'm working on upgrading our cloud, which consists of a block storage >> system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA >> Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As >> part of the upgrade plan we are discussing first going to Rocky while >> keeping the block system at the "Kraken" release. >> > For the most part it comes down to your client libraries. Personally, > I would upgrade Ceph first, leaving Openstack running older client > libraries. I did this with Jewel clients talking to a Luminous > cluster, so you should be fine with K->M. Then, when you upgrade > Openstack, your client libraries can get updated along with it. If you > do Openstack first, you'll need to come back around and update your > clients, and that will require you to restart everything a second > time. > . Thanks. Upgrading first to Luminous is certainly an option. >> It would be helpful to know if anyone has attempted to run the Rocky >> Cinder/Glance drivers with Ceph Kraken or older? >> > I haven't done this specific combination, but I have mixed and matched > Openstack and Ceph versions without any issues. I have MItaka, Queens, > and Rocky all talking to Luminous without incident. > > -Erik OK, good to know. Perhaps the plan becomes upgrade to Luminous first and then Newton -> Ocata -> Pike -> Queens -> Rocky and finally go Luminous -> Mimic. Best regards, Linus UPPMAX >> References or documentation is welcomed. I fail to find much information >> online, but perhaps I'm looking in the wrong places or I'm asking a >> question with an obvious answer. >> >> Thanks! >> >> Best regards, >> Linus >> UPPMAX >> >> >> >> >> >> >> >> >> När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. 
För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ >> >> E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy >> From rnoriega at redhat.com Thu Feb 7 17:45:45 2019 From: rnoriega at redhat.com (Ricardo Noriega De Soto) Date: Thu, 7 Feb 2019 18:45:45 +0100 Subject: [Neutron] Multi segment networks Message-ID: Hello guys, Quick question about multi-segment provider networks. Let's say I create a network and a subnet this way: neutron net-create multinet --segments type=dict list=true provider:physical_network='',provider:segmentation_id=1500,provider:network_type=vxlan provider:physical_network=physnet_sriov,provider:segmentation_id=2201,provider:network_typ e=vlan neutron subnet-create multinet --allocation-pool start=10.100.5.2,end=10.100.5.254 --name mn-subnet --dns-nameserver 8.8.8.8 10.100.5.0/24 Does it mean, that placing two VMs (with regular virtio interfaces), one in the vxlan segment and one on the vlan segment, would be able to ping each other without the need of a router? Or would it require an external router that belongs to the owner of the infrastructure? Thanks in advance! -- Ricardo Noriega Senior Software Engineer - NFV Partner Engineer | Office of Technology | Red Hat irc: rnoriega @freenode -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 8 19:25:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Feb 2019 19:25:51 +0000 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: [...] > I do not intend to run. I've done two years and that's enough. When > I was first elected I had no intention of doing any more than one > year but at the end of the first term I had not accomplished much of > what I hoped, so stayed on. Now, at the end of the second term I > still haven't accomplished much of what I hoped [...] You may not have accomplished what you set out to, but you certainly have made a difference. You've nudged lines of discussion into useful directions they might not otherwise have gone, provided a frequent reminder of the representative nature of our governance, and produced broadly useful summaries of our long-running conversations. I really appreciate what you brought to the TC, and am glad you'll still be around to hold the rest of us (and those who succeed you/us) accountable. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Feb 8 19:28:50 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Feb 2019 19:28:50 +0000 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <20190208192849.zx6equh4h5zibkqa@yuggoth.org> On 2019-02-08 08:00:51 -0600 (-0600), Sean McGinnis wrote: [...] > This is my second term on the TC. It's been great being part of > this group and trying to contribute whatever I can. But I do feel > it is important to make room for new folks to regularly join and > help shape things. 
So with that in mind, along with the need to > focus on some other areas for a bit, I do not plan to run in the > upcoming TC election. [...] Thanks for everything you've done these past couple of years, and I'm glad we'll have your experience as a contributor, PTL and TC member to help guide the OSF board of directors for the coming year! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Feb 8 21:15:40 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 15:15:40 -0600 Subject: [telemetry][cloudkitty][magnum][solum][tacker][watcher][zun][release] Switching to cycle-with-rc Message-ID: <20190208211540.GA24049@sm-workstation> Following up from [1], I have proposed changes to a few cycle-with-intermediary service releases to by cycle-with-rc. We've already received some feedback from the affected projects, but just posting here to make sure there's an easy reference and to make sure others are aware of the changes. The patches proposed are: aodh - https://review.openstack.org/635656 ceilometer panko tricircle cloudkitty - https://review.openstack.org/635657 magnum - https://review.openstack.org/635658 solum - https://review.openstack.org/635659 tacker - https://review.openstack.org/635660 watcher - https://review.openstack.org/635662 zun - https://review.openstack.org/635663 If there are any questions, just let us know here or in the #openstack-release channel Thanks! Sean [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002502.html From mriedemos at gmail.com Fri Feb 8 22:36:28 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 8 Feb 2019 16:36:28 -0600 Subject: [cinder][qa] Cinder 3rd party CI jobs and multiattach tests Message-ID: <5ad5391b-9f3a-6d08-5eca-89e690dd9b03@gmail.com> With tempest change [1] the multiattach tests are enabled in tempest-full, tempest-full-py3 and tempest-slow job configurations. This was to allow dropping the nova-multiattach job and still retain test coverage in the upstream gate. There are 3rd party CI jobs that are basing their job configs on these tempest job configs, and as a result they will now fail if the storage backend driver does not support multiattach volumes and the job configuration does not override and set: ENABLE_VOLUME_MULTIATTACH: false in the tempest job config, like was done in the devstack-plugin-ceph-tempest jobs [2]. Let me know if there are any questions. [1] https://review.openstack.org/#/c/606978/ [2] https://review.openstack.org/#/c/634977/ -- Thanks, Matt From melwittt at gmail.com Fri Feb 8 23:59:28 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 8 Feb 2019 15:59:28 -0800 Subject: [nova][dev] 4 weeks until feature freeze In-Reply-To: <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> References: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> Message-ID: On Thu, 7 Feb 2019 19:26:36 -0600, Matt Riedemann wrote: > On 2/7/2019 5:06 PM, melanie witt wrote: >> The ability to specify volume type during server create is complete >> since 2018-10-16. However, the patches for being able to detach a boot >> volume and volume-backed server rebuild are in merge conflict/stale. >> Help is needed here. > > It's Chinese New Year / Spring Festival this week so the developers that > own these changes are on holiday. 
Kevin told me last week that once he's > back he's going to complete the detach/attach root volume work. The spec > was amended [1] and needs another spec core to approve (probably would > be good to have Dan do that since was involved in the initial spec review). > > As for the volume-backed rebuild change, I asked Jie Li on the review if > he needed someone to help push it forward and he said he did. It sounds > like Kevin and/or Yikun might be able to help there. Yikun already has > the Cinder side API changes all done and there is a patch for the > python-cinderclient change, but the Cinder API change is blocked until > we have an end-to-end working scenario in Tempest for the volume-backed > rebuild flow in nova. I can help with the Tempest change when the time > comes since that should be pretty straightforward. > > [1] https://review.openstack.org/#/c/619161/ That's all great news! Thanks for the summary and for volunteering to help with the Tempest change. We'll keep our eyes peeled for updates to those patch series coming soon. Cheers, -melanie From lars at redhat.com Sat Feb 9 00:21:08 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 8 Feb 2019 19:21:08 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" In-Reply-To: <20190208153817.5kjcpu6ebrs35sop@redhat.com> References: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Message-ID: <20190209002108.6fulrhehg2ro62pi@redhat.com> On Fri, Feb 08, 2019 at 10:38:17AM -0500, Lars Kellogg-Stedman wrote: > And the error is: > > "cannot load such file -- json" > > We're running recent delorean packages: so, python-tripleoclient @ > 034edf0, and puppet-swift @ bc8dc51. False alarm, that was just failure to selinux relabel a filesystem after relocating /var/lib/docker. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From colleen at gazlene.net Sat Feb 9 14:27:32 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Sat, 09 Feb 2019 15:27:32 +0100 Subject: [dev][keystone] Keystone Team Update - Week of 4 February 2019 Message-ID: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> # Keystone Team Update - Week of 4 February 2019 ## News ### Performance of Loading Fernet/JWT Key Repositories Lance noticed that it seems that token signing/encryption keys are loaded from disk on every request and is therefore not very performant, and started investigating ways we could improve this[1][2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-07.log.html#t2019-02-07T17:55:34 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-08.log.html#t2019-02-08T17:09:24 ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 10 changes this week. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 73 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 2 new bugs and closed 3. 
Bugs opened (2) Bug #1814589 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814589 Bug #1814570 (keystone:Medium) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814570 Bugs fixed (3) Bug #1804483 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804483 Bug #1805406 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805406 Bug #1801095 (keystone:Wishlist) fixed by Artem Vasilyev https://bugs.launchpad.net/keystone/+bug/1801095 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html Feature freeze is in four weeks. Be mindful of the gate and try to submit and review things early. ## Shout-outs Congratulations and thank you to our Outreachy intern Islam for completing the first step in refactoring our unit tests to lean on our shiny new Flask framework! Great work! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From dkrol3 at gmail.com Sat Feb 9 18:04:24 2019 From: dkrol3 at gmail.com (=?UTF-8?Q?Darek_Kr=C3=B3l?=) Date: Sat, 9 Feb 2019 19:04:24 +0100 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Hello Lingxian, I’ve heard about a few tries of running Trove in production. Unfortunately, I didn’t have opportunity to get details about networking. At Samsung, we introducing Trove into our products for on-premise cloud platforms. However, I cannot share too many details about it, besides it is oriented towards performance and security is not a concern. Hence, the networking is very basic without any layers of abstractions if possible. Could you share more details about your topology and goals you want to achieve in Trove ? Maybe Trove team could help you in this ? Unfortunately, I’m not a network expert so I would need to get more details to understand your use case better. I would also like to get this opportunity to ask you for details about Octavia way of communication ? I'm wondering if the Octavia way prevents DDOS attacks also ? Best, Darek On Fri, 8 Feb 2019 at 01:20, Lingxian Kong wrote: > Sorry for bringing this thread back to the top again. > > But I am wondering if there are people who have already deployed Trove in > production? If yes, are you using service tenant model(create the database > vm and related resources in the admin project) or using the flatten mode > that the end user has access to the database vm and the control plane > network as well? > > I am asking because we are going to deploy Trove in a private cloud, and > we want to take more granular control of the resources created, e.g for > every database vm, we will create the vm in the admin tenant, plug a port > to the control plane(`CONF.default_neutron_networks`) and the other ports > to the network given by the users, we also need to specify different > security groups to different types of neutron ports for security reasons, > etc. > > There are something missing in trove in order to achieve the above, I'm > working on that, but I'd like to hear more suggestions. > > My irc name is lxkong in #openstack-trove, please ping me if you have > something to share. 
> > Cheers, > Lingxian Kong > > > On Wed, Jan 23, 2019 at 7:35 PM Darek Król wrote: > >> On Wed, Jan 23, 2019 at 9:27 AM Fox, Kevin M >> > wrote: >> >> > > I'd recommend at this point to maybe just run kubernetes across the >> vms and push the guest agents/workload to them. >> >> > This sounds like an overkill to me. Currently, different projects in >> openstack are solving this issue > in different ways, e.g. Octavia is using >> two-way SSL authentication API between the controller service and >> amphora(which is the vm running HTTP server inside), Magnum is using >> heat-container-agent that is communicating with Heat via API, etc. However, >> Trove chooses another option which has brought a lot of discussions over a >> long time. >> >> > In the current situation, I don't think it's doable for each project >> heading to one common solution, but Trove can learn from other projects to >> solve its own problem. >> > Cheers, >> > Lingxian Kong >> >> The Octavia way of communication was discussed by Trove several times >> in the context of secuirty. However, the security threat has been >> eliminated by encryption. >> I'm wondering if the Octavia way prevents DDOS attacks also ? >> >> Implementation of two-way SSL authentication API could be included in >> the Trove priority list IMHO if it solves all issues with >> security/DDOS attacks. This could also creates some share code between >> both projects and help other services as well. >> >> Best, >> Darek >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sat Feb 9 18:54:33 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 9 Feb 2019 12:54:33 -0600 Subject: [goals][upgrade-checkers] Week R-9 Update Message-ID: <78c401f2-e138-6491-219f-ee78c855548a@gmail.com> The only change since last week [1] is the swift patch was abandoned. The next closest patches to merge should be cloudkitty, ceilometer and aodh so if someone from those teams is reading this please check the open reviews [2]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002328.html [2] https://review.openstack.org/#/q/topic:upgrade-checkers+status:open -- Thanks, Matt From amy at demarco.com Sun Feb 10 16:04:45 2019 From: amy at demarco.com (Amy Marrich) Date: Sun, 10 Feb 2019 10:04:45 -0600 Subject: D&I WG Meeting Reminder Message-ID: The Diversity & Inclusion WG will hold it's next meeting Monday(2/11) at 17:00 UTC in the #openstack-diversity channel. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any other topics you wish to discuss at the meeting. Including the discuss list to invite potential new members! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Sun Feb 10 17:28:26 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 10 Feb 2019 18:28:26 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> Message-ID: <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> First of all I like the idea of pop-up teams. > On 2019. Feb 8., at 10:18, Adam Spiers wrote: > > Doug Hellmann wrote: >> Adam Spiers writes: >>> Doug Hellmann wrote: >>>> Isn't the point of these teams that they would be coordinating work within other existing projects? >>> >>> Yes. >>> >>>> So I wouldn't expect them to need git repositories or new IRC channels. >>> >>> Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. >> >> What's wrong with #openstack-dev? > > Maybe nothing, or maybe it's too noisy - I dunno ;-) Maybe the latter could be solved by setting up #openstack-breakout{1..10} for impromptu meetings where meetbot and channel logging are provided. I think the project channels along with #opentack-dev should be enough to start with. As we are talking about activities concerning multiple projects many of the conversations will naturally land in one of the project channels depending on the stage of the design/development/testing work. Using the multi-attach work as an example we used the Cinder and Nova channels for daily communication which worked out well as we had all the stakeholders around without asking them to join yet-another-IRC-channel. Discussing more general items can happen on the regular meetings and details can be moved to the project channels where the details often hint which project team is the most affected. I would expect the pop-up team having representatives from all teams as well as all pop-up team members hanging out in all relevant project team channels. As a fall back for high level, all-project topics I believe #openstack-dev is a good choice expecting most of the people being in that channel already while also gaining further visibility to the topic there. >>> The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. >>> Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. >> >> Right, SIGs are not designed to disappear after a task is done in the way that popup teams are. If a popup team is going to create code, it needs to end up in a repository that is owned and maintained by someone over the long term. If that requires a new repo, and one of the existing teams isn't a natural home, then I think a new regular team is likely a better fit for the task than a popup team. > > True. 
And for temporary docs / notes / brainstorming there's the wiki and etherpad. So yeah, in terms of infrastructure maybe IRC meetings in one of the communal meeting channels is the only thing needed. > We'd still need to take care of ensuring that popups are easily discoverable by anyone, however. And this ties in with the "should we require official approval" debate - maybe a halfway house is the right balance between red tape and agility? For example, set up a table on a page like > https://wiki.openstack.org/wiki/Popup_teams > > and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: > - Team name > - One-line summary of team purpose > - Expected life span (optional) > - Link to team wiki page or etherpad > - Link to IRC meeting schedule (if any) > - Other comments > > Or if that's too much of a free-for-all, it could be a slightly more formal process of submitting a review to add a row to a page: > https://governance.openstack.org/popup-teams/ > > which would be similar in spirit to: > https://governance.openstack.org/sigs/ > > Either this or a wiki page would ensure that anyone can easily discover what teams are currently in existence, or have been in the past (since historical information is often useful too). > Just thinking out aloud … In my experience there are two crucial steps to make a cross-project team work successful. The first is making sure that the proposed new feature/enhancement is accepted by all teams. The second is to have supporters from every affected project team preferably also resulting in involvement during both design and review time maybe also during feature development and testing phase. When these two steps are done you can work on the design part and making sure you have the work items prioritized on each side in a way that you don’t end up with road blocks that would delay the work by multiple release cycles. To help with all this I would start the experiment with wiki pages and etherpads as these are all materials you can point to without too much formality to follow so the goals, drivers, supporters and progress are visible to everyone who’s interested and to the TC to follow-up on. Do we expect an approval process to help with or even drive either of the crucial steps I listed above? Thanks, Ildikó From ildiko.vancsa at gmail.com Sun Feb 10 17:43:38 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 10 Feb 2019 18:43:38 +0100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: @Tony: Thank you for working on this! > […] > > >> 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) >> on the change > > If you're compiling a list of eventual features for the bot, another one > that could be neat is, after the second patch set, the bot merges a > change that creates a merge conflict on the student's patch, which they > then have to go resolve. > > Also, cross-referencing [1], it might be nice to update that tutorial at > some point to use the sandbox repo instead of nova. That could be done > once we have bot action so said action could be incorporated into the > tutorial flow. 
> >> [2] The details of what counts as qualifying can be fleshed out later >> but there needs to be something so that contributors using the >> sandbox that don't want to be bothered by the bot wont be. > > Yeah, I had been assuming it would be some tag in the commit message. If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 > > The possibilities are endless :P By having tags we can turn off the bot for the in person trainings while we can also help people practice different things outside of trainings so I really like the approach! Once we have prototype working we can also think of putting some more pointers in the training slides to the Contributor Guide sections describing how to manage open reviews/changes to make sure people find it. Thanks, Ildikó > > -efried > > [1] https://review.openstack.org/#/c/634333/ > From cdent+os at anticdent.org Sun Feb 10 20:33:02 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Sun, 10 Feb 2019 20:33:02 +0000 (GMT) Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision Message-ID: This a "part 2" or "other half" of evaluating OpenStack projects in relation to the technical vision. See the other threads [1][2] for more information. In the conversations that led up to the creation of the vision document [3] one of the things we hoped was that the process could help identify ways in which existing projects could evolve to be better at what they do. This was couched in two ideas: * Helping to make sure that OpenStack continuously improves, in the right direction. * Helping to make sure that developers were working on projects that leaned more towards interesting and educational than frustrating and embarrassing, where choices about what to do and how to do it were straightforward, easy to share with others, so well-founded in agreed good practice that argument would be rare, and so few that it was easy to decide. Of course, to have a "right direction" you first have to have a direction, and thus the vision document and the idea of evaluating how aligned a project is with that. The other half, then, is looking at the projects from a development standpoint and thinking about what aspects of the project are: * Things (techniques, tools) the project contributors would encourage others to try. Stuff that has worked out well. * Things—given a clean slate, unlimited time and resources, the benefit of hindsight and without the weight of legacy—the project contributors would encourage others to not repeat. And documenting those things so they can be carried forward in time some place other than people's heads, and new projects or refactorings of existing projects can start on a good foot. A couple of examples: * Whatever we might say about the implementation (in itself and how it is used), the concept of a unified configuration file format, via oslo_config, is probably considered a good choice, and we should keep on doing that. * On the other hand, given hindsight and improvements in commonly available tools, using a homegrown WSGI (non-)framework (unless you are Swift) plus eventlet may not have been the way to go, yet because it is what's still there in nova, it often gets copied. 
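To make the first example concrete, the unified configuration pattern that oslo_config gives every project looks roughly like the sketch below. This is a minimal, generic illustration rather than code from any particular project: options are declared once with types, defaults and help text, and every service reads its configuration through the same machinery.

from oslo_config import cfg

# Declare options once, with type, default and help text.
service_opts = [
    cfg.StrOpt('bind_host', default='0.0.0.0',
               help='IP address to listen on.'),
    cfg.PortOpt('bind_port', default=8780,
                help='TCP port to listen on.'),
]

CONF = cfg.CONF
CONF.register_opts(service_opts, group='service')

# Every service parses its configuration the same way: look for the
# project's conf file in the usual locations, override with CLI args.
CONF([], project='myservice')

print(CONF.service.bind_host, CONF.service.bind_port)

The payoff is exactly the consistency described above: operators see one file format and one override model across projects.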
It's not clear at this point whether these sorts of things should be documented in projects, or somewhere more central. So perhaps we can just talk about it here in email and figure something out. I'll followup with some I have for placement, since that's the project I've given the most attention. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002524.html [3] https://governance.openstack.org/tc/reference/technical-vision.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Sun Feb 10 21:08:29 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Sun, 10 Feb 2019 21:08:29 +0000 (GMT) Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: On Sun, 10 Feb 2019, Chris Dent wrote: > It's not clear at this point whether these sorts of things should be > documented in projects, or somewhere more central. So perhaps we can > just talk about it here in email and figure something out. I'll > followup with some I have for placement, since that's the project > I've given the most attention. Conversation on vision reflection for placement [1] is what reminded me that this part 2 is something we should be doing. I should disclaim that I'm the author of a lot of the architecture of placement so I'm hugely biased. Please call me out where my preferences are clouding reality. Other contributors to placement probably have other ideas. They would be great to hear. However, it's been at least two years since we started, so I think we can extract some useful lessons. Things have have worked out well (you can probably see a theme): * Placement is a single purpose service with, until very recently, only the WSGI service as the sole moving part. There are now placement-manage and placement-status commands, but they are rarely used (thankfully). This makes the system easier to reason about than something with multiple agents. Obviously some things need lots of agents. Placement isn't one of them. * Using gabbi [2] as the framework for functional tests of the API and using them to enable test-driven-development, via those functional tests, has worked out really well. It keeps the focus on that sole moving part: The API. * No RPC, no messaging, no notifications. * Very little configuration, reasonable defaults to that config. It's possible to run a working placement service with two config settings, if you are not using keystone. Keystone adds a few more, but not that much. * String adherence to WSGI norms (that is, any WSGI server can run a placement WSGI app) and avoidance of eventlet, but see below. The combination of this with small number of moving parts and little configuration make it super easy to deploy placement [3] in lots of different setups, from tiny to huge, scaling and robustifying those setups as required. * Declarative URL routing. There's a dict which maps HTTP method:URL pairs to python functions. Clear dispatch is a _huge_ help when debugging. Look one place, as a computer or human, to find where to go. * microversion-parse [4] has made microversion handling easy. 
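As an illustration of the declarative routing point above (this is not the actual placement source, just a minimal sketch of the idea), the whole routing table is a single mapping from method/URL pairs to handler names, so there is exactly one place to look when asking "what handles PUT /resource_providers/{uuid}?":

# One flat declaration of every route in the API.
ROUTE_DECLARATIONS = {
    ('GET', '/resource_providers'): 'list_resource_providers',
    ('POST', '/resource_providers'): 'create_resource_provider',
    ('GET', '/resource_providers/{uuid}'): 'get_resource_provider',
    ('PUT', '/resource_providers/{uuid}'): 'update_resource_provider',
    ('DELETE', '/resource_providers/{uuid}'): 'delete_resource_provider',
}

def dispatch(method, path, handlers):
    """Return the handler for (method, path), or raise if none exists."""
    # A real implementation also matches templated segments like {uuid};
    # that detail is elided here.
    try:
        handler_name = ROUTE_DECLARATIONS[(method, path)]
    except KeyError:
        raise LookupError('no handler for %s %s' % (method, path))
    return getattr(handlers, handler_name)

There is no decorator magic and no module scanning: the dict is the routing table.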
Things that haven't gone so well (none of these are dire) and would have been nice to do differently had we but known: * Because of a combination of "we might need it later", "it's a handy tool and constraint" and "that's the way we do things" the interface between the placement URL handlers and the database is mediated through oslo versioned objects. Since there's no RPC, nor inter-version interaction, this is overkill. It also turns out that OVO getters and setters are a moderate factor in performance. Initially we were versioning the versioned objects, which created a lot of cognitive overhead when evolving the system, but we no longer do that, now that we've declared RPC isn't going to happen. * Despite the strict adherence to being a good WSGI citizen mentioned above, placement is using a custom (very limited) framework for the WSGI application. An initial proof of concept used flask but it was decided that introducing flask into the nova development environment would be introducing another thing to know when decoding nova. I suspect the expected outcome was that placement would reuse nova's framework, but the truth is I simply couldn't do it. Declarative URL dispatch was a critical feature that has proven worth it. The resulting code is relatively straightforward but it is unicorn where a boring pony would have been the right thing. Boring ponies are very often the right thing. I'm sure there are more here, but I've run out of brain. [1] https://review.openstack.org/#/c/630216/ [2] https://gabbi.readthedocs.io/ [3] https://anticdent.org/placement-from-pypi.html [4] https://pypi.org/project/microversion_parse/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From feilong at catalyst.net.nz Sun Feb 10 22:10:30 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Mon, 11 Feb 2019 11:10:30 +1300 Subject: [horizon] Horizon slowing down proportionally to the amount of instances (was: Horizon extremely slow with 400 instances) In-Reply-To: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> References: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> Message-ID: Hi JP, We run into same problem before (and now I think). The root cause is because when Horizon loading the instances page, for each instance row, it has to decide if show an action, unfortunately, for each instance, there are more than 20+ actions to check, and more worse, some actions may involve an API call. And whenever you have 20+ instances (the default page size is 20), you will run into this issue. I have done some upstream before to mitigate this, but it definitely needs ajax to load those actions after loading the page. On 6/02/19 11:00 PM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 21:10 -0500, Satish Patel wrote: >> folks, >> >> we have mid size openstack cloud running 400 instances, and day by >> day >> its getting slower, i can understand it render every single machine >> during loading instance page but it seems it's design issue, why not >> it load page from MySQL instead of running bunch of API calls behind >> then page? >> >> is this just me or someone else also having this issue? i am >> surprised >> why there is no good and robust Web GUI for very popular openstack? >> >> I am curious how people running openstack in large environment using >> Horizon. >> >> I have tired all kind of setting and tuning like memcache etc.. 
>> >> ~S >> > Hello, > > I took the liberty to change the mailing list and topic name: > FYI, the openstack-discuss ML will help you reach more people > (developers/operators). When you prefix your mail with [horizon], it > will even pass filters for some people:) > > Anyway... I would say horizon performance depends on many aspects of > your deployment, including keystone and caching, it's hard to know > what's going on with your environment with so little data. > > I hope you're figure it out :) > > Regards, > JP > > -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From anlin.kong at gmail.com Sun Feb 10 22:44:07 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 11 Feb 2019 11:44:07 +1300 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: On Sun, Feb 10, 2019 at 7:04 AM Darek Król wrote: > Hello Lingxian, > > I’ve heard about a few tries of running Trove in production. > Unfortunately, I didn’t have opportunity to get details about networking. > At Samsung, we introducing Trove into our products for on-premise cloud > platforms. However, I cannot share too many details about it, besides it is > oriented towards performance and security is not a concern. Hence, the > networking is very basic without any layers of abstractions if possible. > > Could you share more details about your topology and goals you want to > achieve in Trove ? Maybe Trove team could help you in this ? Unfortunately, > I’m not a network expert so I would need to get more details to understand > your use case better. > Yeah, I think trove team could definitely help. I've been working on a patch[1] to support different sgs for different type of neutron ports, the patch is for the use case that `CONF.default_neutron_networks` is configured as trove management network. Besides, I also have some patches[2][3] for trove need to be reviewed, not sure who are the right people I should ask for review now, but would appriciate if you could help. [1]: https://review.openstack.org/#/c/635705/ [2]: https://review.openstack.org/#/c/635099/ [3]: https://review.openstack.org/#/c/635138/ Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufar at onf-ambassador.org Mon Feb 11 02:33:15 2019 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Mon, 11 Feb 2019 09:33:15 +0700 Subject: [Neutron] Split Network node from controller Node Message-ID: Hi everyone, I Have existing OpenStack with 1 controller node (Network Node in controller node) and 2 compute node. I need to expand the architecture by splitting the network node from controller node (create 1 node for network). Do you have any recommended step or tutorial for doing this? Thanks Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Mon Feb 11 06:54:47 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 15:54:47 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD Message-ID: Hello, I recently ran a volume deletion test with deferred deletion enabled on the pike release. 
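For reference, the knobs involved are a handful of RBD backend options; a minimal cinder.conf sketch is below. The option names are assumed from the Queens-era driver and from the discussion later in this thread, so treat it as illustrative only:

[rbd-backend]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Move deleted volumes to the RBD trash instead of deleting them synchronously.
enable_deferred_deletion = True
# Seconds a volume stays in the trash before it is eligible for purging.
deferred_deletion_delay = 0
# Seconds between runs of the periodic task that purges the trash.
deferred_deletion_purge_interval = 60
# Size of the native thread pool that serves slow RBD calls (Queens and
# later; on Pike the EVENTLET_THREADPOOL_SIZE environment variable is the
# equivalent knob, as noted later in this thread).
backend_native_threads_pool_size = 20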
We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. If these test results are my fault, please let me know the correct test method. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Mon Feb 11 07:39:27 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 07:39:27 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: Message-ID: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT From skaplons at redhat.com Mon Feb 11 08:13:28 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 11 Feb 2019 09:13:28 +0100 Subject: [Neutron] Split Network node from controller Node In-Reply-To: References: Message-ID: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> Hi, I don’t know if there is any tutorial for that but You can just deploy new node with agents which You need, then disable old DHCP/L3 agents with neutron API [1] and move existing networks/routers to agents in new host with neutron API. Docs for agents scheduler API is in [2] and [3]. Please keep in mind that when You will move routers to new agent You will have some downtime in data plane. [1] https://developer.openstack.org/api-ref/network/v2/#update-agent [2] https://developer.openstack.org/api-ref/network/v2/#l3-agent-scheduler [3] https://developer.openstack.org/api-ref/network/v2/#dhcp-agent-scheduler > Wiadomość napisana przez Zufar Dhiyaulhaq w dniu 11.02.2019, o godz. 03:33: > > Hi everyone, > > I Have existing OpenStack with 1 controller node (Network Node in controller node) and 2 compute node. I need to expand the architecture by splitting the network node from controller node (create 1 node for network). > > Do you have any recommended step or tutorial for doing this? > Thanks > > Best Regards, > Zufar Dhiyaulhaq — Slawek Kaplonski Senior software engineer Red Hat From hyangii at gmail.com Mon Feb 11 08:47:56 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 17:47:56 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > Hi Jae, > > You back ported the deferred deletion patch to Pike? 
> > Cheers, > Arne > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > Hello, > > > > I recently ran a volume deletion test with deferred deletion enabled on > the pike release. > > > > We experienced a cinder-volume hung when we were deleting a large amount > of the volume in which the data was actually written(I make 15GB file in > every volumes), and we thought deferred deletion would solve it. > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume > downed as before. In my opinion, the trash_move api does not seem to work > properly when removing multiple volumes, just like remove api. > > > > If these test results are my fault, please let me know the correct test > method. > > > > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Feb 11 09:00:36 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 10:00:36 +0100 Subject: [tc] cdent non-nomination for TC In-Reply-To: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> Message-ID: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Jeremy Stanley wrote: > On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: > [...] >> I do not intend to run. I've done two years and that's enough. When >> I was first elected I had no intention of doing any more than one >> year but at the end of the first term I had not accomplished much of >> what I hoped, so stayed on. Now, at the end of the second term I >> still haven't accomplished much of what I hoped > [...] > > You may not have accomplished what you set out to, but you certainly > have made a difference. You've nudged lines of discussion into > useful directions they might not otherwise have gone, provided a > frequent reminder of the representative nature of our governance, > and produced broadly useful summaries of our long-running > conversations. I really appreciate what you brought to the TC, and > am glad you'll still be around to hold the rest of us (and those who > succeed you/us) accountable. Thanks! Jeremy said it better than I could have ! While I really appreciated the perspective you brought to the TC, I understand the need to focus to have the most impact. It's also a good reminder that the role that the TC fills can be shared beyond the elected membership -- so if you care about a specific aspect of governance, OpenStack-wide technical leadership or community health, I encourage you to participate in the TC activities, whether you are elected or not. -- Thierry Carrez (ttx) From thierry at openstack.org Mon Feb 11 09:02:49 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 10:02:49 +0100 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <66a20d02-bd05-2ace-80dc-4880befabbd7@openstack.org> Sean McGinnis wrote: > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. 
If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. Thanks Sean for all your help and insights during those two TC runs ! -- Thierry Carrez (ttx) From Arne.Wiebalck at cern.ch Mon Feb 11 09:13:42 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 09:13:42 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Jae, To make sure deferred deletion is properly working: when you delete individual large volumes with data in them, do you see that - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? - that the volume shows up in trash (with “rbd trash ls”)? - the periodic task reports it is deleting volumes from the trash? Another option to look at is “backend_native_threads_pool_size": this will increase the number of threads to work on deleting volumes. It is independent from deferred deletion, but can also help with situations where Cinder has more work to do than it can cope with at the moment. Cheers, Arne On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Feb 11 09:21:42 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:21:42 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: <20190211092142.pva6t6zol77fowsn@localhost> On 11/02, Jae Sang Lee wrote: > Yes, I added your code to pike release manually. > Hi, Did you enable the feature? If I remember correctly, 50 is the default value of the native thread pool size, so it seems that the 50 available threads are busy deleting the volumes. I would double check that the feature is actually enabled (enable_deferred_deletion = True in the backend section configuration and checking the logs to see if there are any messages indicating that a volume is being deleted from the trash), and increase the thread pool size. You can change it with environmental variable EVENTLET_THREADPOOL_SIZE. Cheers, Gorka. > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > > > Hi Jae, > > > > You back ported the deferred deletion patch to Pike? 
> > > > Cheers, > > Arne > > > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > > > Hello, > > > > > > I recently ran a volume deletion test with deferred deletion enabled on > > the pike release. > > > > > > We experienced a cinder-volume hung when we were deleting a large amount > > of the volume in which the data was actually written(I make 15GB file in > > every volumes), and we thought deferred deletion would solve it. > > > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume > > downed as before. In my opinion, the trash_move api does not seem to work > > properly when removing multiple volumes, just like remove api. > > > > > > If these test results are my fault, please let me know the correct test > > method. > > > > > > > -- > > Arne Wiebalck > > CERN IT > > > > From geguileo at redhat.com Mon Feb 11 09:23:26 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:23:26 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: <20190211092326.2qapcegmvpftzt6v@localhost> On 11/02, Arne Wiebalck wrote: > Jae, > > To make sure deferred deletion is properly working: when you delete individual large volumes > with data in them, do you see that > - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? > - that the volume shows up in trash (with “rbd trash ls”)? > - the periodic task reports it is deleting volumes from the trash? > > Another option to look at is “backend_native_threads_pool_size": this will increase the number > of threads to work on deleting volumes. It is independent from deferred deletion, but can also > help with situations where Cinder has more work to do than it can cope with at the moment. > > Cheers, > Arne Hi, That configuration option was added in Queens, so I recommend using the env variable to set it if running in Pike. Cheers, Gorka. > > > > On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: > > Yes, I added your code to pike release manually. > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: > Hi Jae, > > You back ported the deferred deletion patch to Pike? > > Cheers, > Arne > > > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > > > Hello, > > > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > > > If these test results are my fault, please let me know the correct test method. > > > > -- > Arne Wiebalck > CERN IT > > > -- > Arne Wiebalck > CERN IT > From geguileo at redhat.com Mon Feb 11 09:33:17 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:33:17 +0100 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: <20190211093317.uf4zofcbtuu6zb7o@localhost> On 07/02, Kulazhenkov, Yury wrote: > Hi all, > Some time ago Dell EMC software-defined storage ScaleIO was renamed to VxFlex OS. 
> I am currently working on renaming ScaleIO to VxFlex OS in Openstack code to prevent confusion > with storage documentation from vendor. > > This changes require patches at least for cinder, nova and os-brick repos. > I already submitted patches for cinder(634397) and nova(634866), but for now code in these > patches relies on os-brick initiator with name SCALEIO. > Now I'm looking for right way to rename os-brick initiator. > Renaming initiator in os-brick library and then make required changes in nova and cinder is quiet easy, > but os-brick is library and those changes can break someone else code. > > Is some sort of policy for updates with breaking changes exist for os-brick? > > One possible solution is to rename initiator to new name and create alias with deprecation warning for > old initiator name(should this alias be preserved more than one release?). > What do you think about it? > > Thanks, > Yury > Hi Yury, That sounds like a good plan. But don't forget that you'll need to add a new online data migration to Cinder as well, since you are renaming the SCALEIO connector identifier. Otherwise a deployment could have problems when you drop the SCALEIO alias if they've had a very long running VM or if you are doing a fast-forward upgrade. Cheers, Gorka. From geguileo at redhat.com Mon Feb 11 10:12:29 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 11:12:29 +0100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues In-Reply-To: <20190207063940.GA1754@fedora19.localdomain> References: <20190207063940.GA1754@fedora19.localdomain> Message-ID: <20190211101229.j5aqii2os5z2p2cw@localhost> On 07/02, Ian Wienand wrote: > Hello, > > I'm trying to diagnose what has gone wrong with Fedora 29 in our gate > devstack test; it seems there is a problem with the iscsi setup and > consequently the volume based tempest tests all fail. AFAICS we end > up with nova hitting parsing errors inside os_brick's iscsi querying > routines; so it seems whatever error path we've hit is outside the > usual as it's made it pretty far down the stack. > > I have a rather haphazard bug report going on at > > https://bugs.launchpad.net/os-brick/+bug/1814849 > > as I've tried to trace it down. At this point, it's exceeding the > abilities of my cinder/nova/lvm/iscsi/how-this-all-hangs-together > knowledge. > > The final comment there has a link the devstack logs and a few bits > and pieces of gleaned off the host (which I have on hold and can > examine) which is hopefully useful to someone skilled in the art. > > I'm hoping ultimately it's a rather simple case of a missing package > or config option; I would greatly appreciate any input so we can get > this test stable. > > Thanks, > > -i > Hi Ian, Well, the system from the pastebin [1] doesn't look too good. DB and LIO are out of sync. You can see that the database says that there must be 3 exports and maps available, yet you only see 1 in LIO. It is werid that there are things missing from the logs: In method _get_connection_devices we have: LOG.debug('Getting connected devices for (ips,iqns,luns)=%s', 1 ips_iqns_luns) nodes = self._get_iscsi_nodes() And we can see the message in the logs [2], but then we don't see the call to iscsiadm that happens as the first instruction in _get_iscsi_nodes: out, err = self._execute('iscsiadm', '-m', 'node', run_as_root=True, root_helper=self._root_helper, check_exit_code=False) And we only see the error coming from parsing the output of that command that is not logged. 
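For reference, a healthy "iscsiadm -m node" listing has two whitespace-separated columns per line, and the parsing in question is essentially a split on that single space. A rough sketch of the idea (not the actual os-brick code):

# Expected output is "<portal_ip>:<port>,<tpgt> <target_iqn>" per line, e.g.
#   192.168.122.1:3260,1 iqn.2010-10.org.openstack:volume-1234
def parse_iscsiadm_nodes(output):
    nodes = []
    for line in output.splitlines():
        if not line.strip():
            continue
        # Unpacking fails with ValueError if the separating space is missing.
        portal_and_tag, iqn = line.split()
        portal = portal_and_tag.split(',')[0]
        nodes.append((portal, iqn))
    return nodes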
I believe Matthew is right in his assessment, the problem is the output from "iscsiadm -m node", there is a missing space between the first 2 columns in the output [4]. This looks like an issue in Open iSCSI, not in OS-Brick, Cinder, or Nova. And checking their code, it looks like this is the patch that fixes it [5], so it needs to be added to F29 iscsi-initiator-utils package. Cheers, Gorka. [1]: http://paste.openstack.org/show/744723/ [2]: http://logs.openstack.org/59/619259/2/check/devstack-platform-fedora-latest/3eaee4d/controller/logs/screen-n-cpu.txt.gz?#_Feb_06_00_10_05_234149 [3]: https://bugs.launchpad.net/os-brick/+bug/1814849/comments/9 [4]: http://paste.openstack.org/show/744724/ [5]: https://github.com/open-iscsi/open-iscsi/commit/baa0cb45cfcf10a81283c191b0b236cd1a2f66ee From hyangii at gmail.com Mon Feb 11 10:39:15 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 19:39:15 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: Arne, I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports like "Deleted from trash for backend ''" The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large number of volumes. I will try to adjust the number of thread pools by adjusting the environment variables with your advices Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? Thanks. 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > Jae, > > To make sure deferred deletion is properly working: when you delete > individual large volumes > with data in them, do you see that > - the volume is fully “deleted" within a few seconds, ie. not staying in > ‘deleting’ for a long time? > - that the volume shows up in trash (with “rbd trash ls”)? > - the periodic task reports it is deleting volumes from the trash? > > Another option to look at is “backend_native_threads_pool_size": this will > increase the number > of threads to work on deleting volumes. It is independent from deferred > deletion, but can also > help with situations where Cinder has more work to do than it can cope > with at the moment. > > Cheers, > Arne > > > > On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > > Yes, I added your code to pike release manually. > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > >> Hi Jae, >> >> You back ported the deferred deletion patch to Pike? >> >> Cheers, >> Arne >> >> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >> > >> > Hello, >> > >> > I recently ran a volume deletion test with deferred deletion enabled on >> the pike release. >> > >> > We experienced a cinder-volume hung when we were deleting a large >> amount of the volume in which the data was actually written(I make 15GB >> file in every volumes), and we thought deferred deletion would solve it. >> > >> > However, while deleting 200 volumes, after 50 volumes, the >> cinder-volume downed as before. In my opinion, the trash_move api does not >> seem to work properly when removing multiple volumes, just like remove api. >> > >> > If these test results are my fault, please let me know the correct test >> method. >> > >> >> -- >> Arne Wiebalck >> CERN IT >> >> > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hyangii at gmail.com Mon Feb 11 10:41:05 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 19:41:05 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <20190211092142.pva6t6zol77fowsn@localhost> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <20190211092142.pva6t6zol77fowsn@localhost> Message-ID: Gorka, I found the default size of threadpool is 20 in source code. However, I will try to increase this size. Thanks a lot. 2019년 2월 11일 (월) 오후 6:21, Gorka Eguileor 님이 작성: > On 11/02, Jae Sang Lee wrote: > > Yes, I added your code to pike release manually. > > > > Hi, > > Did you enable the feature? > > If I remember correctly, 50 is the default value of the native thread > pool size, so it seems that the 50 available threads are busy deleting > the volumes. > > I would double check that the feature is actually enabled > (enable_deferred_deletion = True in the backend section configuration > and checking the logs to see if there are any messages indicating that a > volume is being deleted from the trash), and increase the thread pool > size. You can change it with environmental variable > EVENTLET_THREADPOOL_SIZE. > > Cheers, > Gorka. > > > > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > > > > > Hi Jae, > > > > > > You back ported the deferred deletion patch to Pike? > > > > > > Cheers, > > > Arne > > > > > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > > > > > Hello, > > > > > > > > I recently ran a volume deletion test with deferred deletion enabled > on > > > the pike release. > > > > > > > > We experienced a cinder-volume hung when we were deleting a large > amount > > > of the volume in which the data was actually written(I make 15GB file > in > > > every volumes), and we thought deferred deletion would solve it. > > > > > > > > However, while deleting 200 volumes, after 50 volumes, the > cinder-volume > > > downed as before. In my opinion, the trash_move api does not seem to > work > > > properly when removing multiple volumes, just like remove api. > > > > > > > > If these test results are my fault, please let me know the correct > test > > > method. > > > > > > > > > > -- > > > Arne Wiebalck > > > CERN IT > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Mon Feb 11 11:58:16 2019 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Mon, 11 Feb 2019 12:58:16 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? Message-ID: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> Hi, Im currently trying to implement some way to do a SSO against our ActiveDirectory. I already tried SAMLv2 and OpenID Connect. Im able to sign in via Horizon, but im unable to find a working way on cli. Already tried v3adfspassword and v3oidcpassword, but im unable to get them working. Any hints / links / docs where to find more information? Anyone using this kind of setup and willing to share KnowHow? Thanks a lot, Fabian Zimmermann -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arne.Wiebalck at cern.ch Mon Feb 11 12:40:05 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 12:40:05 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Jae, On 11 Feb 2019, at 11:39, Jae Sang Lee > wrote: Arne, I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports like "Deleted from trash for backend ''" The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large number of volumes. Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” many more volumes before getting stuck. The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect c-vol, though. I will try to adjust the number of thread pools by adjusting the environment variables with your advices Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all threads is simply lower (Gorka may correct me here :-). Cheers, Arne Thanks. 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck >님이 작성: Jae, To make sure deferred deletion is properly working: when you delete individual large volumes with data in them, do you see that - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? - that the volume shows up in trash (with “rbd trash ls”)? - the periodic task reports it is deleting volumes from the trash? Another option to look at is “backend_native_threads_pool_size": this will increase the number of threads to work on deleting volumes. It is independent from deferred deletion, but can also help with situations where Cinder has more work to do than it can cope with at the moment. Cheers, Arne On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Mon Feb 11 14:02:20 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 11 Feb 2019 15:02:20 +0100 Subject: [TripleO][Kolla] Reduce base layer of containers for security and size of images (maintenance) sakes: UPDATE Message-ID: Good news: so the %systemd_ordering macro works well for containers images to build it w/o systemd & deps pulled in, and the changes got accepted for RDO and some of the base packages for f29! Bad news: [0] is a show stopper for removing systemd off the base RHEL/Fedora containers in Kolla. To mitigate that issue for the remaining dnf and puppet, and as well for the less important* to have it fixed iscsi-initiator-utils and kuryr-kubernetes-distgit, we need to consider using microdnf instead of dnf for installing RPM packages in Kolla. Or alternatively somehow to achieve a trick with _tmpfiles to be split off the main spec files into sub-packages [1]: if the tmpfiles and such were split out into a subpackage that'd be required if and only if the kernel was installed or being installed, that might work. * it is only less important as those do not belong to the Kolla base/openstack-base images and impacts only its individual containers images. [0] https://bugs.launchpad.net/tripleo/+bug/1804822/comments/17 [1] https://github.com/rpm-software-management/dnf/pull/1315#issuecomment-462326283 > Here is an update. > The %{systemd_ordering} macro is proposed for lightening containers > images and removing the systemd dependency for containers. Please see & > try patches in the topic [0] for RDO, and [1][2][3][4][5] for generic > Fedora 29 rpms. I'd very appreciate if anyone building Kolla containers > for f29/(rhel8 yet?) could try these out as well. > > PS (somewhat internal facing but who cares): I wonder if we could see > those changes catched up automagically for rhel8 repos as well? > >> I'm tracking systemd changes here [0],[1],[2], btw (if accepted, >> it should be working as of fedora28(or 29) I hope) >> >> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1668688 > [4] https://bugzilla.redhat.com/show_bug.cgi?id=1668687 > [5] https://bugzilla.redhat.com/show_bug.cgi?id=1668678 -- Best regards, Bogdan Dobrelya, Irc #bogdando From hjensas at redhat.com Mon Feb 11 14:16:53 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Mon, 11 Feb 2019 15:16:53 +0100 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: Message-ID: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat > Setrepeat.yaml -> has the resource group api with count . > > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we > want to change in each set . 
> Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt From colleen at gazlene.net Mon Feb 11 14:19:51 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 11 Feb 2019 15:19:51 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> Message-ID: <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> Hi Fabian, On Mon, Feb 11, 2019, at 12:58 PM, Fabian Zimmermann wrote: > Hi, > > Im currently trying to implement some way to do a SSO against our > ActiveDirectory. I already tried SAMLv2 and OpenID Connect. > > Im able to sign in via Horizon, but im unable to find a working way on cli. > > Already tried v3adfspassword and v3oidcpassword, but im unable to get > them working. > > Any hints / links / docs where to find more information? > > Anyone using this kind of setup and willing to share KnowHow? > > Thanks a lot, > > Fabian Zimmermann We have an example of authenticating with the CLI here: https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#authenticating That only covers the regular SAML2.0 ECP type of authentication, which I guess won't work with ADFS, and we seem to have zero ADFS-specific documentation. >From the keystoneauth plugin code, it looks like you need to set identity-provider-url, service-provider-endpoint, service-provider-entity-id, username, password, identity-provider, and protocol (I'm getting that from the loader classes[1][2]). Is that the information you're looking for, or can you give more details on what specifically isn't working? Colleen [1] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/identity.py#n104 [2] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/extras/_saml2/_loading.py#n45 From kchamart at redhat.com Mon Feb 11 14:38:45 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 11 Feb 2019 15:38:45 +0100 Subject: [nova] Floppy drive =?utf-8?Q?support_?= =?utf-8?B?4oCU?= does anyone rely on it? 
In-Reply-To: References: <20190207112959.GF5349@paraplu.home> Message-ID: <20190211143845.GA26837@paraplu> On Thu, Feb 07, 2019 at 09:41:19AM -0500, Jay Pipes wrote: > On 02/07/2019 06:29 AM, Kashyap Chamarthy wrote: > > Given that, and the use of floppy drives is generally not recommended in > > 2019, any objection to go ahead and remove support for floppy drives? > > No objections from me. Thanks. Since I haven't heard much else objections, I'll add the blueprint (to remove floppy drive support) to my queue. [...] -- /kashyap From mriedemos at gmail.com Mon Feb 11 14:41:09 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Feb 2019 08:41:09 -0600 Subject: [nova] Can we drop the cells v1 docs now? Message-ID: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> I have kind of lost where we are on dropping cells v1 code at this point, but it's probably too late in Stein. And technically nova-network won't start unless cells v1 is configured, and we've left the nova-network code in place while CERN is migrating their deployment to neutron*. CERN is running cells v2 since Queens and I think they have just removed this [1] to still run nova-network without cells v1. There has been no work in Stein to remove nova-network [2] even though we still have a few API related things we can work on removing [3] but that is very low priority. To be clear, CERN only cares about the nova-network service, not the APIs which is why we started removing those in Rocky. As for cells v1, if we're not going to drop it in Stein, can we at least make incremental progress and drop the cells v1 related docs to further signal the eventual demise and to avoid confusion in the docs about what cells is (v1 vs v2) for newcomers? People can still get the cells v1 in-tree docs on the stable branches (which are being published [4]). [1] https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein [3] https://etherpad.openstack.org/p/nova-network-removal-rocky [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 *I think they said there are parts of their deployment that will probably never move off of nova-network, and they will just maintain a fork for that part of the deployment. -- Thanks, Matt From mnaser at vexxhost.com Mon Feb 11 14:51:59 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 11 Feb 2019 09:51:59 -0500 Subject: [nova] Can we drop the cells v1 docs now? In-Reply-To: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> References: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> Message-ID: On Mon, Feb 11, 2019 at 9:43 AM Matt Riedemann wrote: > > I have kind of lost where we are on dropping cells v1 code at this > point, but it's probably too late in Stein. And technically nova-network > won't start unless cells v1 is configured, and we've left the > nova-network code in place while CERN is migrating their deployment to > neutron*. CERN is running cells v2 since Queens and I think they have > just removed this [1] to still run nova-network without cells v1. > > There has been no work in Stein to remove nova-network [2] even though > we still have a few API related things we can work on removing [3] but > that is very low priority. To be clear, CERN only cares about the > nova-network service, not the APIs which is why we started removing > those in Rocky. 
> > As for cells v1, if we're not going to drop it in Stein, can we at least > make incremental progress and drop the cells v1 related docs to further > signal the eventual demise and to avoid confusion in the docs about what > cells is (v1 vs v2) for newcomers? People can still get the cells v1 > in-tree docs on the stable branches (which are being published [4]). I think from an operators perspective, the documentation should at least be ripped out (and any nova-manage commands, assuming there's any). I guess there should be any tooling to allow you to get a cells v1 deployment (imho). Cells V2 have been out for a while, extensively tested and work pretty well now. > [1] https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 > [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein > [3] https://etherpad.openstack.org/p/nova-network-removal-rocky > [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 > > *I think they said there are parts of their deployment that will > probably never move off of nova-network, and they will just maintain a > fork for that part of the deployment. > > -- > > Thanks, > > Matt > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From lbragstad at gmail.com Mon Feb 11 14:53:23 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 11 Feb 2019 08:53:23 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 4 February 2019 In-Reply-To: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> References: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> Message-ID: On 2/9/19 8:27 AM, Colleen Murphy wrote: > # Keystone Team Update - Week of 4 February 2019 > > ## News > > ### Performance of Loading Fernet/JWT Key Repositories > > Lance noticed that it seems that token signing/encryption keys are loaded from disk on every request and is therefore not very performant, and started investigating ways we could improve this[1][2]. I didn't come to a conclusion on if the performance hit was due to the actual reading of something from disk, if it was because we loop through each available key until we find one that works, or if it was because I completely disabled token caching. The obvious worst case in this scenario is trying the right key, last - O(n). This is the approach I was using to preemptively identify which public key needs to be used to validate a JWT [0]. Ultimately, I need some more information/constraints from wxy [1]. Possibly something we can start in anther thread. [0] https://pasted.tech/pastes/c10a774a9d17e1743f7a6543031b8c43d930906c.raw [1] https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-07.log.html#t2019-02-07T17:55:34 > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-08.log.html#t2019-02-08T17:09:24 > > ## Recently Merged Changes > > Search query: https://bit.ly/2pquOwT > > We merged 10 changes this week. > > ## Changes that need Attention > > Search query: https://bit.ly/2RLApdA > > There are 73 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. > > ## Bugs > > This week we opened 2 new bugs and closed 3. 
> > Bugs opened (2) > Bug #1814589 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814589 > Bug #1814570 (keystone:Medium) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814570 > > Bugs fixed (3) > Bug #1804483 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804483 > Bug #1805406 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805406 > Bug #1801095 (keystone:Wishlist) fixed by Artem Vasilyev https://bugs.launchpad.net/keystone/+bug/1801095 > > ## Milestone Outlook > > https://releases.openstack.org/stein/schedule.html > > Feature freeze is in four weeks. Be mindful of the gate and try to submit and review things early. > > ## Shout-outs > > Congratulations and thank you to our Outreachy intern Islam for completing the first step in refactoring our unit tests to lean on our shiny new Flask framework! Great work! ++ > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From nanthini.a.a at ericsson.com Mon Feb 11 15:32:58 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Mon, 11 Feb 2019 15:32:58 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , I have tried the below .But getting error .Please let me know how I can proceed further . root at cic-1:~# cat try1.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: resource_name_map: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} root at cic-1:~# cat tryrepeat.yaml heat_template_version: 2013-05-23 resources: rg: type: OS::Heat::ResourceGroup properties: count: 2 resource_def: type: try1.yaml root at cic-1:~# root at cic-1:~# heat stack-create tests -f tryrepeat.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token found character '%' that cannot start any token in "", line 15, column 45: ... {get_param: [resource_name_map, %index%, network1]} Thanks in advance . Thanks, A.Nanthini -----Original Message----- From: Harald Jensås [mailto:hjensas at redhat.com] Sent: Monday, February 11, 2019 7:47 PM To: NANTHINI A A ; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > the resource group api with count . 
> > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we want > to change in each set . > Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt From thierry at openstack.org Mon Feb 11 15:50:57 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 16:50:57 +0100 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Lingxian Kong wrote: > On Sun, Feb 10, 2019 at 7:04 AM Darek Król > wrote: > > Hello Lingxian, > > I’ve heard about a few tries of running Trove in production. > Unfortunately, I didn’t have opportunity to get details about > networking. At Samsung, we introducing Trove into our products for > on-premise cloud platforms. However, I cannot share too many details > about it, besides it is oriented towards performance and security is > not a concern. Hence, the networking is very basic without any > layers of abstractions if possible. > > Could you share more details about your topology and goals you want > to achieve in Trove ? Maybe Trove team could help you in this ? > Unfortunately, I’m not a network expert so I would need to get more > details to understand your use case better. > > > Yeah, I think trove team could definitely help. I've been working on a > patch[1] to support different sgs for different type of neutron ports, > the patch is for the use case that `CONF.default_neutron_networks` is > configured as trove management network. > > Besides, I also have some patches[2][3] for trove need to be reviewed, > not sure who are the right people I should ask for review now, but would > appriciate if you could help. I think OVH has been deploying Trove as well, or at least considering it... Ccing Jean-Daniel in case he can bring some insights on that. 
-- Thierry From thierry at openstack.org Mon Feb 11 16:01:40 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 17:01:40 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Doug Hellmann wrote: > Kendall Nelson writes: >> [...] >> So I think that the First Contact SIG project liaison list kind of fits >> this. Its already maintained in a wiki and its already a list of people >> willing to be contacted for helping people get started. It probably just >> needs more attention and refreshing. When it was first set up we (the FC >> SIG) kind of went around begging for volunteers and then once we maxxed out >> on them, we said those projects without volunteers will have the role >> defaulted to the PTL unless they delegate (similar to how other liaison >> roles work). >> >> Long story short, I think we have the sort of mentoring things covered. And >> to back up an earlier email, project specific onboarding would be a good >> help too. > > OK, that does sound pretty similar. I guess the piece that's missing is > a description of the sort of help the team is interested in receiving. I guess the key difference is that the first contact list is more a function of the team (who to contact for first contributions in this team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring to cover specific needs in a team. It's probably pretty close (and the same people would likely be involved), but I think an approach where specific people offer a significant amount of their time to one mentee interested in joining a team is a bit different. I don't think every team would have volunteers to do that. I would not expect a mentor volunteer to care for several mentees. In the end I think we would end up with a much shorter list than the FC list. Maybe the two efforts can converge into one, or they can be kept as two different things but coordinated by the same team ? -- Thierry Carrez (ttx) From ed at leafe.com Mon Feb 11 16:03:26 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 11 Feb 2019 10:03:26 -0600 Subject: Placement governance switch Message-ID: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> With PTL election season coming up soon, this seems like a good time to revisit the plans for the Placement effort to become a separate project with its own governance. We last discussed this back at the Denver PTG in September 2018, and settled on making Placement governance dependent on a number of items. [0] Most of the items in that list have been either completed, are very close to completion, or, in the case of the upgrade, is no longer expected. But in the time since that last discussion, much has changed. Placement is now a separate git repo, and is deployed and run independently of Nova. The integrated gate in CI is using the extracted Placement repo, and not Nova’s version. In a hangout last week [1], we agreed to several things: * Placement code would remain in the Nova repo for the Stein release to allow for an easier transition for deployments tools that were not prepared for this change * The Placement code in the Nova tree will remain frozen; all new Placement work will be in the Placement repo. * The Placement API is now unfrozen. 
Nova, however, will not develop code in Stein that will rely on any newer Placement microversion than the current 1.30. * The Placement code in the Nova repo will be deleted in the Train release. Given the change of context, now may be a good time to change to a separate governance. The concerns on the Nova side have been largely addressed, and switching governance now would allow us to participate in the next PTL election cycle. We’d like to get input from anyone else in the OpenStack community who feels that a governance change would impact them, so please reply in this thread if you have concerns. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002451.html -- Ed Leafe From colleen at gazlene.net Mon Feb 11 16:18:40 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 11 Feb 2019 17:18:40 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> Message-ID: <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> Forwarding back to list On Mon, Feb 11, 2019, at 5:11 PM, Blake Covarrubias wrote: > > On Feb 11, 2019, at 6:19 AM, Colleen Murphy wrote: > > > > Hi Fabian, > > > > On Mon, Feb 11, 2019, at 12:58 PM, Fabian Zimmermann wrote: > >> Hi, > >> > >> Im currently trying to implement some way to do a SSO against our > >> ActiveDirectory. I already tried SAMLv2 and OpenID Connect. > >> > >> Im able to sign in via Horizon, but im unable to find a working way on cli. > >> > >> Already tried v3adfspassword and v3oidcpassword, but im unable to get > >> them working. > >> > >> Any hints / links / docs where to find more information? > >> > >> Anyone using this kind of setup and willing to share KnowHow? > >> > >> Thanks a lot, > >> > >> Fabian Zimmermann > > > > We have an example of authenticating with the CLI here: > > > > https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#authenticating > > > > That only covers the regular SAML2.0 ECP type of authentication, which I guess won't work with ADFS, and we seem to have zero ADFS-specific documentation. > > > > From the keystoneauth plugin code, it looks like you need to set identity-provider-url, service-provider-endpoint, service-provider-entity-id, username, password, identity-provider, and protocol (I'm getting that from the loader classes[1][2]). Is that the information you're looking for, or can you give more details on what specifically isn't working? > > > > Colleen > > > > [1] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/identity.py#n104 > > [2] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/extras/_saml2/_loading.py#n45 > > > > Fabian, > > To add a bit more info, the AD FS plugin essentially uses IdP-initiated > sign-on. The identity provider URL is where the initial authentication > request to AD FS will be sent. An example of this would be > https://HOSTNAME/adfs/services/trust/13/usernamemixed > . The service > provider’s entity ID must also be sent in the request so that AD FS > knows which Relying Party Trust to associate with the request. > > AD FS will provide a SAML assertion upon successful authentication. The > service provider endpoint is the URL of the Assertion Consumer Service. 
> If you’re using Shibboleth on the SP, this would be > https://HOSTNAME/Shibboleth.sso/ADFS > . > > Note: The service-provider-entity-id can be omitted if it is the same > value as the service-provider-endpoint (or Assertion Consumer Service > URL). > > Hope this helps. > > — > Blake Covarrubias > From openstack at fried.cc Mon Feb 11 16:34:22 2019 From: openstack at fried.cc (Eric Fried) Date: Mon, 11 Feb 2019 10:34:22 -0600 Subject: [ptg][nova][placement] Etherpad & collector started Message-ID: <407f5508-b2ab-667e-d4f1-122e2906e324@fried.cc> I needed to brain-dump some topics to be discussed at the PTG in a couple of months. I asked if there was already an etherpad and the two people who happened to hear my question weren't aware of one, so I started one [1]. I also started the collector wiki page [2], templated on the Stein one [3]. Enjoy. -efried [1] https://etherpad.openstack.org/p/nova-ptg-train [2] https://wiki.openstack.org/wiki/PTG/Train/Etherpads [3] https://wiki.openstack.org/wiki/PTG/Stein/Etherpads From zufar at onf-ambassador.org Mon Feb 11 16:46:45 2019 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Mon, 11 Feb 2019 23:46:45 +0700 Subject: [Neutron] Split Network node from controller Node In-Reply-To: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> References: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> Message-ID: Hi Thank you for your answer, I just install the network agent in a network node, with this following package - openstack-neutron.noarch - openstack-neutron-common.noarch - openstack-neutron-openvswitch.noarch - openstack-neutron-metering-agent.noarch and configuring and appear in the agent list [root at zu-controller1 ~(keystone_admin)]# openstack network agent list +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ | 025f8a15-03b5-421e-94ff-3e07fc1317b5 | Open vSwitch agent | zu-compute2 | None | :-) | UP | neutron-openvswitch-agent | | 04af3150-7673-4ac4-9670-fd1505737466 | Metadata agent | zu-network1 | None | :-) | UP | neutron-metadata-agent | | 11a9c764-e53d-4316-9801-fa2a931f0572 | Open vSwitch agent | zu-compute1 | None | :-) | UP | neutron-openvswitch-agent | | 1875a93f-09df-4c50-8660-1f4dc33b228d | L3 agent | zu-controller1 | nova | :-) | UP | neutron-l3-agent | | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | | 2fb2a714-9735-4f78-8019-935cb6422063 | Metering agent | zu-network1 | None | :-) | UP | neutron-metering-agent | | 3873fc10-1758-47e9-92b8-1e8605651c70 | Open vSwitch agent | zu-network1 | None | :-) | UP | neutron-openvswitch-agent | | 4b51bdd2-df13-4a35-9263-55e376b6e2ea | Metering agent | zu-controller1 | None | :-) | UP | neutron-metering-agent | | 54af229f-3dc1-49db-b32a-25f3fd62c010 | DHCP agent | zu-controller1 | nova | :-) | UP | neutron-dhcp-agent | | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | | a3c78231-027d-4ddd-8234-7afd1d67910e | Metadata agent | zu-controller1 | None | :-) | UP | neutron-metadata-agent | | aeb7537e-98af-49f0-914b-204e64cb4103 | Open vSwitch agent | zu-controller1 | None | :-) | UP | neutron-openvswitch-agent | 
+--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ I try to migrate the network (external & internal) and router into zu-network1 (my new network node). and success [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --router $ROUTER_ID +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_INTERNAL +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_EXTERNAL +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ But, I cannot ping my instance after the migration. I don't know why. ii check my DHCP and router has already moved. [root at zu-controller1 ~(keystone_admin)]# ip netns [root at zu-controller1 ~(keystone_admin)]# [root at zu-network1 ~]# ip netns qdhcp-fddd647b-3601-43e4-8299-60b703405110 (id: 1) qrouter-dd8ae033-0db2-4153-a060-cbb7cd18bae7 (id: 0) [root at zu-network1 ~]# What step do I miss? Thanks Best Regards, Zufar Dhiyaulhaq On Mon, Feb 11, 2019 at 3:13 PM Slawomir Kaplonski wrote: > Hi, > > I don’t know if there is any tutorial for that but You can just deploy new > node with agents which You need, then disable old DHCP/L3 agents with > neutron API [1] and move existing networks/routers to agents in new host > with neutron API. Docs for agents scheduler API is in [2] and [3]. > Please keep in mind that when You will move routers to new agent You will > have some downtime in data plane. > > [1] https://developer.openstack.org/api-ref/network/v2/#update-agent > [2] https://developer.openstack.org/api-ref/network/v2/#l3-agent-scheduler > [3] > https://developer.openstack.org/api-ref/network/v2/#dhcp-agent-scheduler > > > Wiadomość napisana przez Zufar Dhiyaulhaq w > dniu 11.02.2019, o godz. 
03:33: > > > > Hi everyone, > > > > I Have existing OpenStack with 1 controller node (Network Node in > controller node) and 2 compute node. I need to expand the architecture by > splitting the network node from controller node (create 1 node for > network). > > > > Do you have any recommended step or tutorial for doing this? > > Thanks > > > > Best Regards, > > Zufar Dhiyaulhaq > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 11 17:03:39 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 11:03:39 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> Message-ID: <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> cc aspiers, who sounded interested in leading this work, pending discussion with his employer[1]. 1: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001750.html On 1/31/19 9:59 AM, Lance Bragstad wrote: > *Healthcheck middleware* > > There is currently no volunteer to champion for this goal. The first > iteration of the work on the oslo.middleware was updated [3], and a gap > analysis was started on the mailing lists [4]. > If you want to get involved in this goal, don't hesitate to answer on > the ML thread there. > > [3] https://review.openstack.org/#/c/617924/2 > [4] https://ethercalc.openstack.org/di0mxkiepll8 From kennelson11 at gmail.com Mon Feb 11 17:14:56 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 09:14:56 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez wrote: > Doug Hellmann wrote: > > Kendall Nelson writes: > >> [...] > >> So I think that the First Contact SIG project liaison list kind of fits > >> this. Its already maintained in a wiki and its already a list of people > >> willing to be contacted for helping people get started. It probably just > >> needs more attention and refreshing. When it was first set up we (the FC > >> SIG) kind of went around begging for volunteers and then once we maxxed > out > >> on them, we said those projects without volunteers will have the role > >> defaulted to the PTL unless they delegate (similar to how other liaison > >> roles work). > >> > >> Long story short, I think we have the sort of mentoring things covered. > And > >> to back up an earlier email, project specific onboarding would be a good > >> help too. > > > > OK, that does sound pretty similar. I guess the piece that's missing is > > a description of the sort of help the team is interested in receiving. > > I guess the key difference is that the first contact list is more a > function of the team (who to contact for first contributions in this > team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring > to cover specific needs in a team. 
> > It's probably pretty close (and the same people would likely be > involved), but I think an approach where specific people offer a > significant amount of their time to one mentee interested in joining a > team is a bit different. I don't think every team would have volunteers > to do that. I would not expect a mentor volunteer to care for several > mentees. In the end I think we would end up with a much shorter list > than the FC list. > I think our original ask for people volunteering (before we completed the list with PTLs as stand ins) was for people willing to help get started in a project and look after their first few patches. So I think that was kinda the mentoring role originally but then it evolved? Maybe Matt Oliver or Ghanshyam remember better than I do? > > Maybe the two efforts can converge into one, or they can be kept as two > different things but coordinated by the same team ? > > I think we could go either way, but that they both would live with the FC SIG. Seems like the most logical place to me. I lean towards two lists, one being a list of volunteer mentors for projects that are actively looking for new contributors (the shorter list) and the other being a list of people just willing to keep an eye out for the welcome new contributor patches and being the entry point for people asking about getting started that don't know anyone in the project yet (kind of what our current view is, I think). > -- > Thierry Carrez (ttx) > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 11 17:52:09 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 11:52:09 -0600 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <1548944804.945378.1647818352.1EDB6215@webmail.messagingengine.com> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <1548944804.945378.1647818352.1EDB6215@webmail.messagingengine.com> Message-ID: <9133d5d8-8d0e-62de-aca9-4efbda6703fe@nemebean.com> On 1/31/19 8:26 AM, Colleen Murphy wrote: > > I like the idea. One question is, how would these groups be bootstrapped? At the moment, SIGs are formed by 1) people express an interest in a common idea 2) the SIG is proposed and approved by the TC and UC chairs 3) profit. With a more cross-project, deliverable-focused type of group, you would need to have buy-in from all project teams involved before bringing it up for approval by the TC - but getting that buy-in from many different groups can be difficult if you aren't already a blessed group. And if you didn't get buy-in first and the group became approved anyway, project teams may be resentful of having new objectives imposed on them when they may not even agree it's the right direction. As a concrete example of this, the image encryption feature[1] had multiple TC members pushing it along in Berlin, but then got some pushback from the project side[2] on the basis that they didn't prioritize it as highly as the group did. Matt suggested that the priority could be raised if a SIG essentially sponsored it as a top priority for them. Maybe SIG support would be an aspect of creating one of these teams? I don't know what you would do with something that doesn't fall under a SIG though. 
1: https://review.openstack.org/#/c/618754/ 2: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000815.html From ignaziocassano at gmail.com Mon Feb 11 18:18:00 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 11 Feb 2019 19:18:00 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: Hello, the manila replication dr works fine on netapp ontap following your suggestions. :-) Source backends (svm for netapp) must belong to a different destination backends availability zone, but in a single manila.conf I cannot specify more than one availability zone. For doing this I must create more share servers ....one for each availability zone. Svm1 with avz1 Svm1-dr with avz1-dr ......... Are you agree??? Thanks & Regards Ignazio Il giorno Gio 7 Feb 2019 06:11 Ignazio Cassano ha scritto: > Many thanks. > I'll check today. > Ignazio > > > Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi > ha scritto: > >> On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: >> > >> > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: >> > >The 2 openstack Installations do not share anything. The manila on >> each one >> > >works on different netapp storage, but the 2 netapp can be >> synchronized. >> > >Site A with an openstack instalkation and netapp A. >> > >Site B with an openstack with netapp B. >> > >Netapp A and netapp B can be synchronized via network. >> > >Ignazio >> > >> > OK, thanks. >> > >> > You can likely get the share data and its netapp metadata to show up >> > on B via replication and (gouthamr may explain details) but you will >> > lose all the Openstack/manila information about the share unless >> > Openstack database info (more than just manila tables) is imported. >> > That may be OK foryour use case. >> > >> > -- Tom >> >> >> Checking if I understand your request correctly, you have setup >> manila's "dr" replication in OpenStack A and now want to move your >> shares from OpenStack A to OpenStack B's manila. Is this correct? >> >> If yes, you must: >> * Promote your replicas >> - this will make the mirrored shares available. This action does >> not delete the old "primary" shares though, you need to clean them up >> yourself, because manila will attempt to reverse the replication >> relationships if the primary shares are still accessible >> * Note the export locations and Unmanage your shares from OpenStack A's >> manila >> * Manage your shares in OpenStack B's manila with the export locations >> you noted. >> >> > > >> > > >> > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha >> scritto: >> > > >> > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >> > >> >Hello Tom, I think cases you suggested do not meet my needs. >> > >> >I have an openstack installation A with a fas netapp A. >> > >> >I have another openstack installation B with fas netapp B. >> > >> >I would like to use manila replication dr. >> > >> >If I replicate manila volumes from A to B the manila db on B does >> not >> > >> >knows anything about the replicated volume but only the backends on >> > >> netapp >> > >> >B. Can I discover replicated volumes on openstack B? >> > >> >Or I must modify the manila db on B? >> > >> >Regards >> > >> >Ignazio >> > >> >> > >> I guess I don't understand your use case. Do Openstack installation >> A >> > >> and Openstack installation B know *anything* about one another? 
For >> > >> example, are their keystone and neutron databases somehow synced? >> Are >> > >> they going to be operative for the same set of manila shares at the >> > >> same time, or are you contemplating a migration of the shares from >> > >> installation A to installation B? >> > >> >> > >> Probably it would be helpful to have a statement of the problem that >> > >> you intend to solve before we consider the potential mechanisms for >> > >> solving it. >> > >> >> > >> Cheers, >> > >> >> > >> -- Tom >> > >> >> > >> > >> > >> > >> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha >> scritto: >> > >> > >> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> > >> >> >Thanks Goutham. >> > >> >> >If there are not mantainers for this driver I will switch on >> ceph and >> > >> or >> > >> >> >netapp. >> > >> >> >I am already using netapp but I would like to export shares from >> an >> > >> >> >openstack installation to another. >> > >> >> >Since these 2 installations do non share any openstack component >> and >> > >> have >> > >> >> >different openstack database, I would like to know it is >> possible . >> > >> >> >Regards >> > >> >> >Ignazio >> > >> >> >> > >> >> Hi Ignazio, >> > >> >> >> > >> >> If by "export shares from an openstack installation to another" >> you >> > >> >> mean removing them from management by manila in installation A and >> > >> >> instead managing them by manila in installation B then you can do >> that >> > >> >> while leaving them in place on your Net App back end using the >> manila >> > >> >> "manage-unmanage" administrative commands. Here's some >> documentation >> > >> >> [1] that should be helpful. >> > >> >> >> > >> >> If on the other hand by "export shares ... to another" you mean to >> > >> >> leave the shares under management of manila in installation A but >> > >> >> consume them from compute instances in installation B it's all >> about >> > >> >> the networking. One can use manila to "allow-access" to >> consumers of >> > >> >> shares anywhere but the consumers must be able to reach the >> "export >> > >> >> locations" for those shares and mount them. >> > >> >> >> > >> >> Cheers, >> > >> >> >> > >> >> -- Tom Barron >> > >> >> >> > >> >> [1] >> > >> >> >> > >> >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >> > >> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> > >> >> gouthampravi at gmail.com> >> > >> >> >ha scritto: >> > >> >> > >> > >> >> >> Hi Ignazio, >> > >> >> >> >> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> > >> >> >> wrote: >> > >> >> >> > >> > >> >> >> > Hello All, >> > >> >> >> > I installed manila on my queens openstack based on centos 7. >> > >> >> >> > I configured two servers with glusterfs replocation and >> ganesha >> > >> nfs. >> > >> >> >> > I configured my controllers octavia,conf but when I try to >> create a >> > >> >> share >> > >> >> >> > the manila scheduler logs reports: >> > >> >> >> > >> > >> >> >> > Failed to schedule create_share: No valid host was found. >> Failed to >> > >> >> find >> > >> >> >> a weighted host, the last executed filter was >> CapabilitiesFilter.: >> > >> >> >> NoValidHost: No valid host was found. Failed to find a >> weighted host, >> > >> >> the >> > >> >> >> last executed filter was CapabilitiesFilter. 
>> > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> > >> >> 89f76bc5de5545f381da2c10c7df7f15 >> > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message >> record for >> > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> > >> >> >> >> > >> >> >> >> > >> >> >> The scheduler failure points out that you have a mismatch in >> > >> >> >> expectations (backend capabilities vs share type extra-specs) >> and >> > >> >> >> there was no host to schedule your share to. So a few things >> to check >> > >> >> >> here: >> > >> >> >> >> > >> >> >> - What is the share type you're using? Can you list the share >> type >> > >> >> >> extra-specs and confirm that the backend (your GlusterFS >> storage) >> > >> >> >> capabilities are appropriate with whatever you've set up as >> > >> >> >> extra-specs ($ manila pool-list --detail)? >> > >> >> >> - Is your backend operating correctly? You can list the manila >> > >> >> >> services ($ manila service-list) and see if the backend is both >> > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there >> was a >> > >> >> >> problem with the driver initialization, please enable debug >> logging, >> > >> >> >> and look at the log file for the manila-share service, you >> might see >> > >> >> >> why and be able to fix it. >> > >> >> >> >> > >> >> >> >> > >> >> >> Please be aware that we're on a look out for a maintainer for >> the >> > >> >> >> GlusterFS driver for the past few releases. We're open to bug >> fixes >> > >> >> >> and maintenance patches, but there is currently no active >> maintainer >> > >> >> >> for this driver. >> > >> >> >> >> > >> >> >> >> > >> >> >> > I did not understand if controllers node must be connected >> to the >> > >> >> >> network where shares must be exported for virtual machines, so >> my >> > >> >> glusterfs >> > >> >> >> are connected on the management network where openstack >> controllers >> > >> are >> > >> >> >> conencted and to the network where virtual machine are >> connected. >> > >> >> >> > >> > >> >> >> > My manila.conf section for glusterfs section is the following >> > >> >> >> > >> > >> >> >> > [gluster-manila565] >> > >> >> >> > driver_handles_share_servers = False >> > >> >> >> > share_driver = >> manila.share.drivers.glusterfs.GlusterfsShareDriver >> > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> > >> >> >> > glusterfs_ganesha_server_username = root >> > >> >> >> > glusterfs_nfs_server_type = Ganesha >> > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> > >> >> >> > #glusterfs_servers = root at 10.102.185.19 >> > >> >> >> > ganesha_config_dir = /etc/ganesha >> > >> >> >> > >> > >> >> >> > >> > >> >> >> > PS >> > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose >> endpoint >> > >> >> >> > >> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where >> > >> virtual >> > >> >> >> machines are connected. >> > >> >> >> > >> > >> >> >> > The gluster servers are connected on both. >> > >> >> >> > >> > >> >> >> > >> > >> >> >> > Any help, please ? >> > >> >> >> > >> > >> >> >> > Ignazio >> > >> >> >> >> > >> >> >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Mon Feb 11 18:28:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 10:28:15 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: Also, to keep everyone on the same page, this topic was discussed in the D&I WG meeting today for those interested[1]. Long story short, the organizers of the mentoring cohort program are concerned that this might take away from their efforts. We talked a little bit about who would be making use of this list, how it should be formatted, how postings enter/exit the list,etc. -Kendall (diablo_rojo) [1] http://eavesdrop.openstack.org/meetings/diversity_wg/2019/diversity_wg.2019-02-11-17.02.log.html#l-62 On Mon, Feb 11, 2019 at 9:14 AM Kendall Nelson wrote: > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez > wrote: > >> Doug Hellmann wrote: >> > Kendall Nelson writes: >> >> [...] >> >> So I think that the First Contact SIG project liaison list kind of fits >> >> this. Its already maintained in a wiki and its already a list of people >> >> willing to be contacted for helping people get started. It probably >> just >> >> needs more attention and refreshing. When it was first set up we (the >> FC >> >> SIG) kind of went around begging for volunteers and then once we >> maxxed out >> >> on them, we said those projects without volunteers will have the role >> >> defaulted to the PTL unless they delegate (similar to how other liaison >> >> roles work). >> >> >> >> Long story short, I think we have the sort of mentoring things >> covered. And >> >> to back up an earlier email, project specific onboarding would be a >> good >> >> help too. >> > >> > OK, that does sound pretty similar. I guess the piece that's missing is >> > a description of the sort of help the team is interested in receiving. >> >> I guess the key difference is that the first contact list is more a >> function of the team (who to contact for first contributions in this >> team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring >> to cover specific needs in a team. >> >> It's probably pretty close (and the same people would likely be >> involved), but I think an approach where specific people offer a >> significant amount of their time to one mentee interested in joining a >> team is a bit different. I don't think every team would have volunteers >> to do that. I would not expect a mentor volunteer to care for several >> mentees. In the end I think we would end up with a much shorter list >> than the FC list. >> > > I think our original ask for people volunteering (before we completed the > list with PTLs as stand ins) was for people willing to help get started in > a project and look after their first few patches. So I think that was kinda > the mentoring role originally but then it evolved? Maybe Matt Oliver or > Ghanshyam remember better than I do? > > >> >> Maybe the two efforts can converge into one, or they can be kept as two >> different things but coordinated by the same team ? >> >> > I think we could go either way, but that they both would live with the FC > SIG. Seems like the most logical place to me. 
I lean towards two lists, one > being a list of volunteer mentors for projects that are actively looking > for new contributors (the shorter list) and the other being a list of > people just willing to keep an eye out for the welcome new contributor > patches and being the entry point for people asking about getting started > that don't know anyone in the project yet (kind of what our current view > is, I think). > > >> -- >> Thierry Carrez (ttx) >> > > -Kendall (diablo_rojo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Feb 11 18:42:01 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 10:42:01 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Yeah I think the Project Team Guide makes sense. -Kendall (diablo_rojo) On Thu, 7 Feb 2019, 12:29 pm Doug Hellmann, wrote: > Kendall Nelson writes: > > > On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann > wrote: > > > >> Jeremy Stanley writes: > >> > >> > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > >> > [...] > >> >> If I recall it correctly from Board+TC meeting, TC is looking for > >> >> a new home for this list ? Or we continue to maintain this in TC > >> >> itself which should not be much effort I feel. > >> > [...] > >> > > >> > It seems like you might be referring to the in-person TC meeting we > >> > held on the Sunday prior to the Stein PTG in Denver (Alan from the > >> > OSF BoD was also present). Doug's recap can be found in the old > >> > openstack-dev archive here: > >> > > >> > > >> > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > >> > > >> > Quoting Doug, "...it wasn't clear that the TC was the best group to > >> > manage a list of 'roles' or other more detailed information. We > >> > discussed placing that information into team documentation or > >> > hosting it somewhere outside of the governance repository where more > >> > people could contribute." (If memory serves, this was in response to > >> > earlier OSF BoD suggestions that retooling the Help Wanted list to > >> > be a set of business-case-focused job descriptions might garner more > >> > uptake from the organizations they represent.) > >> > -- > >> > Jeremy Stanley > >> > >> Right, the feedback was basically that we might have more luck > >> convincing companies to provide resources if we were more specific about > >> how they would be used by describing the work in more detail. When we > >> started thinking about how that change might be implemented, it seemed > >> like managing the information a well-defined job in its own right, and > >> our usual pattern is to establish a group of people interested in doing > >> something and delegating responsibility to them. When we talked about it > >> in the TC meeting in Denver we did not have any TC members volunteer to > >> drive the implementation to the next step by starting to recruit a team. > >> > >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. > > > > > > This was something I thought the docs team was working on pushing with > all > > of the individual projects, but I am happy to help if they need extra > > hands. 
I think this is suuuuuper important. Each Upstream Institute we > > teach all the general info we can, but we always mention that there are > > project specific ways of handling things and project specific processes. > If > > we want to lower the barrier for new contributors, good per project > > documentation is vital. > > > > > >> Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on > docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > > > I think maintaining the project liaison list[1] that the First Contact > SIG > > has kind of does this? Between that list and the mentoring cohort program > > that lives under the D&I WG, I think we have things covered. Its more a > > matter of publicizing those than starting something new I think? > > > > > >> > >> So, maybe the next step is to convince someone to champion a goal of > >> improving our contributor documentation, and to have them describe what > >> the documentation should include, covering the usual topics like how to > >> actually submit patches as well as suggestions for how to describe areas > >> where help is needed in a project and offers to mentor contributors. > > > >> Does anyone want to volunteer to serve as the goal champion for that? > >> > >> > > I can probably draft a rough outline of places where I see projects > diverge > > and make a template, but where should we have that live? > > > > /me imagines a template similar to the infra spec template > > Could we put it in the project team guide? > > > > > > >> -- > >> Doug > >> > >> > > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons > > -- > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Feb 11 20:32:59 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 12 Feb 2019 09:32:59 +1300 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: <3fb28588-06b5-12a3-cc4c-a28aa758f166@redhat.com> On 12/02/19 4:32 AM, NANTHINI A A wrote: > Hi , > I have tried the below .But getting error .Please let me know how I can proceed further . 
> > root at cic-1:~# cat try1.yaml > heat_template_version: 2013-05-23 > description: > This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms > parameters: > resource_name_map: > - network1: NetworkA1 > network2: NetworkA2 > - network1: NetworkB1 > network2: NetworkB2 > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > root at cic-1:~# cat tryrepeat.yaml > > heat_template_version: 2013-05-23 > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 2 > resource_def: > type: try1.yaml > root at cic-1:~# > > root at cic-1:~# heat stack-create tests -f tryrepeat.yaml > WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead > ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token > found character '%' that cannot start any token > in "", line 15, column 45: > ... {get_param: [resource_name_map, %index%, network1]} That's a yaml parsing error. You just need to put quotes around the thing that starts with %, like "%index%" > Thanks in advance . > > > Thanks, > A.Nanthini > -----Original Message----- > From: Harald Jensås [mailto:hjensas at redhat.com] > Sent: Monday, February 11, 2019 7:47 PM > To: NANTHINI A A ; openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat api > > On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: >> Hi , >> We are developing heat templates for our vnf deployment .It >> includes multiple resources .We want to repeat the resource and hence >> used the api RESOURCE GROUP . >> Attached are the templates which we used >> >> Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has >> the resource group api with count . >> >> We want to access the variables of resource in set1.yaml while >> repeating it with count .Eg . port name ,port fixed ip address we want >> to change in each set . >> Please let us know how we can have a variable with each repeated >> resource . >> > > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? > > I.e in set1.yaml you can use: > > name: > list_join: > - '_' > - {get_param: 'OS::stack_name'} > - %index% > - > > > The example should resulting in something like: > stack_0_Network3, stack_0_Subnet3 > stack_1_Network0, stack_1_Subnet0 > [ ... ] > > > If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 - > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > > > %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. 
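To be concrete, with the quoting suggested above only the %index% token needs to change. Something along these lines, reusing the property names from the example as-is:

resources:
  neutron_Network_1:
    type: OS::Neutron::Net
    properties:
      name: {get_param: [resource_name_map, "%index%", network1]}
  neutron_Network_2:
    type: OS::Neutron::Net
    properties:
      name: {get_param: [resource_name_map, "%index%", network2]}

Quoting doesn't change the resulting string value, so the index substitution behaves the same as the unquoted form was intended to; it just lets the YAML parse.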
> > > > > > [1] > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt > > > From openstack at nemebean.com Mon Feb 11 21:03:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 15:03:36 -0600 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <87db3527-3bd7-614b-5fc6-d44092452885@nemebean.com> On 2/10/19 2:33 PM, Chris Dent wrote: > > This a "part 2" or "other half" of evaluating OpenStack projects in > relation to the technical vision. See the other threads [1][2] for > more information. > > In the conversations that led up to the creation of the vision > document [3] one of the things we hoped was that the process could > help identify ways in which existing projects could evolve to be > better at what they do. This was couched in two ideas: > > * Helping to make sure that OpenStack continuously improves, in the >   right direction. > * Helping to make sure that developers were working on projects that >   leaned more towards interesting and educational than frustrating >   and embarrassing, where choices about what to do and how to do it >   were straightforward, easy to share with others, so >   well-founded in agreed good practice that argument would be rare, >   and so few that it was easy to decide. > > Of course, to have a "right direction" you first have to have a > direction, and thus the vision document and the idea of evaluating > how aligned a project is with that. > > The other half, then, is looking at the projects from a development > standpoint and thinking about what aspects of the project are: > > * Things (techniques, tools) the project contributors would encourage >   others to try. Stuff that has worked out well. Oslo documents some things that I think would fall under this category in http://specs.openstack.org/openstack/oslo-specs/#team-policies The incubator one should probably get removed since it's no longer applicable, but otherwise I feel like we mostly still follow those policies and find them to be reasonable best practices. Some are very Oslo-specific and not useful to anyone else, of course, but others could be applied more broadly. There's also http://specs.openstack.org/openstack/openstack-specs/specs/eventlet-best-practices.html although in the spirit of your next point I would be more +1 on the "don't use Eventlet" option for new projects. It might be nice to have a document that discusses preferred Eventlet alternatives for new projects. I know there are a few Eventlet-free projects out there that could probably provide feedback on their method. > > * Things—given a clean slate, unlimited time and resources, the >   benefit of hindsight and without the weight of legacy—the project >   contributors would encourage others to not repeat. > > And documenting those things so they can be carried forward in time > some place other than people's heads, and new projects or > refactorings of existing projects can start on a good foot. > > A couple of examples: > > * Whatever we might say about the implementation (in itself and how >   it is used), the concept of a unified configuration file format, >   via oslo_config, is probably considered a good choice, and we >   should keep on doing that. I'm a _little_ biased, but +1. Things like your env var driver or the drivers for moving secrets out of plaintext would be next to impossible if everyone were using a different configuration method. 
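To make that concrete for anyone following along, the pattern being praised is roughly: declare options once in code, then resolve them from the standard INI file (or, with the newer driver-based sources, from environment variables or a secret store) the same way in every project. A minimal sketch, with a made-up service name and made-up option names:

from oslo_config import cfg

# Options are declared once, with types, defaults and help text.
opts = [
    cfg.StrOpt('greeting', default='hello',
               help='Example option, for illustration only.'),
    cfg.IntOpt('workers', default=4,
               help='Another made-up option.'),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(opts, group='demo')

# Every service resolves its options the same way: standard file locations
# (e.g. /etc/myservice/myservice.conf), --config-file on the command line,
# and pluggable sources behind the same interface.
CONF([], project='myservice')
print(CONF.demo.greeting, CONF.demo.workers)
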
> > * On the other hand, given hindsight and improvements in commonly >   available tools, using a homegrown WSGI (non-)framework (unless >   you are Swift) plus eventlet may not have been the way to go, yet >   because it is what's still there in nova, it often gets copied. And as I noted above, +1 to this too. > > It's not clear at this point whether these sorts of things should be > documented in projects, or somewhere more central. So perhaps we can > just talk about it here in email and figure something out. I'll > followup with some I have for placement, since that's the project > I've given the most attention. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002524.html > > [3] https://governance.openstack.org/tc/reference/technical-vision.html > From amy at demarco.com Mon Feb 11 21:47:23 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 11 Feb 2019 15:47:23 -0600 Subject: Fwd: UC Candidacy In-Reply-To: References: Message-ID: This email is my nomination to re-run for the OpenStack User Committee election. I have been involved with OpenStack as an operator since the Grizzly release working with both private and public cloud environments. I have been an upstream contributor since the Mitaka release cycle and I am currently a Core Reviewer for OpenStack-Ansible which works closely with operators to help them set up their deployments and insight for our direction. I believe I bring valuable insight to the User Committee being involved as both an AUC and ATC. Through my involvement with the OpenStack Upstream Institute and Diversity Working Group, I have been very active in helping to bring new members to our community and more importantly working to find new ways to keep them involved once they join. There is still work I would like to continue working on, such as the OPS Meetups and the OpenStack mentoring programs to help get more Operators involved in the community. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Feb 11 21:55:17 2019 From: aspiers at suse.com (Adam Spiers) Date: Mon, 11 Feb 2019 21:55:17 +0000 Subject: [tc][all] Train Community Goals In-Reply-To: <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> Message-ID: <20190211215517.ax5jktscy7ovhoz7@pacific.linksys.moosehall> Yeah thanks - I'm well looped in here through my colleague JP[1] :-) Still hoping to find some more time for this very soon, although right now I'm focused on some pressing nova work ... [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002089.html Ben Nemec wrote: >cc aspiers, who sounded interested in leading this work, pending >discussion with his employer[1]. > >1: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001750.html > >On 1/31/19 9:59 AM, Lance Bragstad wrote: >>*Healthcheck middleware* >> >>There is currently no volunteer to champion for this goal. The first >>iteration of the work on the oslo.middleware was updated [3], and a >>gap analysis was started on the mailing lists [4]. >>If you want to get involved in this goal, don't hesitate to answer >>on the ML thread there. 
>> >>[3] https://review.openstack.org/#/c/617924/2 >>[4] https://ethercalc.openstack.org/di0mxkiepll8 From aspiers at suse.com Mon Feb 11 22:26:41 2019 From: aspiers at suse.com (Adam Spiers) Date: Mon, 11 Feb 2019 22:26:41 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> References: <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> Message-ID: <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> Ildiko Vancsa wrote: >First of all I like the idea of pop-up teams. > >On 2019. Feb 8., at 10:18, Adam Spiers wrote: >>True. And for temporary docs / notes / brainstorming there's the >>wiki and etherpad. So yeah, in terms of infrastructure maybe IRC >>meetings in one of the communal meeting channels is the only thing >>needed. We'd still need to take care of ensuring that popups are >>easily discoverable by anyone, however. And this ties in with the >>"should we require official approval" debate - maybe a halfway >>house is the right balance between red tape and agility? For >>example, set up a table on a page like >> >> https://wiki.openstack.org/wiki/Popup_teams >> >>and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: >> >> - Team name >> - One-line summary of team purpose >> - Expected life span (optional) >> - Link to team wiki page or etherpad >> - Link to IRC meeting schedule (if any) >> - Other comments >> >>Or if that's too much of a free-for-all, it could be a slightly more >>formal process of submitting a review to add a row to a page: >> >> https://governance.openstack.org/popup-teams/ >> >>which would be similar in spirit to: >> >> https://governance.openstack.org/sigs/ >> >>Either this or a wiki page would ensure that anyone can easily >>discover what teams are currently in existence, or have been in the >>past (since historical information is often useful too). Just >>thinking out aloud … > >In my experience there are two crucial steps to make a cross-project >team work successful. The first is making sure that the proposed new >feature/enhancement is accepted by all teams. The second is to have >supporters from every affected project team preferably also resulting >in involvement during both design and review time maybe also during >feature development and testing phase. > >When these two steps are done you can work on the design part and >making sure you have the work items prioritized on each side in a way >that you don’t end up with road blocks that would delay the work by >multiple release cycles. Makes perfect sense to me - thanks for sharing! >To help with all this I would start the experiment with wiki pages >and etherpads as these are all materials you can point to without too >much formality to follow so the goals, drivers, supporters and >progress are visible to everyone who’s interested and to the TC to >follow-up on. > >Do we expect an approval process to help with or even drive either of >the crucial steps I listed above? I'm not sure if it would help. But I agree that visibility is important, and by extension also discoverability. 
To that end I think it would be worth hosting a central list of popup initiatives somewhere which links to the available materials for each initiative. Maybe it doesn't matter too much whether that central list is simply a wiki page or a static web page managed by Gerrit under a governance repo or similar. From openstack at nemebean.com Mon Feb 11 22:28:57 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 16:28:57 -0600 Subject: [tc][all][self-healing-sig] Service-side health checks community goal for Train cycle In-Reply-To: References: <158c354c1d7a3e6fb261202b34d4e3233d5f39bc.camel@evrard.me> <1548671352.507178.1645094472.39B42BCA@webmail.messagingengine.com> <7cc5aa565a3a50a2d520d99e3ddcd6da5502e990.camel@evrard.me> Message-ID: <21a9a786-a530-55b3-cf74-0444899a98f2@nemebean.com> On 1/28/19 5:34 AM, Chris Dent wrote: > On Mon, 28 Jan 2019, Jean-Philippe Evrard wrote: > >> It is not a non-starter. I knew this would show up :) >> It's fine that some projects do differently (for example swift has >> different middleware, keystone is not using paste). > > Tangent so that people are clear on the state of Paste and > PasteDeploy. > > I recommend projects move away from using either. > > Until recently both were abandonware, not receiving updates, and > had issues working with Python3. > > I managed to locate maintainers from a few years ago, and negotiated > to bring them under some level of maintenance, but in both cases the > people involved are only interested in doing limited management to > keep the projects barely alive. > > pastedeploy (the thing that is more often used in OpenStack, and is > usually used to load the paste.ini file and doesn't have to have a > dependency on paste itself) is now under the Pylons project: > https://github.com/Pylons/pastedeploy > > Paste itself is with me: https://github.com/cdent/paste > >> I think it's also too big of a change to move everyone to one single >> technology in a cycle :) Instead, I want to focus on the real use case >> for people (bringing a common healthcheck "api" itself), which doesn't >> matter on the technology. > > I agree that the healthcheck change can and should be completely > separate from any question of what is used to load middleware. > That's the great thing about WSGI. > > As long as the healthcheck tooling presents are "normal" WSGI > interface it ought to either "just work" or be wrappable by other tooling, > so I wouldn't spend too much time making a survey of how people are > doing middleware. So should that question be re-worded? The current Keystone answer is accurate but unhelpful, given that I believe Keystone does enable the healthcheck middleware by default: https://docs.openstack.org/keystone/latest/admin/health-check-middleware.html Since what we care about isn't the WSGI implementation but the availability of the feature, shouldn't that question be more like "Project enables healthcheck middleware by default"? In which case Keystone's answer becomes a simple "yes" and Manila's a simple "no". > > The tricky part (but not that tricky) will be with managing how the > "tests" are provided to the middleware. 
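For reference, "project enables the healthcheck middleware by default" comes down to very little paste configuration. A sketch, assuming oslo.middleware's app factory, with section and path names invented for illustration (the real files differ per project):

    [app:healthcheck]
    paste.app_factory = oslo_middleware:Healthcheck.app_factory
    backends = disable_by_file
    disable_by_file_path = /etc/myservice/healthcheck_disable

    [composite:main]
    use = egg:Paste#urlmap
    # "myservice_api_v1" stands in for whatever app the project already exposes
    /healthcheck = healthcheck
    /v1 = myservice_api_v1

With something like that in place an unauthenticated GET /healthcheck returns 200 (or 503 when disabled by file), which is the behaviour the goal is really after, regardless of how each project chooses to load its middleware.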
> From hyangii at gmail.com Tue Feb 12 00:07:48 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Tue, 12 Feb 2019 09:07:48 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Message-ID: Hello, I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, but this time I did not get a response after removing 41 volumes. This environment variable did not fix the cinder-volume stopping. Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, and after the restart of the service, 199 volumes were deleted and one volume was manually erased. If you have a different approach to solving this problem, please let me know. Thanks. 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > Jae, > > On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > > Arne, > > I saw the messages like ''moving volume to trash" in the cinder-volume > logs and the peridic task also reports > like "Deleted from trash for backend ''" > > The patch worked well when clearing a small number of volumes. This > happens only when I am deleting a large > number of volumes. > > > Hmm, from cinder’s point of view, the deletion should be more or less > instantaneous, so it should be able to “delete” > many more volumes before getting stuck. > > The periodic task, however, will go through the volumes one by one, so if > you delete many at the same time, > volumes may pile up in the trash (for some time) before the tasks gets > round to delete them. This should not affect > c-vol, though. > > I will try to adjust the number of thread pools by adjusting the > environment variables with your advices > > Do you know why the cinder-volume hang does not occur when create a > volume, but only when delete a volume? > > > Deleting a volume ties up a thread for the duration of the deletion (which > is synchronous and can hence take very > long for ). If you have too many deletions going on at the same time, you > run out of threads and c-vol will eventually > time out. FWIU, creation basically works the same way, but it is almost > instantaneous, hence the risk of using up all > threads is simply lower (Gorka may correct me here :-). > > Cheers, > Arne > > > > Thanks. > > > 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > >> Jae, >> >> To make sure deferred deletion is properly working: when you delete >> individual large volumes >> with data in them, do you see that >> - the volume is fully “deleted" within a few seconds, ie. not staying in >> ‘deleting’ for a long time? >> - that the volume shows up in trash (with “rbd trash ls”)? >> - the periodic task reports it is deleting volumes from the trash? >> >> Another option to look at is “backend_native_threads_pool_size": this >> will increase the number >> of threads to work on deleting volumes. It is independent from deferred >> deletion, but can also >> help with situations where Cinder has more work to do than it can cope >> with at the moment. 
>> >> Cheers, >> Arne >> >> >> >> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: >> >> Yes, I added your code to pike release manually. >> >> >> >> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: >> >>> Hi Jae, >>> >>> You back ported the deferred deletion patch to Pike? >>> >>> Cheers, >>> Arne >>> >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >>> > >>> > Hello, >>> > >>> > I recently ran a volume deletion test with deferred deletion enabled >>> on the pike release. >>> > >>> > We experienced a cinder-volume hung when we were deleting a large >>> amount of the volume in which the data was actually written(I make 15GB >>> file in every volumes), and we thought deferred deletion would solve it. >>> > >>> > However, while deleting 200 volumes, after 50 volumes, the >>> cinder-volume downed as before. In my opinion, the trash_move api does not >>> seem to work properly when removing multiple volumes, just like remove api. >>> > >>> > If these test results are my fault, please let me know the correct >>> test method. >>> > >>> >>> -- >>> Arne Wiebalck >>> CERN IT >>> >>> >> -- >> Arne Wiebalck >> CERN IT >> >> > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Tue Feb 12 00:38:42 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 11 Feb 2019 16:38:42 -0800 Subject: [loci] Loci builds functionally broken Message-ID: It appears the lastest release of virtualenv has broken Loci builds. I believe the root cause is an update in how symlinks are handled. Before the release, the python libraries installed in the: /var/lib/openstack/lib64/python2.7/lib-dynload directory (this is on CentOS, Ubuntu and Suse vary) were direct instances of the library. For example: -rwxr-xr-x. 1 root root 62096 Oct 30 23:46 itertoolsmodule.so Now, the build points to a long-destroyed symlink that is an artifact of the requirements build process. For example: lrwxrwxrwx. 1 root root 56 Feb 11 23:01 itertoolsmodule.so -> /tmp/venv/lib64/python2.7/lib-dynload/itertoolsmodule.so We will investigate how to make the build more robust, repair this, and will report back soon. Until then, you should expect any fresh builds to not be functional, despite the apparent success in building the container. Thanks, Chris [1] https://virtualenv.pypa.io/en/stable/changes/#release-history From chris at openstack.org Tue Feb 12 01:12:14 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 11 Feb 2019 17:12:14 -0800 Subject: [loci] Loci builds functionally broken In-Reply-To: References: Message-ID: <378149AB-54F2-45E7-B196-31F0505F6E0A@openstack.org> A patch for a temporary fix is up for review. https://review.openstack.org/#/c/636252/ We’ll be looking into a more permanent fix in the coming days. > On Feb 11, 2019, at 4:38 PM, Chris Hoge wrote: > > It appears the lastest release of virtualenv has broken Loci builds. I > believe the root cause is an update in how symlinks are handled. Before > the release, the python libraries installed in the: > > /var/lib/openstack/lib64/python2.7/lib-dynload > > directory (this is on CentOS, Ubuntu and Suse vary) were direct instances > of the library. For example: > > -rwxr-xr-x. 1 root root 62096 Oct 30 23:46 itertoolsmodule.so > > Now, the build points to a long-destroyed symlink that is an artifact of > the requirements build process. For example: > > lrwxrwxrwx. 
1 root root 56 Feb 11 23:01 itertoolsmodule.so -> /tmp/venv/lib64/python2.7/lib-dynload/itertoolsmodule.so > > We will investigate how to make the build more robust, repair this, and > will report back soon. Until then, you should expect any fresh builds to > not be functional, despite the apparent success in building the container. > > Thanks, > Chris > > [1] https://virtualenv.pypa.io/en/stable/changes/#release-history > > From sam47priya at gmail.com Mon Feb 11 17:07:41 2019 From: sam47priya at gmail.com (Sam P) Date: Tue, 12 Feb 2019 02:07:41 +0900 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Erik, Thanks!. I will contact Ashlee. --- Regards, Sampath On Sat, Feb 9, 2019 at 2:30 AM Erik McCormick wrote: > > Hi Sam, > > On Thu, Feb 7, 2019 at 9:07 PM Sam P wrote: > > > > Hi Chris, > > > > I need an invitation letter to get my German visa. Please let me know > > who to contact. > > > You can contact Ashlee at the foundation and she will be able to > assist you. Her email is ashlee at openstack.org. See you in Berlin! > > --- Regards, > > Sampath > > > > > > On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > > > > > See you there! > > > > > > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > > >> > > >> I'm all signed up. See you in Berlin! > > >> > > >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan > >>> > > >>> Dear All, > > >>> The Evenbrite for the next ops meetup is now open, see > > >>> > > >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > > >>> > > >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. > > >>> > > >>> Chris > > >>> on behalf of the ops meetups team > > >>> > > >>> -- > > >>> Chris Morgan > > > > > > > > > > > > -- > > > Chris Morgan From liliueecg at gmail.com Tue Feb 12 03:23:42 2019 From: liliueecg at gmail.com (Li Liu) Date: Mon, 11 Feb 2019 22:23:42 -0500 Subject: [Cyborg][IRC] The Cyborg IRC meeting will be held Wednesday at 0300 UTC Message-ID: Happy Chinese New Year! The IRC meeting will be resumed Wednesday at 0300 UTC, which is 10:00 pm est(Tuesday) / 7:00 pm pst(Tuesday) /11 am Beijing time (Wednesday) -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Feb 12 05:15:00 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Feb 2019 10:45:00 +0530 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Mon, Feb 11, 2019 at 9:23 PM NANTHINI A A wrote: > Hi , > I have tried the below .But getting error .Please let me know how I can > proceed further . > > root at cic-1:~# cat try1.yaml > heat_template_version: 2013-05-23 > description: > This is the template for I&V R6.1 base configuration to create neutron > resources other than sg and vm for vyos vms > parameters: > resource_name_map: > - network1: NetworkA1 > network2: NetworkA2 > - network1: NetworkB1 > network2: NetworkB2 > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > I don't think you can use %index% directly in this template. You have to pass it as resource property from tryreapet.yaml. Please check the example[1] in heat-templates repo (resource_group_index_lookup.yaml and random.yaml). 
[1] https://github.com/openstack/heat-templates/blob/master/hot/resource_group/resource_group_index_lookup.yaml > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > root at cic-1:~# cat tryrepeat.yaml > > heat_template_version: 2013-05-23 > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 2 > resource_def: > type: try1.yaml > root at cic-1:~# > > root at cic-1:~# heat stack-create tests -f tryrepeat.yaml > WARNING (shell) "heat stack-create" is deprecated, please use "openstack > stack create" instead > ERROR: resources.rg: : Error parsing template > file:///root/try1.yaml while scanning for the next token > found character '%' that cannot start any token > in "", line 15, column 45: > ... {get_param: [resource_name_map, %index%, network1]} > > > > Thanks in advance . > > > Thanks, > A.Nanthini > -----Original Message----- > From: Harald Jensås [mailto:hjensas at redhat.com] > Sent: Monday, February 11, 2019 7:47 PM > To: NANTHINI A A ; > openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat api > > On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > > Hi , > > We are developing heat templates for our vnf deployment .It > > includes multiple resources .We want to repeat the resource and hence > > used the api RESOURCE GROUP . > > Attached are the templates which we used > > > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > > the resource group api with count . > > > > We want to access the variables of resource in set1.yaml while > > repeating it with count .Eg . port name ,port fixed ip address we want > > to change in each set . > > Please let us know how we can have a variable with each repeated > > resource . > > > > Sounds like you want to use the index_var variable[1] to prefix/suffix > reource names? > > I.e in set1.yaml you can use: > > name: > list_join: > - '_' > - {get_param: 'OS::stack_name'} > - %index% > - > > > The example should resulting in something like: > stack_0_Network3, stack_0_Subnet3 > stack_1_Network0, stack_1_Subnet0 > [ ... ] > > > If you want to be more advanced you could use a list parameter in the > set1.yaml template, and have each list entry contain a dictionaly of each > resource name. The %index% variable would then be used to pick the correct > entry from the list. > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 - > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > > > %index% is the "count" picking the 'foo' entries when %index% is 0, and > 'bar' entries when %index% is 1 and so on. > > > > > > [1] > > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Feb 12 05:43:18 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 12 Feb 2019 14:43:18 +0900 Subject: [dev] [neutron] bug deputy report of the week of Feb 4 Message-ID: Hi neutrinos, I was a bug deputy last week. The last week was relatively quiet. 
The following * Needs investigation * https://bugs.launchpad.net/neutron/+bug/1815463 (New) [dev] Agent RPC version does not auto upgrade if neutron-server restart first * ovsdbapp.exceptions.TimeoutException in functional tests (gate failure) https://bugs.launchpad.net/bugs/1815142 * In Progress * https://bugs.launchpad.net/bugs/1815345 (Medium, In Progress) neutron doesnt delete port binding level when deleting an inactive port binding * Incomplete * https://bugs.launchpad.net/bugs/1815424 (Incomplete) Port gets port security disabled if using --no-security-groups I cannot reproduce it. Requesting the author more information. * FYI * https://bugs.launchpad.net/bugs/1815433 Code crash with invalid connection limit of listener neutron-lbaas bug needs to be filed to storyboard. I requested it to the bug author and he/she filed it. [1] https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas Best Regards, Akihiro Motoki (irc: amotoki) -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Feb 12 05:46:59 2019 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 12 Feb 2019 16:46:59 +1100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues In-Reply-To: <20190211101229.j5aqii2os5z2p2cw@localhost> References: <20190207063940.GA1754@fedora19.localdomain> <20190211101229.j5aqii2os5z2p2cw@localhost> Message-ID: <20190212054659.GA14416@fedora19.localdomain> On Mon, Feb 11, 2019 at 11:12:29AM +0100, Gorka Eguileor wrote: > It is werid that there are things missing from the logs: > > In method _get_connection_devices we have: > > LOG.debug('Getting connected devices for (ips,iqns,luns)=%s', 1 > ips_iqns_luns) > nodes = self._get_iscsi_nodes() > > And we can see the message in the logs [2], but then we don't see the > call to iscsiadm that happens as the first instruction in > _get_iscsi_nodes: > > out, err = self._execute('iscsiadm', '-m', 'node', run_as_root=True, > root_helper=self._root_helper, > check_exit_code=False) > > And we only see the error coming from parsing the output of that command > that is not logged. Yes, I wonder if this is related to a rootwrap stdout/stderr capturing or something? > I believe Matthew is right in his assessment, the problem is the output > from "iscsiadm -m node", there is a missing space between the first 2 > columns in the output [4]. > > This looks like an issue in Open iSCSI, not in OS-Brick, Cinder, or > Nova. > > And checking their code, it looks like this is the patch that fixes it > [5], so it needs to be added to F29 iscsi-initiator-utils package. Thank you! This excellent detective work has solved the problem. I did a copr build with that patch [1] and got a good tempest run [2]. Amazing how much trouble a " " can cause. I have filed an upstream bug on the package https://bugzilla.redhat.com/show_bug.cgi?id=1676365 Anyway, it has led to a series of patches you may be interested in, which I think would help future debugging efforts https://review.openstack.org/636078 : fix for quoting of devstack args (important for follow-ons) https://review.openstack.org/636079 : export all journal logs. Things like iscsid were logging to the journal, but we weren't capturing them. Includes instructions on how to use the exported journal [3] https://review.openstack.org/636080 : add a tcpdump service. With this you can easily packet capture during a devstack run. e.g. 
https://review.openstack.org/636082 captures all iscsi traffic and stores it [4] https://review.openstack.org/636081 : iscsid debug option, which uses a systemd override to turn up debug logging. Reviews welcome :) Thanks, -i [1] https://github.com/open-iscsi/open-iscsi/commit/baa0cb45cfcf10a81283c191b0b236cd1a2f66ee.patch [2] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/ [3] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/controller/logs/devstack.journal.README.txt [4] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/controller/logs/tcpdump.pcap.gz From amotoki at gmail.com Tue Feb 12 06:11:57 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 12 Feb 2019 15:11:57 +0900 Subject: [dev] [neutron] bug deputy report of the week of Feb 4 In-Reply-To: References: Message-ID: I forgot to add one bug which needs help from FWaaS team. The updated list is as follows. 2019年2月12日(火) 14:43 Akihiro Motoki : > Hi neutrinos, > > I was a bug deputy last week. > The last week was relatively quiet. The following > > > * Needs investigation > * https://bugs.launchpad.net/neutron/+bug/1815463 (New) > [dev] Agent RPC version does not auto upgrade if neutron-server > restart first > * ovsdbapp.exceptions.TimeoutException in functional tests (gate failure) > https://bugs.launchpad.net/bugs/1815142 > * Needs help from FWaaS team * https://bugs.launchpad.net/neutron/+bug/1814507 Deleting the default firewall group not deleting the associated firewall rules to the policy We need an input on the basic design policy from FWaaS team. > > * In Progress > * https://bugs.launchpad.net/bugs/1815345 (Medium, In Progress) > neutron doesnt delete port binding level when deleting an inactive > port binding > > * Incomplete > * https://bugs.launchpad.net/bugs/1815424 (Incomplete) > Port gets port security disabled if using --no-security-groups > I cannot reproduce it. Requesting the author more information. > > * FYI > * https://bugs.launchpad.net/bugs/1815433 > Code crash with invalid connection limit of listener > neutron-lbaas bug needs to be filed to storyboard. I requested it to > the bug author and he/she filed it. > [1] > https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas > > Best Regards, > Akihiro Motoki (irc: amotoki) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Feb 12 06:55:13 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 12 Feb 2019 15:55:13 +0900 Subject: [Searchlight] TC vision reflection Message-ID: Hi team, Follow by the call of the TC [1] for each project to self-evaluate against the OpenStack Cloud Vision [2], the Searchlight team would like to produce a short bullet point style document comparing itself with the vision. The purpose is to find the gaps between Searchlight and the TC vision and it is a good practice to align our work with the rest. I created a new pad [3] and welcome all of your opinions. Then, after about 3 weeks, I will submit a patch set to add the vision reflection document to our doc source. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] https://governance.openstack.org/tc/reference/technical-vision.html [3] https://etherpad.openstack.org/p/-tc-vision-self-eval Ping me on the channel #openstack-searchlight Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arne.wiebalck at cern.ch Tue Feb 12 06:55:57 2019 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 12 Feb 2019 07:55:57 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Message-ID: <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> Jae, One other setting that caused trouble when bulk deleting cinder volumes was the DB connection string: we did not configure a driver and hence used the Python mysql wrapper instead … essentially changing connection = mysql://cinder:@:/cinder to connection = mysql+pymysql://cinder:@:/cinder solved the parallel deletion issue for us. All details in the last paragraph of [1]. HTH! Arne [1] https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > Hello, > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, > but this time I did not get a response after removing 41 volumes. This environment variable did not fix > the cinder-volume stopping. > > Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. > Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. > > This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, > and after the restart of the service, 199 volumes were deleted and one volume was manually erased. > > If you have a different approach to solving this problem, please let me know. > > Thanks. > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > Jae, > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: >> >> Arne, >> >> I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports >> like "Deleted from trash for backend ''" >> >> The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large >> number of volumes. > > Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” > many more volumes before getting stuck. > > The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, > volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect > c-vol, though. > >> I will try to adjust the number of thread pools by adjusting the environment variables with your advices >> >> Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? > > Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very > long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually > time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all > threads is simply lower (Gorka may correct me here :-). > > Cheers, > Arne > >> >> >> Thanks. >> >> >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: >> Jae, >> >> To make sure deferred deletion is properly working: when you delete individual large volumes >> with data in them, do you see that >> - the volume is fully “deleted" within a few seconds, ie. 
not staying in ‘deleting’ for a long time? >> - that the volume shows up in trash (with “rbd trash ls”)? >> - the periodic task reports it is deleting volumes from the trash? >> >> Another option to look at is “backend_native_threads_pool_size": this will increase the number >> of threads to work on deleting volumes. It is independent from deferred deletion, but can also >> help with situations where Cinder has more work to do than it can cope with at the moment. >> >> Cheers, >> Arne >> >> >> >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: >>> >>> Yes, I added your code to pike release manually. >>> >>> >>> >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: >>> Hi Jae, >>> >>> You back ported the deferred deletion patch to Pike? >>> >>> Cheers, >>> Arne >>> >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >>> > >>> > Hello, >>> > >>> > I recently ran a volume deletion test with deferred deletion enabled on the pike release. >>> > >>> > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. >>> > >>> > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. >>> > >>> > If these test results are my fault, please let me know the correct test method. >>> > >>> >>> -- >>> Arne Wiebalck >>> CERN IT >>> >> >> -- >> Arne Wiebalck >> CERN IT >> > > -- > Arne Wiebalck > CERN IT > From gmann at ghanshyammann.com Tue Feb 12 08:21:09 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:21:09 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> ---- On Tue, 12 Feb 2019 02:14:56 +0900 Kendall Nelson wrote ---- > > > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez wrote: > Doug Hellmann wrote: > > Kendall Nelson writes: > >> [...] > >> So I think that the First Contact SIG project liaison list kind of fits > >> this. Its already maintained in a wiki and its already a list of people > >> willing to be contacted for helping people get started. It probably just > >> needs more attention and refreshing. When it was first set up we (the FC > >> SIG) kind of went around begging for volunteers and then once we maxxed out > >> on them, we said those projects without volunteers will have the role > >> defaulted to the PTL unless they delegate (similar to how other liaison > >> roles work). > >> > >> Long story short, I think we have the sort of mentoring things covered. And > >> to back up an earlier email, project specific onboarding would be a good > >> help too. > > > > OK, that does sound pretty similar. I guess the piece that's missing is > > a description of the sort of help the team is interested in receiving. > > I guess the key difference is that the first contact list is more a > function of the team (who to contact for first contributions in this > team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring > to cover specific needs in a team. 
> > It's probably pretty close (and the same people would likely be > involved), but I think an approach where specific people offer a > significant amount of their time to one mentee interested in joining a > team is a bit different. I don't think every team would have volunteers > to do that. I would not expect a mentor volunteer to care for several > mentees. In the end I think we would end up with a much shorter list > than the FC list. > > I think our original ask for people volunteering (before we completed the list with PTLs as stand ins) was for people willing to help get started in a project and look after their first few patches. So I think that was kinda the mentoring role originally but then it evolved? Maybe Matt Oliver or Ghanshyam remember better than I do? Yeah, that's right. > Maybe the two efforts can converge into one, or they can be kept as two > different things but coordinated by the same team ? > > > I think we could go either way, but that they both would live with the FC SIG. Seems like the most logical place to me. I lean towards two lists, one being a list of volunteer mentors for projects that are actively looking for new contributors (the shorter list) and the other being a list of people just willing to keep an eye out for the welcome new contributor patches and being the entry point for people asking about getting started that don't know anyone in the project yet (kind of what our current view is, I think). -- IMO, very first thing to make help-wanted list a success is, it has to be uptodate per development cycle, mentor-mapping(or with example workflow etc). By Keeping the help-wanted list in any place other than the project team again leads to existing problem for example it will be hard to prioritize, maintain and easy to get obsolete/outdated. FC SIG, D&I WG are great place to market/redirect the contributors to the list. The model I was thinking is: 1. Project team maintain the help-wanted-list per current development cycle. Entry criteria in that list is some volunteer mentor(exmaple workflow/patch) which are technically closer to that topic. 2. During PTG/developer meetup, PTL checks if planned/discussed topic needs to be in help-wanted list and who will serve as the mentor. 3. The list has to be updated in every developement cycle. It can be empty if any project team does not need help during that cycle or few items can be carry-forward if those are still a priority and have mentor mapping. 4. FC SIG, D&I WG, Mentoring team use that list and publish in all possible place. Redirect new contributors to that list depends on the contributor interested area. This will be the key role to make help-wanted-list success. 
-gmann > Thierry Carrez (ttx) > > -Kendall (diablo_rojo) From gmann at ghanshyammann.com Tue Feb 12 08:27:11 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:27:11 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> Message-ID: <168e0d12b9b.ec123b9093670.4194575768064979236@ghanshyammann.com> ---- On Fri, 08 Feb 2019 01:07:33 +0900 Doug Hellmann wrote ---- > Ghanshyam Mann writes: > > > ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > > > Thierry Carrez writes: > > > > > > > Doug Hellmann wrote: > > > >> [...] > > > >> During the Train series goal discussion in Berlin we talked about having > > > >> a goal of ensuring that each team had documentation for bringing new > > > >> contributors onto the team. Offering specific mentoring resources seems > > > >> to fit nicely with that goal, and doing it in each team's repository in > > > >> a consistent way would let us build a central page on docs.openstack.org > > > >> to link to all of the team contributor docs, like we link to the user > > > >> and installation documentation, without requiring us to find a separate > > > >> group of people to manage the information across the entire community. > > > > > > > > I'm a bit skeptical of that approach. > > > > > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > > > limited number of "I'll spend significant time helping you if you help > > > > us" offers. I don't envision potential contributors to browse dozens of > > > > project-specific "on-boarding doc" to find them. I would rather > > > > consolidate those offers on a single page. > > > > > > > > So.. either some magic consolidation job that takes input from all of > > > > those project-specific repos to build a nice rendered list... Or just a > > > > wiki page ? > > > > > > > > -- > > > > Thierry Carrez (ttx) > > > > > > > > > > A wiki page would be nicely lightweight, so that approach makes some > > > sense. Maybe if the only maintenance is to review the page periodically, > > > we can convince one of the existing mentorship groups or the first > > > contact SIG to do that. > > > > Same can be achieved If we have a single link on doc.openstack.org or contributor guide with > > top section "Help-wanted" with subsection of each project specific help-wanted. project help > > wanted subsection can be build from help wanted section from project contributor doc. > > > > That way it is easy for the project team to maintain their help wanted list. Wiki page can > > have the challenge of prioritizing and maintain the list. > > > > -gmann > > > > > > > > -- > > > Doug > > Another benefit of using the wiki is that SIGs and pop-up teams can add > their own items. We don't have a good way for those groups to be > integrated with docs.openstack.org right now. Nice point about SIG. pop-up teams are more of volunteer only which might have less chance to make an entry in this list. My main concern with wiki is, it easily ( and maybe most of them ) gets obsolete. Especially in this case where technical ownership is distributed. 
-gmann > > -- > Doug > > From gmann at ghanshyammann.com Tue Feb 12 08:41:03 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:41:03 +0900 Subject: [tc] cdent non-nomination for TC In-Reply-To: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Message-ID: <168e0dde0ad.f104a07594256.7469283881027772697@ghanshyammann.com> ---- On Mon, 11 Feb 2019 18:00:36 +0900 Thierry Carrez wrote ---- > Jeremy Stanley wrote: > > On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: > > [...] > >> I do not intend to run. I've done two years and that's enough. When > >> I was first elected I had no intention of doing any more than one > >> year but at the end of the first term I had not accomplished much of > >> what I hoped, so stayed on. Now, at the end of the second term I > >> still haven't accomplished much of what I hoped > > [...] > > > > You may not have accomplished what you set out to, but you certainly > > have made a difference. You've nudged lines of discussion into > > useful directions they might not otherwise have gone, provided a > > frequent reminder of the representative nature of our governance, > > and produced broadly useful summaries of our long-running > > conversations. I really appreciate what you brought to the TC, and > > am glad you'll still be around to hold the rest of us (and those who > > succeed you/us) accountable. Thanks! > > Jeremy said it better than I could have ! While I really appreciated the > perspective you brought to the TC, I understand the need to focus to > have the most impact. > > It's also a good reminder that the role that the TC fills can be shared > beyond the elected membership -- so if you care about a specific aspect > of governance, OpenStack-wide technical leadership or community health, > I encourage you to participate in the TC activities, whether you are > elected or not. > Thanks Chris for serving your great effort in TC and making the difference. You have been doing a lot of things during your TC terms with the actual outcome and setting an example. -gmann > -- > Thierry Carrez (ttx) > > From gmann at ghanshyammann.com Tue Feb 12 08:44:52 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:44:52 +0900 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> ---- On Fri, 08 Feb 2019 23:00:51 +0900 Sean McGinnis wrote ---- > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. 
> Thanks Sean for your serving as TC with one of the most humble and helpful person. -gmann > Sean > > From geguileo at redhat.com Tue Feb 12 09:24:30 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 12 Feb 2019 10:24:30 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> Message-ID: <20190212092430.34q6zlr47jj6uq4c@localhost> On 12/02, Arne Wiebalck wrote: > Jae, > > One other setting that caused trouble when bulk deleting cinder volumes was the > DB connection string: we did not configure a driver and hence used the Python > mysql wrapper instead … essentially changing > > connection = mysql://cinder:@:/cinder > > to > > connection = mysql+pymysql://cinder:@:/cinder > > solved the parallel deletion issue for us. > > All details in the last paragraph of [1]. > > HTH! > Arne > > [1] https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > Good point, using a C mysql connection library will induce thread starvation. This was thoroughly discussed, and the default changed, like 2 years ago... So I assumed we all changed that. Something else that could be problematic when receiving many concurrent requests on any Cinder service is the number of concurrent DB connections, although we also changed this a while back to 50. This is set as sql_max_retries or max_retries (depending on the version) in the "[database]" section. Cheers, Gorka. > > > > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > > > Hello, > > > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, > > but this time I did not get a response after removing 41 volumes. This environment variable did not fix > > the cinder-volume stopping. > > > > Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. > > Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. > > > > This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, > > and after the restart of the service, 199 volumes were deleted and one volume was manually erased. > > > > If you have a different approach to solving this problem, please let me know. > > > > Thanks. > > > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > > Jae, > > > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > >> > >> Arne, > >> > >> I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports > >> like "Deleted from trash for backend ''" > >> > >> The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large > >> number of volumes. > > > > Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” > > many more volumes before getting stuck. > > > > The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, > > volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect > > c-vol, though. 
> > > >> I will try to adjust the number of thread pools by adjusting the environment variables with your advices > >> > >> Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? > > > > Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very > > long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually > > time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all > > threads is simply lower (Gorka may correct me here :-). > > > > Cheers, > > Arne > > > >> > >> > >> Thanks. > >> > >> > >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > >> Jae, > >> > >> To make sure deferred deletion is properly working: when you delete individual large volumes > >> with data in them, do you see that > >> - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? > >> - that the volume shows up in trash (with “rbd trash ls”)? > >> - the periodic task reports it is deleting volumes from the trash? > >> > >> Another option to look at is “backend_native_threads_pool_size": this will increase the number > >> of threads to work on deleting volumes. It is independent from deferred deletion, but can also > >> help with situations where Cinder has more work to do than it can cope with at the moment. > >> > >> Cheers, > >> Arne > >> > >> > >> > >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > >>> > >>> Yes, I added your code to pike release manually. > >>> > >>> > >>> > >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > >>> Hi Jae, > >>> > >>> You back ported the deferred deletion patch to Pike? > >>> > >>> Cheers, > >>> Arne > >>> > >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > >>> > > >>> > Hello, > >>> > > >>> > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > >>> > > >>> > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > >>> > > >>> > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > >>> > > >>> > If these test results are my fault, please let me know the correct test method. > >>> > > >>> > >>> -- > >>> Arne Wiebalck > >>> CERN IT > >>> > >> > >> -- > >> Arne Wiebalck > >> CERN IT > >> > > > > -- > > Arne Wiebalck > > CERN IT > > > From bence.romsics at gmail.com Tue Feb 12 10:09:25 2019 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 12 Feb 2019 11:09:25 +0100 Subject: [Neutron] Multi segment networks In-Reply-To: References: Message-ID: Hi Ricardo, On Thu, Feb 7, 2019 at 6:45 PM Ricardo Noriega De Soto wrote: > Does it mean, that placing two VMs (with regular virtio interfaces), one in the vxlan segment and one on the vlan segment, would be able to ping each other without the need of a router? > Or would it require an external router that belongs to the owner of the infrastructure? To my limited understanding of multi-segment networks I think neutron generally does not take care of packet forwarding between the segments. So I expect your example net-create command to create a network with two disconnected segments. 
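To make that concrete, here is a rough sketch of the routed-networks style of wiring (all physnet names, VLAN IDs and object names below are invented), where each segment is created explicitly, gets its own subnet, and traffic between segments is routed rather than bridged:

    # names/IDs are assumed, sketch only
    $ openstack network create --share --provider-network-type vlan \
          --provider-physical-network physnet1 --provider-segment 2016 multisegment1
    $ openstack network segment create --network multisegment1 \
          --network-type vlan --physical-network physnet2 --segment 2017 segment2
    $ openstack subnet create --network multisegment1 --network-segment segment2 \
          --subnet-range 203.0.113.0/24 subnet2

So without something doing L3 for you (or an operator bridging things outside neutron's control), a VM on one segment will not reach a VM on the other.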
IIRC the first time when multi-segment networks were allowed in the API, there was no implementation of connecting the segments at all automatically. The API was merged to allow later features like the routed-networks feature of neutron [1][2]. Or to allow connecting segments administratively outside of neutron control. I'm not sure if it is well defined how the segments should be connected - on l2 or l3. I think people originally thought of mostly bridging the segments together. Then the routed networks feature went to connect them by routers. I guess it depends on your use case. Hope this helps, Bence Romsics (rubasov) [1] https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html [2] https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html From chkumar246 at gmail.com Tue Feb 12 10:26:50 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 12 Feb 2019 15:56:50 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update X - Feb 12, 2019 Message-ID: Hello, Here is the 10th update (Feb 06 to Feb 12, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: This week we basically worked on clearing up/merging the existing patches like: * For debugging networking issue for os_tempest, we have router ping * Added telemetry tempest plugin support * The os_tempest overview page got rewrite: https://docs.openstack.org/openstack-ansible-os_tempest/latest/overview.html * Added use of user/password for secure image download And from myside, not to much work as busy with ruck/rover on TripleO Side. Things got merged OS_TEMPEST: * Update all plugin urls to use https rather than git - https://review.openstack.org/633752 * venv: use inventory_hostname instead of ansible_hostname - https://review.openstack.org/635187 * Add telemetry distro plugin install for aodh - https://review.openstack.org/632125 * Add user and password for secure image download (optional) - https://review.openstack.org/625266 * Ping router once it is created - https://review.openstack.org/633883 * Improve overview subpage - https://review.openstack.org/633934 python-venv_build: * Add tripleo-ci-centos-7-standalone-os-tempest job - https://review.openstack.org/634377 In Progress work: OS_TEMPEST * Use the correct heat tests - https://review.openstack.org/#/c/630695/ * Add option to disable router ping - https://review.openstack.org/636211 * Add tempest_service_available_mistral with distro packages - https://review.openstack.org/635180 * Added tempest.conf for heat_plugin - https://review.openstack.org/632021 TripleO: * Reuse the validate-tempest skip list in os_tempest - https://review.openstack.org/634380 Goal of this week: * Unblock os_heat gate due to mpi4py dependency and other issue * Complete skip list reuse on tripleo side Thanks to jrosser, mnaser, odyssey4me, guilhermesp on router_ping, os_heat help, arxcruz on reuse skip list & mkopec for improving doc. Here is the 9th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002382.html Thanks, Chandan Kumar From moreira.belmiro.email.lists at gmail.com Tue Feb 12 10:31:39 2019 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Tue, 12 Feb 2019 11:31:39 +0100 Subject: [nova] Can we drop the cells v1 docs now? 
In-Reply-To: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> References: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> Message-ID: +1 to remove cellsV1 docs. This architecture should not be considered in new Nova deployments. As Matt described we use cellsV2 since Queens but we are still using nova-network in a significant part of the infrastructure. I was always assuming that cellsV1/nova-network code would be removed in Stein. I continue to support this plan! We will not maintain an internal fork but migrate everything to Neutron. Belmiro CERN On Mon, Feb 11, 2019 at 3:44 PM Matt Riedemann wrote: > I have kind of lost where we are on dropping cells v1 code at this > point, but it's probably too late in Stein. And technically nova-network > won't start unless cells v1 is configured, and we've left the > nova-network code in place while CERN is migrating their deployment to > neutron*. CERN is running cells v2 since Queens and I think they have > just removed this [1] to still run nova-network without cells v1. > > There has been no work in Stein to remove nova-network [2] even though > we still have a few API related things we can work on removing [3] but > that is very low priority. To be clear, CERN only cares about the > nova-network service, not the APIs which is why we started removing > those in Rocky. > > As for cells v1, if we're not going to drop it in Stein, can we at least > make incremental progress and drop the cells v1 related docs to further > signal the eventual demise and to avoid confusion in the docs about what > cells is (v1 vs v2) for newcomers? People can still get the cells v1 > in-tree docs on the stable branches (which are being published [4]). > > [1] > https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 > [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein > [3] https://etherpad.openstack.org/p/nova-network-removal-rocky > [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 > > *I think they said there are parts of their deployment that will > probably never move off of nova-network, and they will just maintain a > fork for that part of the deployment. > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Feb 12 13:04:02 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Feb 2019 18:34:02 +0530 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A wrote: > Hi , > > May I know in the following example given > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 > > what is the parameter type ? > > json -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Feb 12 14:35:10 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 12 Feb 2019 09:35:10 -0500 Subject: [ops] last weeks ops meetups team minutes Message-ID: Meeting ended Tue Feb 5 15:31:14 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . 
(v 0.1.4) 10:31 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.html 10:31 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.txt 10:31 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.log.html Next meeting is in 25 minutes on #openstack-operators Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Feb 12 15:06:40 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 12 Feb 2019 09:06:40 -0600 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: Chris, Thank you so much for all you have done as a member of the TC! Amy (spotz) On Fri, Feb 8, 2019 at 6:41 AM Chris Dent wrote: > > Next week sees the start of election season for the TC [1]. People > often worry that incumbents always get re-elected so it is > considered good form to announce if you are an incumbent and do > not intend to run. > > I do not intend to run. I've done two years and that's enough. When > I was first elected I had no intention of doing any more than one > year but at the end of the first term I had not accomplished much of > what I hoped, so stayed on. Now, at the end of the second term I > still haven't accomplished much of what I hoped, so I think it is > time to focus my energy in the places where I've been able to get > some traction and give someone else—someone with a different > approach—a chance. > > If you're interested in being on the TC, I encourage you to run. If > you have questions about it, please feel free to ask me, but also > ask others so you get plenty of opinions. And do your due diligence: > Make sure you're clear with yourself about what the TC has been, > is now, what you would like it to be, and what it can be. > > Elections are fairly far in advance of the end of term this time > around. I'll continue in my TC responsibilities until the end of > term, which is some time in April. I'm not leaving the community or > anything like that, I'm simply narrowing my focus. Over the past > several months I've been stripping things back so I can be sure that > I'm not ineffectively over-committing myself to OpenStack but am > instead focusing where I can be most useful and make the most > progress. Stepping away from the TC is just one more part of that. > > Thanks very much for the experiences and for the past votes. > > [1] https://governance.openstack.org/election/ > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Feb 12 15:10:46 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 12 Feb 2019 09:10:46 -0600 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> References: <20190208140051.GB8848@sm-workstation> <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> Message-ID: Sean, Thanks for all your hard work on the TC and hope to see you in Denver. Amy (spotz) ---- On Fri, 08 Feb 2019 23:00:51 +0900 Sean McGinnis > wrote ---- > > As Chris said, it is probably good for incumbents to make it known if > they are > > not running. > > > > This is my second term on the TC. It's been great being part of this > group and > > trying to contribute whatever I can. 
But I do feel it is important to > make room > > for new folks to regularly join and help shape things. So with that in > mind, > > along with the need to focus on some other areas for a bit, I do not > plan to > > run in the upcoming TC election. > > > > I would highly encourage anyone interested to run for the TC. If you > have any > > questions about it, feel free to ping me for any > thoughts/advice/feedback. > > > > Thanks for the last two years. I think I've learned a lot since joining > the TC, > > and hopefully I have been able to contribute some positive things over > the > > years. I will still be around, so hopefully I will see folks in Denver. > > > > Sean > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwinkle at salesforce.com Tue Feb 12 15:37:19 2019 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Tue, 12 Feb 2019 09:37:19 -0600 Subject: [PTLs] Got unfinished business from Berlin? Message-ID: Greetings, PTLs, cores or anyone on point for a key feature, In an effort to make the feedback loop even stronger between the dev and ops community, the UC is actively looking for any unfinished etherpads or topics from the Berlin summit that need more Ops input. We'd like to get them proposed as potential topics at the upcoming Ops Meetup (strangely enough back in Berlin) [1] If you have something your dev team needs input on, please propose it here: [2] so it can get voted on by the attendees and organizers. There is a section titled "Session Ideas" that you can list the topic in. Feel free to link an etherpad if one exists. The UC will continue to push to tie discussions at forums/PTGs to those at the Ops meetups and OpenStack Days - and vice versa. Thanks in advance! VW [1] https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 [2] https://etherpad.openstack.org/p/BER-ops-meetup -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Tue Feb 12 05:44:22 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Tue, 12 Feb 2019 05:44:22 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , May I know in the following example given parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 what is the parameter type ? Thanks, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Tuesday, February 12, 2019 10:45 AM To: NANTHINI A A Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Mon, Feb 11, 2019 at 9:23 PM NANTHINI A A > wrote: Hi , I have tried the below .But getting error .Please let me know how I can proceed further . root at cic-1:~# cat try1.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: resource_name_map: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} I don't think you can use %index% directly in this template. 
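For illustration, a minimal sketch of the pattern that does work: the group resolves %index% inside its resource_def and passes it down as an ordinary property, and the nested template does the list lookup itself. The file and parameter names below (group.yaml / member.yaml) are hypothetical, not your templates.

# group.yaml (hypothetical outer template)
heat_template_version: 2015-04-30
parameters:
  net_names:
    type: json
resources:
  rg:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: member.yaml
        properties:
          # %index% is only substituted here, inside resource_def
          index: "%index%"
          net_names: {get_param: net_names}

# member.yaml (hypothetical nested template)
heat_template_version: 2015-04-30
parameters:
  index:
    type: number
  net_names:
    type: json
resources:
  net:
    type: OS::Neutron::Net
    properties:
      # look up this member's entry with the index passed in by the group
      name: {get_param: [net_names, {get_param: index}, network1]}

Note that the property keys set under resource_def (index and net_names here) have to match the parameters declared in the nested template exactly, otherwise stack creation fails with an "Unknown Property" error.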
You have to pass it as resource property from tryreapet.yaml. Please check the example[1] in heat-templates repo (resource_group_index_lookup.yaml and random.yaml). [1] https://github.com/openstack/heat-templates/blob/master/hot/resource_group/resource_group_index_lookup.yaml neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} root at cic-1:~# cat tryrepeat.yaml heat_template_version: 2013-05-23 resources: rg: type: OS::Heat::ResourceGroup properties: count: 2 resource_def: type: try1.yaml root at cic-1:~# root at cic-1:~# heat stack-create tests -f tryrepeat.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token found character '%' that cannot start any token in "", line 15, column 45: ... {get_param: [resource_name_map, %index%, network1]} Thanks in advance . Thanks, A.Nanthini -----Original Message----- From: Harald Jensås [mailto:hjensas at redhat.com] Sent: Monday, February 11, 2019 7:47 PM To: NANTHINI A A >; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > the resource group api with count . > > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we want > to change in each set . > Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nanthini.a.a at ericsson.com Tue Feb 12 14:18:12 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Tue, 12 Feb 2019 14:18:12 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , I followed the example given in random.yaml .But getting below error .Can you please tell me what is wrong here . root at cic-1:~# heat stack-create test -f main.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: Property error: : resources.rg.resources[0].properties: : Unknown Property names root at cic-1:~# cat main.yaml heat_template_version: 2015-04-30 description: Shows how to look up list/map values by group index parameters: net_names: type: json default: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: nested.yaml properties: # Note you have to pass the index and the entire list into the # nested template, resolving via %index% doesn't work directly # in the get_param here index: "%index%" names: {get_param: net_names} outputs: all_values: value: {get_attr: [rg, value]} root at cic-1:~# cat nested.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: net_names: type: json index: type: number resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [names, {get_param: index}, network1]} Thanks, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Tuesday, February 12, 2019 6:34 PM To: NANTHINI A A Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A > wrote: Hi , May I know in the following example given parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 what is the parameter type ? json -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Tue Feb 12 16:09:53 2019 From: elfosardo at gmail.com (elfosardo) Date: Tue, 12 Feb 2019 17:09:53 +0100 Subject: [ironic] should console be renamed to seriale_console ? Message-ID: Greetings Openstackers! Currently ironic supports only one type of console: serial The current implementation also gives as assumed the support for just one type of console, but not that long ago a spec to also support a graphical console type [1] has been accepted and we're now close to see a first patch with basic support merged [2]. With the introduction of the support for the graphical console, the need to define a new parameter called "console_type" has been recognized. In practice, at the moment that would mean having "console" and "graphical" as console types, which could result in a confusing and in the end not correct implementation. With this message I'd like to start a discussion on the potential impact of the possible future renaming of everything that currently involves the serial console from "console" to "serial_console" or equivalent. 
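To make the naming question concrete, here is a purely hypothetical sketch (not code from the spec or from the patch) of what an explicit console_type could look like once "console" no longer implies "serial":

import enum

class ConsoleType(enum.Enum):
    SERIAL = 'serial'        # what a bare "console" means today
    GRAPHICAL = 'graphical'  # added by the graphical console work [1]

With two members like that, keeping an unqualified "console" name around for the serial case is exactly where the confusion would come from, hence the question about renaming it.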
Thanks, Riccardo [1] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/vnc-graphical-console.html [2] https://review.openstack.org/#/c/547356/ From kennelson11 at gmail.com Tue Feb 12 17:06:20 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Feb 2019 09:06:20 -0800 Subject: Denver PTG Attending Teams Message-ID: Hello! The results are in! Here are the list of teams that are planning to attend the upcoming PTG in Denver, following the summit. Hopefully we are getting it to you soon enough to plan travel. If you haven't already registered yet, you can do that here[1]. If you haven't booked your hotel yet, please please please use our hotel block here[2]. ----------------------------------------- Pilot Projects: - Airship - Kata Containers - StarlingX OpenStack Components: - - Barbican - Charms - Cinder - Cyborg - Docs/I18n - Glance - Heat - Horizon - Infrastructure - Ironic - Keystone - LOCI - Manila - Monasca - Neutron - Nova - Octavia - OpenStack Ansible - OpenStack QA - OpenStackClient - Oslo - Placement - Release Management - Requirements - Swift - Tacker - TripleO - Vitrage - OpenStack-Helm SIGs: - API-SIG - AutoScaling SIG - Edge Computing Group - Extended Maintenance SIG - First Contact SIG - Interop WG/RefStack - K8s SIG - Scientific SIG - Security SIG - Self-healing SIG ------------------------------------------ If your team is missing from this list, its because I didn't get a 'yes' response from your PTL/Chair/Contact Person. Have them contact me and we can try to work something out. Now that we have this list, we will start putting together a draft schedule. See you all in Denver! -Kendall (diablo_rojo) [1] https://www.eventbrite.com/e/open-infrastructure-summit-project-teams-gathering-tickets-52606153421 [2] https://www.hyatt.com/en-US/group-booking/DENCC/G-FNTE -------------- next part -------------- An HTML attachment was scrubbed... URL: From km.giuseppesannino at gmail.com Tue Feb 12 17:31:23 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 12 Feb 2019 18:31:23 +0100 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors Message-ID: Hi all, need your help. I'm trying to deploy Openstack "Queens" via kolla on a multinode system (1 controller/kolla host + 1 compute). I tried with both binary and source packages and I'm using "ubuntu" as base_distro. The first attempt of deployment systematically fails here: TASK [mariadb : Running MariaDB bootstrap container] ******************************************************************************************************************************************************************************************************** fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1"} Looking at the bootstrap_mariadb container logs I can see: ---------- Neither host 'xxyyzz' nor 'localhost' could be looked up with '/usr/sbin/resolveip' Please configure the 'hostname' command to return a correct hostname. ---------- Any idea ? Thanks a lot /Giuseppe -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Feb 12 17:41:00 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:41:00 -0500 Subject: Placement governance switch In-Reply-To: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: Ed Leafe writes: > With PTL election season coming up soon, this seems like a good time to revisit the plans for the Placement effort to become a separate project with its own governance. We last discussed this back at the Denver PTG in September 2018, and settled on making Placement governance dependent on a number of items. [0] > > Most of the items in that list have been either completed, are very close to completion, or, in the case of the upgrade, is no longer expected. But in the time since that last discussion, much has changed. Placement is now a separate git repo, and is deployed and run independently of Nova. The integrated gate in CI is using the extracted Placement repo, and not Nova’s version. > > In a hangout last week [1], we agreed to several things: > > * Placement code would remain in the Nova repo for the Stein release to allow for an easier transition for deployments tools that were not prepared for this change > * The Placement code in the Nova tree will remain frozen; all new Placement work will be in the Placement repo. > * The Placement API is now unfrozen. Nova, however, will not develop code in Stein that will rely on any newer Placement microversion than the current 1.30. > * The Placement code in the Nova repo will be deleted in the Train release. > > Given the change of context, now may be a good time to change to a separate governance. The concerns on the Nova side have been largely addressed, and switching governance now would allow us to participate in the next PTL election cycle. We’d like to get input from anyone else in the OpenStack community who feels that a governance change would impact them, so please reply in this thread if you have concerns. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002451.html > > > -- Ed Leafe Have you talked to the election team about running a PTL election for the new team? I don't know what their expected cut-off date for having teams defined is, so we should make sure they're ready and then have the governance patch to set up the new team prepared ASAP because that requires a formal vote from the TC, which will take a while and we're about to enter TC elections. -- Doug From doug at doughellmann.com Tue Feb 12 17:44:27 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:44:27 -0500 Subject: [tc] cdent non-nomination for TC In-Reply-To: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Message-ID: Thierry Carrez writes: > Jeremy Stanley wrote: >> On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: >> [...] >>> I do not intend to run. I've done two years and that's enough. When >>> I was first elected I had no intention of doing any more than one >>> year but at the end of the first term I had not accomplished much of >>> what I hoped, so stayed on. Now, at the end of the second term I >>> still haven't accomplished much of what I hoped >> [...] >> >> You may not have accomplished what you set out to, but you certainly >> have made a difference. 
You've nudged lines of discussion into >> useful directions they might not otherwise have gone, provided a >> frequent reminder of the representative nature of our governance, >> and produced broadly useful summaries of our long-running >> conversations. I really appreciate what you brought to the TC, and >> am glad you'll still be around to hold the rest of us (and those who >> succeed you/us) accountable. Thanks! > > Jeremy said it better than I could have ! While I really appreciated the > perspective you brought to the TC, I understand the need to focus to > have the most impact. > > It's also a good reminder that the role that the TC fills can be shared > beyond the elected membership -- so if you care about a specific aspect > of governance, OpenStack-wide technical leadership or community health, > I encourage you to participate in the TC activities, whether you are > elected or not. > > -- > Thierry Carrez (ttx) > Yes, I'm piling on a bit late so I'll keep this short and just say I agree with all of the above and have definitely found your perspective valuable. Thank you! -- Doug From ed at leafe.com Tue Feb 12 17:46:59 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 12 Feb 2019 11:46:59 -0600 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: On Feb 12, 2019, at 11:41 AM, Doug Hellmann wrote: > > Have you talked to the election team about running a PTL election for > the new team? I don't know what their expected cut-off date for having > teams defined is, so we should make sure they're ready and then have the > governance patch to set up the new team prepared ASAP because that > requires a formal vote from the TC, which will take a while and we're > about to enter TC elections. We did realize that it might be cutting it close, as nominations begin on March 5. Since the governance change would not be a new issue, we did not anticipate a lengthy debate among the TC. If it turns out that it can’t be done in time, so be it, but we at least wanted to try. -- Ed Leafe From doug at doughellmann.com Tue Feb 12 17:48:30 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:48:30 -0500 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: Sean McGinnis writes: > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. > > Sean > Thank you, Sean. Your input and help has been valuable. I look forward to seeing your impact on the Board. 
:-) -- Doug From doug at stackhpc.com Tue Feb 12 17:54:26 2019 From: doug at stackhpc.com (Doug Szumski) Date: Tue, 12 Feb 2019 17:54:26 +0000 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors In-Reply-To: References: Message-ID: On 12/02/2019 17:31, Giuseppe Sannino wrote: > Hi all, > need your help. > I'm trying to deploy Openstack "Queens" via kolla on a multinode > system (1 controller/kolla host + 1 compute). > > I tried with both binary and source packages and I'm using "ubuntu" as > base_distro. > > The first attempt of deployment systematically fails here: > > TASK [mariadb : Running MariaDB bootstrap container] > ******************************************************************************************************************************************************************************************************** > fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container > exited with non-zero return code 1"} > > Looking at the bootstrap_mariadb container logs I can see: > ---------- > Neither host 'xxyyzz' nor 'localhost' could be looked up with > '/usr/sbin/resolveip' > Please configure the 'hostname' command to return a correct > hostname. > ---------- > > Any idea ? Have you checked that /etc/hosts is configured correctly? > > Thanks a lot > /Giuseppe > From lyarwood at redhat.com Tue Feb 12 18:00:21 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 12 Feb 2019 18:00:21 +0000 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? Message-ID: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Hello all, I can't seem to settle on an answer for $subject as part this bugfix: compute: Reject migration requests when source is down https://review.openstack.org/#/c/623489/ 409 suggests that the user is able to address the issue while 503 suggests that n-api itself is at fault. I'd really appreciate peoples thoughts on this given I hardly ever touch n-api. Thanks in advance, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From km.giuseppesannino at gmail.com Tue Feb 12 18:18:23 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 12 Feb 2019 19:18:23 +0100 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors In-Reply-To: References: Message-ID: Hi Doug, first of all, many thanks for the fast reply. the /etc/hosts on my "host" machine is properly confiured: 127.0.0.1 localhost 127.0.1.1 hce03 # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters # BEGIN ANSIBLE GENERATED HOSTS xx.yy.zz.136 hce03 xx.yy.zz.138 hce05 # END ANSIBLE GENERATED HOSTS while the bootstrap_mariadb is attempting to start up if I check within the container I see: ()[mysql at 01ec215b2dc8 /]$ cat /etc/hosts 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.17.0.2 01ec215b2dc8 /G On Tue, 12 Feb 2019 at 18:54, Doug Szumski wrote: > > On 12/02/2019 17:31, Giuseppe Sannino wrote: > > Hi all, > > need your help. > > I'm trying to deploy Openstack "Queens" via kolla on a multinode > > system (1 controller/kolla host + 1 compute). 
> > > > I tried with both binary and source packages and I'm using "ubuntu" as > > base_distro. > > > > The first attempt of deployment systematically fails here: > > > > TASK [mariadb : Running MariaDB bootstrap container] > > > ******************************************************************************************************************************************************************************************************** > > fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container > > exited with non-zero return code 1"} > > > > Looking at the bootstrap_mariadb container logs I can see: > > ---------- > > Neither host 'xxyyzz' nor 'localhost' could be looked up with > > '/usr/sbin/resolveip' > > Please configure the 'hostname' command to return a correct > > hostname. > > ---------- > > > > Any idea ? > > Have you checked that /etc/hosts is configured correctly? > > > > > Thanks a lot > > /Giuseppe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 12 18:22:28 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 13:22:28 -0500 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: Ed Leafe writes: > On Feb 12, 2019, at 11:41 AM, Doug Hellmann wrote: >> >> Have you talked to the election team about running a PTL election for >> the new team? I don't know what their expected cut-off date for having >> teams defined is, so we should make sure they're ready and then have the >> governance patch to set up the new team prepared ASAP because that >> requires a formal vote from the TC, which will take a while and we're >> about to enter TC elections. > > We did realize that it might be cutting it close, as nominations begin on March 5. Since the governance change would not be a new issue, we did not anticipate a lengthy debate among the TC. > > If it turns out that it can’t be done in time, so be it, but we at least wanted to try. > > > -- Ed Leafe I'm not suggesting you should wait; I just want you to be aware of the deadlines. New project teams fall under the formal vote rules described in the "Motions" section of the TC charter [1]. Those call for a minimum of 7 calendar days and 3 days after reaching the minimum number of votes for approval. Assuming no prolonged debate, you'll need 7-10 days for the change to be approved. If the team is ready to go now, I suggest you go ahead and file the governance patch so we can start collecting the necessary votes. [1] https://governance.openstack.org/tc/reference/charter.html#motions -- Doug From lbragstad at gmail.com Tue Feb 12 18:25:24 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 12 Feb 2019 12:25:24 -0600 Subject: [Edge-computing] [keystone] x509 authentication In-Reply-To: References: Message-ID: Sending a quick update here that summarizes activity on this topic from the last couple of weeks. A few more bugs have trickled in regarding x509 federation support [0]. One of the original authors of the feature has started chipping away at fixing them, but they can be worked in parallel if others are interested in this work. As a reminder, there are areas of the docs that can be improved, in case you don't have time to dig into a patch. 
[0] https://bugs.launchpad.net/keystone/+bugs?field.tag=x509 On 1/29/19 11:55 AM, Lance Bragstad wrote: > > > On Fri, Jan 25, 2019 at 3:02 PM James Penick > wrote: > > Hey Lance, >  We'd definitely be interested in helping with the work. I'll grab > some volunteers from my team and get them in touch within the next > few days. > > > Awesome, that sounds great! I'm open to using this thread for more > technical communication if needed. Otherwise, #openstack-keystone is > always open for folks to swing by if they want to discuss things there. > > FWIW - we brought this up in the keystone meeting today and there > several other people interested in this work. There is probably going > to be an opportunity to break the work up a bit. >   > > -James > > > On Fri, Jan 25, 2019 at 11:16 AM Lance Bragstad > > wrote: > > Hi all, > > We've been going over keystone gaps that need to be addressed > for edge use cases every Tuesday. Since Berlin, Oath has > open-sourced some of their custom authentication plugins for > keystone that help them address these gaps. > > The basic idea is that users authenticate to some external > identity provider (Athenz in Oath's case), and then present an > Athenz token to keystone. The custom plugins decode the token > from Athenz to determine the user, project, roles assignments, > and other useful bits of information. After that, it creates > any resources that don't exist in keystone already. > Ultimately, a user can authenticate against a keystone node > and have specific resources provisioned automatically. In > Berlin, engineers from Oath were saying they'd like to move > away from Athenz tokens altogether and use x509 certificates > issued by Athenz instead. The auto-provisioning approach is > very similar to a feature we have in keystone already. In > Berlin, and shortly after, there was general agreement that if > we could support x509 authentication with auto-provisioning > via keystone federation, that would pretty much solve Oath's > use case without having to maintain custom keystone plugins. > > Last week, Colleen started digging into keystone's existing > x509 authentication support. I'll start with the good news, > which is x509 authentication works, for the most part. It's > been a feature in keystone for a long time, and it landed > after we implemented federation support around the Kilo > release. Chances are there won't be a need for a keystone > specification like we were initially thinking in the edge > meetings. Unfortunately, the implementation for x509 > authentication has outdated documentation, is extremely > fragile, hard to set up, and hasn't been updated with > improvements we've made to the federation API since the > original implementation (like shadow users or > auto-provisioning, which work with other federated protocols > like OpenID Connect and SAML). We've started tracking the gaps > with bugs [0] so that we have things written down. > > I think the good thing is that once we get this cleaned up, > we'll be able to re-use some of the newer federation features > with x509 authentication/federation. These updates would make > x509 a first-class federated protocol. The approach, pending > the bug fixes, would remove the need for Oath's custom > authentication plugins. It could be useful for edge > deployments, or even deployments with many regions, by > allowing users to be auto-provisioned in each region. > Although, it doesn't necessarily solve the network partition > issue. 
> > Now that we have an idea of where to start and some bug > reports [0], I'm wondering if anyone is interested in helping > with the update or refactor. Because this won't require a > specification, we can get started on it sooner, instead of > having to wait for Train development and a new specification. > I'm also curious if anyone has comments or questions about the > approach. > > Thanks, > > Lance > > [0] https://bugs.launchpad.net/keystone/+bugs?field.tag=x509 > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mbooth at redhat.com Tue Feb 12 18:25:44 2019 From: mbooth at redhat.com (Matthew Booth) Date: Tue, 12 Feb 2019 18:25:44 +0000 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? In-Reply-To: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> References: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Message-ID: On Tue, 12 Feb 2019 at 18:06, Lee Yarwood wrote: > > Hello all, > > I can't seem to settle on an answer for $subject as part this bugfix: > > compute: Reject migration requests when source is down > https://review.openstack.org/#/c/623489/ > > 409 suggests that the user is able to address the issue while 503 > suggests that n-api itself is at fault. I'd really appreciate peoples > thoughts on this given I hardly ever touch n-api. I don't think it means n-api is at fault, I think it means that nova, as a whole, is temporarily unable to fulfil the request for reasons which can't be resolved using the API, but might be fixed if you wait a bit. The weird thing about this specific request is that, being an admin api, the person you might be waiting on to fix it might be yourself. It's still OOB, though: compute host down isn't something you can fix using the API. 409 to me means you raced with something, or something is in the wrong state, and you need to go do other things with the API before coming back here and trying again. 503 to me means we can't do it no matter what you do because, well, 'Service Unavailable'. Which it is, because the compute host is down. This isn't a hill I'm prepared to die on, though, just my 2c ;) Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From ed at leafe.com Tue Feb 12 18:36:58 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 12 Feb 2019 12:36:58 -0600 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: <2B9D8207-CFD6-4864-8B2A-C9D3B31D6588@leafe.com> On Feb 12, 2019, at 12:22 PM, Doug Hellmann wrote: > > Assuming no prolonged debate, you'll need 7-10 days for the change to be > approved. If the team is ready to go now, I suggest you go ahead and > file the governance patch so we can start collecting the necessary > votes. 
Done: https://review.openstack.org/#/c/636416/ -- Ed Leafe From jasonanderson at uchicago.edu Tue Feb 12 18:38:47 2019 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Tue, 12 Feb 2019 18:38:47 +0000 Subject: [kolla] State of SELinux support Message-ID: Hey all, With CVE-2019-5736 dropping today, I thought it would be a good opportunity to poke about the current state of SELinux support in Kolla. The docs have said it is a work in progress since the Mitaka release at least. I did find a spec that was marked as completed, but I am not aware that there is yet any support and I see that the baremetal role still forces SELinux to "permissive" by default. Is anybody currently working on this or is there an update spec/blueprint to track the development here? I am no SELinux expert by any means but this feels like an important thing to address, particularly if Docker has made it easier to label bind mounts. Thanks! Jason Anderson Cloud Computing Software Developer Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Tue Feb 12 18:56:48 2019 From: msm at redhat.com (Michael McCune) Date: Tue, 12 Feb 2019 13:56:48 -0500 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? In-Reply-To: References: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Message-ID: although i don't know the nova internals that well, i will respond with my sig api hat on. On Tue, Feb 12, 2019 at 1:28 PM Matthew Booth wrote: > I don't think it means n-api is at fault, I think it means that nova, > as a whole, is temporarily unable to fulfil the request for reasons > which can't be resolved using the API, but might be fixed if you wait > a bit. The weird thing about this specific request is that, being an > admin api, the person you might be waiting on to fix it might be > yourself. It's still OOB, though: compute host down isn't something > you can fix using the API. 409 to me means you raced with something, > or something is in the wrong state, and you need to go do other things > with the API before coming back here and trying again. 503 to me means > we can't do it no matter what you do because, well, 'Service > Unavailable'. Which it is, because the compute host is down. i tend to concur with this reading as well. for me, when thinking about status codes in general i like to keep this in miind: 4xx means something has gone wrong but the client might be able to fix it by changing the request (this could mean many things), 5xx means that something has gone wrong on the server-side and continued requests will not "fix it". so, under the given example, i would expect a 5xx status code if the server is unable to respond to my request /regardless/ of how i format it or what uri i am accessing. just 2 more c to the pile =) peace o/ From jean-daniel.bonnetot at corp.ovh.com Tue Feb 12 17:41:02 2019 From: jean-daniel.bonnetot at corp.ovh.com (Jean-Daniel Bonnetot) Date: Tue, 12 Feb 2019 17:41:02 +0000 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: <04B9E83B-3DBE-47B4-8CE2-2A914624A80A@corp.ovh.com> Hi, We are mainly focus on Ironic, we put the DB topics a little bit aside. Sorry. 
Jean-Daniel Bonnetot ovh.com | @pilgrimstack On 11/02/2019 16:51, "Thierry Carrez" wrote: Lingxian Kong wrote: > On Sun, Feb 10, 2019 at 7:04 AM Darek Król > wrote: > > Hello Lingxian, > > I’ve heard about a few tries of running Trove in production. > Unfortunately, I didn’t have opportunity to get details about > networking. At Samsung, we introducing Trove into our products for > on-premise cloud platforms. However, I cannot share too many details > about it, besides it is oriented towards performance and security is > not a concern. Hence, the networking is very basic without any > layers of abstractions if possible. > > Could you share more details about your topology and goals you want > to achieve in Trove ? Maybe Trove team could help you in this ? > Unfortunately, I’m not a network expert so I would need to get more > details to understand your use case better. > > > Yeah, I think trove team could definitely help. I've been working on a > patch[1] to support different sgs for different type of neutron ports, > the patch is for the use case that `CONF.default_neutron_networks` is > configured as trove management network. > > Besides, I also have some patches[2][3] for trove need to be reviewed, > not sure who are the right people I should ask for review now, but would > appriciate if you could help. I think OVH has been deploying Trove as well, or at least considering it... Ccing Jean-Daniel in case he can bring some insights on that. -- Thierry From kennelson11 at gmail.com Tue Feb 12 19:05:34 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Feb 2019 11:05:34 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> Message-ID: On Tue, Feb 12, 2019 at 12:21 AM Ghanshyam Mann wrote: > ---- On Tue, 12 Feb 2019 02:14:56 +0900 Kendall Nelson < > kennelson11 at gmail.com> wrote ---- > > > > > > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez > wrote: > > Doug Hellmann wrote: > > > Kendall Nelson writes: > > >> [...] > > >> So I think that the First Contact SIG project liaison list kind of > fits > > >> this. Its already maintained in a wiki and its already a list of > people > > >> willing to be contacted for helping people get started. It probably > just > > >> needs more attention and refreshing. When it was first set up we > (the FC > > >> SIG) kind of went around begging for volunteers and then once we > maxxed out > > >> on them, we said those projects without volunteers will have the > role > > >> defaulted to the PTL unless they delegate (similar to how other > liaison > > >> roles work). > > >> > > >> Long story short, I think we have the sort of mentoring things > covered. And > > >> to back up an earlier email, project specific onboarding would be a > good > > >> help too. > > > > > > OK, that does sound pretty similar. I guess the piece that's missing > is > > > a description of the sort of help the team is interested in > receiving. 
> > > > I guess the key difference is that the first contact list is more a > > function of the team (who to contact for first contributions in this > > team, defaults to PTL), rather than a distinct offer to do 1:1 > mentoring > > to cover specific needs in a team. > > > > It's probably pretty close (and the same people would likely be > > involved), but I think an approach where specific people offer a > > significant amount of their time to one mentee interested in joining a > > team is a bit different. I don't think every team would have > volunteers > > to do that. I would not expect a mentor volunteer to care for several > > mentees. In the end I think we would end up with a much shorter list > > than the FC list. > > > > I think our original ask for people volunteering (before we completed > the list with PTLs as stand ins) was for people willing to help get started > in a project and look after their first few patches. So I think that was > kinda the mentoring role originally but then it evolved? Maybe Matt Oliver > or Ghanshyam remember better than I do? > > Yeah, that's right. > > > Maybe the two efforts can converge into one, or they can be kept as > two > > different things but coordinated by the same team ? > > > > > > I think we could go either way, but that they both would live with the > FC SIG. Seems like the most logical place to me. I lean towards two lists, > one being a list of volunteer mentors for projects that are actively > looking for new contributors (the shorter list) and the other being a list > of people just willing to keep an eye out for the welcome new contributor > patches and being the entry point for people asking about getting started > that don't know anyone in the project yet (kind of what our current view > is, I think). -- > > IMO, very first thing to make help-wanted list a success is, it has to be > uptodate per development cycle, mentor-mapping(or with example workflow > etc). By Keeping the help-wanted list in any place other than the project > team again leads to existing problem for example it will be hard to > prioritize, maintain and easy to get obsolete/outdated. FC SIG, D&I WG are > great place to market/redirect the contributors to the list. > > The model I was thinking is: > 1. Project team maintain the help-wanted-list per current development > cycle. Entry criteria in that list is some volunteer mentor(exmaple > workflow/patch) which are technically closer to that topic. > I was thinking more yearly than per release if its not too much work for project teams. I think each item on the list also needs clear completion criteria. If the item hasn't been picked up in like two releases or something we (the FC SIG) can send it back to the project team to make sure its still relevant, the mentor is correct, etc. > 2. During PTG/developer meetup, PTL checks if planned/discussed topic > needs to be in help-wanted list and who will serve as the mentor. > 3. The list has to be updated in every developement cycle. It can be empty > if any project team does not need help during that cycle or few items can > be carry-forward if those are still a priority and have mentor mapping. > 4. FC SIG, D&I WG, Mentoring team use that list and publish in all > possible place. Redirect new contributors to that list depends on the > contributor interested area. This will be the key role to make > help-wanted-list success. 
> > -gmann > > > Thierry Carrez (ttx) > > > > -Kendall (diablo_rojo) > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 12 19:09:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 12 Feb 2019 13:09:01 -0600 Subject: [neutron][oslo] CI issue related to pyroute2 and latest oslo.privsep In-Reply-To: References: <37C0CF62-A6FA-4837-8C31-4628FCFA339A@redhat.com> Message-ID: <09d274a0-a6b8-7af4-b6b4-8cef5fd2b5c6@nemebean.com> We discussed this in the Oslo meeting yesterday and the conclusion we came to was that we would try the easiest option first. We're going to provide a way to specify that certain calls need to run in the main thread rather than being scheduled to the thread pool. If this proves insufficient to fix the problem we can revisit the more complicated options. On 1/17/19 2:37 PM, Ben Nemec wrote: > I think it's worth noting that this has actually demonstrated a rather > significant issue with threaded privsep, which is that forking from a > Python thread is really not a safe thing to do.[1][2] > > Sure, we could just say "don't fork in privileged code", but in this > case the fork wasn't even in our code, it was in a library we were > using. There are a few options, none of which I'm crazy about at this > point: > > * Provide a way for callers to specify that a call needs to run > in-process rather than in the thread-pool. Two problems with this: 1) It > requires the callers to know that forking is happening and 2) I'm not > sure it actually fixes all of the potential problems. You might need to > have a completely separate privsep daemon to avoid the potential bad > fork/thread interactions. > > * Switch to multiprocessing so calls execute in their own process. I may > be wrong, but I think this requires all of the parameters passed in to > be pickleable, which I bet is not remotely the case right now. > > I'm open to suggestions that are better than playing whack-a-mole with > these bugs using a threaded and un-threaded daemon. > > -Ben > > 1: https://rachelbythebay.com/w/2011/06/07/forked/ > 2: https://rachelbythebay.com/w/2014/08/16/forkenv/ > > On 1/17/19 2:12 PM, Slawomir Kaplonski wrote: >> Hi, >> >> Recently we had one more issue related to oslo.privsep and pyroute2. >> This caused many failures in Neutron CI. See [1] for details. Now fix >> (more like a workaround) for this issue is merged [2]. So if You saw >> in Your patch failing tempest/scenario jobs and in failed tests there >> were issues with SSH to instance through floating IP, please now >> rebase Your patch. It should be better :) >> >> [1] https://bugs.launchpad.net/neutron/+bug/1811515 >> [2] https://review.openstack.org/#/c/631275/ >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> > From ed at leafe.com Tue Feb 12 19:20:31 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 12 Feb 2019 13:20:31 -0600 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? In-Reply-To: References: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Message-ID: On Feb 12, 2019, at 12:56 PM, Michael McCune wrote: > > so, under the given example, i would expect a 5xx status code if the > server is unable to respond to my request /regardless/ of how i format > it or what uri i am accessing. I view 503 as “something is wrong with the server” [0], and 409 as “something is wrong with the resource” [1]. 
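In code terms the distinction is roughly the following (a hypothetical handler sketch, not nova's actual API code); with either status, the explanation payload is what tells the caller why the request was refused:

import webob.exc

def migrate(server_id, source_host_is_up):
    if not source_host_is_up:
        # the payload spells out which precondition failed, so the
        # caller knows what the problem actually is
        raise webob.exc.HTTPConflict(
            explanation="Cannot migrate server %s: the source compute "
                        "host is currently down." % server_id)

Swapping webob.exc.HTTPConflict for webob.exc.HTTPServiceUnavailable is the entire mechanical difference between the two choices; either way the explanation string is what the caller actually has to act on.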
In the scenario described, there is nothing wrong at all with the servers handing the request. There is, however, a problem with the resource that the request is trying to work with. Of course, the advice in the docs to include enough in the payload for the client to understand the nature of the problem is critical, no matter which code is used. [0] https://tools.ietf.org/html/rfc7231#section-6.6.4 [1] https://tools.ietf.org/html/rfc7231#section-6.5.8 -- Ed Leafe From msm at redhat.com Tue Feb 12 19:35:48 2019 From: msm at redhat.com (Michael McCune) Date: Tue, 12 Feb 2019 14:35:48 -0500 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? In-Reply-To: References: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Message-ID: On Tue, Feb 12, 2019 at 2:20 PM Ed Leafe wrote: > In the scenario described, there is nothing wrong at all with the servers handing the request. There is, however, a problem with the resource that the request is trying to work with. Of course, the advice in the docs to include enough in the payload for the client to understand the nature of the problem is critical, no matter which code is used. > ++, i think this nuance is crucial to crafting the proper response from the server. peace o/ From mnaser at vexxhost.com Tue Feb 12 21:33:28 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 12 Feb 2019 16:33:28 -0500 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> Message-ID: Get excited! We're getting started on this tomorrow (or maybe it's already today for you when you read this!) https://etherpad.openstack.org/p/osa-bug-squash-q1 On Wed, Feb 6, 2019 at 10:15 AM Mohammed Naser wrote: > > Hi all: > > We're likely going to have an etherpad and we'll be coordinating in > IRC. Bring your own bug is probably the best avenue! > > Thanks all! > > Regards, > Mohammed > > On Wed, Feb 6, 2019 at 6:10 AM Frank Kloeker wrote: > > > > Am 2019-02-06 10:32, schrieb Jean-Philippe Evrard: > > > On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: > > >> Hi Mohammed, > > >> > > >> will there be an extra invitation or an etherpad for logistic? > > >> > > >> many thanks > > >> > > >> Frank > > >> > > >> Am 2019-02-05 17:22, schrieb Mohammed Naser: > > >> > Hi everyone, > > >> > > > >> > We've discussed this over the ML today and we've decided for it to > > >> > be > > >> > next Wednesday (13th of February). Due to the distributed nature > > >> > of > > >> > our teams, we'll be aiming to go throughout the day and we'll all > > >> > be > > >> > hanging out on #openstack-ansible with a few more high bandwidth > > >> > way > > >> > of discussion if that is needed > > >> > > > >> > Thanks! > > >> > Mohammed > > > > > > What I did in the past was to prepare an etherpad of the most urgent > > > ones, but wasn't the most successful bug squash we had. > > > > > > I also took the other approach, BYO bug, list it in the etherpad, so we > > > can track the bug squashers. > > > > > > And in both cases, I brought belgian cookies/chocolates to the most > > > successful bug squasher (please note you should ponderate with the task > > > criticality level, else people might solve the simplest bugs to get the > > > chocolates :p) > > > This was my informal motivational, but I didn't have to do that. 
I > > > justliked doing so :) > > > > Very generous, we appreciate that. Would it be possible to expand the > > list with Belgian beer? :) > > > > kind regards > > > > Frank > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From colleen at gazlene.net Tue Feb 12 21:38:28 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 12 Feb 2019 22:38:28 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> Message-ID: <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> On Tue, Feb 12, 2019, at 9:21 AM, Ghanshyam Mann wrote: > ---- On Tue, 12 Feb 2019 02:14:56 +0900 Kendall Nelson > wrote ---- > > > > > > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez > wrote: > > Doug Hellmann wrote: > > > Kendall Nelson writes: > > >> [...] > > >> So I think that the First Contact SIG project liaison list kind > of fits > > >> this. Its already maintained in a wiki and its already a list of > people > > >> willing to be contacted for helping people get started. It > probably just > > >> needs more attention and refreshing. When it was first set up we > (the FC > > >> SIG) kind of went around begging for volunteers and then once we > maxxed out > > >> on them, we said those projects without volunteers will have the > role > > >> defaulted to the PTL unless they delegate (similar to how other > liaison > > >> roles work). > > >> > > >> Long story short, I think we have the sort of mentoring things > covered. And > > >> to back up an earlier email, project specific onboarding would be > a good > > >> help too. > > > > > > OK, that does sound pretty similar. I guess the piece that's > missing is > > > a description of the sort of help the team is interested in > receiving. > > > > I guess the key difference is that the first contact list is more a > > function of the team (who to contact for first contributions in this > > team, defaults to PTL), rather than a distinct offer to do 1:1 > mentoring > > to cover specific needs in a team. > > > > It's probably pretty close (and the same people would likely be > > involved), but I think an approach where specific people offer a > > significant amount of their time to one mentee interested in joining > a > > team is a bit different. I don't think every team would have > volunteers > > to do that. I would not expect a mentor volunteer to care for > several > > mentees. In the end I think we would end up with a much shorter list > > than the FC list. > > > > I think our original ask for people volunteering (before we completed > the list with PTLs as stand ins) was for people willing to help get > started in a project and look after their first few patches. So I think > that was kinda the mentoring role originally but then it evolved? Maybe > Matt Oliver or Ghanshyam remember better than I do? 
> > Yeah, that's right. > > > Maybe the two efforts can converge into one, or they can be kept as > two > > different things but coordinated by the same team ? > > > > > > I think we could go either way, but that they both would live with > the FC SIG. Seems like the most logical place to me. I lean towards two > lists, one being a list of volunteer mentors for projects that are > actively looking for new contributors (the shorter list) and the other > being a list of people just willing to keep an eye out for the welcome > new contributor patches and being the entry point for people asking > about getting started that don't know anyone in the project yet (kind of > what our current view is, I think). -- > > IMO, very first thing to make help-wanted list a success is, it has to > be uptodate per development cycle, mentor-mapping(or with example > workflow etc). By Keeping the help-wanted list in any place other than > the project team again leads to existing problem for example it will be > hard to prioritize, maintain and easy to get obsolete/outdated. FC SIG, > D&I WG are great place to market/redirect the contributors to the list. > > The model I was thinking is: > 1. Project team maintain the help-wanted-list per current development > cycle. Entry criteria in that list is some volunteer mentor(exmaple > workflow/patch) which are technically closer to that topic. > 2. During PTG/developer meetup, PTL checks if planned/discussed topic > needs to be in help-wanted list and who will serve as the mentor. > 3. The list has to be updated in every developement cycle. It can be > empty if any project team does not need help during that cycle or few > items can be carry-forward if those are still a priority and have mentor > mapping. > 4. FC SIG, D&I WG, Mentoring team use that list and publish in all > possible place. Redirect new contributors to that list depends on the > contributor interested area. This will be the key role to make help- > wanted-list success. > > -gmann > > > Thierry Carrez (ttx) > > > > -Kendall (diablo_rojo) > > > I feel like there is a bit of a disconnect between what the TC is asking for and what the current mentoring organizations are designed to provide. Thierry framed this as a "peer-mentoring offered" list, but mentoring doesn't quite capture everything that's needed. Mentorship programs like Outreachy, cohort mentoring, and the First Contact SIG are oriented around helping new people quickstart into the community, getting them up to speed on basics and helping them feel good about themselves and their contributions. The hope is that happy first-timers eventually become happy regular contributors which will eventually be a benefit to the projects, but the benefit to the projects is not the main focus. The way I see it, the TC Help Wanted list, as well as the new thing, is not necessarily oriented around newcomers but is instead advocating for the projects and meant to help project teams thrive by getting committed long-term maintainers involved and invested in solving longstanding technical debt that in some cases requires deep tribal knowledge to solve. It's not a thing for a newbie to step into lightly and it's not something that can be solved by a FC-liaison pointing at the contributor docs. Instead what's needed are mentors who are willing to walk through that tribal knowledge with a new contributor until they are equipped enough to help with the harder problems. 
For that reason I think neither the FC SIG or the mentoring cohort group, in their current incarnations, are the right groups to be managing this. The FC SIG's mission is "To provide a place for new contributors to come for information and advice" which does not fit the long-term goal of the help wanted list, and cohort mentoring's four topics ("your first patch", "first CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term and deeply technical requirements that a project-specific mentorship offering needs. Either of those groups could be rescoped to fit with this new mission, and there is certainly a lot of overlap, but my feeling is that this needs to be an effort conducted by the TC because the TC is the group that advocates for the projects. It's moreover not a thing that can be solved by another list of names. In addition to naming someone willing to do the several hours per week of mentoring, project teams that want help should be forced to come up with a specific description of 1) what the project is, 2) what kind of person (experience or interests) would be a good fit for the project, 3) specific work items with completion criteria that needs to be done - and it can be extremely challenging to reframe a project's longstanding issues in such concrete ways that make it clear what steps are needed to tackle the problem. It should basically be an advertisement that makes the project sound interesting and challenging and do-able, because the current help-wanted list and liaison lists and mentoring topics are too vague to entice anyone to step up. Finally, I rather disagree that this should be something maintained as a page in individual projects' contributor guides, although we should certainly be encouraging teams to keep those guides up to date. It should be compiled by the TC and regularly updated by the project liaisons within the TC. A link to a contributor guide on docs.openstack.org doesn't give anyone an idea of what projects need the most help nor does it empower people to believe they can help by giving them an understanding of what the "job" entails. [1] https://wiki.openstack.org/wiki/Mentoring#Cohort_Mentoring Colleen From cdent+os at anticdent.org Tue Feb 12 21:57:59 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 12 Feb 2019 21:57:59 +0000 (GMT) Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> Message-ID: On Tue, 12 Feb 2019, Colleen Murphy wrote: > The way I see it, the TC Help Wanted list, as well as the new thing, is not > necessarily oriented around newcomers but is instead advocating for the > projects and meant to help project teams thrive by getting committed long-term > maintainers involved and invested in solving longstanding technical debt that > in some cases requires deep tribal knowledge to solve. It's not a thing for a > newbie to step into lightly and it's not something that can be solved by a > FC-liaison pointing at the contributor docs. 
Instead what's needed are mentors > who are willing to walk through that tribal knowledge with a new contributor > until they are equipped enough to help with the harder problems. Thank you for writing this message and especially this ^ paragraph. I've been watching this thread with some concern, feeling like the depth of _need_ and _effort_ was being lost. You've captured it well here and I think that this > Finally, I rather disagree that this should be something maintained as a page in > individual projects' contributor guides, although we should certainly be > encouraging teams to keep those guides up to date. It should be compiled by the > TC and regularly updated by the project liaisons within the TC. A link to a > contributor guide on docs.openstack.org doesn't give anyone an idea of what > projects need the most help nor does it empower people to believe they can help > by giving them an understanding of what the "job" entails. is right as well. In this instance the imprimatur of the TC is supposed to give weight to the need. As important as the mentorship programs and first contact efforts are, they are for a different kind of thing. Thank you. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From aheczko at mirantis.com Tue Feb 12 22:02:57 2019 From: aheczko at mirantis.com (aheczko at mirantis.com) Date: Tue, 12 Feb 2019 14:02:57 -0800 Subject: [keystone] x509 authentication In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi Lance, I'd be glad to help out with the docs update. -----BEGIN PGP SIGNATURE----- Version: FlowCrypt 6.6.1 Gmail Encryption Comment: Seamlessly send and receive encrypted email wsFcBAEBCAAGBQJcY0KNAAoJEKhrJ5xxCfXWfsQQAJJktzu0kwIYBs8wi+QY Szd4wxRPf2GhKfYXsCgtUKAf5QWRXJBWEVpwai3ZbEDyCXa9UkEqSZY4s9l6 iizIuGW07qw/vTVJL9GX1q/6lpoYO5YqiL5cepCronKbzQtHvNL8TP2SPovG BXBTTiyj5LxKdQ7nojePpOQlINkTVWPVyVEwyVGqzXWh6/Dm7ws3pMUM5E5q SYcmmyiBxTbttV7csLqvB4WMpwM4Ucxd/1b9ojdaxeqkXy6uKA6fIOktP7QK uq+P7h9TYeuZ0x4wkw3dodJaRDLL4bvp+1sFtLXQELPQSEWFQ8SX5SMH9dQs lCTpkNjwmqTHXQ0/f69rjWc9zWIG/Y6S+f6I9fPwVV/L6bEkqoC9B3wz7K/y 2emAi96XwER4uLMrEpcQnQVi8aDczSfZtSw355Gxdp0h+A2FBmGC7pGU6Vn3 o1UUOV/HE49jWeqo+suNCoMBqz52+pAQ76fY81QAiUsnYcHGF6rL5yQJJZBG 6aF6pYa5A3iNeaSLIiZKNC2QEItV0GjmbJg7LHIsJDQwls2ITRa/WGpbakmW Jisgr/VxIIjrwp2z9+kOTQNptDbYANuyu6KQp/DORuDzNPGoCUedtZALebC7 2cJjzlSo1ZqjFbZiTw6wpZBTlfsGrJmrv0uRZYII21Zf4KLRmr4PXb83VRFo +rYF =gitY -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed... Name: 0xA86B279C7109F5D6.asc Type: application/pgp-keys Size: 3177 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Feb 12 23:45:12 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Feb 2019 15:45:12 -0800 Subject: [all][TC] 'Train' Technical Committee Nominations Open Message-ID: Hello All, Nominations for the Technical Committee positions (7 positions) are now open and will remain open until Feb 19, 2019 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained on the election website[1]. Please note that the name of the file should match an email address in the foundation member profile of the candidate. Also for TC candidates election officials refer to the community member profiles at [2] please take this opportunity to ensure that your profile contains current information. 
Candidates for the Technical Committee Positions: Any Foundation individual member can propose their candidacy for an available, directly-elected TC seat. The election will be held from Feb 26, 2019 23:45 UTC through to Mar 05, 2019 23:45 UTC. The electorate are the Foundation individual members that are also committers for one of the official teams[3] over the Feb 09, 2018 00:00 UTC - Feb 19, 2019 00:00 UTC timeframe (Rocky to Stein), as well as the extra-ATCs who are acknowledged by the TC[4]. Please see the website[5] for additional details about this election. Please find below the timeline: TC nomination starts @ Feb 12, 2019 23:45 UTC TC nomination ends @ Feb 19, 2019 23:45 UTC TC campaigning starts @ Feb 19, 2019 23:45 UTC TC campaigning ends @ Feb 26, 2019 23:45 UTC TC elections starts @ Feb 26, 2019 23:45 UTC TC elections ends @ Mar 05, 2019 23:45 UTC If you have any questions please be sure to either ask them on the mailing list or to the elections officials[6]. Thank you, -Kendall Nelson (diablo_rojo) [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy [2] http://www.openstack.org/community/members/ [3] https://governance.openstack.org/tc/reference/projects/ [4] https://releases.openstack.org/stein/schedule.html#p-extra-atcs [5] https://governance.openstack.org/election/ [6] http://governance.openstack.org/election/#election-officials -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Tue Feb 12 23:53:39 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 12 Feb 2019 15:53:39 -0800 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: On Mon, Feb 11, 2019 at 10:18 AM Ignazio Cassano wrote: > > Hello, the manila replication dr works fine on netapp ontap following your suggestions. :-) > Source backends (svm for netapp) must belong to a different destination backends availability zone, but in a single manila.conf I cannot specify more than one availability zone. For doing this I must create more share servers ....one for each availability zone. > Svm1 with avz1 > Svm1-dr with avz1-dr > ......... > Are you agree??? > Thanks & Regards > Ignazio > Yes, until the Stein release, you cannot specify multiple availability zones for a single manila share manager service, even if your deployment has multiple storage backends. However, you can run another manila share manager process with a different "storage_availability_zone" parameter as you described. > > Il giorno Gio 7 Feb 2019 06:11 Ignazio Cassano ha scritto: >> >> Many thanks. >> I'll check today. >> Ignazio >> >> >> Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi ha scritto: >>> >>> On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: >>> > >>> > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: >>> > >The 2 openstack Installations do not share anything. The manila on each one >>> > >works on different netapp storage, but the 2 netapp can be synchronized. >>> > >Site A with an openstack instalkation and netapp A. >>> > >Site B with an openstack with netapp B. >>> > >Netapp A and netapp B can be synchronized via network. >>> > >Ignazio >>> > >>> > OK, thanks. 
>>> > >>> > You can likely get the share data and its netapp metadata to show up >>> > on B via replication and (gouthamr may explain details) but you will >>> > lose all the Openstack/manila information about the share unless >>> > Openstack database info (more than just manila tables) is imported. >>> > That may be OK foryour use case. >>> > >>> > -- Tom >>> >>> >>> Checking if I understand your request correctly, you have setup >>> manila's "dr" replication in OpenStack A and now want to move your >>> shares from OpenStack A to OpenStack B's manila. Is this correct? >>> >>> If yes, you must: >>> * Promote your replicas >>> - this will make the mirrored shares available. This action does >>> not delete the old "primary" shares though, you need to clean them up >>> yourself, because manila will attempt to reverse the replication >>> relationships if the primary shares are still accessible >>> * Note the export locations and Unmanage your shares from OpenStack A's manila >>> * Manage your shares in OpenStack B's manila with the export locations >>> you noted. >>> >>> > > >>> > > >>> > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: >>> > > >>> > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >>> > >> >Hello Tom, I think cases you suggested do not meet my needs. >>> > >> >I have an openstack installation A with a fas netapp A. >>> > >> >I have another openstack installation B with fas netapp B. >>> > >> >I would like to use manila replication dr. >>> > >> >If I replicate manila volumes from A to B the manila db on B does not >>> > >> >knows anything about the replicated volume but only the backends on >>> > >> netapp >>> > >> >B. Can I discover replicated volumes on openstack B? >>> > >> >Or I must modify the manila db on B? >>> > >> >Regards >>> > >> >Ignazio >>> > >> >>> > >> I guess I don't understand your use case. Do Openstack installation A >>> > >> and Openstack installation B know *anything* about one another? For >>> > >> example, are their keystone and neutron databases somehow synced? Are >>> > >> they going to be operative for the same set of manila shares at the >>> > >> same time, or are you contemplating a migration of the shares from >>> > >> installation A to installation B? >>> > >> >>> > >> Probably it would be helpful to have a statement of the problem that >>> > >> you intend to solve before we consider the potential mechanisms for >>> > >> solving it. >>> > >> >>> > >> Cheers, >>> > >> >>> > >> -- Tom >>> > >> >>> > >> > >>> > >> > >>> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: >>> > >> > >>> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >>> > >> >> >Thanks Goutham. >>> > >> >> >If there are not mantainers for this driver I will switch on ceph and >>> > >> or >>> > >> >> >netapp. >>> > >> >> >I am already using netapp but I would like to export shares from an >>> > >> >> >openstack installation to another. >>> > >> >> >Since these 2 installations do non share any openstack component and >>> > >> have >>> > >> >> >different openstack database, I would like to know it is possible . 
>>> > >> >> >Regards >>> > >> >> >Ignazio >>> > >> >> >>> > >> >> Hi Ignazio, >>> > >> >> >>> > >> >> If by "export shares from an openstack installation to another" you >>> > >> >> mean removing them from management by manila in installation A and >>> > >> >> instead managing them by manila in installation B then you can do that >>> > >> >> while leaving them in place on your Net App back end using the manila >>> > >> >> "manage-unmanage" administrative commands. Here's some documentation >>> > >> >> [1] that should be helpful. >>> > >> >> >>> > >> >> If on the other hand by "export shares ... to another" you mean to >>> > >> >> leave the shares under management of manila in installation A but >>> > >> >> consume them from compute instances in installation B it's all about >>> > >> >> the networking. One can use manila to "allow-access" to consumers of >>> > >> >> shares anywhere but the consumers must be able to reach the "export >>> > >> >> locations" for those shares and mount them. >>> > >> >> >>> > >> >> Cheers, >>> > >> >> >>> > >> >> -- Tom Barron >>> > >> >> >>> > >> >> [1] >>> > >> >> >>> > >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >>> > >> >> > >>> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >>> > >> >> gouthampravi at gmail.com> >>> > >> >> >ha scritto: >>> > >> >> > >>> > >> >> >> Hi Ignazio, >>> > >> >> >> >>> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >>> > >> >> >> wrote: >>> > >> >> >> > >>> > >> >> >> > Hello All, >>> > >> >> >> > I installed manila on my queens openstack based on centos 7. >>> > >> >> >> > I configured two servers with glusterfs replocation and ganesha >>> > >> nfs. >>> > >> >> >> > I configured my controllers octavia,conf but when I try to create a >>> > >> >> share >>> > >> >> >> > the manila scheduler logs reports: >>> > >> >> >> > >>> > >> >> >> > Failed to schedule create_share: No valid host was found. Failed to >>> > >> >> find >>> > >> >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >>> > >> >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >>> > >> >> the >>> > >> >> >> last executed filter was CapabilitiesFilter. >>> > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >>> > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >>> > >> >> 89f76bc5de5545f381da2c10c7df7f15 >>> > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >>> > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >>> > >> >> >> >>> > >> >> >> >>> > >> >> >> The scheduler failure points out that you have a mismatch in >>> > >> >> >> expectations (backend capabilities vs share type extra-specs) and >>> > >> >> >> there was no host to schedule your share to. So a few things to check >>> > >> >> >> here: >>> > >> >> >> >>> > >> >> >> - What is the share type you're using? Can you list the share type >>> > >> >> >> extra-specs and confirm that the backend (your GlusterFS storage) >>> > >> >> >> capabilities are appropriate with whatever you've set up as >>> > >> >> >> extra-specs ($ manila pool-list --detail)? >>> > >> >> >> - Is your backend operating correctly? You can list the manila >>> > >> >> >> services ($ manila service-list) and see if the backend is both >>> > >> >> >> 'enabled' and 'up'. 
If it isn't, there's a good chance there was a >>> > >> >> >> problem with the driver initialization, please enable debug logging, >>> > >> >> >> and look at the log file for the manila-share service, you might see >>> > >> >> >> why and be able to fix it. >>> > >> >> >> >>> > >> >> >> >>> > >> >> >> Please be aware that we're on a look out for a maintainer for the >>> > >> >> >> GlusterFS driver for the past few releases. We're open to bug fixes >>> > >> >> >> and maintenance patches, but there is currently no active maintainer >>> > >> >> >> for this driver. >>> > >> >> >> >>> > >> >> >> >>> > >> >> >> > I did not understand if controllers node must be connected to the >>> > >> >> >> network where shares must be exported for virtual machines, so my >>> > >> >> glusterfs >>> > >> >> >> are connected on the management network where openstack controllers >>> > >> are >>> > >> >> >> conencted and to the network where virtual machine are connected. >>> > >> >> >> > >>> > >> >> >> > My manila.conf section for glusterfs section is the following >>> > >> >> >> > >>> > >> >> >> > [gluster-manila565] >>> > >> >> >> > driver_handles_share_servers = False >>> > >> >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >>> > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >>> > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >>> > >> >> >> > glusterfs_ganesha_server_username = root >>> > >> >> >> > glusterfs_nfs_server_type = Ganesha >>> > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >>> > >> >> >> > #glusterfs_servers = root at 10.102.185.19 >>> > >> >> >> > ganesha_config_dir = /etc/ganesha >>> > >> >> >> > >>> > >> >> >> > >>> > >> >> >> > PS >>> > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >>> > >> >> >> > >>> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where >>> > >> virtual >>> > >> >> >> machines are connected. >>> > >> >> >> > >>> > >> >> >> > The gluster servers are connected on both. >>> > >> >> >> > >>> > >> >> >> > >>> > >> >> >> > Any help, please ? >>> > >> >> >> > >>> > >> >> >> > Ignazio >>> > >> >> >> >>> > >> >> >>> > >> From rico.lin.guanyu at gmail.com Wed Feb 13 01:18:37 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 13 Feb 2019 09:18:37 +0800 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org In-Reply-To: References: Message-ID: ++ On Wed, Feb 6, 2019 at 11:56 PM Thierry Carrez wrote: > I started a thread on the Foundation mailing-list about the OSF becoming > an OSI affiliate org: > > http://lists.openstack.org/pipermail/foundation/2019-February/002680.html > > Please follow-up there is you have any concerns or questions. > > -- > Thierry Carrez (ttx) > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Feb 13 01:29:35 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 20:29:35 -0500 Subject: Placement governance switch In-Reply-To: <2B9D8207-CFD6-4864-8B2A-C9D3B31D6588@leafe.com> References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> <2B9D8207-CFD6-4864-8B2A-C9D3B31D6588@leafe.com> Message-ID: Ed Leafe writes: > On Feb 12, 2019, at 12:22 PM, Doug Hellmann wrote: >> >> Assuming no prolonged debate, you'll need 7-10 days for the change to be >> approved. 
If the team is ready to go now, I suggest you go ahead and >> file the governance patch so we can start collecting the necessary >> votes. > > Done: https://review.openstack.org/#/c/636416/ > > -- Ed Leafe After consulting with the election officials during the most recent TC office hour [1], I have proposed shifting the PTL election deadline out 2 days to allow the TC time to approve the new team [2]. Thank you to Tony, Jeremy, and Kendall for accommodating the change. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-13.log.html [2] https://review.openstack.org/#/c/636510/ -- Doug From gmann at ghanshyammann.com Wed Feb 13 02:50:59 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Feb 2019 11:50:59 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> Message-ID: <168e4c3bcab.cb011f32126710.849384454581241579@ghanshyammann.com> ---- On Wed, 13 Feb 2019 06:38:28 +0900 Colleen Murphy wrote ---- > On Tue, Feb 12, 2019, at 9:21 AM, Ghanshyam Mann wrote: > > ---- On Tue, 12 Feb 2019 02:14:56 +0900 Kendall Nelson > > wrote ---- > > > > > > > > > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez > > wrote: > > > Doug Hellmann wrote: > > > > Kendall Nelson writes: > > > >> [...] > > > >> So I think that the First Contact SIG project liaison list kind > > of fits > > > >> this. Its already maintained in a wiki and its already a list of > > people > > > >> willing to be contacted for helping people get started. It > > probably just > > > >> needs more attention and refreshing. When it was first set up we > > (the FC > > > >> SIG) kind of went around begging for volunteers and then once we > > maxxed out > > > >> on them, we said those projects without volunteers will have the > > role > > > >> defaulted to the PTL unless they delegate (similar to how other > > liaison > > > >> roles work). > > > >> > > > >> Long story short, I think we have the sort of mentoring things > > covered. And > > > >> to back up an earlier email, project specific onboarding would be > > a good > > > >> help too. > > > > > > > > OK, that does sound pretty similar. I guess the piece that's > > missing is > > > > a description of the sort of help the team is interested in > > receiving. > > > > > > I guess the key difference is that the first contact list is more a > > > function of the team (who to contact for first contributions in this > > > team, defaults to PTL), rather than a distinct offer to do 1:1 > > mentoring > > > to cover specific needs in a team. > > > > > > It's probably pretty close (and the same people would likely be > > > involved), but I think an approach where specific people offer a > > > significant amount of their time to one mentee interested in joining > > a > > > team is a bit different. I don't think every team would have > > volunteers > > > to do that. I would not expect a mentor volunteer to care for > > several > > > mentees. In the end I think we would end up with a much shorter list > > > than the FC list. 
> > > > > > I think our original ask for people volunteering (before we completed > > the list with PTLs as stand ins) was for people willing to help get > > started in a project and look after their first few patches. So I think > > that was kinda the mentoring role originally but then it evolved? Maybe > > Matt Oliver or Ghanshyam remember better than I do? > > > > Yeah, that's right. > > > > > Maybe the two efforts can converge into one, or they can be kept as > > two > > > different things but coordinated by the same team ? > > > > > > > > > I think we could go either way, but that they both would live with > > the FC SIG. Seems like the most logical place to me. I lean towards two > > lists, one being a list of volunteer mentors for projects that are > > actively looking for new contributors (the shorter list) and the other > > being a list of people just willing to keep an eye out for the welcome > > new contributor patches and being the entry point for people asking > > about getting started that don't know anyone in the project yet (kind of > > what our current view is, I think). -- > > > > IMO, very first thing to make help-wanted list a success is, it has to > > be uptodate per development cycle, mentor-mapping(or with example > > workflow etc). By Keeping the help-wanted list in any place other than > > the project team again leads to existing problem for example it will be > > hard to prioritize, maintain and easy to get obsolete/outdated. FC SIG, > > D&I WG are great place to market/redirect the contributors to the list. > > > > The model I was thinking is: > > 1. Project team maintain the help-wanted-list per current development > > cycle. Entry criteria in that list is some volunteer mentor(exmaple > > workflow/patch) which are technically closer to that topic. > > 2. During PTG/developer meetup, PTL checks if planned/discussed topic > > needs to be in help-wanted list and who will serve as the mentor. > > 3. The list has to be updated in every developement cycle. It can be > > empty if any project team does not need help during that cycle or few > > items can be carry-forward if those are still a priority and have mentor > > mapping. > > 4. FC SIG, D&I WG, Mentoring team use that list and publish in all > > possible place. Redirect new contributors to that list depends on the > > contributor interested area. This will be the key role to make help- > > wanted-list success. > > > > -gmann > > > > > Thierry Carrez (ttx) > > > > > > -Kendall (diablo_rojo) > > > > > > > > I feel like there is a bit of a disconnect between what the TC is asking for > and what the current mentoring organizations are designed to provide. Thierry > framed this as a "peer-mentoring offered" list, but mentoring doesn't quite > capture everything that's needed. > > Mentorship programs like Outreachy, cohort mentoring, and the First Contact SIG > are oriented around helping new people quickstart into the community, getting > them up to speed on basics and helping them feel good about themselves and > their contributions. The hope is that happy first-timers eventually become > happy regular contributors which will eventually be a benefit to the projects, > but the benefit to the projects is not the main focus. 
> > The way I see it, the TC Help Wanted list, as well as the new thing, is not > necessarily oriented around newcomers but is instead advocating for the > projects and meant to help project teams thrive by getting committed long-term > maintainers involved and invested in solving longstanding technical debt that > in some cases requires deep tribal knowledge to solve. It's not a thing for a > newbie to step into lightly and it's not something that can be solved by a > FC-liaison pointing at the contributor docs. Instead what's needed are mentors > who are willing to walk through that tribal knowledge with a new contributor > until they are equipped enough to help with the harder problems. > > For that reason I think neither the FC SIG or the mentoring cohort group, in > their current incarnations, are the right groups to be managing this. The FC > SIG's mission is "To provide a place for new contributors to come for > information and advice" which does not fit the long-term goal of the help > wanted list, and cohort mentoring's four topics ("your first patch", "first > CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term and deeply > technical requirements that a project-specific mentorship offering needs. > Either of those groups could be rescoped to fit with this new mission, and > there is certainly a lot of overlap, but my feeling is that this needs to be an > effort conducted by the TC because the TC is the group that advocates for the > projects. Thanks for writing it in a clear and with details which really help. +1 on not maintaining it on mentoring or FC group side. I completely agree with this. The main reason why I vote to maintain it on the projects side is that gives ownership to the right people who actually need help. "Project_X need help, so they create and maintain this list with proper mentor mapping". They eventually get help if they maintain it well with up to date item with mentor mapping. I am sure that most of the time, mentor to the list items will be from the project team (I assume very few cases can be from SIG or TC), which makes PTL or other leaders in the project team as good candidate to manage the list than any central team. This idea changes this list towards project wise help wanted from openstack wise help wanted ( *openstack-*help-wanted -> *project-*help-wanted ) which I feel should not be a concern as long as it serves the purpose of getting help and complete some item. Template or best place to link that list (in contributor doc or spec repo or in home page etc) is something we can discuss as a second step. TC maintaining this list is not so helpful or valuable in past. But Yes, the idea of peer-mentoring is something much-need pre-condition for such effort either it is corporate or open source. I am ok with trying it under TC with peer-mentoring idea and see how it goes. Key point will be how closely the project teams involved in this effort with TC. If we win in that step then, we can see some outcome. -gmann > > It's moreover not a thing that can be solved by another list of names. 
In addition > to naming someone willing to do the several hours per week of mentoring, > project teams that want help should be forced to come up with a specific > description of 1) what the project is, 2) what kind of person (experience or > interests) would be a good fit for the project, 3) specific work items with > completion criteria that needs to be done - and it can be extremely challenging > to reframe a project's longstanding issues in such concrete ways that make it > clear what steps are needed to tackle the problem. It should basically be an > advertisement that makes the project sound interesting and challenging and > do-able, because the current help-wanted list and liaison lists and mentoring > topics are too vague to entice anyone to step up. > > Finally, I rather disagree that this should be something maintained as a page in > individual projects' contributor guides, although we should certainly be > encouraging teams to keep those guides up to date. It should be compiled by the > TC and regularly updated by the project liaisons within the TC. A link to a > contributor guide on docs.openstack.org doesn't give anyone an idea of what > projects need the most help nor does it empower people to believe they can help > by giving them an understanding of what the "job" entails. > > [1] https://wiki.openstack.org/wiki/Mentoring#Cohort_Mentoring > > Colleen > > From ramishra at redhat.com Wed Feb 13 03:37:20 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 13 Feb 2019 09:07:20 +0530 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Tue, Feb 12, 2019 at 7:48 PM NANTHINI A A wrote: > Hi , > > I followed the example given in random.yaml .But getting below error > .Can you please tell me what is wrong here . > > > > root at cic-1:~# heat stack-create test -f main.yaml > > WARNING (shell) "heat stack-create" is deprecated, please use "openstack > stack create" instead > > ERROR: Property error: : > resources.rg.resources[0].properties: : Unknown Property names > > root at cic-1:~# cat main.yaml > > heat_template_version: 2015-04-30 > > > > description: Shows how to look up list/map values by group index > > > > parameters: > > net_names: > > type: json > > default: > > - network1: NetworkA1 > > network2: NetworkA2 > > - network1: NetworkB1 > > network2: NetworkB2 > > > > > > resources: > > rg: > > type: OS::Heat::ResourceGroup > > properties: > > count: 3 > > resource_def: > > type: nested.yaml > > properties: > > # Note you have to pass the index and the entire list into the > > # nested template, resolving via %index% doesn't work directly > > # in the get_param here > > index: "%index%" > names: {get_param: net_names} > property name should be same as parameter name in you nested.yaml > > > outputs: > > all_values: > > value: {get_attr: [rg, value]} > > root at cic-1:~# cat nested.yaml > > heat_template_version: 2013-05-23 > > description: > > This is the template for I&V R6.1 base configuration to create neutron > resources other than sg and vm for vyos vms > > parameters: > > net_names: > changing this to 'names' should fix your error. 
> type: json > > index: > > type: number > > resources: > > neutron_Network_1: > > type: OS::Neutron::Net > > properties: > > name: {get_param: [names, {get_param: index}, network1]} > > > > > > Thanks, > > A.Nanthini > > > > *From:* Rabi Mishra [mailto:ramishra at redhat.com] > *Sent:* Tuesday, February 12, 2019 6:34 PM > *To:* NANTHINI A A > *Cc:* hjensas at redhat.com; openstack-dev at lists.openstack.org > *Subject:* Re: [Heat] Reg accessing variables of resource group heat api > > > > On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A > wrote: > > Hi , > > May I know in the following example given > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 > > what is the parameter type ? > > > > json > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Feb 13 08:31:01 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 13 Feb 2019 09:31:01 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: Many thanks for your help. Ignazio Il giorno mer 13 feb 2019 alle ore 00:53 Goutham Pacha Ravi < gouthampravi at gmail.com> ha scritto: > On Mon, Feb 11, 2019 at 10:18 AM Ignazio Cassano > wrote: > > > > Hello, the manila replication dr works fine on netapp ontap following > your suggestions. :-) > > Source backends (svm for netapp) must belong to a different destination > backends availability zone, but in a single manila.conf I cannot specify > more than one availability zone. For doing this I must create more share > servers ....one for each availability zone. > > Svm1 with avz1 > > Svm1-dr with avz1-dr > > ......... > > Are you agree??? > > Thanks & Regards > > Ignazio > > > > Yes, until the Stein release, you cannot specify multiple availability > zones for a single manila share manager service, even if your > deployment has multiple storage backends. However, you can run another > manila share manager process with a different > "storage_availability_zone" parameter as you described. > > > > > > Il giorno Gio 7 Feb 2019 06:11 Ignazio Cassano > ha scritto: > >> > >> Many thanks. > >> I'll check today. > >> Ignazio > >> > >> > >> Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi < > gouthampravi at gmail.com> ha scritto: > >>> > >>> On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: > >>> > > >>> > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: > >>> > >The 2 openstack Installations do not share anything. The manila on > each one > >>> > >works on different netapp storage, but the 2 netapp can be > synchronized. > >>> > >Site A with an openstack instalkation and netapp A. > >>> > >Site B with an openstack with netapp B. > >>> > >Netapp A and netapp B can be synchronized via network. > >>> > >Ignazio > >>> > > >>> > OK, thanks. > >>> > > >>> > You can likely get the share data and its netapp metadata to show up > >>> > on B via replication and (gouthamr may explain details) but you will > >>> > lose all the Openstack/manila information about the share unless > >>> > Openstack database info (more than just manila tables) is imported. > >>> > That may be OK foryour use case. 
> >>> > > >>> > -- Tom > >>> > >>> > >>> Checking if I understand your request correctly, you have setup > >>> manila's "dr" replication in OpenStack A and now want to move your > >>> shares from OpenStack A to OpenStack B's manila. Is this correct? > >>> > >>> If yes, you must: > >>> * Promote your replicas > >>> - this will make the mirrored shares available. This action does > >>> not delete the old "primary" shares though, you need to clean them up > >>> yourself, because manila will attempt to reverse the replication > >>> relationships if the primary shares are still accessible > >>> * Note the export locations and Unmanage your shares from OpenStack > A's manila > >>> * Manage your shares in OpenStack B's manila with the export locations > >>> you noted. > >>> > >>> > > > >>> > > > >>> > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha > scritto: > >>> > > > >>> > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > >>> > >> >Hello Tom, I think cases you suggested do not meet my needs. > >>> > >> >I have an openstack installation A with a fas netapp A. > >>> > >> >I have another openstack installation B with fas netapp B. > >>> > >> >I would like to use manila replication dr. > >>> > >> >If I replicate manila volumes from A to B the manila db on B > does not > >>> > >> >knows anything about the replicated volume but only the > backends on > >>> > >> netapp > >>> > >> >B. Can I discover replicated volumes on openstack B? > >>> > >> >Or I must modify the manila db on B? > >>> > >> >Regards > >>> > >> >Ignazio > >>> > >> > >>> > >> I guess I don't understand your use case. Do Openstack > installation A > >>> > >> and Openstack installation B know *anything* about one another? > For > >>> > >> example, are their keystone and neutron databases somehow > synced? Are > >>> > >> they going to be operative for the same set of manila shares at > the > >>> > >> same time, or are you contemplating a migration of the shares from > >>> > >> installation A to installation B? > >>> > >> > >>> > >> Probably it would be helpful to have a statement of the problem > that > >>> > >> you intend to solve before we consider the potential mechanisms > for > >>> > >> solving it. > >>> > >> > >>> > >> Cheers, > >>> > >> > >>> > >> -- Tom > >>> > >> > >>> > >> > > >>> > >> > > >>> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha > scritto: > >>> > >> > > >>> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >>> > >> >> >Thanks Goutham. > >>> > >> >> >If there are not mantainers for this driver I will switch on > ceph and > >>> > >> or > >>> > >> >> >netapp. > >>> > >> >> >I am already using netapp but I would like to export shares > from an > >>> > >> >> >openstack installation to another. > >>> > >> >> >Since these 2 installations do non share any openstack > component and > >>> > >> have > >>> > >> >> >different openstack database, I would like to know it is > possible . > >>> > >> >> >Regards > >>> > >> >> >Ignazio > >>> > >> >> > >>> > >> >> Hi Ignazio, > >>> > >> >> > >>> > >> >> If by "export shares from an openstack installation to > another" you > >>> > >> >> mean removing them from management by manila in installation A > and > >>> > >> >> instead managing them by manila in installation B then you can > do that > >>> > >> >> while leaving them in place on your Net App back end using the > manila > >>> > >> >> "manage-unmanage" administrative commands. Here's some > documentation > >>> > >> >> [1] that should be helpful. > >>> > >> >> > >>> > >> >> If on the other hand by "export shares ... 
to another" you > mean to > >>> > >> >> leave the shares under management of manila in installation A > but > >>> > >> >> consume them from compute instances in installation B it's all > about > >>> > >> >> the networking. One can use manila to "allow-access" to > consumers of > >>> > >> >> shares anywhere but the consumers must be able to reach the > "export > >>> > >> >> locations" for those shares and mount them. > >>> > >> >> > >>> > >> >> Cheers, > >>> > >> >> > >>> > >> >> -- Tom Barron > >>> > >> >> > >>> > >> >> [1] > >>> > >> >> > >>> > >> > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >>> > >> >> > > >>> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > >>> > >> >> gouthampravi at gmail.com> > >>> > >> >> >ha scritto: > >>> > >> >> > > >>> > >> >> >> Hi Ignazio, > >>> > >> >> >> > >>> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >>> > >> >> >> wrote: > >>> > >> >> >> > > >>> > >> >> >> > Hello All, > >>> > >> >> >> > I installed manila on my queens openstack based on centos > 7. > >>> > >> >> >> > I configured two servers with glusterfs replocation and > ganesha > >>> > >> nfs. > >>> > >> >> >> > I configured my controllers octavia,conf but when I try > to create a > >>> > >> >> share > >>> > >> >> >> > the manila scheduler logs reports: > >>> > >> >> >> > > >>> > >> >> >> > Failed to schedule create_share: No valid host was found. > Failed to > >>> > >> >> find > >>> > >> >> >> a weighted host, the last executed filter was > CapabilitiesFilter.: > >>> > >> >> >> NoValidHost: No valid host was found. Failed to find a > weighted host, > >>> > >> >> the > >>> > >> >> >> last executed filter was CapabilitiesFilter. > >>> > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >>> > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > >>> > >> >> 89f76bc5de5545f381da2c10c7df7f15 > >>> > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message > record for > >>> > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >>> > >> >> >> > >>> > >> >> >> > >>> > >> >> >> The scheduler failure points out that you have a mismatch in > >>> > >> >> >> expectations (backend capabilities vs share type > extra-specs) and > >>> > >> >> >> there was no host to schedule your share to. So a few > things to check > >>> > >> >> >> here: > >>> > >> >> >> > >>> > >> >> >> - What is the share type you're using? Can you list the > share type > >>> > >> >> >> extra-specs and confirm that the backend (your GlusterFS > storage) > >>> > >> >> >> capabilities are appropriate with whatever you've set up as > >>> > >> >> >> extra-specs ($ manila pool-list --detail)? > >>> > >> >> >> - Is your backend operating correctly? You can list the > manila > >>> > >> >> >> services ($ manila service-list) and see if the backend is > both > >>> > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance > there was a > >>> > >> >> >> problem with the driver initialization, please enable debug > logging, > >>> > >> >> >> and look at the log file for the manila-share service, you > might see > >>> > >> >> >> why and be able to fix it. > >>> > >> >> >> > >>> > >> >> >> > >>> > >> >> >> Please be aware that we're on a look out for a maintainer > for the > >>> > >> >> >> GlusterFS driver for the past few releases. We're open to > bug fixes > >>> > >> >> >> and maintenance patches, but there is currently no active > maintainer > >>> > >> >> >> for this driver. 
> >>> > >> >> >> > >>> > >> >> >> > >>> > >> >> >> > I did not understand if controllers node must be > connected to the > >>> > >> >> >> network where shares must be exported for virtual machines, > so my > >>> > >> >> glusterfs > >>> > >> >> >> are connected on the management network where openstack > controllers > >>> > >> are > >>> > >> >> >> conencted and to the network where virtual machine are > connected. > >>> > >> >> >> > > >>> > >> >> >> > My manila.conf section for glusterfs section is the > following > >>> > >> >> >> > > >>> > >> >> >> > [gluster-manila565] > >>> > >> >> >> > driver_handles_share_servers = False > >>> > >> >> >> > share_driver = > manila.share.drivers.glusterfs.GlusterfsShareDriver > >>> > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > >>> > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >>> > >> >> >> > glusterfs_ganesha_server_username = root > >>> > >> >> >> > glusterfs_nfs_server_type = Ganesha > >>> > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >>> > >> >> >> > #glusterfs_servers = root at 10.102.185.19 > >>> > >> >> >> > ganesha_config_dir = /etc/ganesha > >>> > >> >> >> > > >>> > >> >> >> > > >>> > >> >> >> > PS > >>> > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose > endpoint > >>> > >> >> >> > > >>> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack > where > >>> > >> virtual > >>> > >> >> >> machines are connected. > >>> > >> >> >> > > >>> > >> >> >> > The gluster servers are connected on both. > >>> > >> >> >> > > >>> > >> >> >> > > >>> > >> >> >> > Any help, please ? > >>> > >> >> >> > > >>> > >> >> >> > Ignazio > >>> > >> >> >> > >>> > >> >> > >>> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Wed Feb 13 08:35:47 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Wed, 13 Feb 2019 17:35:47 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <20190212092430.34q6zlr47jj6uq4c@localhost> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> <20190212092430.34q6zlr47jj6uq4c@localhost> Message-ID: As mentioned in Gorka, sql connection is using pymysql. And I increased max_pool_size to 50(I think gorka mistaken max_pool_size to max_retries.), but it was the same that the cinder-volume stucked from the time that 4~50 volumes were deleted. There seems to be a problem with the cinder rbd volume driver, so I tested to delete 200 volumes continously by used only RBDClient and RBDProxy. There was no problem at this time. I think there is some code in the cinder-volume that causes a hang but it's too hard to find now. Thanks. 2019년 2월 12일 (화) 오후 6:24, Gorka Eguileor 님이 작성: > On 12/02, Arne Wiebalck wrote: > > Jae, > > > > One other setting that caused trouble when bulk deleting cinder volumes > was the > > DB connection string: we did not configure a driver and hence used the > Python > > mysql wrapper instead … essentially changing > > > > connection = mysql://cinder:@:/cinder > > > > to > > > > connection = mysql+pymysql://cinder:@:/cinder > > > > solved the parallel deletion issue for us. > > > > All details in the last paragraph of [1]. > > > > HTH! > > Arne > > > > [1] > https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > > > > Good point, using a C mysql connection library will induce thread > starvation. 
This was thoroughly discussed, and the default changed, > like 2 years ago... So I assumed we all changed that. > > Something else that could be problematic when receiving many concurrent > requests on any Cinder service is the number of concurrent DB > connections, although we also changed this a while back to 50. This is > set as sql_max_retries or max_retries (depending on the version) in the > "[database]" section. > > Cheers, > Gorka. > > > > > > > > > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > > > > > Hello, > > > > > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I > wanted to have good results, > > > but this time I did not get a response after removing 41 volumes. This > environment variable did not fix > > > the cinder-volume stopping. > > > > > > Restarting the stopped cinder-volume will delete all volumes that are > in deleting state while running the clean_up function. > > > Only one volume in the deleting state, I force the state of this > volume to be available, and then delete it, all volumes will be deleted. > > > > > > This result was the same for 3 consecutive times. After removing > dozens of volumes, the cinder-volume was down, > > > and after the restart of the service, 199 volumes were deleted and one > volume was manually erased. > > > > > > If you have a different approach to solving this problem, please let > me know. > > > > > > Thanks. > > > > > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > > > Jae, > > > > > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > > >> > > >> Arne, > > >> > > >> I saw the messages like ''moving volume to trash" in the > cinder-volume logs and the peridic task also reports > > >> like "Deleted from trash for backend ''" > > >> > > >> The patch worked well when clearing a small number of volumes. This > happens only when I am deleting a large > > >> number of volumes. > > > > > > Hmm, from cinder’s point of view, the deletion should be more or less > instantaneous, so it should be able to “delete” > > > many more volumes before getting stuck. > > > > > > The periodic task, however, will go through the volumes one by one, so > if you delete many at the same time, > > > volumes may pile up in the trash (for some time) before the tasks gets > round to delete them. This should not affect > > > c-vol, though. > > > > > >> I will try to adjust the number of thread pools by adjusting the > environment variables with your advices > > >> > > >> Do you know why the cinder-volume hang does not occur when create a > volume, but only when delete a volume? > > > > > > Deleting a volume ties up a thread for the duration of the deletion > (which is synchronous and can hence take very > > > long for ). If you have too many deletions going on at the same time, > you run out of threads and c-vol will eventually > > > time out. FWIU, creation basically works the same way, but it is > almost instantaneous, hence the risk of using up all > > > threads is simply lower (Gorka may correct me here :-). > > > > > > Cheers, > > > Arne > > > > > >> > > >> > > >> Thanks. > > >> > > >> > > >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > > >> Jae, > > >> > > >> To make sure deferred deletion is properly working: when you delete > individual large volumes > > >> with data in them, do you see that > > >> - the volume is fully “deleted" within a few seconds, ie. not staying > in ‘deleting’ for a long time? > > >> - that the volume shows up in trash (with “rbd trash ls”)? 
> > >> - the periodic task reports it is deleting volumes from the trash? > > >> > > >> Another option to look at is “backend_native_threads_pool_size": this > will increase the number > > >> of threads to work on deleting volumes. It is independent from > deferred deletion, but can also > > >> help with situations where Cinder has more work to do than it can > cope with at the moment. > > >> > > >> Cheers, > > >> Arne > > >> > > >> > > >> > > >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > > >>> > > >>> Yes, I added your code to pike release manually. > > >>> > > >>> > > >>> > > >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 > 작성: > > >>> Hi Jae, > > >>> > > >>> You back ported the deferred deletion patch to Pike? > > >>> > > >>> Cheers, > > >>> Arne > > >>> > > >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > >>> > > > >>> > Hello, > > >>> > > > >>> > I recently ran a volume deletion test with deferred deletion > enabled on the pike release. > > >>> > > > >>> > We experienced a cinder-volume hung when we were deleting a large > amount of the volume in which the data was actually written(I make 15GB > file in every volumes), and we thought deferred deletion would solve it. > > >>> > > > >>> > However, while deleting 200 volumes, after 50 volumes, the > cinder-volume downed as before. In my opinion, the trash_move api does not > seem to work properly when removing multiple volumes, just like remove api. > > >>> > > > >>> > If these test results are my fault, please let me know the correct > test method. > > >>> > > > >>> > > >>> -- > > >>> Arne Wiebalck > > >>> CERN IT > > >>> > > >> > > >> -- > > >> Arne Wiebalck > > >> CERN IT > > >> > > > > > > -- > > > Arne Wiebalck > > > CERN IT > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Wed Feb 13 08:50:06 2019 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 13 Feb 2019 09:50:06 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> Message-ID: Hi, thanks for the fast answers. I asked our ADFS Administrators if they could provide some logs to see whats going wrong, but they are unable to deliver these. So I installed keycloak and switched to OpenID Connect. Im (again) able to connect via Horizon SSO, but when I try to use v3oidcpassword in the CLI Im running into https://bugs.launchpad.net/python-openstackclient/+bug/1648580 I already added the suggested --os-client-secret without luck. Updating to latest python-versions.. pip install -U python-keystoneclient pip install -U python-openstackclient didnt change anything. Any ideas what to try next? Offtopic: Seems like https://groups.google.com/forum/#!topic/mod_auth_openidc/qGE1DGQCTMY is right. I had to change the RedirectURI to geht OpenIDConnect working with Keystone. 
The sample config of https://docs.openstack.org/keystone/rocky/advanced-topics/federation/websso.html is *not working for me* Fabian Am 11.02.19 um 17:18 schrieb Colleen Murphy: > Forwarding back to list > > On Mon, Feb 11, 2019, at 5:11 PM, Blake Covarrubias wrote: >>> On Feb 11, 2019, at 6:19 AM, Colleen Murphy wrote: >>> >>> Hi Fabian, >>> >>> On Mon, Feb 11, 2019, at 12:58 PM, Fabian Zimmermann wrote: >>>> Hi, >>>> >>>> Im currently trying to implement some way to do a SSO against our >>>> ActiveDirectory. I already tried SAMLv2 and OpenID Connect. >>>> >>>> Im able to sign in via Horizon, but im unable to find a working way on cli. >>>> >>>> Already tried v3adfspassword and v3oidcpassword, but im unable to get >>>> them working. >>>> >>>> Any hints / links / docs where to find more information? >>>> >>>> Anyone using this kind of setup and willing to share KnowHow? >>>> >>>> Thanks a lot, >>>> >>>> Fabian Zimmermann >>> >>> We have an example of authenticating with the CLI here: >>> >>> https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#authenticating >>> >>> That only covers the regular SAML2.0 ECP type of authentication, which I guess won't work with ADFS, and we seem to have zero ADFS-specific documentation. >>> >>> From the keystoneauth plugin code, it looks like you need to set identity-provider-url, service-provider-endpoint, service-provider-entity-id, username, password, identity-provider, and protocol (I'm getting that from the loader classes[1][2]). Is that the information you're looking for, or can you give more details on what specifically isn't working? >>> >>> Colleen >>> >>> [1] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/identity.py#n104 >>> [2] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/extras/_saml2/_loading.py#n45 >>> >> >> Fabian, >> >> To add a bit more info, the AD FS plugin essentially uses IdP-initiated >> sign-on. The identity provider URL is where the initial authentication >> request to AD FS will be sent. An example of this would be >> https://HOSTNAME/adfs/services/trust/13/usernamemixed >> . The service >> provider’s entity ID must also be sent in the request so that AD FS >> knows which Relying Party Trust to associate with the request. >> >> AD FS will provide a SAML assertion upon successful authentication. The >> service provider endpoint is the URL of the Assertion Consumer Service. >> If you’re using Shibboleth on the SP, this would be >> https://HOSTNAME/Shibboleth.sso/ADFS >> . >> >> Note: The service-provider-entity-id can be omitted if it is the same >> value as the service-provider-endpoint (or Assertion Consumer Service >> URL). >> >> Hope this helps. >> >> — >> Blake Covarrubias >> > From Viktor_Shulhin at jabil.com Wed Feb 13 09:04:19 2019 From: Viktor_Shulhin at jabil.com (Viktor Shulhin) Date: Wed, 13 Feb 2019 09:04:19 +0000 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors (Giuseppe Sannino) In-Reply-To: References: Message-ID: Hi Giuseppe, Seems like deployments with Ubuntu base distros are broken. I didn't find any solution for that. I have deployed Rocky with Centos based distro on Ubuntu 16.04 host recently. Some parameters from /etc/kolla/globals.yml are: kolla_base_distro: "centos" kolla_install_type: "binary" openstack_release: "rocky" >Hi all, >need your help. >I'm trying to deploy Openstack "Queens" via kolla on a multinode system (1 >controller/kolla host + 1 compute). 
>I tried with both binary and source packages and I'm using "ubuntu" as >base_distro. From ildiko.vancsa at gmail.com Wed Feb 13 09:07:06 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 13 Feb 2019 10:07:06 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> References: <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> Message-ID: <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> > On 2019. Feb 11., at 23:26, Adam Spiers wrote: > [snip…] > >> To help with all this I would start the experiment with wiki pages >> and etherpads as these are all materials you can point to without too >> much formality to follow so the goals, drivers, supporters and >> progress are visible to everyone who’s interested and to the TC to >> follow-up on. >> >> Do we expect an approval process to help with or even drive either of >> the crucial steps I listed above? > > I'm not sure if it would help. But I agree that visibility is > important, and by extension also discoverability. To that end I think > it would be worth hosting a central list of popup initiatives > somewhere which links to the available materials for each initiative. > Maybe it doesn't matter too much whether that central list is simply a > wiki page or a static web page managed by Gerrit under a governance > repo or similar. I would start with a wiki page as it stores history as well and it’s easier to edit. Later on if we feel the need to be more formal we can move to a static web page and use Gerrit. Thanks, Ildikó From geguileo at redhat.com Wed Feb 13 09:37:24 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 13 Feb 2019 10:37:24 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> <20190212092430.34q6zlr47jj6uq4c@localhost> Message-ID: <20190213093724.4hrp2u344zjsfj4v@localhost> On 13/02, Jae Sang Lee wrote: > As mentioned in Gorka, sql connection is using pymysql. > > And I increased max_pool_size to 50(I think gorka mistaken max_pool_size to > max_retries.), My bad, I meant "max_overflow", which was changed a while back to 50 (though I don't remember when). > but it was the same that the cinder-volume stucked from the time that 4~50 > volumes were deleted. > > There seems to be a problem with the cinder rbd volume driver, so I tested > to delete 200 volumes continously > by used only RBDClient and RBDProxy. There was no problem at this time. I assume you tested it using eventlets, right? Cheers, Gorka. > > I think there is some code in the cinder-volume that causes a hang but it's > too hard to find now. > > Thanks. 
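In case it is useful to anyone following along, the settings that have come
up in this thread end up in cinder.conf roughly as sketched below. Treat it
purely as an illustration: the backend section name, credentials and pool
size are placeholders, and whether enable_deferred_deletion is available
depends on the Cinder version you are running.

  [database]
  # pure-Python driver; the C mysql library starves eventlet threads
  connection = mysql+pymysql://cinder:<password>@<db-host>/cinder
  # allow more concurrent DB connections than the default
  max_overflow = 50

  [rbd-backend]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  # native threads doing the actual RBD work (deletes run here)
  backend_native_threads_pool_size = 60
  # move deleted volumes to the RBD trash instead of deleting inline
  enable_deferred_deletion = True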
> > 2019년 2월 12일 (화) 오후 6:24, Gorka Eguileor 님이 작성: > > > On 12/02, Arne Wiebalck wrote: > > > Jae, > > > > > > One other setting that caused trouble when bulk deleting cinder volumes > > was the > > > DB connection string: we did not configure a driver and hence used the > > Python > > > mysql wrapper instead … essentially changing > > > > > > connection = mysql://cinder:@:/cinder > > > > > > to > > > > > > connection = mysql+pymysql://cinder:@:/cinder > > > > > > solved the parallel deletion issue for us. > > > > > > All details in the last paragraph of [1]. > > > > > > HTH! > > > Arne > > > > > > [1] > > https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > > > > > > > Good point, using a C mysql connection library will induce thread > > starvation. This was thoroughly discussed, and the default changed, > > like 2 years ago... So I assumed we all changed that. > > > > Something else that could be problematic when receiving many concurrent > > requests on any Cinder service is the number of concurrent DB > > connections, although we also changed this a while back to 50. This is > > set as sql_max_retries or max_retries (depending on the version) in the > > "[database]" section. > > > > Cheers, > > Gorka. > > > > > > > > > > > > > > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > > > > > > > Hello, > > > > > > > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I > > wanted to have good results, > > > > but this time I did not get a response after removing 41 volumes. This > > environment variable did not fix > > > > the cinder-volume stopping. > > > > > > > > Restarting the stopped cinder-volume will delete all volumes that are > > in deleting state while running the clean_up function. > > > > Only one volume in the deleting state, I force the state of this > > volume to be available, and then delete it, all volumes will be deleted. > > > > > > > > This result was the same for 3 consecutive times. After removing > > dozens of volumes, the cinder-volume was down, > > > > and after the restart of the service, 199 volumes were deleted and one > > volume was manually erased. > > > > > > > > If you have a different approach to solving this problem, please let > > me know. > > > > > > > > Thanks. > > > > > > > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > > > > Jae, > > > > > > > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > > > >> > > > >> Arne, > > > >> > > > >> I saw the messages like ''moving volume to trash" in the > > cinder-volume logs and the peridic task also reports > > > >> like "Deleted from trash for backend ''" > > > >> > > > >> The patch worked well when clearing a small number of volumes. This > > happens only when I am deleting a large > > > >> number of volumes. > > > > > > > > Hmm, from cinder’s point of view, the deletion should be more or less > > instantaneous, so it should be able to “delete” > > > > many more volumes before getting stuck. > > > > > > > > The periodic task, however, will go through the volumes one by one, so > > if you delete many at the same time, > > > > volumes may pile up in the trash (for some time) before the tasks gets > > round to delete them. This should not affect > > > > c-vol, though. > > > > > > > >> I will try to adjust the number of thread pools by adjusting the > > environment variables with your advices > > > >> > > > >> Do you know why the cinder-volume hang does not occur when create a > > volume, but only when delete a volume? 
> > > > > > > > Deleting a volume ties up a thread for the duration of the deletion > > (which is synchronous and can hence take very > > > > long for ). If you have too many deletions going on at the same time, > > you run out of threads and c-vol will eventually > > > > time out. FWIU, creation basically works the same way, but it is > > almost instantaneous, hence the risk of using up all > > > > threads is simply lower (Gorka may correct me here :-). > > > > > > > > Cheers, > > > > Arne > > > > > > > >> > > > >> > > > >> Thanks. > > > >> > > > >> > > > >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > > > >> Jae, > > > >> > > > >> To make sure deferred deletion is properly working: when you delete > > individual large volumes > > > >> with data in them, do you see that > > > >> - the volume is fully “deleted" within a few seconds, ie. not staying > > in ‘deleting’ for a long time? > > > >> - that the volume shows up in trash (with “rbd trash ls”)? > > > >> - the periodic task reports it is deleting volumes from the trash? > > > >> > > > >> Another option to look at is “backend_native_threads_pool_size": this > > will increase the number > > > >> of threads to work on deleting volumes. It is independent from > > deferred deletion, but can also > > > >> help with situations where Cinder has more work to do than it can > > cope with at the moment. > > > >> > > > >> Cheers, > > > >> Arne > > > >> > > > >> > > > >> > > > >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > > > >>> > > > >>> Yes, I added your code to pike release manually. > > > >>> > > > >>> > > > >>> > > > >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 > > 작성: > > > >>> Hi Jae, > > > >>> > > > >>> You back ported the deferred deletion patch to Pike? > > > >>> > > > >>> Cheers, > > > >>> Arne > > > >>> > > > >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > >>> > > > > >>> > Hello, > > > >>> > > > > >>> > I recently ran a volume deletion test with deferred deletion > > enabled on the pike release. > > > >>> > > > > >>> > We experienced a cinder-volume hung when we were deleting a large > > amount of the volume in which the data was actually written(I make 15GB > > file in every volumes), and we thought deferred deletion would solve it. > > > >>> > > > > >>> > However, while deleting 200 volumes, after 50 volumes, the > > cinder-volume downed as before. In my opinion, the trash_move api does not > > seem to work properly when removing multiple volumes, just like remove api. > > > >>> > > > > >>> > If these test results are my fault, please let me know the correct > > test method. > > > >>> > > > > >>> > > > >>> -- > > > >>> Arne Wiebalck > > > >>> CERN IT > > > >>> > > > >> > > > >> -- > > > >> Arne Wiebalck > > > >> CERN IT > > > >> > > > > > > > > -- > > > > Arne Wiebalck > > > > CERN IT > > > > > > > > > From stig.openstack at telfer.org Wed Feb 13 09:52:43 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 13 Feb 2019 09:52:43 +0000 Subject: [scientific-sig] IRC Meeting 1100UTC: ISC participation, Summit Forum, GPFS follow-up Message-ID: Hi All - We have a Scientific SIG IRC meeting at 1100 UTC (just over an hour away) in channel #openstack-meeting. Everyone is welcome. This week’s agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_13th_2019 We’d like to continue the discussions on SIG participation in up-and-coming conferences, and also follow up on user experiences of the GPFS/Manila driver. If anyone has other items to raise, please do add them to the agenda. 
Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Wed Feb 13 10:07:54 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Wed, 13 Feb 2019 19:07:54 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <20190213093724.4hrp2u344zjsfj4v@localhost> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> <20190212092430.34q6zlr47jj6uq4c@localhost> <20190213093724.4hrp2u344zjsfj4v@localhost> Message-ID: Yes, I also used eventlets because RBDPool call eventlet.tpool. Anyway, and finally I found the cause of the problem. That was because the file descriptor reached its limit. My test environment was ulimit 1024, and every time I deleted a volume, the fd number increased by 3,40, and when it exceeded 1024, the cinder-volume no longer worked exactly. I changed the ulimit to a large value so fd exceeded 2300 until we erased 200 volumes. When all the volumes were erased, fd also decreased normally. In the end, I think there will be an increase in fd in the source code that deletes the volume. This is because fd remains stable during volume creation. Thanks, Jaesang. 2019년 2월 13일 (수) 오후 6:37, Gorka Eguileor 님이 작성: > On 13/02, Jae Sang Lee wrote: > > As mentioned in Gorka, sql connection is using pymysql. > > > > And I increased max_pool_size to 50(I think gorka mistaken max_pool_size > to > > max_retries.), > > My bad, I meant "max_overflow", which was changed a while back to 50 > (though I don't remember when). > > > > > but it was the same that the cinder-volume stucked from the time that > 4~50 > > volumes were deleted. > > > > There seems to be a problem with the cinder rbd volume driver, so I > tested > > to delete 200 volumes continously > > by used only RBDClient and RBDProxy. There was no problem at this time. > > I assume you tested it using eventlets, right? > > Cheers, > Gorka. > > > > > > I think there is some code in the cinder-volume that causes a hang but > it's > > too hard to find now. > > > > Thanks. > > > > 2019년 2월 12일 (화) 오후 6:24, Gorka Eguileor 님이 작성: > > > > > On 12/02, Arne Wiebalck wrote: > > > > Jae, > > > > > > > > One other setting that caused trouble when bulk deleting cinder > volumes > > > was the > > > > DB connection string: we did not configure a driver and hence used > the > > > Python > > > > mysql wrapper instead … essentially changing > > > > > > > > connection = mysql://cinder:@:/cinder > > > > > > > > to > > > > > > > > connection = mysql+pymysql://cinder:@:/cinder > > > > > > > > solved the parallel deletion issue for us. > > > > > > > > All details in the last paragraph of [1]. > > > > > > > > HTH! > > > > Arne > > > > > > > > [1] > > > > https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > > > > > > > > > > Good point, using a C mysql connection library will induce thread > > > starvation. This was thoroughly discussed, and the default changed, > > > like 2 years ago... So I assumed we all changed that. > > > > > > Something else that could be problematic when receiving many concurrent > > > requests on any Cinder service is the number of concurrent DB > > > connections, although we also changed this a while back to 50. This is > > > set as sql_max_retries or max_retries (depending on the version) in the > > > "[database]" section. > > > > > > Cheers, > > > Gorka. 
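For anyone who wants to check the same thing on their own deployment, the
descriptor usage of the running cinder-volume process can be watched with
standard Linux tooling, for example (process name and paths may differ per
distro):

  # number of open file descriptors held by cinder-volume
  ls /proc/$(pgrep -of cinder-volume)/fd | wc -l
  # the limit the process is actually running with
  grep 'open files' /proc/$(pgrep -of cinder-volume)/limits

If the service runs under systemd, the limit is better raised with
LimitNOFILE= in the unit file (or a drop-in) than with a shell ulimit.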
> > > > > > > > > > > > > > > > > > > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > > > > > > > > > Hello, > > > > > > > > > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. > I > > > wanted to have good results, > > > > > but this time I did not get a response after removing 41 volumes. > This > > > environment variable did not fix > > > > > the cinder-volume stopping. > > > > > > > > > > Restarting the stopped cinder-volume will delete all volumes that > are > > > in deleting state while running the clean_up function. > > > > > Only one volume in the deleting state, I force the state of this > > > volume to be available, and then delete it, all volumes will be > deleted. > > > > > > > > > > This result was the same for 3 consecutive times. After removing > > > dozens of volumes, the cinder-volume was down, > > > > > and after the restart of the service, 199 volumes were deleted and > one > > > volume was manually erased. > > > > > > > > > > If you have a different approach to solving this problem, please > let > > > me know. > > > > > > > > > > Thanks. > > > > > > > > > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 > 작성: > > > > > Jae, > > > > > > > > > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > > > > >> > > > > >> Arne, > > > > >> > > > > >> I saw the messages like ''moving volume to trash" in the > > > cinder-volume logs and the peridic task also reports > > > > >> like "Deleted from trash for backend ''" > > > > >> > > > > >> The patch worked well when clearing a small number of volumes. > This > > > happens only when I am deleting a large > > > > >> number of volumes. > > > > > > > > > > Hmm, from cinder’s point of view, the deletion should be more or > less > > > instantaneous, so it should be able to “delete” > > > > > many more volumes before getting stuck. > > > > > > > > > > The periodic task, however, will go through the volumes one by > one, so > > > if you delete many at the same time, > > > > > volumes may pile up in the trash (for some time) before the tasks > gets > > > round to delete them. This should not affect > > > > > c-vol, though. > > > > > > > > > >> I will try to adjust the number of thread pools by adjusting the > > > environment variables with your advices > > > > >> > > > > >> Do you know why the cinder-volume hang does not occur when create > a > > > volume, but only when delete a volume? > > > > > > > > > > Deleting a volume ties up a thread for the duration of the deletion > > > (which is synchronous and can hence take very > > > > > long for ). If you have too many deletions going on at the same > time, > > > you run out of threads and c-vol will eventually > > > > > time out. FWIU, creation basically works the same way, but it is > > > almost instantaneous, hence the risk of using up all > > > > > threads is simply lower (Gorka may correct me here :-). > > > > > > > > > > Cheers, > > > > > Arne > > > > > > > > > >> > > > > >> > > > > >> Thanks. > > > > >> > > > > >> > > > > >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 > 작성: > > > > >> Jae, > > > > >> > > > > >> To make sure deferred deletion is properly working: when you > delete > > > individual large volumes > > > > >> with data in them, do you see that > > > > >> - the volume is fully “deleted" within a few seconds, ie. not > staying > > > in ‘deleting’ for a long time? > > > > >> - that the volume shows up in trash (with “rbd trash ls”)? > > > > >> - the periodic task reports it is deleting volumes from the trash? 
> > > > >> > > > > >> Another option to look at is “backend_native_threads_pool_size": > this > > > will increase the number > > > > >> of threads to work on deleting volumes. It is independent from > > > deferred deletion, but can also > > > > >> help with situations where Cinder has more work to do than it can > > > cope with at the moment. > > > > >> > > > > >> Cheers, > > > > >> Arne > > > > >> > > > > >> > > > > >> > > > > >>> On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: > > > > >>> > > > > >>> Yes, I added your code to pike release manually. > > > > >>> > > > > >>> > > > > >>> > > > > >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 > > > 작성: > > > > >>> Hi Jae, > > > > >>> > > > > >>> You back ported the deferred deletion patch to Pike? > > > > >>> > > > > >>> Cheers, > > > > >>> Arne > > > > >>> > > > > >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > > > >>> > > > > > >>> > Hello, > > > > >>> > > > > > >>> > I recently ran a volume deletion test with deferred deletion > > > enabled on the pike release. > > > > >>> > > > > > >>> > We experienced a cinder-volume hung when we were deleting a > large > > > amount of the volume in which the data was actually written(I make 15GB > > > file in every volumes), and we thought deferred deletion would solve > it. > > > > >>> > > > > > >>> > However, while deleting 200 volumes, after 50 volumes, the > > > cinder-volume downed as before. In my opinion, the trash_move api does > not > > > seem to work properly when removing multiple volumes, just like remove > api. > > > > >>> > > > > > >>> > If these test results are my fault, please let me know the > correct > > > test method. > > > > >>> > > > > > >>> > > > > >>> -- > > > > >>> Arne Wiebalck > > > > >>> CERN IT > > > > >>> > > > > >> > > > > >> -- > > > > >> Arne Wiebalck > > > > >> CERN IT > > > > >> > > > > > > > > > > -- > > > > > Arne Wiebalck > > > > > CERN IT > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ileixe at gmail.com Wed Feb 13 10:14:21 2019 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Wed, 13 Feb 2019 19:14:21 +0900 Subject: How *-conductor (database accessor) assure concurrency? Message-ID: Hi, guys. Today, I encountered strange behaviors and it raise me a simple question which I never concerned about. It's about 'how database accessors like *-conductor assure DB data consistency?' What I've seen is ModelsNotFound in Trove which meaning there is no sqlalchemy query result for the Trove DB model. I'm not sure what's happening before, but what I found is DB entry is actually existed but trove-conductor could not find it emitting exception continuously. At the same time, different trove-conductor would be working normally after restart. First, I thought that sqlalchemy ensures concurrency but after reading related document ( https://docs.sqlalchemy.org/en/latest/orm/session_basics.html#is-the-session-thread-safe), it does not guarantee at all, and user should take all responsibility to manage session. After, I thought oslo.concurrency is promising to ensure consistency by making lock inter process in that the name tells me. But I could not find any related code for locking. So.. What happens internally for *-manage? Any hints would be appreciated. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lyarwood at redhat.com Wed Feb 13 10:26:58 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 13 Feb 2019 10:26:58 +0000 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? In-Reply-To: References: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Message-ID: <20190213102658.sky5qp5vqcd3mtsv@lyarwood.usersys.redhat.com> On 12-02-19 14:35:48, Michael McCune wrote: > On Tue, Feb 12, 2019 at 2:20 PM Ed Leafe wrote: > > In the scenario described, there is nothing wrong at all with the > > servers handing the request. There is, however, a problem with the > > resource that the request is trying to work with. Of course, the > > advice in the docs to include enough in the payload for the client > > to understand the nature of the problem is critical, no matter which > > code is used. > > ++, i think this nuance is crucial to crafting the proper response > from the server. > > peace o/ Right, hopefully the current payload is useful enough. I can't include the actual hostname as the resize API that is also changed as a result of this is not admin only and we don't want to leak hostnames to users. For now I'm going to stick with 409 unless anyone can point to an example in n-api of us using 5xx for something similar. Thanks again, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From aspiers at suse.com Wed Feb 13 12:24:51 2019 From: aspiers at suse.com (Adam Spiers) Date: Wed, 13 Feb 2019 12:24:51 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> References: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> Message-ID: <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> Ildiko Vancsa wrote: >>On 2019. Feb 11., at 23:26, Adam Spiers wrote: >>[snip…] >> >>>To help with all this I would start the experiment with wiki pages >>>and etherpads as these are all materials you can point to without too >>>much formality to follow so the goals, drivers, supporters and >>>progress are visible to everyone who’s interested and to the TC to >>>follow-up on. >>> >>>Do we expect an approval process to help with or even drive either of >>>the crucial steps I listed above? >> >>I'm not sure if it would help. But I agree that visibility is >>important, and by extension also discoverability. To that end I think >>it would be worth hosting a central list of popup initiatives >>somewhere which links to the available materials for each initiative. >>Maybe it doesn't matter too much whether that central list is simply a >>wiki page or a static web page managed by Gerrit under a governance >>repo or similar. > >I would start with a wiki page as it stores history as well and it’s easier to edit. Later on if we feel the need to be more formal we can move to a static web page and use Gerrit. Sounds good to me. Do we already have some popup teams? If so we could set this up straight away. 
From fungi at yuggoth.org Wed Feb 13 12:31:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Feb 2019 12:31:01 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> References: <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> Message-ID: <20190213123101.2ploytjscls5lxx3@yuggoth.org> On 2019-02-13 12:24:51 +0000 (+0000), Adam Spiers wrote: [...] > Do we already have some popup teams? If so we could set this up > straight away. The folks driving cross-project work on image encryption were cited as an example earlier in this thread. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dabarren at gmail.com Wed Feb 13 13:12:35 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 13 Feb 2019 14:12:35 +0100 Subject: [kolla] Today meeting cancelled Feb-13-2019 Message-ID: Hi team, Due to job responsabilities i won't be able to hold today's meeting. Meeting topics haven't changed since last week. If have something wanna discuss please add to next week meeting or raise in the main IRC channel. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Wed Feb 13 13:48:38 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Wed, 13 Feb 2019 13:48:38 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , As per your suggested change ,I am able to create network A1,network A2 ; in second iteration network b1,network b2 .But I want to reduce number of lines of variable params.hence tried using repeat function .But it is not working .Can you please let me know what is wrong here . I am getting following error . root at cic-1:~# heat stack-create test2 -f main.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: AttributeError: : resources.rg: : 'NoneType' object has no attribute 'parameters' root at cic-1:~# cat main.yaml heat_template_version: 2015-04-30 description: Shows how to look up list/map values by group index parameters: sets: type: comma_delimited_list label: sets default: "A,B,C" net_names: type: json default: repeat: for each: <%set%>: {get_param: sets} template: - network1: Network<%set>1 network2: Network<%set>2 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: nested.yaml properties: # Note you have to pass the index and the entire list into the # nested template, resolving via %index% doesn't work directly # in the get_param here index: "%index%" names: {get_param: net_names} outputs: all_values: value: {get_attr: [rg, value]} root at cic-1:~# Thanks in advance. 
Regards, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Wednesday, February 13, 2019 9:07 AM To: NANTHINI A A Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Tue, Feb 12, 2019 at 7:48 PM NANTHINI A A > wrote: Hi , I followed the example given in random.yaml .But getting below error .Can you please tell me what is wrong here . root at cic-1:~# heat stack-create test -f main.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: Property error: : resources.rg.resources[0].properties: : Unknown Property names root at cic-1:~# cat main.yaml heat_template_version: 2015-04-30 description: Shows how to look up list/map values by group index parameters: net_names: type: json default: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: nested.yaml properties: # Note you have to pass the index and the entire list into the # nested template, resolving via %index% doesn't work directly # in the get_param here index: "%index%" names: {get_param: net_names} property name should be same as parameter name in you nested.yaml outputs: all_values: value: {get_attr: [rg, value]} root at cic-1:~# cat nested.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: net_names: changing this to 'names' should fix your error. type: json index: type: number resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [names, {get_param: index}, network1]} Thanks, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Tuesday, February 12, 2019 6:34 PM To: NANTHINI A A > Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A > wrote: Hi , May I know in the following example given parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 what is the parameter type ? json -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Wed Feb 13 13:55:02 2019 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 13 Feb 2019 07:55:02 -0600 Subject: User Committee Elections - call for candidates Message-ID: The candidacy period for the upcoming UC election is open. Three seats will be filled. If you are an AUC and are interested in running for one of them, now is the time to announce it. Here are the important dates: February 04 - February 17, 05:59 UTC: Open candidacy for UC positions February 18 - February 24, 11:59 UTC: UC elections (voting) Special thanks to our election officials - Mohamed Elsakhawy and Jonathan Proulx! You can find all the info for the election here: https://governance.openstack.org/uc/reference/uc-election-feb2019.html You announce your candidacy by sending an email to user-committee at lists.openstack.org with the subject line UC Candidacy. 
Here are some great examples of previous candidate letters if you are having difficulty: http://lists.openstack.org/pipermail/user-committee/2018-August/002700.html http://lists.openstack.org/pipermail/user-committee/2018-August/002713.html http://lists.openstack.org/pipermail/user-committee/2018-February/002556.html http://lists.openstack.org/pipermail/user-committee/2018-February/002563.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at medberry.net Wed Feb 13 14:22:47 2019 From: dave at medberry.net (David Medberry) Date: Wed, 13 Feb 2019 07:22:47 -0700 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org In-Reply-To: References: Message-ID: And select the thread view to see how much support has already weighed in (I haven't read all the comments, there could be some negative comments.) I endorse this and am kind of surprised we hadn't already done this. Thread: http://lists.openstack.org/pipermail/foundation/2019-February/thread.html#2681 On Wed, Feb 6, 2019 at 8:56 AM Thierry Carrez wrote: > > I started a thread on the Foundation mailing-list about the OSF becoming > an OSI affiliate org: > > http://lists.openstack.org/pipermail/foundation/2019-February/002680.html > > Please follow-up there is you have any concerns or questions. > > -- > Thierry Carrez (ttx) > From openstack at medberry.net Wed Feb 13 14:24:23 2019 From: openstack at medberry.net (David Medberry) Date: Wed, 13 Feb 2019 07:24:23 -0700 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org In-Reply-To: References: Message-ID: And using my openstack email this time... Select the thread view to see how much support has already weighed in (I haven't read all the comments, there could be some negative comments.) I endorse this and am kind of surprised we hadn't already done this. Thread: http://lists.openstack.org/pipermail/foundation/2019-February/thread.html#2681 On Wed, Feb 6, 2019 at 8:56 AM Thierry Carrez wrote: > > I started a thread on the Foundation mailing-list about the OSF becoming > an OSI affiliate org: > > http://lists.openstack.org/pipermail/foundation/2019-February/002680.html > > Please follow-up there is you have any concerns or questions. > > -- > Thierry Carrez (ttx) > From kchamart at redhat.com Wed Feb 13 14:43:32 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 13 Feb 2019 15:43:32 +0100 Subject: [ops][nova] Heads-up: Upcoming version bump for libvirt and QEMU in 'Stein' Message-ID: <20190213144332.GB26837@paraplu> Heya folks, This is a gentle reminder to note that we are about to[1] bump the minimum required versions for libvirt and QEMU in "Stein" to: libvirt : 3.0.0 QEMU : 2.8.0 We've picked[1][2] these "next" minimum versions for "Stein" in April 2018. And the last time we did a minimum libvirt / QEMU version bump was during the "Pike" release. So we hope the intervening one year's time since the last version change is sufficient enough to prepare for this upcoming version bump. If there aren't any valid objections to this in a week, we intend to go ahead and bump the versions, as we planned[1][2]. * * * PS: Ccing the zKVM CI maintainers to check if they're okay with this, as the zKVM CI isn't running anymore. 
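If you want to double-check what your compute nodes are currently running,
something along these lines is usually enough (the QEMU binary name and
path vary per distro, e.g. /usr/libexec/qemu-kvm on RHEL/CentOS):

  $ virsh version
  $ qemu-system-x86_64 --version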
[1] https://review.openstack.org/#/c/632507/ — libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION for "Stein" [2] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129048.html — RFC: Next minimum libvirt / QEMU versions for "Stein" release [3] http://git.openstack.org/cgit/openstack/nova/commit/?id=28d337b — Pick next minimum libvirt / QEMU versions for "Stein" -- /kashyap From fungi at yuggoth.org Wed Feb 13 14:54:28 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Feb 2019 14:54:28 +0000 Subject: [infra] StoryBoard Maintenance 2019-02-15 at 20:00 UTC Message-ID: <20190213145427.64uuaxb5i3cjtyy7@yuggoth.org> We're planning to perform a quick server upgrade/replacement and database move for the StoryBoard service at https://storyboard.openstack.org/ this Friday, 2019-02-15 starting at 20:00 UTC and hopefully only lasting 5-10 minutes. Please find us in the #openstack-infra channel on the Freenode IRC network or follow up to this message if you have any questions or concerns. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tobias.urdin at binero.se Wed Feb 13 14:54:47 2019 From: tobias.urdin at binero.se (Tobias Urdin) Date: Wed, 13 Feb 2019 15:54:47 +0100 Subject: [dev] [mistral] Deprecating keystone_authtoken/auth_uri Message-ID: <8d7c8349-d01b-a1a4-8706-35ef6e84410a@binero.se> Hello all Mistral people, This patch [1] has been up for a long time about the removal (which probably should a deprecation?) (and support for www_authenticate_uri) but it's been still for a long time and since deployment tools have moved away from using auth_uri I would guess that the Puppet OpenStack project is not the only project that has specific code to keep compatible with Mistral. Is there anybody active in Mistral that could look at it or have more information? Best regards Tobias [1] https://review.openstack.org/#/c/594187/ From doug at doughellmann.com Wed Feb 13 17:12:21 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Feb 2019 12:12:21 -0500 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> <2B9D8207-CFD6-4864-8B2A-C9D3B31D6588@leafe.com> Message-ID: Doug Hellmann writes: > Ed Leafe writes: > >> On Feb 12, 2019, at 12:22 PM, Doug Hellmann wrote: >>> >>> Assuming no prolonged debate, you'll need 7-10 days for the change to be >>> approved. If the team is ready to go now, I suggest you go ahead and >>> file the governance patch so we can start collecting the necessary >>> votes. >> >> Done: https://review.openstack.org/#/c/636416/ >> >> -- Ed Leafe > > After consulting with the election officials during the most recent TC > office hour [1], I have proposed shifting the PTL election deadline out > 2 days to allow the TC time to approve the new team [2]. Thank you to > Tony, Jeremy, and Kendall for accommodating the change. > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-13.log.html > [2] https://review.openstack.org/#/c/636510/ > > -- > Doug Since I've had a question or two about this, I want to clarify exactly what happened and what changed. First, two points to clarify the source of the issue: 1. The rules for voting on a governance change that would add a new team require us to give at least 7 days of consideration, regardless of the vote count, in case there are objections from the community. 2. 
The election officials had previously set 19 Feb as the deadline for defining the electorate for PTL elections for teams. The TC rule and election deadline combined mean that the Placement team would not have been approved in time, and so would not be able to participate in "normal" PTL elections for Train. We explored a few options: Changing the TC voting rule would have taken us at least as long as approving Placement under the current rules, at which point we'd still be past the deadline. We could create the Placement team and have them run a special election during Train, and the election officials were open to that approach. However, that's a lot of extra work for them just because of missing a deadline by 1 day for something we knew was coming, to which there is not a lot of objection, but that we let slip. So, in order to avoid creating that extra work, and to have Placement participate in the normal Train cycle PTL election process like the other teams, we've proposed moving the deadline for defining the electorate to 22 Feb. The patch change the deadline in the election repo is modifying a file that currently defines the TC election schedule, so it's a bit confusing about why that file is being changed. I'm not an expert in the tools, but from what I understand that same setting in that file is used for both elections, but only one election can be listed at a time. Since the TC election comes first, the other dates in that file are about the TC election, and the file contents will be changed when the PTL election starts. In any case, the change won't affect the TC election because the repositories moving under the new team are already part of an official team, and so contributors will already be able to participate in the TC election. The change also will not affect the dates of the PTL election process in which most people participate (nominations, voting, etc.), because the election officials have agreed to a smaller buffer between the deadline and the start of the election. So, they'll be building the election rolls in a shorter amount of time in order to allow the rest of the schedule to stay the same. Thanks again, Tony, Jeremy, and Kendall, for going the extra mile for us. Based on the current votes on the governance change to create the Placement team, we are on track to have it approved in time to meet the new deadline. Let me know if anyone has further questions about this; I'll be happy to make sure it's clear for everyone. -- Doug From lbragstad at gmail.com Wed Feb 13 17:15:56 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 13 Feb 2019 11:15:56 -0600 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> References: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> Message-ID: On 2/13/19 6:24 AM, Adam Spiers wrote: > Ildiko Vancsa wrote: >>> On 2019. 
Feb 11., at 23:26, Adam Spiers wrote: >>> [snip…] >>> >>>> To help with all this I would start the experiment with wiki pages >>>> and etherpads as these are all materials you can point to without >>>> too much formality to follow so the goals, drivers, supporters and >>>> progress are visible to everyone who’s interested and to the TC to >>>> follow-up on. >>>> Do we expect an approval process to help with or even drive either >>>> of the crucial steps I listed above? >>> >>> I'm not sure if it would help.  But I agree that visibility is >>> important, and by extension also discoverability.  To that end I >>> think it would be worth hosting a central list of popup initiatives >>> somewhere which links to the available materials for each >>> initiative. Maybe it doesn't matter too much whether that central >>> list is simply a wiki page or a static web page managed by Gerrit >>> under a governance repo or similar. >> >> I would start with a wiki page as it stores history as well and it’s >> easier to edit. Later on if we feel the need to be more formal we can >> move to a static web page and use Gerrit. > > Sounds good to me.  Do we already have some popup teams?  If so we > could set this up straight away. The unified limits work is certainly cross-project, but it's slowed down recently. There were a few folks from nova, keystone, and oslo working on various parts of it. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Wed Feb 13 17:22:46 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Feb 2019 17:22:46 +0000 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> <2B9D8207-CFD6-4864-8B2A-C9D3B31D6588@leafe.com> Message-ID: <20190213172246.tbsstbs6qvsfvdlp@yuggoth.org> On 2019-02-13 12:12:21 -0500 (-0500), Doug Hellmann wrote: [...] > The patch change the deadline in the election repo is modifying a file > that currently defines the TC election schedule, so it's a bit confusing > about why that file is being changed. I'm not an expert in the tools, > but from what I understand that same setting in that file is used for > both elections, but only one election can be listed at a time. Since the > TC election comes first, the other dates in that file are about the TC > election, and the file contents will be changed when the PTL election > starts. [...] To clarify further the reason in this case is that, due to release and conference scheduling constraints placing the TC and PTL elections nearly adjacent, we'll be using one tag on the governance repository to identify the state of our official project list in determining the electorate rolls for both elections. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tzumainn at redhat.com Wed Feb 13 18:54:28 2019 From: tzumainn at redhat.com (Tzu-Mainn Chen) Date: Wed, 13 Feb 2019 13:54:28 -0500 Subject: [blazar] Question about Interaction between Ironic and Blazar Message-ID: Hi! I'm working with both Ironic and Blazar, and came across a strange interaction that I was wondering if the Blazar devs were aware of. I had four Ironic nodes registered, and only node A had an instance running on it. 
I tried adding node B - which was available - to the freepool and got this error: 2019-02-13 09:42:28.560 220255 ERROR oslo_messaging.rpc.server ERROR: Servers [[{u'uuid': u'298e83a4-7d5e-4aae-b89a-9dc74b4278af', u'name': u'instance-00000011'}]] found for host a00696d5-32ba-475e-9528-59bf11cffea6 This was strange, because the instance in question was running on node A, and not node B. After some investigation, the cause was identified as the following: https://bugs.launchpad.net/nova/+bug/1815793 But in the meantime, my question is: have other people using Blazar and Ironic run into this issue? It would seem to imply that Ironic nodes can only be added to the freepool if no instances are created, which poses a long-term maintenance issue. Is there a workaround? Thanks, Tzu-Mainn Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Feb 13 19:56:52 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 13 Feb 2019 13:56:52 -0600 Subject: [dev][keystone] Launchpad blueprint reckoning Message-ID: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> Over the last couple of years, our launchpad blueprints have grown unruly [0] (~77 blueprints a few days ago). The majority of them were in "New" status, unmaintained, and several years old (some dating back to 2013). Even though we've been using specifications [1] for several years, people still get confused when they see conflicting or inaccurate blueprints. After another person tripped over a duplicate blueprint this week, cmurphy, vishakha, and I decided to devote some attention to it. We tracked the work in an etherpad [2] - so we can still find links to things. First, if you are the owner of a blueprint that was marked as "Obsolete", you should see a comment on the whiteboard that includes a reason or justification. If you'd like to continue the discussion about your feature request, please open a specification against the openstack/keystone-specs repository instead. For historical context, when we converted to specifications, we were only supposed to create blueprints for tracking the work after the specification was merged. Unfortunately, I don't think this process was ever written down, which I'm sure attributed to blueprint bloat over the years. Second, if you track work regularly using blueprints or plan on delivering something for Stein, please make sure your blueprint in Launchpad is approved and tracked to the appropriate release (this should already be done, but feel free to double check). The team doesn't plan on switching processes for feature tracking mid-release. Instead, we're going to continue tracking feature work with launchpad blueprints for the remainder of Stein. Currently, the team is leaning heavily towards using RFE bug reports for new feature work, which we can easily switch to in Train. The main reason for this switch is that bug comments are immutable with better timestamps while blueprint whiteboards are editable to anyone and not timestamped very well. We already have tooling in place to update bug reports based on commit messages and that will continue to work for RFE bug reports. Third, any existing blueprints that aren't targeted for Stein but are good ideas, should be converted to RFE bug reports. All context from the blueprint will need to be ported to the bug report. After a sufficient RFE bug report is opened, the blueprint should be marked as "Superseded" or "Obsolete" *with* a link to the newly opened bug. 
While this is tedious, there aren't nearly as many blueprints open now as there were a couple of days ago. If you're interested in assisting with this effort, let me know. Fourth, after moving non-Stein blueprints to RFE bugs, only Stein related blueprints should be open in launchpad. Once Stein is released, we'll go ahead disable keystone blueprints. Finally, we need to overhaul a portion of our contributor guide to include information around this process. The goal should be to make that documentation clear enough that we don't have this issue again. I plan on getting something up for review soon, but I don't have anything currently, so if someone is interested in taking a shot at writing this document, please feel free to do so. Morgan has a patch up to replace blueprint usage with RFE bugs in the specification template [3]. We can air out any comments, questions, or concerns here in the thread. Thanks, Lance [0] https://blueprints.launchpad.net/keystone [1] http://specs.openstack.org/openstack/keystone-specs/ [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup [3] https://review.openstack.org/#/c/625282/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From iballou at redhat.com Wed Feb 13 20:52:38 2019 From: iballou at redhat.com (Ian Ballou) Date: Wed, 13 Feb 2019 15:52:38 -0500 Subject: [neutron] Switch port configuration with Neutron and Networking-Ansible Message-ID: Hi Dan, I'm working with a group of people on a project to create an Ironic-based bare-metal leasing system. As part of this system, we would like to allow users and administrators to connect their hardware to different VLAN networks that may span multiple switch ports. We're hoping to use Neutron and the appropriate plugins to accomplish this. I've been testing out the Networking-Ansible ML2 Neutron driver to see if it fits our needs. The Python APIs available in the newest version (such as configuring switch ports and VLANs) work well. However, we would like to use this ML2 driver directly for OpenStack CLI support. In testing I've been able to create VLANs on a Nexus switch, but I have not yet found a way through Neutron (via the Openstack CLI) to change VLAN settings on switch ports (access or trunk). I was able to do this with the Python APIs, however. I did see that the Networking-Ansible mechanism driver calls bind_port() to change switch port configuration, but it's not clear if the port information comes from static configuration or if Neutron does have a way of reading it from the CLI. Is this kind of switch configuration by users and admins outside the scope of Neutron or this Neutron driver? I've included openstack-discuss in case anyone else has any comments. Thanks! - Ian -- Ian Ballou SOFTWARE ENGINEERING Intern Red Hat Boston, MA iballou at redhat.com From lbragstad at gmail.com Wed Feb 13 21:00:41 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 13 Feb 2019 15:00:41 -0600 Subject: [dev][keystone] Key loading performance during token validation Message-ID: Both fernet tokens and JSON Web Tokens (still under review [0]) require keys to either encrypt or sign the token payload. These keys are kept on disk by default and are treated like configuration. 
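(For the fernet case, with the default configuration, that is just a
directory of numbered key files, e.g.:

  $ ls /etc/keystone/fernet-keys/
  0  1  2

where the location comes from [fernet_tokens]/key_repository.)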
During the validation process, they're loaded from disk, stored as a list, and iterated over until the correct key decrypts the ciphertext/validates the signature or the iterable is exhausted [1][2]. Last week XiYuan raised some concerns about loading all public keys from disk every time we validate a token [3]. To clarify, this would be applicable to both fernet keys and asymmetric key pairs used for JWT. I spent a couple days late last week trying to recreate the performance bottleneck. There were two obvious approaches. 1. Watch for writable changes to key repositories on disk I used a third-party library for this called watchdog [4], but the inherent downside is that it is specific to file-based key repositories. For example, this might not work with a proper key manager, which has been proposed in the past [5]. 2. Encode a hash of the key used to create the token inside the token payload This would give any keystone server validating the token the ability preemptively load the correct key for validation instead of iterating over a list of all possible keys. There was some hesitation around this approach because the hash would be visible to anyone with access to the token (it would be outside of ciphertext in the case of fernet). While hashes are one-way, it would allow someone to "collect" tokens they know were issued using the same symmetric private key or signed with the same asymmetric private key. Security concerns aside, I did attempt to write up the second approach [6]. I was able to get preemptive loading to work, but I didn't notice a significant performance impact. For clarification, I was using a single keystone server with one private key for signing and 100 public keys to simulate a deployment of 100 API servers that all need to validate tokens issued by each other. At a certain point, I wondered if the loading of keys was really the issue, or if it was because we were iterating over an entire set every time we validate a token. It's also important to note that I had to turn off caching to fully test this. Keystone enables caching by default, which short-circuits the entire key loading/token provider code path after the token is issued. My question is if anyone has additional feedback on how to recreate performance issues specifically for loading files from disk, since I wasn't particularly successful in noticing a difference between repetitive key loading or iterating all keys on token validation against what we already do. [0] https://review.openstack.org/#/c/614549/ [1] https://git.openstack.org/cgit/openstack/keystone/tree/keystone/token/token_formatters.py?id=053f908853481c00ab28f562a7623f121a7703af#n69 [2] https://github.com/pyca/cryptography/blob/master/src/cryptography/fernet.py#L165-L171 [3] https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py,unified at 103 [4] https://pythonhosted.org/watchdog/ [5] https://review.openstack.org/#/c/363065/ [6] https://review.openstack.org/#/c/636151/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From chris at openstack.org Wed Feb 13 21:32:09 2019 From: chris at openstack.org (Chris Hoge) Date: Wed, 13 Feb 2019 13:32:09 -0800 Subject: [loci] Loci builds functionally broken (temporary fix in place, permanent in review) In-Reply-To: <378149AB-54F2-45E7-B196-31F0505F6E0A@openstack.org> References: <378149AB-54F2-45E7-B196-31F0505F6E0A@openstack.org> Message-ID: <70A19C19-683C-4DB4-8A91-61EA3B8CADC0@openstack.org> Following up on this issue, I've sent up a patch with a permanent fix. There are a few issues related to various releases of virtualenv and distribution packaging that make this a bit tricky. For CentOS, Loci follows the recommendations of the RDO team to not install EPEL[1], since there may be incompatibilities between RDO and EPEL. Because of this, we can't assume that pip will be available by default. virtualenv is available, and is a way to get a version of pip that can bootstrap the pip and updated virtualenv installation. Handling of symlinks to virtualenvs has been buggy and inconsistent for quite a while. For the CentOS packaged version of virtualenv, existing python libraries are copied directly over to the virtualenv. In 16.4.0, this behavior changed to symlink to the parent library location. In the instance where you have nested virtualenvs (which Loci does, because this is how it bootstraps the build environment), this means the build venv links back to the bootstrap venv[2]. Previously we deleted the bootstrap venv. The fix is to hold on to it, preserving the chain of links[3]. This solution isn't ideal, and it's worthwhile to rethink how we get pip and other dependencies onto the build hosts. It's a bigger discussion that the Loci team will have in the coming months. Regardless of that, Loci builds now work with the temporary patch that landed a few days ago and with the more "permanent" patch that is in flight. [1] https://docs.openstack.org/install-guide/environment-packages-rdo.html [2] https://github.com/pypa/virtualenv/pull/1309 [3] https://review.openstack.org/#/c/636447/2 > On Feb 11, 2019, at 5:12 PM, Chris Hoge wrote: > > A patch for a temporary fix is up for review. > > https://review.openstack.org/#/c/636252/ > > We’ll be looking into a more permanent fix in the coming days. > >> On Feb 11, 2019, at 4:38 PM, Chris Hoge wrote: >> >> It appears the latest release of virtualenv has broken Loci builds. I >> believe the root cause is an update in how symlinks are handled. Before >> the release, the python libraries installed in the: >> >> /var/lib/openstack/lib64/python2.7/lib-dynload >> >> directory (this is on CentOS, Ubuntu and Suse vary) were direct instances >> of the library. For example: >> >> -rwxr-xr-x. 1 root root 62096 Oct 30 23:46 itertoolsmodule.so >> >> Now, the build points to a long-destroyed symlink that is an artifact of >> the requirements build process. For example: >> >> lrwxrwxrwx. 1 root root 56 Feb 11 23:01 itertoolsmodule.so -> /tmp/venv/lib64/python2.7/lib-dynload/itertoolsmodule.so >> >> We will investigate how to make the build more robust, repair this, and >> will report back soon. Until then, you should expect any fresh builds to >> not be functional, despite the apparent success in building the container.
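As a rough way to confirm whether an image is affected by the symlink problem described above, one could walk the lib-dynload directory quoted in the example and flag links whose targets no longer exist. This is only an illustrative, hypothetical check (the path is the CentOS one quoted above); it is not part of Loci or of the proposed patches.

import os

LIB_DYNLOAD = "/var/lib/openstack/lib64/python2.7/lib-dynload"

def dangling_links(directory):
    broken = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.islink(path) and not os.path.exists(path):
            # os.path.exists() follows the link, so False here means the
            # target (e.g. /tmp/venv/...) has already been removed.
            broken.append((path, os.readlink(path)))
    return broken

for path, target in dangling_links(LIB_DYNLOAD):
    print("broken: %s -> %s" % (path, target))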
>> >> Thanks, >> Chris >> >> [1] https://virtualenv.pypa.io/en/stable/changes/#release-history >> >> > > From skaplons at redhat.com Wed Feb 13 22:03:21 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 13 Feb 2019 23:03:21 +0100 Subject: [Neutron] Split Network node from controller Node In-Reply-To: References: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> Message-ID: <834C96A4-CD3F-4D0C-B43D-1E75E94A6798@redhat.com> Hi, Please check in neutron-l3 agent logs on new host if all was configured properly. Also, are You trying to ping VM via floating IP or fixed one? If floating IP is not working, please check from qdhcp- and qrouter- namespaces if fixed IP is working fine or not. If fixed IP is not reachable from there, check on this new host if L2 agent (ovs agent) configured everything properly. > Wiadomość napisana przez Zufar Dhiyaulhaq w dniu 11.02.2019, o godz. 17:46: > > Hi > > Thank you for your answer, > I just install the network agent in a network node, > > with this following package > • openstack-neutron.noarch > • openstack-neutron-common.noarch > • openstack-neutron-openvswitch.noarch > • openstack-neutron-metering-agent.noarch > and configuring and appear in the agent list > > [root at zu-controller1 ~(keystone_admin)]# openstack network agent list > +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ > | 025f8a15-03b5-421e-94ff-3e07fc1317b5 | Open vSwitch agent | zu-compute2 | None | :-) | UP | neutron-openvswitch-agent | > | 04af3150-7673-4ac4-9670-fd1505737466 | Metadata agent | zu-network1 | None | :-) | UP | neutron-metadata-agent | > | 11a9c764-e53d-4316-9801-fa2a931f0572 | Open vSwitch agent | zu-compute1 | None | :-) | UP | neutron-openvswitch-agent | > | 1875a93f-09df-4c50-8660-1f4dc33b228d | L3 agent | zu-controller1 | nova | :-) | UP | neutron-l3-agent | > | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | > | 2fb2a714-9735-4f78-8019-935cb6422063 | Metering agent | zu-network1 | None | :-) | UP | neutron-metering-agent | > | 3873fc10-1758-47e9-92b8-1e8605651c70 | Open vSwitch agent | zu-network1 | None | :-) | UP | neutron-openvswitch-agent | > | 4b51bdd2-df13-4a35-9263-55e376b6e2ea | Metering agent | zu-controller1 | None | :-) | UP | neutron-metering-agent | > | 54af229f-3dc1-49db-b32a-25f3fd62c010 | DHCP agent | zu-controller1 | nova | :-) | UP | neutron-dhcp-agent | > | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | > | a3c78231-027d-4ddd-8234-7afd1d67910e | Metadata agent | zu-controller1 | None | :-) | UP | neutron-metadata-agent | > | aeb7537e-98af-49f0-914b-204e64cb4103 | Open vSwitch agent | zu-controller1 | None | :-) | UP | neutron-openvswitch-agent | > +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ > > I try to migrate the network (external & internal) and router into zu-network1 (my new network node). 
> and success > > [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --router $ROUTER_ID > +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ > | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | > +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ > [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_INTERNAL > > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_EXTERNAL > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | > +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ > > But, I cannot ping my instance after the migration. > I don't know why. > > ii check my DHCP and router has already moved. > > [root at zu-controller1 ~(keystone_admin)]# ip netns > [root at zu-controller1 ~(keystone_admin)]# > > [root at zu-network1 ~]# ip netns > qdhcp-fddd647b-3601-43e4-8299-60b703405110 (id: 1) > qrouter-dd8ae033-0db2-4153-a060-cbb7cd18bae7 (id: 0) > [root at zu-network1 ~]# > > What step do I miss? > Thanks > > Best Regards, > Zufar Dhiyaulhaq > > > On Mon, Feb 11, 2019 at 3:13 PM Slawomir Kaplonski wrote: > Hi, > > I don’t know if there is any tutorial for that but You can just deploy new node with agents which You need, then disable old DHCP/L3 agents with neutron API [1] and move existing networks/routers to agents in new host with neutron API. Docs for agents scheduler API is in [2] and [3]. > Please keep in mind that when You will move routers to new agent You will have some downtime in data plane. > > [1] https://developer.openstack.org/api-ref/network/v2/#update-agent > [2] https://developer.openstack.org/api-ref/network/v2/#l3-agent-scheduler > [3] https://developer.openstack.org/api-ref/network/v2/#dhcp-agent-scheduler > > > Wiadomość napisana przez Zufar Dhiyaulhaq w dniu 11.02.2019, o godz. 03:33: > > > > Hi everyone, > > > > I Have existing OpenStack with 1 controller node (Network Node in controller node) and 2 compute node. I need to expand the architecture by splitting the network node from controller node (create 1 node for network). 
> > > > Do you have any recommended step or tutorial for doing this? > > Thanks > > > > Best Regards, > > Zufar Dhiyaulhaq > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Wed Feb 13 22:07:51 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 13 Feb 2019 23:07:51 +0100 Subject: [neutron] Multi-segment per host support for routed networks In-Reply-To: <7D8DEE81-6D5F-4424-9482-12C80A5C15DA@godaddy.com> References: <7D8DEE81-6D5F-4424-9482-12C80A5C15DA@godaddy.com> Message-ID: <8821F0D5-67ED-4F71-9D5E-CCA4A576867F@redhat.com> Hi, > Wiadomość napisana przez David G. Bingham w dniu 01.02.2019, o godz. 19:16: > > Neutron land, > > Problem: > Neutron currently only allows a single network segment per host. This > becomes a problem when networking teams want to limit the number of IPs it > supports on a segment. This means that at times the number of IPs available to > the host is the limiting factor for the number of instances we can deploy on a > host. Ref: https://bugs.launchpad.net/neutron/+bug/1764738 > > Ongoing Work: > We are excited in our work add "multi-segment support for routed networks". > We currently have a proof of concept here https://review.openstack.org/#/c/623115 > that for routed networks effectively: > * Removes validation preventing multiple segments. > * Injects segment_id into fixed IP records. > * Uses the segment_id when creating a bridge (rather than network_id). > In effect, it gives each segment its own bridge. > > It works pretty well for new networks and deployments. For existing > routed networks, however, it breaks networking. Please use *caution* if you > decide to try it. > > TODOs: > Things TODO before this before it is fully baked: > * Need to add code to handle ensuring bridges are also updated/deleted using > the segment_id (rather than network_id). > * Need to add something (a feature flag?) that prevents this from breaking > routed networks when a cloud admin updates to master and is configured for > routed networks. > * Need to create checker and upgrade migration code that will convert existing > bridges from network_id based to segment_id based (ideally live or with > little network traffic downtime). Once converted, the feature flag could > enable the feature and start using the new code. > > Need: > 1. How does one go about adding a migration tool? Maybe some examples? I’m not sure if this can be similar but I know that networking-ovn project has some migration tool to migrate from ml2/ovs to ml2/ovn solution. Maybe this can be somehow helpful for You. > 2. Will nova need to be notified/upgraded to have bridge related files updated? Probably someone from Nova team should answer to that. Maybe Sean Mooney would be good person to ask? > 3. Is there a way to migrate without (or minimal) downtime? > 4. How to repeatably test this migration code? Grenade? 
Again, check how networking-ovn did it, maybe You will be able to do something similar :) > > Looking for any ideas that can keep this moving :) > > Thanks a ton, > > David Bingham (wwriverrat on irc) > Kris Lindgren (klindgren on irc) > Cloud Engineers at GoDaddy > — Slawek Kaplonski Senior software engineer Red Hat From tpb at dyncloud.net Wed Feb 13 22:37:49 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 13 Feb 2019 17:37:49 -0500 Subject: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <20190213223749.6mss24jp5rqofxyx@barron.net> On 08/02/19 13:15 +0000, Chris Dent wrote: > >Yesterday at the TC meeting [1] we decided that the in-progress task >to make sure the technical vision document [2] has been fully >evaluated by project teams needs a bit more time, so this message is >being produced as a reminder. > >Back in January Julia produced a message [3] suggesting that each >project consider producing a document where they compare their >current state with an idealized state if they were in complete >alignment with the vision. There were two hoped for outcomes: > >* A useful in-project document that could help guide future > development. >* Patches to the vision document to clarify or correct the vision > where it is discovered to be not quite right. > >A few projects have started that process (see, for example, >melwitt's recent message for some links [4]) resulting in some good >plans as well as some improvements to the vision document [5]. > >In the future the TC would like to use the vision document to help >evaluate projects applying to be "official" as well as determining >if projects are "healthy". As such it is important that the document >be constantly evolving toward whatever "correct" means. The process >described in Julia's message [3] is a useful to make it so. Please >check it out. > >Thanks. > >[1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html >[2] https://governance.openstack.org/tc/reference/technical-vision.html >[3] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html >[4] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002501.html >[5] https://review.openstack.org/#/q/project:openstack/governance+file:reference/technical-vision.rst > >-- >Chris Dent ٩◔̯◔۶ https://anticdent.org/ >freenode: cdent tw: @anticdent Here [1] is an initial iteration on a vision reflection document for manila. Review comments and questions are of course welcome. -- Tom Barron [1] https://review.openstack.org/#/c/636770/ From tpb at dyncloud.net Wed Feb 13 22:39:22 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 13 Feb 2019 17:39:22 -0500 Subject: Fw: Re: [manila] [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision Message-ID: <20190213223922.5lubos55fmihahge@barron.net> Resending with '[manila]' tag as well. -------------- next part -------------- An embedded message was scrubbed... From: Tom Barron Subject: Re: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision Date: Wed, 13 Feb 2019 17:37:49 -0500 Size: 2739 URL: From tony at bakeyournoodle.com Thu Feb 14 02:18:37 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 14 Feb 2019 13:18:37 +1100 Subject: Call for help! 
'U' Release name Mandarin speakers Message-ID: <20190214021836.GD12795@thor.bakeyournoodle.com> Hello folks, Now that we know when and where the 'U' summit will be[1] we can pick the name for the U release. I'm happy to act as coordinator but given the location I'd really like a native speaker (or 2) to help with assessing the names against the criteria, defining the geographic region and translating the civs POLL when we get there. Yours Tony. [1] https://www.openstack.org/summit/shanghai-2019 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mnaser at vexxhost.com Thu Feb 14 02:35:55 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 13 Feb 2019 21:35:55 -0500 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <20190214021836.GD12795@thor.bakeyournoodle.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> Message-ID: Sent from my iPhone > On Feb 13, 2019, at 9:18 PM, Tony Breeds wrote: > > Hello folks, > Now that we know when and where the 'U' summit will be[1] we can > pick the name for the U release. I'm happy to act as coordinator but > given the location I'd really like a native speaker (or 2) to help with > assessing the names against the criteria, defining the geographic > region and translating the civs POLL when we get there. Alex Xu and I reached out via WeChat to the OpenStack community there! I’m hoping we can recruit someone :) > > Yours Tony. > > [1] https://www.openstack.org/summit/shanghai-2019 From tony at bakeyournoodle.com Thu Feb 14 02:45:42 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 14 Feb 2019 13:45:42 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files Message-ID: <20190214024541.GE12795@thor.bakeyournoodle.com> Hi all, Back in the dim dark (around Sept 2017) we discussed the idea of publishing the constraints files statically (instead of via gitweb)[1]. the TL;DR: it's nice to be able to use https://release.openstack.org/constraints/upper/ocata instead of http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata in tox.ini At the Dublin (yes Dublin) PTG Jim, Jeremy, Clark and I discussed how we'd go about doing that. The notes we have are at: https://etherpad.openstack.org/p/publish-upper-constraints There was a reasonable ammount of discussion about merging and root-markers which I don't recall and only barely understood at the time. I have no idea how much of the first 3 items I can do vs calling on others. I'm happy to do anything that I can ... is it reasonable to get this done before RC1 (March 18th ish)[2]? Yours Tony. [1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122333.html [2] https://releases.openstack.org/stein/schedule.html#s-rc1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mnaser at vexxhost.com Thu Feb 14 03:09:35 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 13 Feb 2019 22:09:35 -0500 Subject: Call for help! 
'U' Release name Mandarin speakers In-Reply-To: <20190214021836.GD12795@thor.bakeyournoodle.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> Message-ID: <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> Sent from my iPhone > On Feb 13, 2019, at 9:18 PM, Tony Breeds wrote: > > Hello folks, > Now that we know when and where the 'U' summit will be[1] we can > pick the name for the U release. I'm happy to act as coordinator but > given the location I'd really like a native speaker (or 2) to help with > assessing the names against the criteria, defining the geographic > region and translating the civs POLL when we get there. So: chatting with some folks from China and we’ve got the interesting problem that Pinyin does not have a U! http://xh.5156edu.com/pinyi.php I will leave it for some of the locals who mentioned that they can clarify more about that :) > Yours Tony. > > [1] https://www.openstack.org/summit/shanghai-2019 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Feb 14 03:14:47 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 14 Feb 2019 14:14:47 +1100 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> Message-ID: <20190214031445.GH12795@thor.bakeyournoodle.com> On Wed, Feb 13, 2019 at 10:09:35PM -0500, Mohammed Naser wrote: > So: chatting with some folks from China and we’ve got the interesting problem that Pinyin does not have a U! > > http://xh.5156edu.com/pinyi.php I admit this doesn't surprise me. > I will leave it for some of the locals who mentioned that they can clarify more about that :) Yup, if there isn't anything that naturally fits with the establish criteria we'd just have a number of items that are 'exceptional' and get TC endorsement for similar to https://review.openstack.org/#/c/611511/ Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From yongle.li at gmail.com Thu Feb 14 03:34:59 2019 From: yongle.li at gmail.com (Fred Li) Date: Thu, 14 Feb 2019 11:34:59 +0800 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <20190214031445.GH12795@thor.bakeyournoodle.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> <20190214031445.GH12795@thor.bakeyournoodle.com> Message-ID: Thanks, Mohanmmed. In Chinese Pinyin, there are not any words starting with i, u, or v. :-( Regarding U, the pronunciation is /u:/ in English, but it has to use together with w, like wu. There are several alternatives. 1. use some words having the pronunciation /u:/, like Wu Zhen[1], a very beautiful town close to Shanghai. 2. use some words starting with U, but not really Chinese, like Urumqi, Ulanhot, which are city names in China. [1] https://goo.gl/maps/r6sipo1bkjn On Thu, Feb 14, 2019 at 11:18 AM Tony Breeds wrote: > On Wed, Feb 13, 2019 at 10:09:35PM -0500, Mohammed Naser wrote: > > > So: chatting with some folks from China and we’ve got the interesting > problem that Pinyin does not have a U! > > > > http://xh.5156edu.com/pinyi.php > > I admit this doesn't surprise me. 
> > > I will leave it for some of the locals who mentioned that they can > clarify more about that :) > > Yup, if there isn't anything that naturally fits with the establish > criteria we'd just have a number of items that are 'exceptional' and get > TC endorsement for similar to https://review.openstack.org/#/c/611511/ > > Yours Tony. > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Thu Feb 14 03:46:58 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 14 Feb 2019 11:46:58 +0800 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <20190214031445.GH12795@thor.bakeyournoodle.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> <20190214031445.GH12795@thor.bakeyournoodle.com> Message-ID: Yes, we do not have word start with U but we have alot of cities or sites start with U in english, mostly in Inner Mongolia, Xinjiang and Xizang, I guess we can find something suitable. On Thu, Feb 14, 2019 at 11:21 AM Tony Breeds wrote: > On Wed, Feb 13, 2019 at 10:09:35PM -0500, Mohammed Naser wrote: > > > So: chatting with some folks from China and we’ve got the interesting > problem that Pinyin does not have a U! > > > > http://xh.5156edu.com/pinyi.php > > I admit this doesn't surprise me. > > > I will leave it for some of the locals who mentioned that they can > clarify more about that :) > > Yup, if there isn't anything that naturally fits with the establish > criteria we'd just have a number of items that are 'exceptional' and get > TC endorsement for similar to https://review.openstack.org/#/c/611511/ > > Yours Tony. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Thu Feb 14 05:09:03 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 14 Feb 2019 12:09:03 +0700 Subject: [dev] [mistral] Deprecating keystone_authtoken/auth_uri In-Reply-To: <8d7c8349-d01b-a1a4-8706-35ef6e84410a@binero.se> References: <8d7c8349-d01b-a1a4-8706-35ef6e84410a@binero.se> Message-ID: <70c83850-b48e-4633-867f-51ca7627f697@Spark> Hi Tobias, Sorry for that, we’ll review it asap. Thanks Renat Akhmerov @Nokia On 13 Feb 2019, 21:55 +0700, Tobias Urdin , wrote: > Hello all Mistral people, > > This patch [1] has been up for a long time about the removal (which > probably should a deprecation?) (and support for www_authenticate_uri) > but it's been still for a long time and since deployment tools have > moved away from using auth_uri I would guess that the Puppet OpenStack > project is not the only project that has specific code to keep > compatible with Mistral. > > Is there anybody active in Mistral that could look at it or have more > information? > > Best regards > Tobias > > [1] https://review.openstack.org/#/c/594187/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Thu Feb 14 07:37:00 2019 From: lujinluo at gmail.com (Lujin Luo) Date: Wed, 13 Feb 2019 23:37:00 -0800 Subject: [neutron] [upgrade] No meeting on Feb. 14th Message-ID: Hi team, I will not be able to chair the meeting tomorrow either. Let's skip it and resume next week on 21th! Sorry for any inconvenience caused. 
Best regards, Lujin From wangxiyuan1007 at gmail.com Thu Feb 14 07:51:14 2019 From: wangxiyuan1007 at gmail.com (Xiyuan Wang) Date: Thu, 14 Feb 2019 15:51:14 +0800 Subject: [dev][keystone] Key loading performance during token validation In-Reply-To: References: Message-ID: Hi Lance, Thanks for your test. I think the performance issue here is mainly related to a case like this: when issuing/validating tokens concurrently in multi-keystone mode, the disk I/O may be blocked. So for your test env, I'm not sure it would reproduce the issue. We hit this issue when using PKI/PKIz tokens: PKI/PKIz tokens use "openssl" to issue/validate tokens, and openssl loads the keys from disk within every request as well. So I think it's similar to fernet/jws. They all read keys from disk every time. Since PKI/PKIz is already removed from upstream, I can't give you any useful test data for fernet or jws. Regardless of my concern, we can of course merge the JWT feature first. Then I guess we can get more feedback more easily in the future. Lance Bragstad wrote on Thu, Feb 14, 2019 at 5:02 AM: > Both fernet tokens and JSON Web Tokens (still under review [0]) require > keys to either encrypt or sign the token payload. These keys are kept on > disk by default and are treated like configuration. During the validation > process, they're loaded from disk, stored as a list, and iterated over > until the correct key decrypts the ciphertext/validates the signature or > the iterable is exhausted [1][2]. > > Last week XiYuan raised some concerns about loading all public keys from > disk every time we validate a token [3]. To clarify, this would be > applicable to both fernet keys and asymmetric key pairs used for JWT. I > spent a couple days late last week trying to recreate the performance > bottleneck. > > There were two obvious approaches. > > 1. Watch for writable changes to key repositories on disk > > I used a third-party library for this called watchdog [4], but the > inherent downside is that it is specific to file-based key repositories. > For example, this might not work with a proper key manager, which has been > proposed in the past [5]. > > 2. Encode a hash of the key used to create the token inside the token > payload > > This would give any keystone server validating the token the ability > preemptively load the correct key for validation instead of iterating over > a list of all possible keys. There was some hesitation around this approach > because the hash would be visible to anyone with access to the token (it > would be outside of ciphertext in the case of fernet). While hashes are > one-way, it would allow someone to "collect" tokens they know were issued > using the same symmetric private key or signed with the same asymmetric > private key. > > Security concerns aside, I did attempt to write up the second approach > [6]. I was able to get preemptive loading to work, but I didn't notice a > significant performance impact. For clarification, I was using a single > keystone server with one private key for signing and 100 public keys to > simulate a deployment of 100 API servers that all need to validate tokens > issued by each other. At a certain point, I wondered if the loading of keys > was really the issue, or if it was because we were iterating over an entire > set every time we validate a token. It's also important to note that I had > to turn off caching to fully test this. Keystone enables caching by > default, which short-circuits the entire key loading/token provider code > path after the token is issued. 
> > My question is if anyone has additional feedback on how to recreate > performance issues specifically for loading files from disk, since I wasn't > particularly successful in noticing a difference between repetitive key > loading or iterating all keys on token validation against what we already > do. > > > [0] https://review.openstack.org/#/c/614549/ > [1] > https://git.openstack.org/cgit/openstack/keystone/tree/keystone/token/token_formatters.py?id=053f908853481c00ab28f562a7623f121a7703af#n69 > [2] > https://github.com/pyca/cryptography/blob/master/src/cryptography/fernet.py#L165-L171 > [3] > https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py,unified at 103 > [4] https://pythonhosted.org/watchdog/ > [5] https://review.openstack.org/#/c/363065/ > [6] https://review.openstack.org/#/c/636151/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apetrich at redhat.com Thu Feb 14 07:58:39 2019 From: apetrich at redhat.com (Adriano Petrich) Date: Thu, 14 Feb 2019 08:58:39 +0100 Subject: [dev] [mistral] Deprecating keystone_authtoken/auth_uri In-Reply-To: <70c83850-b48e-4633-867f-51ca7627f697@Spark> References: <8d7c8349-d01b-a1a4-8706-35ef6e84410a@binero.se> <70c83850-b48e-4633-867f-51ca7627f697@Spark> Message-ID: Thanks for bringing this up. Reviewed. On Thu, 14 Feb 2019 at 06:13, Renat Akhmerov wrote: > Hi Tobias, > > Sorry for that, we’ll review it asap. > > > Thanks > > Renat Akhmerov > @Nokia > On 13 Feb 2019, 21:55 +0700, Tobias Urdin , wrote: > > Hello all Mistral people, > > This patch [1] has been up for a long time about the removal (which > probably should a deprecation?) (and support for www_authenticate_uri) > but it's been still for a long time and since deployment tools have > moved away from using auth_uri I would guess that the Puppet OpenStack > project is not the only project that has specific code to keep > compatible with Mistral. > > Is there anybody active in Mistral that could look at it or have more > information? > > Best regards > Tobias > > [1] https://review.openstack.org/#/c/594187/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Thu Feb 14 08:55:25 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 14 Feb 2019 09:55:25 +0100 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org In-Reply-To: References: Message-ID: <20190214085525.GC26837@paraplu> On Wed, Feb 06, 2019 at 04:55:49PM +0100, Thierry Carrez wrote: > I started a thread on the Foundation mailing-list about the OSF becoming an > OSI affiliate org: > > http://lists.openstack.org/pipermail/foundation/2019-February/002680.html > > Please follow-up there is you have any concerns or questions. Not subscribed to that list, but whole-heartedly agree with everything you wrote there. Thanks for notifying it here. -- /kashyap From thierry at openstack.org Thu Feb 14 09:05:28 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 14 Feb 2019 10:05:28 +0100 Subject: Call for help! 
'U' Release name Mandarin speakers In-Reply-To: References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> <20190214031445.GH12795@thor.bakeyournoodle.com> Message-ID: <5ef9cecf-3fde-d274-b329-93a60acf6298@openstack.org> Zhenyu Zheng wrote: > Yes, we do not have word start with U but we have alot of cities or > sites start with U in english, mostly in Inner Mongolia, Xinjiang and > Xizang, I guess we can find something suitable. A few possibilities: http://www.fallingrain.com/world/CH/a/U/ -- Thierry Carrez (ttx) From ellorent at redhat.com Thu Feb 14 09:22:10 2019 From: ellorent at redhat.com (Felix Enrique Llorente Pastora) Date: Thu, 14 Feb 2019 10:22:10 +0100 Subject: [tripleo][ci] WARNING: All CI affected by a DPDK version bump Message-ID: Hi All, All the jobs from tripleo CI are currently failing at check, gate and promotions. There is a compatibility issue between OVS and DPDK versions we have a bypass to exclude problematic version. LP: https://bugs.launchpad.net/tripleo/+bug/1815863 Possible bypass: https://review.openstack.org/636860 BR -- Quique Llorente Openstack TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 14 10:13:52 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 14 Feb 2019 11:13:52 +0100 Subject: [Release-job-failures] Release of openstack/puppet-aodh failed In-Reply-To: References: Message-ID: <9eea0f55-17d0-8f03-39c1-2d865d3d266e@openstack.org> zuul at openstack.org wrote: > Build failed. > > - release-openstack-puppet http://logs.openstack.org/61/617ffad84b633618490ca1023f8a31d9694b31a9/release/release-openstack-puppet/422c475/ : POST_FAILURE in 3m 42s > - announce-release announce-release : SKIPPED Error is: Forge API auth failed with code: 400 However it's a bit weird, since that release was made 4 weeks ago. Also we don't seem to upload things to the Puppet Forge... Was it some kind of a test ? It looks like I'm missing context. -- Thierry Carrez (ttx) From dalvarez at redhat.com Thu Feb 14 10:25:11 2019 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Thu, 14 Feb 2019 11:25:11 +0100 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: Message-ID: Hi folks, A new DPDK version landed in CentOS which is not compatible with the current Open vSwitch version that we have in RDO (error below). RDOfolks++ are working on it to make a new OVS version available without DPDK support so that we can unblock our jobs until we get a proper fix. Please, avoid rechecks in the next ~3 hours or so as no tests are expected to pass. Once [0] is merged, we'll need to wait around 30 more minutes for it to be available in CI jobs. Thanks! [0] https://review.rdoproject.org/r/#/c/18853 2019-02-14 07:35:06.464494 | primary | 2019-02-14 07:35:05 | Error: Package: 1:openvswitch-2.10.1-1.el7.x86_64 (delorean-master-deps) 2019-02-14 07:35:06.464603 | primary | 2019-02-14 07:35:05 | Requires: librte_table.so.3()(64bit) 2019-02-14 07:35:06.464711 | primary | 2019-02-14 07:35:05 | Available: dpdk-17.11-13.el7.x86_64 (quickstart-centos-extras) From colleen at gazlene.net Thu Feb 14 10:32:14 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 14 Feb 2019 11:32:14 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? 
In-Reply-To: References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> Message-ID: <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com> On Wed, Feb 13, 2019, at 9:50 AM, Fabian Zimmermann wrote: > Hi, > > thanks for the fast answers. > > I asked our ADFS Administrators if they could provide some logs to see > whats going wrong, but they are unable to deliver these. I'm more interested in what you were seeing, both the output from the client and the output from the keystone server if you have access to it. > > So I installed keycloak and switched to OpenID Connect. > > Im (again) able to connect via Horizon SSO, but when I try to use > v3oidcpassword in the CLI Im running into > > https://bugs.launchpad.net/python-openstackclient/+bug/1648580 > > I already added the suggested --os-client-secret without luck. > Updating to latest python-versions.. > > pip install -U python-keystoneclient > pip install -U python-openstackclient > > didnt change anything. > > Any ideas what to try next? Unfortunately that seems to still be a valid bug that we'll need to address. You could try using the python keystoneauth library directly and see if the issue appears there[1][2]. [1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html [2] https://docs.openstack.org/keystoneauth/latest/plugin-options.html#v3oidcpassword > > Offtopic: > > Seems like > > https://groups.google.com/forum/#!topic/mod_auth_openidc/qGE1DGQCTMY > > is right. I had to change the RedirectURI to geht OpenIDConnect working > with Keystone. The sample config of > > https://docs.openstack.org/keystone/rocky/advanced-topics/federation/websso.html > > is *not working for me* I found that too. The in-development documentation has already been fixed[3] but we didn't backport that to the Rocky documentation because it was part of a large series of rewrites and reorgs. [3] https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#configure-mod-auth-openidc > > Fabian > Colleen From jean-philippe at evrard.me Thu Feb 14 10:43:58 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 14 Feb 2019 11:43:58 +0100 Subject: [tc][all][self-healing-sig] Service-side health checks community goal for Train cycle In-Reply-To: <21a9a786-a530-55b3-cf74-0444899a98f2@nemebean.com> References: <158c354c1d7a3e6fb261202b34d4e3233d5f39bc.camel@evrard.me> <1548671352.507178.1645094472.39B42BCA@webmail.messagingengine.com> <7cc5aa565a3a50a2d520d99e3ddcd6da5502e990.camel@evrard.me> <21a9a786-a530-55b3-cf74-0444899a98f2@nemebean.com> Message-ID: <26eb0f47161986adccce4de16e8dc6f7f6672794.camel@evrard.me> On Mon, 2019-02-11 at 16:28 -0600, Ben Nemec wrote: > > On 1/28/19 5:34 AM, Chris Dent wrote: > > On Mon, 28 Jan 2019, Jean-Philippe Evrard wrote: > > > > > It is not a non-starter. I knew this would show up :) > > > It's fine that some projects do differently (for example swift > > > has > > > different middleware, keystone is not using paste). > > > > Tangent so that people are clear on the state of Paste and > > PasteDeploy. > > > > I recommend projects move away from using either. > > > > Until recently both were abandonware, not receiving updates, and > > had issues working with Python3. 
> > > > I managed to locate maintainers from a few years ago, and > > negotiated > > to bring them under some level of maintenance, but in both cases > > the > > people involved are only interested in doing limited management to > > keep the projects barely alive. > > > > pastedeploy (the thing that is more often used in OpenStack, and is > > usually used to load the paste.ini file and doesn't have to have a > > dependency on paste itself) is now under the Pylons project: > > https://github.com/Pylons/pastedeploy > > > > Paste itself is with me: https://github.com/cdent/paste > > > > > I think it's also too big of a change to move everyone to one > > > single > > > technology in a cycle :) Instead, I want to focus on the real use > > > case > > > for people (bringing a common healthcheck "api" itself), which > > > doesn't > > > matter on the technology. > > > > I agree that the healthcheck change can and should be completely > > separate from any question of what is used to load middleware. > > That's the great thing about WSGI. > > > > As long as the healthcheck tooling presents are "normal" WSGI > > interface it ought to either "just work" or be wrappable by other > > tooling, > > so I wouldn't spend too much time making a survey of how people are > > doing middleware. > > So should that question be re-worded? The current Keystone answer is > accurate but unhelpful, given that I believe Keystone does enable > the > healthcheck middleware by default: > https://docs.openstack.org/keystone/latest/admin/health-check-middleware.html > > Since what we care about isn't the WSGI implementation but the > availability of the feature, shouldn't that question be more like > "Project enables healthcheck middleware by default"? In which case > Keystone's answer becomes a simple "yes" and Manila's a simple "no". > > > The tricky part (but not that tricky) will be with managing how the > > "tests" are provided to the middleware. > > Totally fair, to me. From colleen at gazlene.net Thu Feb 14 11:07:05 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 14 Feb 2019 12:07:05 +0100 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> Message-ID: <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> On Wed, Feb 13, 2019, at 8:56 PM, Lance Bragstad wrote: > Over the last couple of years, our launchpad blueprints have grown > unruly [0] (~77 blueprints a few days ago). The majority of them were in > "New" status, unmaintained, and several years old (some dating back to > 2013). Even though we've been using specifications [1] for several > years, people still get confused when they see conflicting or inaccurate > blueprints. After another person tripped over a duplicate blueprint this > week, cmurphy, vishakha, and I decided to devote some attention to it. > We tracked the work in an etherpad [2] - so we can still find links to > things. > > First, if you are the owner of a blueprint that was marked as > "Obsolete", you should see a comment on the whiteboard that includes a > reason or justification. If you'd like to continue the discussion about > your feature request, please open a specification against the > openstack/keystone-specs repository instead. For historical context, > when we converted to specifications, we were only supposed to create > blueprints for tracking the work after the specification was merged. 
> Unfortunately, I don't think this process was ever written down, which > I'm sure attributed to blueprint bloat over the years. > > Second, if you track work regularly using blueprints or plan on > delivering something for Stein, please make sure your blueprint in > Launchpad is approved and tracked to the appropriate release (this > should already be done, but feel free to double check). The team doesn't > plan on switching processes for feature tracking mid-release. Instead, > we're going to continue tracking feature work with launchpad blueprints > for the remainder of Stein. Currently, the team is leaning heavily > towards using RFE bug reports for new feature work, which we can easily > switch to in Train. The main reason for this switch is that bug comments > are immutable with better timestamps while blueprint whiteboards are > editable to anyone and not timestamped very well. We already have > tooling in place to update bug reports based on commit messages and that > will continue to work for RFE bug reports. > > Third, any existing blueprints that aren't targeted for Stein but are > good ideas, should be converted to RFE bug reports. All context from the > blueprint will need to be ported to the bug report. After a sufficient > RFE bug report is opened, the blueprint should be marked as "Superseded" > or "Obsolete" *with* a link to the newly opened bug. While this is > tedious, there aren't nearly as many blueprints open now as there were a > couple of days ago. If you're interested in assisting with this effort, > let me know. > > Fourth, after moving non-Stein blueprints to RFE bugs, only Stein > related blueprints should be open in launchpad. Once Stein is released, > we'll go ahead disable keystone blueprints. > > Finally, we need to overhaul a portion of our contributor guide to > include information around this process. The goal should be to make that > documentation clear enough that we don't have this issue again. I plan > on getting something up for review soon, but I don't have anything > currently, so if someone is interested in taking a shot at writing this > document, please feel free to do so. Morgan has a patch up to replace > blueprint usage with RFE bugs in the specification template [3]. > > We can air out any comments, questions, or concerns here in the thread. What should we do about tracking "deprecated-as-of-*" and "removed-as-of-*" work? I never liked how this was done with blueprints but I'm not sure how we would do it with bugs. One tracking bug for all deprecated things in a cycle? One bug for each? A Trello/Storyboard board or etherpad? Do we even need to track it with an external tool - perhaps we can just keep a running list in a release note that we add to over the cycle? Thanks for tackling this cleanup work. > > Thanks, > > Lance > > [0] https://blueprints.launchpad.net/keystone > [1] http://specs.openstack.org/openstack/keystone-specs/ > [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup > [3] https://review.openstack.org/#/c/625282/ > Email had 1 attachment: > + signature.asc > 1k (application/pgp-signature) From limao at cisco.com Thu Feb 14 11:16:17 2019 From: limao at cisco.com (Liping Mao (limao)) Date: Thu, 14 Feb 2019 11:16:17 +0000 Subject: Call for help! 
'U' Release name Mandarin speakers In-Reply-To: <5ef9cecf-3fde-d274-b329-93a60acf6298@openstack.org> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> <20190214031445.GH12795@thor.bakeyournoodle.com> <5ef9cecf-3fde-d274-b329-93a60acf6298@openstack.org> Message-ID: <9A9D61E1-0E9A-41CF-BBF6-C8D7B90C45C8@cisco.com> Or maybe not a city, "Ussuri" river , 乌苏里江 in Chinese. https://en.wikipedia.org/wiki/Ussuri_River Regards, Liping Mao 在 2019/2/14 17:12,“Thierry Carrez” 写入: Zhenyu Zheng wrote: > Yes, we do not have word start with U but we have alot of cities or > sites start with U in english, mostly in Inner Mongolia, Xinjiang and > Xizang, I guess we can find something suitable. A few possibilities: http://www.fallingrain.com/world/CH/a/U/ -- Thierry Carrez (ttx) From morgan.fainberg at gmail.com Thu Feb 14 11:46:09 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 14 Feb 2019 06:46:09 -0500 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> Message-ID: I would go for one tracking bug per cycle or we could also just lean on the release notes instead of having a direct bug. On Thu, Feb 14, 2019, 06:07 Colleen Murphy On Wed, Feb 13, 2019, at 8:56 PM, Lance Bragstad wrote: > > Over the last couple of years, our launchpad blueprints have grown > > unruly [0] (~77 blueprints a few days ago). The majority of them were in > > "New" status, unmaintained, and several years old (some dating back to > > 2013). Even though we've been using specifications [1] for several > > years, people still get confused when they see conflicting or inaccurate > > blueprints. After another person tripped over a duplicate blueprint this > > week, cmurphy, vishakha, and I decided to devote some attention to it. > > We tracked the work in an etherpad [2] - so we can still find links to > > things. > > > > First, if you are the owner of a blueprint that was marked as > > "Obsolete", you should see a comment on the whiteboard that includes a > > reason or justification. If you'd like to continue the discussion about > > your feature request, please open a specification against the > > openstack/keystone-specs repository instead. For historical context, > > when we converted to specifications, we were only supposed to create > > blueprints for tracking the work after the specification was merged. > > Unfortunately, I don't think this process was ever written down, which > > I'm sure attributed to blueprint bloat over the years. > > > > Second, if you track work regularly using blueprints or plan on > > delivering something for Stein, please make sure your blueprint in > > Launchpad is approved and tracked to the appropriate release (this > > should already be done, but feel free to double check). The team doesn't > > plan on switching processes for feature tracking mid-release. Instead, > > we're going to continue tracking feature work with launchpad blueprints > > for the remainder of Stein. Currently, the team is leaning heavily > > towards using RFE bug reports for new feature work, which we can easily > > switch to in Train. The main reason for this switch is that bug comments > > are immutable with better timestamps while blueprint whiteboards are > > editable to anyone and not timestamped very well. 
We already have > > tooling in place to update bug reports based on commit messages and that > > will continue to work for RFE bug reports. > > > > Third, any existing blueprints that aren't targeted for Stein but are > > good ideas, should be converted to RFE bug reports. All context from the > > blueprint will need to be ported to the bug report. After a sufficient > > RFE bug report is opened, the blueprint should be marked as "Superseded" > > or "Obsolete" *with* a link to the newly opened bug. While this is > > tedious, there aren't nearly as many blueprints open now as there were a > > couple of days ago. If you're interested in assisting with this effort, > > let me know. > > > > Fourth, after moving non-Stein blueprints to RFE bugs, only Stein > > related blueprints should be open in launchpad. Once Stein is released, > > we'll go ahead disable keystone blueprints. > > > > Finally, we need to overhaul a portion of our contributor guide to > > include information around this process. The goal should be to make that > > documentation clear enough that we don't have this issue again. I plan > > on getting something up for review soon, but I don't have anything > > currently, so if someone is interested in taking a shot at writing this > > document, please feel free to do so. Morgan has a patch up to replace > > blueprint usage with RFE bugs in the specification template [3]. > > > > We can air out any comments, questions, or concerns here in the thread. > > What should we do about tracking "deprecated-as-of-*" and > "removed-as-of-*" work? I never liked how this was done with blueprints but > I'm not sure how we would do it with bugs. One tracking bug for all > deprecated things in a cycle? One bug for each? A Trello/Storyboard board > or etherpad? Do we even need to track it with an external tool - perhaps we > can just keep a running list in a release note that we add to over the > cycle? > > Thanks for tackling this cleanup work. > > > > > Thanks, > > > > Lance > > > > [0] https://blueprints.launchpad.net/keystone > > [1] http://specs.openstack.org/openstack/keystone-specs/ > > [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup > > [3] https://review.openstack.org/#/c/625282/ > > Email had 1 attachment: > > + signature.asc > > 1k (application/pgp-signature) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Thu Feb 14 11:47:13 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 14 Feb 2019 06:47:13 -0500 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> Message-ID: Rethinking my last email... Go with just release notes, no need for a bug. On Thu, Feb 14, 2019, 06:46 Morgan Fainberg I would go for one tracking bug per cycle or we could also just lean on > the release notes instead of having a direct bug. > > On Thu, Feb 14, 2019, 06:07 Colleen Murphy >> On Wed, Feb 13, 2019, at 8:56 PM, Lance Bragstad wrote: >> > Over the last couple of years, our launchpad blueprints have grown >> > unruly [0] (~77 blueprints a few days ago). The majority of them were in >> > "New" status, unmaintained, and several years old (some dating back to >> > 2013). Even though we've been using specifications [1] for several >> > years, people still get confused when they see conflicting or inaccurate >> > blueprints. 
After another person tripped over a duplicate blueprint this >> > week, cmurphy, vishakha, and I decided to devote some attention to it. >> > We tracked the work in an etherpad [2] - so we can still find links to >> > things. >> > >> > First, if you are the owner of a blueprint that was marked as >> > "Obsolete", you should see a comment on the whiteboard that includes a >> > reason or justification. If you'd like to continue the discussion about >> > your feature request, please open a specification against the >> > openstack/keystone-specs repository instead. For historical context, >> > when we converted to specifications, we were only supposed to create >> > blueprints for tracking the work after the specification was merged. >> > Unfortunately, I don't think this process was ever written down, which >> > I'm sure attributed to blueprint bloat over the years. >> > >> > Second, if you track work regularly using blueprints or plan on >> > delivering something for Stein, please make sure your blueprint in >> > Launchpad is approved and tracked to the appropriate release (this >> > should already be done, but feel free to double check). The team doesn't >> > plan on switching processes for feature tracking mid-release. Instead, >> > we're going to continue tracking feature work with launchpad blueprints >> > for the remainder of Stein. Currently, the team is leaning heavily >> > towards using RFE bug reports for new feature work, which we can easily >> > switch to in Train. The main reason for this switch is that bug comments >> > are immutable with better timestamps while blueprint whiteboards are >> > editable to anyone and not timestamped very well. We already have >> > tooling in place to update bug reports based on commit messages and that >> > will continue to work for RFE bug reports. >> > >> > Third, any existing blueprints that aren't targeted for Stein but are >> > good ideas, should be converted to RFE bug reports. All context from the >> > blueprint will need to be ported to the bug report. After a sufficient >> > RFE bug report is opened, the blueprint should be marked as "Superseded" >> > or "Obsolete" *with* a link to the newly opened bug. While this is >> > tedious, there aren't nearly as many blueprints open now as there were a >> > couple of days ago. If you're interested in assisting with this effort, >> > let me know. >> > >> > Fourth, after moving non-Stein blueprints to RFE bugs, only Stein >> > related blueprints should be open in launchpad. Once Stein is released, >> > we'll go ahead disable keystone blueprints. >> > >> > Finally, we need to overhaul a portion of our contributor guide to >> > include information around this process. The goal should be to make that >> > documentation clear enough that we don't have this issue again. I plan >> > on getting something up for review soon, but I don't have anything >> > currently, so if someone is interested in taking a shot at writing this >> > document, please feel free to do so. Morgan has a patch up to replace >> > blueprint usage with RFE bugs in the specification template [3]. >> > >> > We can air out any comments, questions, or concerns here in the thread. >> >> What should we do about tracking "deprecated-as-of-*" and >> "removed-as-of-*" work? I never liked how this was done with blueprints but >> I'm not sure how we would do it with bugs. One tracking bug for all >> deprecated things in a cycle? One bug for each? A Trello/Storyboard board >> or etherpad? 
Do we even need to track it with an external tool - perhaps we >> can just keep a running list in a release note that we add to over the >> cycle? >> >> Thanks for tackling this cleanup work. >> >> > >> > Thanks, >> > >> > Lance >> > >> > [0] https://blueprints.launchpad.net/keystone >> > [1] http://specs.openstack.org/openstack/keystone-specs/ >> > [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup >> > [3] https://review.openstack.org/#/c/625282/ >> > Email had 1 attachment: >> > + signature.asc >> > 1k (application/pgp-signature) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Feb 14 12:01:46 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 14 Feb 2019 12:01:46 +0000 Subject: [blazar] Question about Interaction between Ironic and Blazar In-Reply-To: References: Message-ID: Hi, The current architecture of Blazar relies on Nova host aggregates for physical host reservation. Unfortunately, Nova host aggregates don't play well with Ironic, as they are associated with nova-compute services instead of individual compute hosts. The Blazar community is well aware of this issue: https://blueprints.launchpad.net/blazar/+spec/ironic-compatibility We are planning to move to placement aggregates which will remove this limitation. However, this may only happen in the Train release. One of the main Blazar deployment is Chameleon (https://www.chameleoncloud.org/), which is a bare-metal cloud using Ironic. Chameleon has used two different workarounds to this issue. Originally, Nova was modified so that a nova-compute service would only manage one Ironic bare-metal node. By running as many nova-compute services as there were bare-metal nodes, host aggregates could be used by Blazar. More recently, Chameleon has been using Nova patches developed by Jay Pipes which allow compute nodes (and thus Ironic nodes) to be associated with host aggregates (see https://review.openstack.org/#/c/526753/). However, this patch series was not accepted following discussion at the Rocky PTG in Dublin, with placement aggregates being preferred. I am happy to help you adapt Blazar to your requirements. We have a weekly meeting on Tuesdays, however it's at 0900 UTC which is not convenient for US timezones. Feel free to join #openstack-blazar on freenode, I am often online during UTC daytime. Best wishes, Pierre Riteau (Blazar PTL for the Stein cycle) On Wed, 13 Feb 2019 at 18:59, Tzu-Mainn Chen wrote: > > Hi! I'm working with both Ironic and Blazar, and came across a strange interaction that I was wondering if the Blazar devs were aware of. > > I had four Ironic nodes registered, and only node A had an instance running on it. I tried adding node B - which was available - to the freepool and got this error: > > 2019-02-13 09:42:28.560 220255 ERROR oslo_messaging.rpc.server > ERROR: Servers [[{u'uuid': u'298e83a4-7d5e-4aae-b89a-9dc74b4278af', u'name': u'instance-00000011'}]] found for host a00696d5-32ba-475e-9528-59bf11cffea6 > > This was strange, because the instance in question was running on node A, and not node B. > > After some investigation, the cause was identified as the following: https://bugs.launchpad.net/nova/+bug/1815793 > > But in the meantime, my question is: have other people using Blazar and Ironic run into this issue? It would seem to imply that Ironic nodes can only be added to the freepool if no instances are created, which poses a long-term maintenance issue. Is there a workaround? 
> > > Thanks, > Tzu-Mainn Chen From thierry at openstack.org Thu Feb 14 13:14:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 14 Feb 2019 14:14:37 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> References: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> Message-ID: Adam Spiers wrote: > Ildiko Vancsa wrote: >>> On 2019. Feb 11., at 23:26, Adam Spiers wrote: >>> [snip…] >>> >>>> To help with all this I would start the experiment with wiki pages >>>> and etherpads as these are all materials you can point to without >>>> too much formality to follow so the goals, drivers, supporters and >>>> progress are visible to everyone who’s interested and to the TC to >>>> follow-up on. >>>> Do we expect an approval process to help with or even drive either >>>> of the crucial steps I listed above? >>> >>> I'm not sure if it would help.  But I agree that visibility is >>> important, and by extension also discoverability.  To that end I >>> think it would be worth hosting a central list of popup initiatives >>> somewhere which links to the available materials for each initiative. >>> Maybe it doesn't matter too much whether that central list is simply >>> a wiki page or a static web page managed by Gerrit under a governance >>> repo or similar. >> >> I would start with a wiki page as it stores history as well and it’s >> easier to edit. Later on if we feel the need to be more formal we can >> move to a static web page and use Gerrit. > > Sounds good to me.  Do we already have some popup teams?  If so we could > set this up straight away. To continue this discussion, I just set up a basic page with an example team at: https://wiki.openstack.org/wiki/Popup_Teams Feel free to improve the description and example entry. -- Thierry Carrez (ttx) From brandor5 at gmail.com Thu Feb 14 13:15:13 2019 From: brandor5 at gmail.com (Brandon Sawyers) Date: Thu, 14 Feb 2019 08:15:13 -0500 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com> References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com> Message-ID: You should be able to configure keystone to authenticate against "ldap" using your active directory. Have you tried that yet? On Thu, Feb 14, 2019, 05:33 Colleen Murphy wrote: > On Wed, Feb 13, 2019, at 9:50 AM, Fabian Zimmermann wrote: > > Hi, > > > > thanks for the fast answers. > > > > I asked our ADFS Administrators if they could provide some logs to see > > whats going wrong, but they are unable to deliver these. > > I'm more interested in what you were seeing, both the output from the > client and the output from the keystone server if you have access to it. > > > > > So I installed keycloak and switched to OpenID Connect. 
> > > > Im (again) able to connect via Horizon SSO, but when I try to use > > v3oidcpassword in the CLI Im running into > > > > https://bugs.launchpad.net/python-openstackclient/+bug/1648580 > > > > I already added the suggested --os-client-secret without luck. > > Updating to latest python-versions.. > > > > pip install -U python-keystoneclient > > pip install -U python-openstackclient > > > > didnt change anything. > > > > Any ideas what to try next? > > Unfortunately that seems to still be a valid bug that we'll need to > address. You could try using the python keystoneauth library directly and > see if the issue appears there[1][2]. > > [1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html > [2] > https://docs.openstack.org/keystoneauth/latest/plugin-options.html#v3oidcpassword > > > > > Offtopic: > > > > Seems like > > > > https://groups.google.com/forum/#!topic/mod_auth_openidc/qGE1DGQCTMY > > > > is right. I had to change the RedirectURI to geht OpenIDConnect working > > with Keystone. The sample config of > > > > > https://docs.openstack.org/keystone/rocky/advanced-topics/federation/websso.html > > > > is *not working for me* > > I found that too. The in-development documentation has already been > fixed[3] but we didn't backport that to the Rocky documentation because it > was part of a large series of rewrites and reorgs. > > [3] > https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#configure-mod-auth-openidc > > > > > Fabian > > > > Colleen > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 14 13:29:44 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 14 Feb 2019 14:29:44 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> Message-ID: <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> Colleen Murphy wrote: > I feel like there is a bit of a disconnect between what the TC is asking for > and what the current mentoring organizations are designed to provide. Thierry > framed this as a "peer-mentoring offered" list, but mentoring doesn't quite > capture everything that's needed. > > Mentorship programs like Outreachy, cohort mentoring, and the First Contact SIG > are oriented around helping new people quickstart into the community, getting > them up to speed on basics and helping them feel good about themselves and > their contributions. The hope is that happy first-timers eventually become > happy regular contributors which will eventually be a benefit to the projects, > but the benefit to the projects is not the main focus. > > The way I see it, the TC Help Wanted list, as well as the new thing, is not > necessarily oriented around newcomers but is instead advocating for the > projects and meant to help project teams thrive by getting committed long-term > maintainers involved and invested in solving longstanding technical debt that > in some cases requires deep tribal knowledge to solve. 
It's not a thing for a > newbie to step into lightly and it's not something that can be solved by a > FC-liaison pointing at the contributor docs. Instead what's needed are mentors > who are willing to walk through that tribal knowledge with a new contributor > until they are equipped enough to help with the harder problems. > > For that reason I think neither the FC SIG or the mentoring cohort group, in > their current incarnations, are the right groups to be managing this. The FC > SIG's mission is "To provide a place for new contributors to come for > information and advice" which does not fit the long-term goal of the help > wanted list, and cohort mentoring's four topics ("your first patch", "first > CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term and deeply > technical requirements that a project-specific mentorship offering needs. > Either of those groups could be rescoped to fit with this new mission, and > there is certainly a lot of overlap, but my feeling is that this needs to be an > effort conducted by the TC because the TC is the group that advocates for the > projects. > > It's moreover not a thing that can be solved by another list of names. In addition > to naming someone willing to do the several hours per week of mentoring, > project teams that want help should be forced to come up with a specific > description of 1) what the project is, 2) what kind of person (experience or > interests) would be a good fit for the project, 3) specific work items with > completion criteria that needs to be done - and it can be extremely challenging > to reframe a project's longstanding issues in such concrete ways that make it > clear what steps are needed to tackle the problem. It should basically be an > advertisement that makes the project sound interesting and challenging and > do-able, because the current help-wanted list and liaison lists and mentoring > topics are too vague to entice anyone to step up. Well said. I think we need to use another term for this program, to avoid colliding with other forms of mentoring or on-boarding help. On the #openstack-tc channel, I half-jokingly suggested to call this the 'Padawan' program, but now that I'm sober, I feel like it might actually capture what we are trying to do here: - Padawans are 1:1 trained by a dedicated, experienced team member - Padawans feel the Force, they just need help and perspective to master it - Padawans ultimately join the team* and may have a padawan of their own - Bonus geek credit for using Star Wars references * unless they turn to the Dark Side, always a possibility > Finally, I rather disagree that this should be something maintained as a page in > individual projects' contributor guides, although we should certainly be > encouraging teams to keep those guides up to date. It should be compiled by the > TC and regularly updated by the project liaisons within the TC. A link to a > contributor guide on docs.openstack.org doesn't give anyone an idea of what > projects need the most help nor does it empower people to believe they can help > by giving them an understanding of what the "job" entails. I think we need a single list. I guess it could be sourced from several repositories, but at least for the start I would not over-engineer it, just put it out there as a replacement for the help-most-needed list and see if it flies. 
As a next step, I propose to document the concept on a TC page, then reach out to the currently-listed teams on help-most-wanted to see if there would be a volunteer interested in offering Padawan training and bootstrap the new list, before we start to promote it more actively. -- Thierry Carrez (ttx) From i at liuyulong.me Thu Feb 14 13:34:36 2019 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Thu, 14 Feb 2019 21:34:36 +0800 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <20190214031445.GH12795@thor.bakeyournoodle.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> <20190214031445.GH12795@thor.bakeyournoodle.com> Message-ID: This now seems interesting. There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How about we give the OpenStack version name with the letters order of rotation? For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin. Then we will have a lot of choices. Such as 'Uwzhen' for 乌镇,'Uyxi' for 玉溪, 'Uylin' for 玉林, 'Uhbei' for 湖北,yeah, yeah, we have a lot of choices. Thanks, LIU Yulong ------------------ Original ------------------ From: "Tony Breeds"; Date: Thu, Feb 14, 2019 11:14 AM To: "Mohammed Naser"; Cc: "OpenStack Discuss"; Subject: Re: Call for help! 'U' Release name Mandarin speakers On Wed, Feb 13, 2019 at 10:09:35PM -0500, Mohammed Naser wrote: > So: chatting with some folks from China and we’ve got the interesting problem that Pinyin does not have a U! > > http://xh.5156edu.com/pinyi.php I admit this doesn't surprise me. > I will leave it for some of the locals who mentioned that they can clarify more about that :) Yup, if there isn't anything that naturally fits with the establish criteria we'd just have a number of items that are 'exceptional' and get TC endorsement for similar to https://review.openstack.org/#/c/611511/ Yours Tony. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Thu Feb 14 13:35:16 2019 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Thu, 14 Feb 2019 14:35:16 +0100 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: Message-ID: We should be fine now :) On Thu, Feb 14, 2019 at 11:25 AM Daniel Alvarez Sanchez wrote: > > Hi folks, > > A new DPDK version landed in CentOS which is not compatible with the > current Open vSwitch version that we have in RDO (error below). > > RDOfolks++ are working on it to make a new OVS version available > without DPDK support so that we can unblock our jobs until we get a > proper fix. Please, avoid rechecks in the next ~3 hours or so as no > tests are expected to pass. > > Once [0] is merged, we'll need to wait around 30 more minutes for it > to be available in CI jobs. > > Thanks! 
> > > [0] https://review.rdoproject.org/r/#/c/18853 > > 2019-02-14 07:35:06.464494 | primary | 2019-02-14 07:35:05 | Error: > Package: 1:openvswitch-2.10.1-1.el7.x86_64 (delorean-master-deps) > 2019-02-14 07:35:06.464603 | primary | 2019-02-14 07:35:05 | > Requires: librte_table.so.3()(64bit) > 2019-02-14 07:35:06.464711 | primary | 2019-02-14 07:35:05 | > Available: dpdk-17.11-13.el7.x86_64 (quickstart-centos-extras) From vgvoleg at gmail.com Thu Feb 14 13:58:49 2019 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Thu, 14 Feb 2019 16:58:49 +0300 Subject: [requirements][mistral] Add yamlloader to global requirements Message-ID: Hi! Can you please add yamlloader library to global requirements? https://pypi.org/project/yamlloader/ It provides ability to preserve key order in dicts, it supports either python 2.7 and python 3.x, it provides better performance than built-in functions. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Feb 14 14:16:08 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Feb 2019 09:16:08 -0500 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> Good thread. Comments inline. On 02/10/2019 04:08 PM, Chris Dent wrote: > On Sun, 10 Feb 2019, Chris Dent wrote: > Things have have worked out well (you can probably see a theme): > > * Placement is a single purpose service with, until very recently, >   only the WSGI service as the sole moving part. There are now >   placement-manage and placement-status commands, but they are >   rarely used (thankfully). This makes the system easier to reason >   about than something with multiple agents. Obviously some things >   need lots of agents. Placement isn't one of them. Yes. > * Using gabbi [2] as the framework for functional tests of the API >   and using them to enable test-driven-development, via those >   functional tests, has worked out really well. It keeps the focus on that >   sole moving part: The API. Yes. Bigly. I'd also include here the fact that we didn't care much at all in placement land about unit tests and instead focused almost exclusively on functional test coverage. > * No RPC, no messaging, no notifications. This is mostly just a historical artifact of wanting placement to be single-purpose; not something that was actively sought after, though :) I think having placement send event notifications would actually be A Good Thing since it turns placement into a better cloud citizen, enabling interested observers to trigger action instead of polling the placement API for information. But I agree with your overall point that the simplicity gained by not having all the cruft of nova's RPC/messaging layer was a boon. > * Very little configuration, reasonable defaults to that config. >   It's possible to run a working placement service with two config >   settings, if you are not using keystone. Keystone adds a few more, >   but not that much. Yes. > * String adherence to WSGI norms (that is, any WSGI server can run a Strict adherence I think you meant? :) >   placement WSGI app) and avoidance of eventlet, but see below. The >   combination of this with small number of moving parts and little >   configuration make it super easy to deploy placement [3] in lots >   of different setups, from tiny to huge, scaling and robustifying >   those setups as required. Yes. > * Declarative URL routing. 
There's a dict which maps HTTP method:URL >   pairs to python functions. Clear dispatch is a _huge_ help when >   debugging. Look one place, as a computer or human, to find where >   to go. Yes. > * microversion-parse [4] has made microversion handling easy. Yes. I will note a couple other things that I believe have worked out well: 1) Using generation markers for concurrent update mechanisms Using a generation marker field for the relevant data models under the covers -- and exposing/expecting that generation via the API -- has enabled us to have a clear concurrency model and a clear mechanism for callers to trigger a re-drive of change operations. The use of generation markers has enabled us over time to reduce our use of caching and to have a single consistent trigger for callers (nova-scheduler, nova-compute) to fetch updated information about providers and consumers. Finally, the use of generation markers means there is nowhere in either the placement API nor its clients that use any locking semantics *at all*. No mutexes. No semaphores. No "lock this thing" API call. None of that heavyweight old skool concurrency. 2) Separation of quantitative and qualitative things Unlike the Nova flavor and its extra specs, placement has clear boundaries and expectations regarding what is a *resource* (quantitative thing that is consumed) and what is a *trait* (qualitative thing that describes a capability of the thing providing resources). This simple black-and-white modeling has allowed placement to fulfill scheduling queries and resource claim transactions efficiently. I hope, long term, that we can standardize on placement for tracking quota usage since its underlying data model and schema are perfectly suited for this task. > Things that haven't gone so well (none of these are dire) and would > have been nice to do differently had we but known: > > * Because of a combination of "we might need it later", "it's a >   handy tool and constraint" and "that's the way we do things" the >   interface between the placement URL handlers and the database is >   mediated through oslo versioned objects. Since there's no RPC, nor >   inter-version interaction, this is overkill. It also turns out that >   OVO getters and setters are a moderate factor in performance. Data please. >   Initially we were versioning the versioned objects, which created >   a lot of cognitive overhead when evolving the system, but we no >   longer do that, now that we've declared RPC isn't going to happen. I agree with you that ovo is overkill and not needed in placement. > * Despite the strict adherence to being a good WSGI citizen >   mentioned above, placement is using a custom (very limited) >   framework for the WSGI application. An initial proof of concept >   used flask but it was decided that introducing flask into the nova >   development environment would be introducing another thing to know >   when decoding nova. I suspect the expected outcome was that >   placement would reuse nova's framework, but the truth is I simply >   couldn't do it. Declarative URL dispatch was a critical feature >   that has proven worth it. The resulting code is relatively >   straightforward but it is unicorn where a boring pony would have >   been the right thing. Boring ponies are very often the right >   thing. Not sure I agree with this. The simplicity of the placement WSGI (non-)framework is a benefit. We don't need to mess with it. Really, it hasn't been an issue at all. 
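(For readers skimming the thread: the "declarative URL dispatch" praised above is just a plain mapping from HTTP method and path to a handler function. The sketch below is not placement's actual routing table, the routes and handlers are invented, it only shows the general shape of the pattern and why there is a single place to look when debugging.)

    # Minimal sketch of declarative URL dispatch. Not placement's real code;
    # the routes and handlers are made up for illustration.

    def list_widgets(environ):
        return 200, '{"widgets": []}'

    def create_widget(environ):
        return 201, '{"created": true}'

    # One dict, one place to look: (HTTP method, path) -> handler function.
    ROUTES = {
        ('GET', '/widgets'): list_widgets,
        ('POST', '/widgets'): create_widget,
    }

    def dispatch(environ):
        handler = ROUTES.get((environ['REQUEST_METHOD'], environ['PATH_INFO']))
        if handler is None:
            return 404, '{"error": "not found"}'
        return handler(environ)

    # e.g. dispatch({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/widgets'})

A real WSGI app would still wrap this in start_response handling, but the routing itself stays a static, greppable data structure rather than decorator or framework magic.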
I'll add one thing that I don't believe we did correctly and that we'll regret over time: Placement allocations currently have a distinct lack of temporal awareness. An allocation either exists or doesn't exist -- there is no concept of an allocation "end time". What this means is that placement cannot be used for a reservation system. I used to think this was OK, and that reservation systems should be layered on top of the simpler placement data model. I no longer believe this is a good thing, and feel that placement is actually the most appropriate service for modeling a reservation system. If I were to have a "do-over", I would have added the concept of a start and end time to the allocation. Best, -jay > I'm sure there are more here, but I've run out of brain. > > [1] https://review.openstack.org/#/c/630216/ > [2] https://gabbi.readthedocs.io/ > [3] https://anticdent.org/placement-from-pypi.html > [4] https://pypi.org/project/microversion_parse/ > From openstack at nemebean.com Thu Feb 14 14:26:08 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 14 Feb 2019 08:26:08 -0600 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> Message-ID: <6547bef5-aeaf-9b4c-a667-e1e233c90f10@nemebean.com> On 2/14/19 8:16 AM, Jay Pipes wrote: > This simple black-and-white modeling has allowed placement to fulfill > scheduling queries and resource claim transactions efficiently. I hope, > long term, that we can standardize on placement for tracking quota usage > since its underlying data model and schema are perfectly suited for this > task. Instead of, or in addition to the Keystone unified limits? From mark at stackhpc.com Thu Feb 14 14:34:43 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 14 Feb 2019 14:34:43 +0000 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors In-Reply-To: References: Message-ID: This issue should now be fixed in master. There are patches up for the stable branches, here's the queens one: https://review.openstack.org/636928. Mark On Tue, 12 Feb 2019 at 17:32, Giuseppe Sannino wrote: > Hi all, > need your help. > I'm trying to deploy Openstack "Queens" via kolla on a multinode system (1 > controller/kolla host + 1 compute). > > I tried with both binary and source packages and I'm using "ubuntu" as > base_distro. > > The first attempt of deployment systematically fails here: > > TASK [mariadb : Running MariaDB bootstrap container] > ******************************************************************************************************************************************************************************************************** > fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container > exited with non-zero return code 1"} > > Looking at the bootstrap_mariadb container logs I can see: > ---------- > Neither host 'xxyyzz' nor 'localhost' could be looked up with > '/usr/sbin/resolveip' > Please configure the 'hostname' command to return a correct > hostname. > ---------- > > Any idea ? > > Thanks a lot > /Giuseppe > > -------------- next part -------------- An HTML attachment was scrubbed... 
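(A note for anyone who hits the resolveip failure quoted above before the kolla-ansible fix reaches their release: the bootstrap script requires that the node's hostname resolves to an address. One common workaround, using placeholder values and assuming a typical single-controller setup, is to check and, if needed, add a hosts entry on the affected node. This may not cover every environment, so treat it as a starting point rather than the official fix.)

    # does the hostname resolve at all?
    hostname
    getent hosts "$(hostname)"

    # if the second command returns nothing, add an entry for the node's
    # management IP (placeholders below) to /etc/hosts and retry the deploy:
    #   10.0.0.136  controller1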
URL: From cdent+os at anticdent.org Thu Feb 14 14:47:30 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 14 Feb 2019 14:47:30 +0000 (GMT) Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> Message-ID: On Thu, 14 Feb 2019, Jay Pipes wrote: >> * No RPC, no messaging, no notifications. > > This is mostly just a historical artifact of wanting placement to be > single-purpose; not something that was actively sought after, though :) I certainly sought it and would have fought hard to prevent it if we ever ran into a situation where we had time to do it. These days, given time constraints, these sort of optional nice to haves are easier to avoid because there are fewer people to do them... > I think having placement send event notifications would actually be A Good > Thing since it turns placement into a better cloud citizen, enabling > interested observers to trigger action instead of polling the placement API > for information. I think some kind of event stream would be interesting, but there are many ways to skin that cat. The current within-openstack standards for such things are pretty heavyweight, better ways are on the scene in the big wide world. By putting it off as long as possible, we can take avantage of that new stuff. >> * String adherence to WSGI norms (that is, any WSGI server can run a > > Strict adherence I think you meant? :) My strictness is much better in wsgi than typing. > 1) Using generation markers for concurrent update mechanisms I agree. I'm still conflicted over whether we should have exposed them as ETags or not (mostly from an HTTP-love standpoint), but overall they've made lots of stuff possible and easier. > Finally, the use of generation markers means there is nowhere in either the > placement API nor its clients that use any locking semantics *at all*. No > mutexes. No semaphores. No "lock this thing" API call. None of that > heavyweight old skool concurrency. Yeah. State handling (lack of) is nice. > 2) Separation of quantitative and qualitative things Yes, very much agree. >> * Because of a combination of "we might need it later", "it's a >>   handy tool and constraint" and "that's the way we do things" the >>   interface between the placement URL handlers and the database is >>   mediated through oslo versioned objects. Since there's no RPC, nor >>   inter-version interaction, this is overkill. It also turns out that >>   OVO getters and setters are a moderate factor in performance. > > Data please. When I wrote that bullet I just had some random profiling data from running a profiler during a bunch of requests, which made it clear that some ovo methods (in the getters and setters) were being called a ton (in large part because of the number of objects invovled in an allocation candidates response). I didn't copy that down anywhere at the time because I planned to do it more formally. Since then, I've made this: https://review.openstack.org/#/c/636631/ That's a stack which removes OVO from placement. While we know the perfload job is not scientific, it does provide a nice quide. An ovo-using patch has perfload times of 2.65-ish (seconds). The base of that OVO removal stack (which changes allocation candidates) < http://logs.openstack.org/31/636631/4/check/placement-perfload/a413724/logs/placement-perf.txt> is 2.3-ish. The end of it is 1.5-ish. 
And there are ways in which the code is much more explicit. There's plenty of cleanup to do, and I'm not wed to us making that change if people aren't keen, but I can see a fair number reasons above and beyond peformance to do it but that might be enough. Lot's more info in the commits and comments in that stack. >> * Despite the strict adherence to being a good WSGI citizen >>   mentioned above, placement is using a custom (very limited) >>   framework for the WSGI application. An initial proof of concept >>   used flask but it was decided that introducing flask into the nova >>   development environment would be introducing another thing to know >>   when decoding nova. I suspect the expected outcome was that >>   placement would reuse nova's framework, but the truth is I simply >>   couldn't do it. Declarative URL dispatch was a critical feature >>   that has proven worth it. The resulting code is relatively >>   straightforward but it is unicorn where a boring pony would have >>   been the right thing. Boring ponies are very often the right >>   thing. > > Not sure I agree with this. The simplicity of the placement WSGI > (non-)framework is a benefit. We don't need to mess with it. Really, it > hasn't been an issue at all. I agree that it is very hands off now, and not worth changing, but as an example for new projects, it is something to think about. It had creation costs in various forms. If there wasn't a me around (many custom non-frameworks under my belt) it would have been harder to create something (and then manage/maintain/educate it). sdague and I nearly came to metaphorical blows over it. If it were just normal to use the boring pony such things wouldn't need to happen. > Placement allocations currently have a distinct lack of temporal awareness. > An allocation either exists or doesn't exist -- there is no concept of an > allocation "end time". What this means is that placement cannot be used for a > reservation system. I used to think this was OK, and that reservation systems > should be layered on top of the simpler placement data model. Yeah, I was thinking about this recently too. Trying to come up with conceptual hacks that would make it possible without drastically changing the existing data model. There's stuff percolating in my brain, potentially as weird as infinite resource classes but maybe not, but nothing has gelled. I hope, at least, that we can get the layered on top stuff working well. > Best, > -jay Thanks very much for chiming in here, I hope other people will too. >> I'm sure there are more here, but I've run out of brain. One thing that came up in the TC discussions [1] related to placement governance [1] was that given there have been bumps in the extraction road, it might be useful to also document the learnings from that. The main one, from my perspective is: If there's any inkling that a new service (something with what might be described as a public interface) is ever going to be eventually extracted, start it outside from the outset, but make sure the people involved overlap. 
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-12.log.html [2] https://review.openstack.org/#/c/636416/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Thu Feb 14 14:51:28 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Feb 2019 09:51:28 -0500 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <6547bef5-aeaf-9b4c-a667-e1e233c90f10@nemebean.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> <6547bef5-aeaf-9b4c-a667-e1e233c90f10@nemebean.com> Message-ID: <867a2365-592c-ca80-9491-7b5093e2a0f0@gmail.com> On 02/14/2019 09:26 AM, Ben Nemec wrote: > On 2/14/19 8:16 AM, Jay Pipes wrote: >> This simple black-and-white modeling has allowed placement to fulfill >> scheduling queries and resource claim transactions efficiently. I >> hope, long term, that we can standardize on placement for tracking >> quota usage since its underlying data model and schema are perfectly >> suited for this task. > > Instead of, or in addition to the Keystone unified limits? In addition. Keystone unified limits stores the limits. Placement stores the usage counts. Best, -jay From mark at stackhpc.com Thu Feb 14 14:53:31 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 14 Feb 2019 14:53:31 +0000 Subject: [kolla][TripleO] State of SELinux support In-Reply-To: References: Message-ID: On Tue, 12 Feb 2019 at 18:39, Jason Anderson wrote: > Hey all, > > With CVE-2019-5736 > dropping > today, I thought it would be a good opportunity to poke about the current > state of SELinux support in Kolla. The docs > have > said it is a work in progress since the Mitaka release at least. I did find > a spec that > was marked as completed, but I am not aware that there is yet any support > and I see that the baremetal role still forces SELinux to "permissive" by > default. > > Is anybody currently working on this or is there an update spec/blueprint > to track the development here? I am no SELinux expert by any means but this > feels like an important thing to address, particularly if Docker has made > it easier to label bind mounts > > . > Hi Jason, Thanks for bringing this up. I'm afraid SELinux is still not supported in kolla-ansible. I'd definitely be interested in at least understanding what would be required to make it happen. I saw some messages on here about SELinux in TripleO, which suggests that it is possible with the kolla images. The discussion I saw was around the bind mount labelling. I've tagged TripleO, perhaps someone from that team could speak about what they have done to deploy the kolla containers with SELinux enabled? This thread [1] looks like a good starting point. Mark [1] https://openstack.nimeyo.com/121793/openstack-tripleo-undercloud-containers-selinux-enforcing > -------------- next part -------------- An HTML attachment was scrubbed... 
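(On the bind-mount labelling point raised in the SELinux thread above: the Docker feature being referred to is the :z / :Z suffix on volume mounts, which makes Docker relabel the host directory with an SELinux context that containers are allowed to use. The paths below are placeholders, not kolla's actual mounts; this only illustrates the mechanism that any enforcing-mode support would likely build on.)

    # shared label: the content can be used by multiple containers
    docker run -v /host/config:/container/config:z some-image

    # private label: the content is usable only by this one container
    docker run -v /host/data:/container/data:Z some-image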
URL: From jaypipes at gmail.com Thu Feb 14 14:56:03 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Feb 2019 09:56:03 -0500 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> Message-ID: <61384876-48dc-b1e7-8f3e-83bcb8c5d872@gmail.com> On 02/14/2019 09:47 AM, Chris Dent wrote: >>> * Because of a combination of "we might need it later", "it's a >>>    handy tool and constraint" and "that's the way we do things" the >>>    interface between the placement URL handlers and the database is >>>    mediated through oslo versioned objects. Since there's no RPC, nor >>>    inter-version interaction, this is overkill. It also turns out that >>>    OVO getters and setters are a moderate factor in performance. >> >> Data please. > > When I wrote that bullet I just had some random profiling data from > running a profiler during a bunch of requests, which made it clear > that some ovo methods (in the getters and setters) were being called > a ton (in large part because of the number of objects invovled in an > allocation candidates response). I didn't copy that down anywhere at > the time because I planned to do it more formally. > > Since then, I've made this: > > https://review.openstack.org/#/c/636631/ > > That's a stack which removes OVO from placement. While we know the > perfload job is not scientific, it does provide a nice quide. An > ovo-using patch > > > has perfload times of 2.65-ish (seconds). > > The base of that OVO removal stack (which changes allocation > candidates) < > http://logs.openstack.org/31/636631/4/check/placement-perfload/a413724/logs/placement-perf.txt> > > is 2.3-ish. > > The end of it > > > is 1.5-ish. > > And there are ways in which the code is much more explicit. There's > plenty of cleanup to do, and I'm not wed to us making that change if > people aren't keen, but I can see a fair number reasons above and > beyond peformance to do it but that might be enough. Lot's more info > in the commits and comments in that stack. bueno. :) I'll review that series over the next couple days. Great work, Chris. -jay From morgan.fainberg at gmail.com Thu Feb 14 14:59:11 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 14 Feb 2019 09:59:11 -0500 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <867a2365-592c-ca80-9491-7b5093e2a0f0@gmail.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> <6547bef5-aeaf-9b4c-a667-e1e233c90f10@nemebean.com> <867a2365-592c-ca80-9491-7b5093e2a0f0@gmail.com> Message-ID: On Thu, Feb 14, 2019, 09:51 Jay Pipes On 02/14/2019 09:26 AM, Ben Nemec wrote: > > On 2/14/19 8:16 AM, Jay Pipes wrote: > >> This simple black-and-white modeling has allowed placement to fulfill > >> scheduling queries and resource claim transactions efficiently. I > >> hope, long term, that we can standardize on placement for tracking > >> quota usage since its underlying data model and schema are perfectly > >> suited for this task. > > > > Instead of, or in addition to the Keystone unified limits? > > In addition. Keystone unified limits stores the limits. Placement stores > the usage counts. > > Best, > -jay > This was the exact response I was hoping to see. I'm pleased if we start having consistent consumption of quota as well as the unifited limit storage. Placement seems very well positioned for providing the functionality. 
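(For context on the quota idea above: placement already exposes per-project usage totals through its REST API, which is the piece that would pair with Keystone's unified limits. A hedged example, with token, endpoint and project ID as placeholders, and assuming a microversion that includes the usages call, roughly 1.9 or later:)

    curl -s \
      -H "X-Auth-Token: $TOKEN" \
      -H "OpenStack-API-Version: placement 1.9" \
      "$PLACEMENT_URL/usages?project_id=$PROJECT_ID"

    # returns usage keyed by resource class, roughly:
    #   {"usages": {"VCPU": 8, "MEMORY_MB": 16384, "DISK_GB": 100}}

A limit check would then be: read the limit from Keystone, read current usage from placement, and reject the request if usage plus the requested allocation exceeds the limit.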
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Feb 14 15:08:53 2019 From: ed at leafe.com (Ed Leafe) Date: Thu, 14 Feb 2019 09:08:53 -0600 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> Message-ID: <0A24656E-D2B9-4668-A105-D6546896C6DA@leafe.com> On Feb 14, 2019, at 8:16 AM, Jay Pipes wrote: > > Placement allocations currently have a distinct lack of temporal awareness. An allocation either exists or doesn't exist -- there is no concept of an allocation "end time". What this means is that placement cannot be used for a reservation system. I used to think this was OK, and that reservation systems should be layered on top of the simpler placement data model. > > I no longer believe this is a good thing, and feel that placement is actually the most appropriate service for modeling a reservation system. If I were to have a "do-over", I would have added the concept of a start and end time to the allocation. I’m not clear on how you are envisioning this working. Will Placement somehow delete an allocation at this end time? IMO this sort of functionality should really be done by a system external to Placement. But perhaps you are thinking of something completely different, and I’m just a little thick? -- Ed Leafe From renat.akhmerov at gmail.com Thu Feb 14 15:10:25 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 14 Feb 2019 22:10:25 +0700 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: Message-ID: <6b3ca84b-61cc-4df6-b16c-6310886cce8f@Spark> Yes, it would help us implement a useful feature. Thanks Renat Akhmerov @Nokia On 14 Feb 2019, 21:02 +0700, Oleg Ovcharuk , wrote: > Hi! Can you please add yamlloader library to global requirements? > https://pypi.org/project/yamlloader/ > > It provides ability to preserve key order in dicts, it supports either python 2.7 and python 3.x, it provides better performance than built-in functions. > Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Feb 14 15:19:51 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 09:19:51 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> Message-ID: <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> On 1/31/19 9:59 AM, Lance Bragstad wrote: > Hello everyone, > > I thought it would be good to have a quick recap of the various goal > proposals. > > *Project clean-up* > > Adrian and Tobias Rydberg have volunteered to champion the goal. There > has also been some productive discussion around the approaches > detailed in the etherpad [0]. At this point is it safe to assume we've > come to a conclusion on the proposed approach? If so, I think the next > logical step would be to do a gap analysis on what the proposed > approach would mean work-wise for all projects. Note, Assaf Muller > brought the approach Neutron takes to my attention [1] and I wanted to > highlight this here since it establishes a template for us to follow, > or at least look at. Note, Neutron's approach is client-based, which > might not be orthogonal with the client goal. 
Just something to keep > in mind if those two happen to be accepted for the same release. Is there anything preventing this goal from making its way into review? The goal has champions and a plan for implementation. > > [0] https://etherpad.openstack.org/p/community-goal-project-deletion > [1] https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py > > *Moving legacy clients to python-openstackclient* > > Artem has done quite a bit of pre-work here [2], which has been useful > in understanding the volume of work required to complete this goal in > its entirety. I suggest we look for seams where we can break this into > more consumable pieces of work for a given release. > > For example, one possible goal would be to work on parity with > python-openstackclient and openstacksdk. A follow-on goal would be to > move the legacy clients. Alternatively, we could start to move all the > project clients logic into python-openstackclient, and then have > another goal to implement the common logic gaps into openstacksdk. > Arriving at the same place but using different paths. The approach > still has to be discussed and proposed. I do think it is apparent that > we'll need to break this up, however. Artem's call for help is still open [0]. Artem, has anyone reached out to you about co-championing the goal? Do you have suggestions for how you'd like to break up the work to make the goal more achievable, especially if you're the only one championing the initiative? > > [2] https://etherpad.openstack.org/p/osc-gaps-analysis > > *Healthcheck middleware* > > There is currently no volunteer to champion for this goal. The first > iteration of the work on the oslo.middleware was updated [3], and a > gap analysis was started on the mailing lists [4]. > If you want to get involved in this goal, don't hesitate to answer on > the ML thread there. This goal still needs at least one champion. Based on recent feedback and discussions, we still need to smooth out some wrinkles in the implementation (see cdent's note about checks [1]). Regardless, it sounds like this effort is still in the prework phase and would greatly benefit from a PoC before pushing this as a Train goal for review. Should we consider that goal for U instead? Just a reminder that we would like to have all potential goals proposed for review in governance in the next days,giving us 6 weeks to hash out details in Gerrit if we plan to have the goals merged bythe end of March. This should give us 4 weeks to prepare anydiscussions we'd like to have in-person pertaining to those goals. Thanks for the time, Jean-Philippe & Lance [0] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002275.html [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002126.html > > [3] https://review.openstack.org/#/c/617924/2 > [4] https://ethercalc.openstack.org/di0mxkiepll8 > > Just a reminder that we would like to have all potential goals > proposed for review in openstack/governance by the middle of this > month, giving us 6 weeks to hash out details in Gerrit if we plan to > have the goals merged by the end of March. This timeframe should give > us 4 weeks to prepare any discussions we'd like to have in-person > pertaining to those goals. 
> > Thanks for the time, > > Lance > > On Tue, Jan 8, 2019 at 4:11 AM Jean-Philippe Evrard > > wrote: > > On Wed, 2018-12-19 at 06:58 +1300, Adrian Turjak wrote: > > I put my hand up during the summit for being at least one of the > > champions for the deletion of project resources effort. > > > > I have been meaning to do a follow up email and options as well as > > steps > > for how the goal might go, but my working holiday in Europe > after the > > summit turned into more of a holiday than originally planned. > > > > I'll get a thread going around what I (and the public cloud working > > group) think project resource deletion should look like, and > what the > > options are, and where we should aim to be with it. We can then turn > > that discussion into a final 'spec' of sorts. > > > > > > Great news! > > Do you need any help to get started there? > > Regards, > JP > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From openstack at nemebean.com Thu Feb 14 15:23:46 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 14 Feb 2019 09:23:46 -0600 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: <6b3ca84b-61cc-4df6-b16c-6310886cce8f@Spark> References: <6b3ca84b-61cc-4df6-b16c-6310886cce8f@Spark> Message-ID: <54911bc5-fcab-c58e-dfae-92c13b61f4f0@nemebean.com> Anyone can propose to add a new library to global-requirements. Open a review in the requirements repo[1] and make sure the commit message answers the questions in the docs[2]. 1: http://git.openstack.org/cgit/openstack/requirements/ 2: https://docs.openstack.org/project-team-guide/dependency-management.html#review-guidelines On 2/14/19 9:10 AM, Renat Akhmerov wrote: > Yes, it would help us implement a useful feature. > > > > Thanks > > Renat Akhmerov > @Nokia > On 14 Feb 2019, 21:02 +0700, Oleg Ovcharuk , wrote: >> Hi! Can you please add yamlloader library to global requirements? >> https://pypi.org/project/yamlloader/ >> >> It provides ability to preserve key order in dicts, it supports either >> python 2.7 and python 3.x, it provides better performance than >> built-in functions. >> Thank you. From lbragstad at gmail.com Thu Feb 14 15:27:29 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 09:27:29 -0600 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> Message-ID: <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> On 2/14/19 5:47 AM, Morgan Fainberg wrote: > Rethinking my last email... Go with just release notes, no need for a bug. The only thing we lose with this would be a place to see every commit that deprecated or removed something in a release (short of doing a git blame on the release note). We could still do this with bugs and we could drive the tracking with Partial-Bug in each commit message. We need to make sure to formally close the bug however at the end of the release if we don't close it with a commit using Closes-Bug. In my experience, we rarely stage all these commits at once. They're usually proposed haphazardly throughout the release as people have cycles. 
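(To make the Partial-Bug / Closes-Bug idea concrete: with one tracking bug per cycle, each deprecation or removal patch would carry a footer like the one below, and the existing gerrit/launchpad tooling adds a comment to the bug for every such commit. The bug number and option name are placeholders.)

    Deprecate the [DEFAULT] example_opt option

    <usual commit message body>

    Partial-Bug: #1234567

The last patch of the cycle, or a final bookkeeping change, would use Closes-Bug: #1234567 instead, so the tracking bug is closed automatically rather than by hand at release time.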
> > On Thu, Feb 14, 2019, 06:46 Morgan Fainberg wrote: > > I would go for one tracking bug per cycle or we could also just > lean on the release notes instead of having a direct bug.  > > On Thu, Feb 14, 2019, 06:07 Colleen Murphy wrote: > > On Wed, Feb 13, 2019, at 8:56 PM, Lance Bragstad wrote: > > Over the last couple of years, our launchpad blueprints have > grown > > unruly [0] (~77 blueprints a few days ago). The majority of > them were in > > "New" status, unmaintained, and several years old (some > dating back to > > 2013). Even though we've been using specifications [1] for > several > > years, people still get confused when they see conflicting > or inaccurate > > blueprints. After another person tripped over a duplicate > blueprint this > > week, cmurphy, vishakha, and I decided to devote some > attention to it. > > We tracked the work in an etherpad [2] - so we can still > find links to > > things. > > > > First, if you are the owner of a blueprint that was marked as > > "Obsolete", you should see a comment on the whiteboard that > includes a > > reason or justification. If you'd like to continue the > discussion about > > your feature request, please open a specification against the > > openstack/keystone-specs repository instead. For historical > context, > > when we converted to specifications, we were only supposed > to create > > blueprints for tracking the work after the specification was > merged. > > Unfortunately, I don't think this process was ever written > down, which > > I'm sure attributed to blueprint bloat over the years. > > > > Second, if you track work regularly using blueprints or plan on > > delivering something for Stein, please make sure your > blueprint in > > Launchpad is approved and tracked to the appropriate release > (this > > should already be done, but feel free to double check). The > team doesn't > > plan on switching processes for feature tracking > mid-release. Instead, > > we're going to continue tracking feature work with launchpad > blueprints > > for the remainder of Stein. Currently, the team is leaning > heavily > > towards using RFE bug reports for new feature work, which we > can easily > > switch to in Train. The main reason for this switch is that > bug comments > > are immutable with better timestamps while blueprint > whiteboards are > > editable to anyone and not timestamped very well. We already > have > > tooling in place to update bug reports based on commit > messages and that > > will continue to work for RFE bug reports. > > > > Third, any existing blueprints that aren't targeted for > Stein but are > > good ideas, should be converted to RFE bug reports. All > context from the > > blueprint will need to be ported to the bug report. After a > sufficient > > RFE bug report is opened, the blueprint should be marked as > "Superseded" > > or "Obsolete" *with* a link to the newly opened bug. While > this is > > tedious, there aren't nearly as many blueprints open now as > there were a > > couple of days ago. If you're interested in assisting with > this effort, > > let me know. > > > > Fourth, after moving non-Stein blueprints to RFE bugs, only > Stein > > related blueprints should be open in launchpad. Once Stein > is released, > > we'll go ahead disable keystone blueprints. > > > > Finally, we need to overhaul a portion of our contributor > guide to > > include information around this process. The goal should be > to make that > > documentation clear enough that we don't have this issue > again. 
I plan > > on getting something up for review soon, but I don't have > anything > > currently, so if someone is interested in taking a shot at > writing this > > document, please feel free to do so. Morgan has a patch up > to replace > > blueprint usage with RFE bugs in the specification template [3]. > > > > We can air out any comments, questions, or concerns here in > the thread. > > What should we do about tracking "deprecated-as-of-*" and > "removed-as-of-*" work? I never liked how this was done with > blueprints but I'm not sure how we would do it with bugs. One > tracking bug for all deprecated things in a cycle? One bug for > each? A Trello/Storyboard board or etherpad? Do we even need > to track it with an external tool - perhaps we can just keep a > running list in a release note that we add to over the cycle? > I agree. The solution that is jumping out at me is to track one bug for deprecated things and one for removed things per release, so similar to what we do now with blueprints. We would have to make sure we tag commits properly, so they are all tracked in the bug report. Creating a bug for everything that is deprecated or removed would be nice for capturing specific details, but it also feels like it will introduce more churn to the process. I guess I'm assuming there are users that like to read every commit that has deprecated something or removed something in a release. If we don't need to operate under that assumption, then a release note would do just fine and I'm all for simplifying the process. > > Thanks for tackling this cleanup work. > > > > > Thanks, > > > > Lance > > > > [0] https://blueprints.launchpad.net/keystone > > [1] http://specs.openstack.org/openstack/keystone-specs/ > > [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup > > [3] https://review.openstack.org/#/c/625282/ > > Email had 1 attachment: > > + signature.asc > >   1k (application/pgp-signature) > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From morgan.fainberg at gmail.com Thu Feb 14 15:50:05 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 14 Feb 2019 10:50:05 -0500 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> Message-ID: I think a `git blame` or history of the deprecated release note is nice, it centralizes out tracking of removed/deprecated items to the git log itself rather than some external tracker that may or may not be available forever. This way as long as the git repo is maintained, our tracking for a given release is also tracked. Specs and bugs are nice, but the deprecated bug # for a given release is fairly opaque. Other bugs might have more context in the bug, but if it's just a list of commits, I don't see a huge win. On Thu, Feb 14, 2019, 10:28 Lance Bragstad > > On 2/14/19 5:47 AM, Morgan Fainberg wrote: > > Rethinking my last email... Go with just release notes, no need for a bug. > > > The only thing we lose with this would be a place to see every commit that > deprecated or removed something in a release (short of doing a git blame on > the release note). 
We could still do this with bugs and we could drive the > tracking with Partial-Bug in each commit message. We need to make sure to > formally close the bug however at the end of the release if we don't close > it with a commit using Closes-Bug. In my experience, we rarely stage all > these commits at once. They're usually proposed haphazardly throughout the > release as people have cycles. > > > On Thu, Feb 14, 2019, 06:46 Morgan Fainberg wrote: > >> I would go for one tracking bug per cycle or we could also just lean on >> the release notes instead of having a direct bug. >> >> On Thu, Feb 14, 2019, 06:07 Colleen Murphy > >>> On Wed, Feb 13, 2019, at 8:56 PM, Lance Bragstad wrote: >>> > Over the last couple of years, our launchpad blueprints have grown >>> > unruly [0] (~77 blueprints a few days ago). The majority of them were >>> in >>> > "New" status, unmaintained, and several years old (some dating back to >>> > 2013). Even though we've been using specifications [1] for several >>> > years, people still get confused when they see conflicting or >>> inaccurate >>> > blueprints. After another person tripped over a duplicate blueprint >>> this >>> > week, cmurphy, vishakha, and I decided to devote some attention to it. >>> > We tracked the work in an etherpad [2] - so we can still find links to >>> > things. >>> > >>> > First, if you are the owner of a blueprint that was marked as >>> > "Obsolete", you should see a comment on the whiteboard that includes a >>> > reason or justification. If you'd like to continue the discussion about >>> > your feature request, please open a specification against the >>> > openstack/keystone-specs repository instead. For historical context, >>> > when we converted to specifications, we were only supposed to create >>> > blueprints for tracking the work after the specification was merged. >>> > Unfortunately, I don't think this process was ever written down, which >>> > I'm sure attributed to blueprint bloat over the years. >>> > >>> > Second, if you track work regularly using blueprints or plan on >>> > delivering something for Stein, please make sure your blueprint in >>> > Launchpad is approved and tracked to the appropriate release (this >>> > should already be done, but feel free to double check). The team >>> doesn't >>> > plan on switching processes for feature tracking mid-release. Instead, >>> > we're going to continue tracking feature work with launchpad blueprints >>> > for the remainder of Stein. Currently, the team is leaning heavily >>> > towards using RFE bug reports for new feature work, which we can easily >>> > switch to in Train. The main reason for this switch is that bug >>> comments >>> > are immutable with better timestamps while blueprint whiteboards are >>> > editable to anyone and not timestamped very well. We already have >>> > tooling in place to update bug reports based on commit messages and >>> that >>> > will continue to work for RFE bug reports. >>> > >>> > Third, any existing blueprints that aren't targeted for Stein but are >>> > good ideas, should be converted to RFE bug reports. All context from >>> the >>> > blueprint will need to be ported to the bug report. After a sufficient >>> > RFE bug report is opened, the blueprint should be marked as >>> "Superseded" >>> > or "Obsolete" *with* a link to the newly opened bug. While this is >>> > tedious, there aren't nearly as many blueprints open now as there were >>> a >>> > couple of days ago. If you're interested in assisting with this effort, >>> > let me know. 
>>> > >>> > Fourth, after moving non-Stein blueprints to RFE bugs, only Stein >>> > related blueprints should be open in launchpad. Once Stein is released, >>> > we'll go ahead disable keystone blueprints. >>> > >>> > Finally, we need to overhaul a portion of our contributor guide to >>> > include information around this process. The goal should be to make >>> that >>> > documentation clear enough that we don't have this issue again. I plan >>> > on getting something up for review soon, but I don't have anything >>> > currently, so if someone is interested in taking a shot at writing this >>> > document, please feel free to do so. Morgan has a patch up to replace >>> > blueprint usage with RFE bugs in the specification template [3]. >>> > >>> > We can air out any comments, questions, or concerns here in the thread. >>> >>> What should we do about tracking "deprecated-as-of-*" and >>> "removed-as-of-*" work? I never liked how this was done with blueprints but >>> I'm not sure how we would do it with bugs. One tracking bug for all >>> deprecated things in a cycle? One bug for each? A Trello/Storyboard board >>> or etherpad? Do we even need to track it with an external tool - perhaps we >>> can just keep a running list in a release note that we add to over the >>> cycle? >>> >> > I agree. The solution that is jumping out at me is to track one bug for > deprecated things and one for removed things per release, so similar to > what we do now with blueprints. We would have to make sure we tag commits > properly, so they are all tracked in the bug report. Creating a bug for > everything that is deprecated or removed would be nice for capturing > specific details, but it also feels like it will introduce more churn to > the process. > > I guess I'm assuming there are users that like to read every commit that > has deprecated something or removed something in a release. If we don't > need to operate under that assumption, then a release note would do just > fine and I'm all for simplifying the process. > > >>> Thanks for tackling this cleanup work. >>> >>> > >>> > Thanks, >>> > >>> > Lance >>> > >>> > [0] https://blueprints.launchpad.net/keystone >>> > [1] http://specs.openstack.org/openstack/keystone-specs/ >>> > [2] https://etherpad.openstack.org/p/keystone-blueprint-cleanup >>> > [3] https://review.openstack.org/#/c/625282/ >>> > Email had 1 attachment: >>> > + signature.asc >>> > 1k (application/pgp-signature) >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Feb 14 15:57:08 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Feb 2019 10:57:08 -0500 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <0A24656E-D2B9-4668-A105-D6546896C6DA@leafe.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> <0A24656E-D2B9-4668-A105-D6546896C6DA@leafe.com> Message-ID: <693ea803-3dd9-b118-85a3-dbb2b05e084d@gmail.com> On 02/14/2019 10:08 AM, Ed Leafe wrote: > On Feb 14, 2019, at 8:16 AM, Jay Pipes wrote: >> Placement allocations currently have a distinct lack of temporal awareness. An allocation either exists or doesn't exist -- there is no concept of an allocation "end time". What this means is that placement cannot be used for a reservation system. I used to think this was OK, and that reservation systems should be layered on top of the simpler placement data model. 
>> >> I no longer believe this is a good thing, and feel that placement is actually the most appropriate service for modeling a reservation system. If I were to have a "do-over", I would have added the concept of a start and end time to the allocation. > > I’m not clear on how you are envisioning this working. Will Placement somehow delete an allocation at this end time? IMO this sort of functionality should really be done by a system external to Placement. But perhaps you are thinking of something completely different, and I’m just a little thick? I'm not actually proposing this functionality be added to placement at this time. Just remarking that had I to do things over again, I would have modeled an end time in the allocation concept. The end times are not yet upon us, fortunately. Best, -jay From ed at leafe.com Thu Feb 14 15:57:43 2019 From: ed at leafe.com (Ed Leafe) Date: Thu, 14 Feb 2019 09:57:43 -0600 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: <693ea803-3dd9-b118-85a3-dbb2b05e084d@gmail.com> References: <1138f8f0-9314-3b2e-06c5-5cc848650cd2@gmail.com> <0A24656E-D2B9-4668-A105-D6546896C6DA@leafe.com> <693ea803-3dd9-b118-85a3-dbb2b05e084d@gmail.com> Message-ID: <90304E45-C1ED-4515-87EA-24C4BC4B9CAC@leafe.com> On Feb 14, 2019, at 9:57 AM, Jay Pipes wrote: > > The end times are not yet upon us, fortunately. I see what you did there... -- Ed Leafe From mriedemos at gmail.com Thu Feb 14 15:59:29 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 14 Feb 2019 09:59:29 -0600 Subject: [nova][qa][cinder] CI job changes In-Reply-To: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> References: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> Message-ID: <7635bdba-6202-27d6-1098-4e287ffc26cb@gmail.com> Updates inline. On 2/5/2019 8:35 AM, Matt Riedemann wrote: > I'd like to propose some changes primarily to the CI jobs that run on > nova changes, but also impact cinder and tempest. > > 1. Drop the nova-multiattach job and move test coverage to other jobs > > This is actually an old thread [1] and I had started the work but got > hung up on a bug that was teased out of one of the tests when running in > the multi-node tempest-slow job [2]. For now I've added a conditional > skip on that test if running in a multi-node job. The open changes are > here [3]. Done: https://review.openstack.org/#/q/status:merged+topic:drop-multiattach-job > > 2. Only run compute.api and scenario tests in nova-next job and run > under python3 only > > The nova-next job is a place to test new or advanced nova features like > placement and cells v2 when those were still optional in Newton. It > currently runs with a few changes from the normal tempest-full job: > > * configures service user tokens > * configures nova console proxy to use TLS > * disables the resource provider association refresh interval > * it runs the post_test_hook which runs some commands like > archive_delete_rows, purge, and looks for leaked resource allocations [4] > > Like tempest-full, it runs the non-slow tempest API tests concurrently > and then the scenario tests serially. I'm proposing that we: > > a) change that job to only run tempest compute API tests and scenario > tests to cut down on the number of tests to run; since the job is really > only about testing nova features, we don't need to spend time running > glance/keystone/cinder/neutron tests which don't touch nova. 
Proposed: https://review.openstack.org/#/c/636459/ (+2 from Stephen) > > b) run it with python3 [5] which is the direction all jobs are moving > anyway Done: https://review.openstack.org/#/c/634739/ > > 3. Drop the integrated-gate (py2) template jobs (from nova) > > Nova currently runs with both the integrated-gate and > integrated-gate-py3 templates, which adds a set of tempest-full and > grenade jobs each to the check and gate pipelines. I don't think we need > to be gating on both py2 and py3 at this point when it comes to > tempest/grenade changes. Tempest changes are still gating on both so we > have coverage there against breaking changes, but I think anything > that's py2 specific would be caught in unit and functional tests (which > we're running on both py27 and py3*). Proposed: https://review.openstack.org/#/c/634949/ (+2 from Stephen) -- Thanks, Matt From colleen at gazlene.net Thu Feb 14 16:00:32 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 14 Feb 2019 17:00:32 +0100 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> Message-ID: <1550160032.3273902.1658015488.111ADCBA@webmail.messagingengine.com> On Thu, Feb 14, 2019, at 4:50 PM, Morgan Fainberg wrote: > I think a `git blame` or history of the deprecated release note is nice, it > centralizes out tracking of removed/deprecated items to the git log itself > rather than some external tracker that may or may not be available forever. > This way as long as the git repo is maintained, our tracking for a given > release is also tracked. > > Specs and bugs are nice, but the deprecated bug # for a given release is > fairly opaque. Other bugs might have more context in the bug, but if it's > just a list of commits, I don't see a huge win. I'm also +1 on just keeping it in the release notes. > > On Thu, Feb 14, 2019, 10:28 Lance Bragstad > >> On Thu, Feb 14, 2019, 06:07 Colleen Murphy >>> What should we do about tracking "deprecated-as-of-*" and > >>> "removed-as-of-*" work? I never liked how this was done with blueprints but > >>> I'm not sure how we would do it with bugs. One tracking bug for all > >>> deprecated things in a cycle? One bug for each? A Trello/Storyboard board > >>> or etherpad? Do we even need to track it with an external tool - perhaps we > >>> can just keep a running list in a release note that we add to over the > >>> cycle? > >>> > >> > > I agree. The solution that is jumping out at me is to track one bug for > > deprecated things and one for removed things per release, so similar to > > what we do now with blueprints. We would have to make sure we tag commits > > properly, so they are all tracked in the bug report. Creating a bug for > > everything that is deprecated or removed would be nice for capturing > > specific details, but it also feels like it will introduce more churn to > > the process. > > > > I guess I'm assuming there are users that like to read every commit that > > has deprecated something or removed something in a release. If we don't > > need to operate under that assumption, then a release note would do just > > fine and I'm all for simplifying the process. > > I think the reason we have release notes is so people *don't* have to read every commit. 
Colleen From mthode at mthode.org Thu Feb 14 16:11:15 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 14 Feb 2019 10:11:15 -0600 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: Message-ID: <20190214161115.2virevrqttkf74ra@mthode.org> On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > Hi! Can you please add yamlloader library to global requirements? > https://pypi.org/project/yamlloader/ > > It provides ability to preserve key order in dicts, it supports either > python 2.7 and python 3.x, it provides better performance than built-in > functions. > Thank you. I'd like to know a little more about why we need this, yaml as a spec itself doesn't guarantee order so order should be stored somewhere else. If all you need is ordereddict support something like this may be better then adding yet another lib. https://gist.github.com/enaeseth/844388 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lbragstad at gmail.com Thu Feb 14 16:24:29 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 10:24:29 -0600 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: <1550160032.3273902.1658015488.111ADCBA@webmail.messagingengine.com> References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> <1550160032.3273902.1658015488.111ADCBA@webmail.messagingengine.com> Message-ID: <07f1042a-1f84-87a5-1505-38ce1705429c@gmail.com> Sounds good to me. We should probably find a home for this information. Somewhere in our contributor guide, perhaps? On 2/14/19 10:00 AM, Colleen Murphy wrote: > On Thu, Feb 14, 2019, at 4:50 PM, Morgan Fainberg wrote: >> I think a `git blame` or history of the deprecated release note is nice, it >> centralizes out tracking of removed/deprecated items to the git log itself >> rather than some external tracker that may or may not be available forever. >> This way as long as the git repo is maintained, our tracking for a given >> release is also tracked. >> >> Specs and bugs are nice, but the deprecated bug # for a given release is >> fairly opaque. Other bugs might have more context in the bug, but if it's >> just a list of commits, I don't see a huge win. > I'm also +1 on just keeping it in the release notes. > >> On Thu, Feb 14, 2019, 10:28 Lance Bragstad > >>>> On Thu, Feb 14, 2019, 06:07 Colleen Murphy >>>> What should we do about tracking "deprecated-as-of-*" and >>>>> "removed-as-of-*" work? I never liked how this was done with blueprints but >>>>> I'm not sure how we would do it with bugs. One tracking bug for all >>>>> deprecated things in a cycle? One bug for each? A Trello/Storyboard board >>>>> or etherpad? Do we even need to track it with an external tool - perhaps we >>>>> can just keep a running list in a release note that we add to over the >>>>> cycle? >>>>> >>> I agree. The solution that is jumping out at me is to track one bug for >>> deprecated things and one for removed things per release, so similar to >>> what we do now with blueprints. We would have to make sure we tag commits >>> properly, so they are all tracked in the bug report. Creating a bug for >>> everything that is deprecated or removed would be nice for capturing >>> specific details, but it also feels like it will introduce more churn to >>> the process. 
>>> >>> I guess I'm assuming there are users that like to read every commit that >>> has deprecated something or removed something in a release. If we don't >>> need to operate under that assumption, then a release note would do just >>> fine and I'm all for simplifying the process. >>> > I think the reason we have release notes is so people *don't* have to read every commit. > > Colleen > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From vgvoleg at gmail.com Thu Feb 14 16:46:06 2019 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Thu, 14 Feb 2019 19:46:06 +0300 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: <20190214161115.2virevrqttkf74ra@mthode.org> References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: Matthew, we use not only load, but also dump. We can't use custom constructor and default representer - the output will be terrible. This custom constructor contains about 50 lines of code, representer would have a similar count. Also, we should think about compatibility with Python 2.7, 3.x and about it's performance. Summary, we would have about 150 lines of code, which is just copy-paste from `yamlloader` library. IMHO, it is better to use existing solutions. чт, 14 февр. 2019 г. в 19:14, Matthew Thode : > On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > > Hi! Can you please add yamlloader library to global requirements? > > https://pypi.org/project/yamlloader/ > > > > It provides ability to preserve key order in dicts, it supports either > > python 2.7 and python 3.x, it provides better performance than built-in > > functions. > > Thank you. > > I'd like to know a little more about why we need this, yaml as a spec > itself doesn't guarantee order so order should be stored somewhere else. > > If all you need is ordereddict support something like this may be better > then adding yet another lib. > > https://gist.github.com/enaeseth/844388 > > -- > Matthew Thode > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 14 16:48:03 2019 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 14 Feb 2019 11:48:03 -0500 Subject: [tripleo] Plan around switching Podman to default Message-ID: Pacemaker provided by CentOS7 doesn't work with Podman, and only works with Docker. Podman is already the default on the Undercloud, and this is fine, as we don't deploy Pacemaker on this node. However for the Overcloud, it causes problem as upstream is tested on CentOS7 and downstream is being tested on RHEL8. With that said, I propose that we: - Keep Docker as the default on the Overcloud until CentOS8 is out. - Switch downstream to use Podman on the Overcloud (since we run RHEL8 it's fine). - Switch all CI jobs except OVB to NOT deploy Pacemaker and switch to Podman. - Once CentOS8 is out, we revert the downstream only patch and land it upstream. Any feedback / concerns are welcome. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 14 16:51:02 2019 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 14 Feb 2019 11:51:02 -0500 Subject: [tripleo] Plan around switching Podman to default In-Reply-To: References: Message-ID: Sorry I forgot to mention Standalone, but it's in the same situation as Overcloud. Let's keep Docker by default on both until CentOS8 is out. 
On Thu, Feb 14, 2019 at 11:48 AM Emilien Macchi wrote: > Pacemaker provided by CentOS7 doesn't work with Podman, and only works > with Docker. > > Podman is already the default on the Undercloud, and this is fine, as we > don't deploy Pacemaker on this node. > However for the Overcloud, it causes problem as upstream is tested on > CentOS7 and downstream is being tested on RHEL8. > > With that said, I propose that we: > - Keep Docker as the default on the Overcloud until CentOS8 is out. > - Switch downstream to use Podman on the Overcloud (since we run RHEL8 > it's fine). > - Switch all CI jobs except OVB to NOT deploy Pacemaker and switch to > Podman. > - Once CentOS8 is out, we revert the downstream only patch and land it > upstream. > > Any feedback / concerns are welcome. > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Feb 14 17:02:49 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Feb 2019 17:02:49 +0000 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> References: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> Message-ID: <20190214170248.7t7snjz4pacl6lpe@yuggoth.org> On 2019-02-14 14:29:44 +0100 (+0100), Thierry Carrez wrote: > Well said. I think we need to use another term for this program, > to avoid colliding with other forms of mentoring or on-boarding > help. > > On the #openstack-tc channel, I half-jokingly suggested to call > this the 'Padawan' program [...] A more traditional Franco-English term for this might be "protoge" or, if you prefer Japanese, perhaps 弟子 (deshi). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Feb 14 17:09:04 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Feb 2019 17:09:04 +0000 Subject: Call for help! 'U' Release name Mandarin speakers In-Reply-To: <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> References: <20190214021836.GD12795@thor.bakeyournoodle.com> <0E16E57B-98E7-498A-A810-19D3AD1ED028@vexxhost.com> Message-ID: <20190214170903.x56hubdrtjfyidwp@yuggoth.org> On 2019-02-13 22:09:35 -0500 (-0500), Mohammed Naser wrote: [...] > So: chatting with some folks from China and we’ve got the > interesting problem that Pinyin does not have a U! [...] Those with keen memories might recall we had the exact same problem when needing to come up with a name for our "I" release corresponding to the Hong Kong summit, and eventually settled on a local street which had an English name (Ice House Street). We wanted a Chinese place name, but as noted there were none in Pinyin starting with an "I" vowel. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From andr.kurilin at gmail.com Thu Feb 14 17:09:46 2019 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 14 Feb 2019 19:09:46 +0200 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: <20190214161115.2virevrqttkf74ra@mthode.org> References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: чт, 14 февр. 2019 г. в 18:15, Matthew Thode : > On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > > Hi! Can you please add yamlloader library to global requirements? > > https://pypi.org/project/yamlloader/ > > > > It provides ability to preserve key order in dicts, it supports either > > python 2.7 and python 3.x, it provides better performance than built-in > > functions. > > Thank you. > > I'd like to know a little more about why we need this, yaml as a spec > itself doesn't guarantee order so order should be stored somewhere else. > > If all you need is ordereddict support something like this may be better > then adding yet another lib. > > > this may be better then adding yet another lib If someone raised this question before, probably, we would have less oslo libs. > https://gist.github.com/enaeseth/844388 > > -- > Matthew Thode > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bansalnehal26 at gmail.com Thu Feb 14 04:28:16 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Thu, 14 Feb 2019 09:58:16 +0530 Subject: Zun : Error in "docker network create" with kuryr as driver, as required by Zun Message-ID: Hi, I have installed OpenStack Queens release. I wish to run docker containers on it as first class residents like VMs therefore, I installed Zun. Zun requires Kuryr-libnetwork on the compute node. Everything got installed correctly but verifying the installation with docker network create --driver kuryr --ipam-driver kuryr --subnet 192.168.4.0/24 --gateway=192.168.4.1 test_net gives the following error: Error response from daemon: legacy plugin: Plugin.Activate: {"message":"page not found"} The /var/log/syslog file gives this error: Feb 13 09:25:01 compute dockerd[27830]: time="2019-02-13T09:25:01.006618155+05:30" level=error msg="Handler for POST /v1.39/networks/create returned error: legacy plugin: Plugin.Activate: {\"message\":\"page not found\"}\n". I have asked the question on ask.openstack.org too but have received no answers. I am new to both OpenStack and Docker. Please let me know if you need any more information. Thank you. Regards, Nehal Bansal -------------- next part -------------- An HTML attachment was scrubbed... URL: From vgvoleg at gmail.com Thu Feb 14 09:14:40 2019 From: vgvoleg at gmail.com (Oleg Ovcharuk) Date: Thu, 14 Feb 2019 12:14:40 +0300 Subject: [requirements][mistral] Add yamlloader to global requirements Message-ID: Hi! Can you please add yamlloader library to global requirements? https://pypi.org/project/yamlloader/ It provides ability to preserve key order in dicts, it supports either python 2.7 and python 3.x, it provides better performance than built-in functions. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kbalaji.uk at gmail.com Thu Feb 14 10:01:20 2019 From: kbalaji.uk at gmail.com (bkannadassan) Date: Thu, 14 Feb 2019 03:01:20 -0700 (MST) Subject: Diskimage-builder lvm In-Reply-To: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> References: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> Message-ID: <1550138480276-0.post@n7.nabble.com> Not sure if you figured out. I hit the same for centos and I need to include dracut-regenerate in disk-image-create. Which did install lvm2 and other minimal packages.. -- Sent from: http://openstack.10931.n7.nabble.com/Operators-f4774.html From openstack at nemebean.com Thu Feb 14 17:15:59 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 14 Feb 2019 11:15:59 -0600 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: On 2/14/19 10:46 AM, Oleg Ovcharuk wrote: > Matthew, we use not only load, but also dump. We can't use custom > constructor and default representer - the output will be terrible. > This custom constructor contains about 50 lines of code, representer > would have a similar count. Also, we should think about compatibility > with Python 2.7, 3.x and about it's performance. > Summary, we would have about 150 lines of code, which is just copy-paste > from `yamlloader` library. > IMHO, it is better to use existing solutions. You don't need a complex representer to dump OrderedDicts. It can be done in about three lines: https://github.com/cybertron/tripleo-scripts/blob/105381d4f080394e68a40327c398d32eb9f4f580/net_processing.py#L302 That's the code I used when I wanted to dump dicts in a particular order. Once you add the representer OrderedDicts are handled as you would expect. > > чт, 14 февр. 2019 г. в 19:14, Matthew Thode >: > > On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > > Hi! Can you please add yamlloader library to global requirements? > > https://pypi.org/project/yamlloader/ > > > > It provides ability to preserve key order in dicts, it supports > either > > python 2.7 and python 3.x, it provides better performance than > built-in > > functions. > > Thank you. > > I'd like to know a little more about why we need this, yaml as a spec > itself doesn't guarantee order so order should be stored somewhere else. > > If all you need is ordereddict support something like this may be better > then adding yet another lib. > > https://gist.github.com/enaeseth/844388 > > -- > Matthew Thode > From fungi at yuggoth.org Thu Feb 14 17:18:23 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Feb 2019 17:18:23 +0000 Subject: [infra][Release-job-failures] Release of openstack/puppet-aodh failed In-Reply-To: <9eea0f55-17d0-8f03-39c1-2d865d3d266e@openstack.org> References: <9eea0f55-17d0-8f03-39c1-2d865d3d266e@openstack.org> Message-ID: <20190214171823.g72orauret47il77@yuggoth.org> On 2019-02-14 11:13:52 +0100 (+0100), Thierry Carrez wrote: [...] > Error is: > Forge API auth failed with code: 400 > > However it's a bit weird, since that release was made 4 weeks ago. > Also we don't seem to upload things to the Puppet Forge... > > Was it some kind of a test ? It looks like I'm missing context. It was reenqueued this morning at Tobias Urdin's request in the #openstack-infra IRC channel. 
We're continuing to attempt to get automated Puppetforge publication working, however it currently seems to be choking on the credentials we're supplying so I need to find a few minutes to decrypt them manually and confirm they match what we have on record for the account. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Thu Feb 14 17:19:03 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 Feb 2019 09:19:03 -0800 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? Message-ID: Hey all, Recently, we had a customer try the following command as a non-admin with a policy role granted in policy.json to allow live migrate: "os_compute_api:os-migrate-server:migrate_live": "rule:admin_api or role:Operator" The scenario is that they have a server in project A and a user in project B with role:Operator and the user makes a call to live migrate the server. But when they call the API, they get the following error response: {"itemNotFound": {"message": "Instance could not be found.", "code": 404}} A superficial look through the code shows that the live migrate should work, because we have appropriate policy checks in the API, and the request makes it past those checks because the policy.json has been set correctly. A common pattern in our APIs is that we first compute_api.get() the instance object and then we call the server action (live migrate, stop, start, etc) with it after we retrieve it. In this scenario, the compute_api.get() fails with NotFound. And the reason it fails with NotFound is because, much lower level, at the DB layer, we have a keyword arg called 'project_only' which, when True, will scope a database query to the RequestContext.project_id only. We have hard-coded 'project_only=True' for the instance get query. So, when the user in project B with role:Operator tries to retrieve the instance record in project A, with appropriate policy rules set, it will fail because 'project_only=True' and the request context is project B, while the instance is in project A. My question is: can we get rid of the hard-coded 'project_only=True' at the database layer? This seems like something that should be enforced at the API layer and not at the database layer. It reminded me of an effort we had a few years ago where we removed other hard-coded policy enforcement from the database layer [1][2]. I've uploaded a WIP patch to demonstrate the proposed change [3]. Can anyone think of any potential problems with doing this? I'd like to be able to remove it so that operators are able use policy to allow non-admin users with appropriately configured roles to run server actions. Cheers, -melanie [1] https://blueprints.launchpad.net/nova/+spec/nova-api-policy-final-part [2] https://review.openstack.org/#/q/topic:bp/nova-api-policy-final-part+(status:open+OR+status:merged) [3] https://review.openstack.org/637010 From mthode at mthode.org Thu Feb 14 17:23:53 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 14 Feb 2019 11:23:53 -0600 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: <20190214172353.6znldpvawmryx6mu@mthode.org> On 19-02-14 11:15:59, Ben Nemec wrote: > > > On 2/14/19 10:46 AM, Oleg Ovcharuk wrote: > > Matthew, we use not only load, but also dump. 
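For reference on the ordering point being quoted here: plain PyYAML can round-trip OrderedDicts in both directions with a small loader/dumper pair. The sketch below is illustrative only (it is not what Mistral ships, and the helper names are invented for this example), but it covers load and dump in well under 50 lines:

    import collections

    import yaml


    def ordered_load(stream, Loader=yaml.SafeLoader):
        """Load YAML, building OrderedDicts instead of plain dicts."""
        class OrderedLoader(Loader):
            pass

        def construct_mapping(loader, node):
            loader.flatten_mapping(node)
            return collections.OrderedDict(loader.construct_pairs(node))

        OrderedLoader.add_constructor(
            yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, construct_mapping)
        return yaml.load(stream, OrderedLoader)


    def ordered_dump(data, stream=None, Dumper=yaml.SafeDumper, **kwargs):
        """Dump OrderedDicts as ordinary YAML mappings, keeping key order."""
        class OrderedDumper(Dumper):
            pass

        OrderedDumper.add_representer(
            collections.OrderedDict,
            lambda dumper, value: dumper.represent_dict(value.items()))
        return yaml.dump(data, stream, OrderedDumper, **kwargs)


    # e.g. ordered_dump(ordered_load("b: 1\na: 2")) keeps "b" before "a"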
We can't use custom > > constructor and default representer - the output will be terrible. > > This custom constructor contains about 50 lines of code, representer > > would have a similar count. Also, we should think about compatibility > > with Python 2.7, 3.x and about it's performance. > > Summary, we would have about 150 lines of code, which is just copy-paste > > from `yamlloader` library. > > IMHO, it is better to use existing solutions. > > You don't need a complex representer to dump OrderedDicts. It can be done in > about three lines: https://github.com/cybertron/tripleo-scripts/blob/105381d4f080394e68a40327c398d32eb9f4f580/net_processing.py#L302 > > That's the code I used when I wanted to dump dicts in a particular order. > Once you add the representer OrderedDicts are handled as you would expect. > > > > > чт, 14 февр. 2019 г. в 19:14, Matthew Thode > >: > > > > On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > > > Hi! Can you please add yamlloader library to global requirements? > > > https://pypi.org/project/yamlloader/ > > > > > > It provides ability to preserve key order in dicts, it supports > > either > > > python 2.7 and python 3.x, it provides better performance than > > built-in > > > functions. > > > Thank you. > > > > I'd like to know a little more about why we need this, yaml as a spec > > itself doesn't guarantee order so order should be stored somewhere else. > > > > If all you need is ordereddict support something like this may be better > > then adding yet another lib. > > > > https://gist.github.com/enaeseth/844388 > > > > -- Matthew Thode > > Thanks for this, hope this can be used instead of adding yet another lib to track -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jaypipes at gmail.com Thu Feb 14 17:27:53 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Feb 2019 12:27:53 -0500 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: References: Message-ID: On 02/14/2019 12:19 PM, melanie witt wrote: > Hey all, > > Recently, we had a customer try the following command as a non-admin > with a policy role granted in policy.json to allow live migrate: > >   "os_compute_api:os-migrate-server:migrate_live": "rule:admin_api or > role:Operator" > > The scenario is that they have a server in project A and a user in > project B with role:Operator and the user makes a call to live migrate > the server. > > But when they call the API, they get the following error response: > >   {"itemNotFound": {"message": "Instance could not be > found.", "code": 404}} > > A superficial look through the code shows that the live migrate should > work, because we have appropriate policy checks in the API, and the > request makes it past those checks because the policy.json has been set > correctly. > > A common pattern in our APIs is that we first compute_api.get() the > instance object and then we call the server action (live migrate, stop, > start, etc) with it after we retrieve it. In this scenario, the > compute_api.get() fails with NotFound. > > And the reason it fails with NotFound is because, much lower level, at > the DB layer, we have a keyword arg called 'project_only' which, when > True, will scope a database query to the RequestContext.project_id only. > We have hard-coded 'project_only=True' for the instance get query. 
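To make the failure mode just quoted concrete, here is a self-contained toy model of the two layers (deliberately not nova's real code or schema; the names and structures are invented for illustration): the API-layer policy check says yes, then the hard-coded project scoping at the DB layer filters the row out and the API reports a 404.

    INSTANCES = [
        {'uuid': 'abc123', 'project_id': 'project-a', 'host': 'compute-1'},
    ]


    def api_policy_allows(context):
        # Stand-in for the API-layer policy check, configured like the
        # "rule:admin_api or role:Operator" example in this thread.
        return context['is_admin'] or 'Operator' in context['roles']


    def db_instance_get(context, uuid, project_only=True):
        # Stand-in for the DB-layer lookup with the hard-coded scoping.
        for inst in INSTANCES:
            if inst['uuid'] != uuid:
                continue
            if project_only and inst['project_id'] != context['project_id']:
                # The row exists, but the query silently drops it.
                continue
            return inst
        raise LookupError('Instance could not be found.')


    operator_ctx = {'project_id': 'project-b', 'roles': ['Operator'],
                    'is_admin': False}

    assert api_policy_allows(operator_ctx)       # API layer: allowed
    try:
        db_instance_get(operator_ctx, 'abc123')  # DB layer: filtered out
    except LookupError as exc:
        print(exc)                               # surfaces to the user as 404

Removing the hard-coded project_only=True (as the WIP patch referenced earlier in the thread demonstrates) makes the DB layer defer to whatever the policy layer already decided.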
> > So, when the user in project B with role:Operator tries to retrieve the > instance record in project A, with appropriate policy rules set, it will > fail because 'project_only=True' and the request context is project B, > while the instance is in project A. > > My question is: can we get rid of the hard-coded 'project_only=True' at > the database layer? This seems like something that should be enforced at > the API layer and not at the database layer. It reminded me of an effort > we had a few years ago where we removed other hard-coded policy > enforcement from the database layer [1][2]. I've uploaded a WIP patch to > demonstrate the proposed change [3]. > > Can anyone think of any potential problems with doing this? I'd like to > be able to remove it so that operators are able use policy to allow > non-admin users with appropriately configured roles to run server actions. +1 to removing these hard-coded policy-like things. I can't think of any potential "problems" with removing the project_only thing, actually. Best, -jay From ignaziocassano at gmail.com Thu Feb 14 17:27:54 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 14 Feb 2019 18:27:54 +0100 Subject: Diskimage-builder lvm In-Reply-To: <1550138480276-0.post@n7.nabble.com> References: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> <1550138480276-0.post@n7.nabble.com> Message-ID: I did it and worked fine. Thanks Ignazio Il giorno Gio 14 Feb 2019 18:13 bkannadassan ha scritto: > Not sure if you figured out. I hit the same for centos and I need to > include > dracut-regenerate in disk-image-create. Which did install lvm2 and other > minimal packages.. > > > > -- > Sent from: http://openstack.10931.n7.nabble.com/Operators-f4774.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 14 17:28:33 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Feb 2019 12:28:33 -0500 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: <20190214024541.GE12795@thor.bakeyournoodle.com> References: <20190214024541.GE12795@thor.bakeyournoodle.com> Message-ID: Tony Breeds writes: > Hi all, > Back in the dim dark (around Sept 2017) we discussed the idea of > publishing the constraints files statically (instead of via gitweb)[1]. > > the TL;DR: it's nice to be able to use > https://release.openstack.org/constraints/upper/ocata instead of > http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata in tox.ini > > At the Dublin (yes Dublin) PTG Jim, Jeremy, Clark and I discussed how > we'd go about doing that. > > The notes we have are at: > https://etherpad.openstack.org/p/publish-upper-constraints > > There was a reasonable ammount of discussion about merging and root-markers > which I don't recall and only barely understood at the time. > > I have no idea how much of the first 3 items I can do vs calling on others. > I'm happy to do anything that I can ... is it reasonable to get this > done before RC1 (March 18th ish)[2]? > > Yours Tony. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122333.html > [2] https://releases.openstack.org/stein/schedule.html#s-rc1 Could we do it with redirects, instead of publishing copies of files? 
-- Doug From doug at doughellmann.com Thu Feb 14 17:32:17 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Feb 2019 12:32:17 -0500 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: Message-ID: Oleg Ovcharuk writes: > Hi! Can you please add yamlloader library to global requirements? > https://pypi.org/project/yamlloader/ > > It provides ability to preserve key order in dicts, it supports either > python 2.7 and python 3.x, it provides better performance than built-in > functions. > Thank you. The global requirements list is managed through the openstack/requirements repository, and anyone is free to propose changes there. The README.rst file at the top of the repository includes the basic instructions, but feel free to post questions in the #openstack-requirements IRC channel on freenode, too. I think we're likely to need a longer conversation about why adding a second parser is better than using PyYAML everywhere, whether we need to try to transition, etc. so be prepared to answer those sorts of questions in the review. -- Doug From fungi at yuggoth.org Thu Feb 14 17:34:00 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Feb 2019 17:34:00 +0000 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: <20190214173400.xvqjef6xqxuqvug3@yuggoth.org> On 2019-02-14 11:15:59 -0600 (-0600), Ben Nemec wrote: [...] > You don't need a complex representer to dump OrderedDicts. It can > be done in about three lines [...] What's more, once we're exclusively on Python >=3.6 (not that much longer, really!) ordering is guaranteed by the normal dict type with no special handling required. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Feb 14 17:38:06 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Feb 2019 17:38:06 +0000 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <20190214170248.7t7snjz4pacl6lpe@yuggoth.org> References: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> <20190214170248.7t7snjz4pacl6lpe@yuggoth.org> Message-ID: <20190214173806.cfs5fmsisiwxfoea@yuggoth.org> On 2019-02-14 17:02:49 +0000 (+0000), Jeremy Stanley wrote: [...] > A more traditional Franco-English term for this might be "protoge" [...] And lest people think me terrible at spelling, I did in fact mean "protege" there. Sorry! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Feb 14 17:40:02 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Feb 2019 12:40:02 -0500 Subject: [all][tc][foundation] looking for feedback on acceptance criteria for new open infrastructure projects Message-ID: Claire posted this request to the foundation mailing list [1], but in case any contributors are not subscribed there it seemed like a good idea to mention it here on this list, too. 
The OpenStack Foundation Board and staff are looking for feedback from the community about the criteria used to approve new Open Infrastructure Projects (the name for new top level projects under the foundation's umbrella). Moving projects like Zuul, Airship, StarlingX, and Kata from the pilot phase to be fully recognized projects is a big milestone for our community, and it's important that everyone interested participate in the conversation about defining the process and criteria for accepting new projects. Please take a few minutes to read Claire's email and review the etherpad linked there in the next week or two, so that the Board will have your input before their next meeting. [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html -- Doug From sean.mcginnis at gmx.com Thu Feb 14 17:41:57 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 14 Feb 2019 11:41:57 -0600 Subject: [release] Release countdown for week R-7, February 18-22 Message-ID: <20190214174157.GA26193@sm-workstation> Welcome to this weeks release countdown email. Development Focus ----------------- The non-client library freeze is getting, followed closely by the client lib freeze. Teams should be focusing on wrapping up work that will require library changes to complete. General Information ------------------- As we get closer to the end of the cycle, we have deadlines coming up for client and non-client libraries to ensure any dependency issues are worked out and we have time to make any critical fixes before the final release candidates. To this end, it is good practice to release libraries throughout the cycle once they have accumulated any significant functional changes. Upcoming Deadlines & Dates -------------------------- Non-client library freeze: February 28 Stein-3 milestone: March 7 RC1 deadline: March 21 -- Sean McGinnis (smcginnis) From lbragstad at gmail.com Thu Feb 14 18:47:04 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 12:47:04 -0600 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: References: Message-ID: On 2/14/19 11:19 AM, melanie witt wrote: > Hey all, > > Recently, we had a customer try the following command as a non-admin > with a policy role granted in policy.json to allow live migrate: > >   "os_compute_api:os-migrate-server:migrate_live": "rule:admin_api or > role:Operator" > > The scenario is that they have a server in project A and a user in > project B with role:Operator and the user makes a call to live migrate > the server. > > But when they call the API, they get the following error response: > >   {"itemNotFound": {"message": "Instance could not be > found.", "code": 404}} > > A superficial look through the code shows that the live migrate should > work, because we have appropriate policy checks in the API, and the > request makes it past those checks because the policy.json has been > set correctly. > > A common pattern in our APIs is that we first compute_api.get() the > instance object and then we call the server action (live migrate, > stop, start, etc) with it after we retrieve it. In this scenario, the > compute_api.get() fails with NotFound. > > And the reason it fails with NotFound is because, much lower level, at > the DB layer, we have a keyword arg called 'project_only' which, when > True, will scope a database query to the RequestContext.project_id > only. We have hard-coded 'project_only=True' for the instance get query. 
> > So, when the user in project B with role:Operator tries to retrieve > the instance record in project A, with appropriate policy rules set, > it will fail because 'project_only=True' and the request context is > project B, while the instance is in project A. This API sounds like a good fit for system-scope (e.g., a user with an Operator role on the system could call this API with a system-scoped token). > > My question is: can we get rid of the hard-coded 'project_only=True' > at the database layer? This seems like something that should be > enforced at the API layer and not at the database layer. It reminded > me of an effort we had a few years ago where we removed other > hard-coded policy enforcement from the database layer [1][2]. I've > uploaded a WIP patch to demonstrate the proposed change [3]. Cinder was having some discussions about removing these kind of checks from their database layer recently, too. Sounds like a good idea, and keystone has the flexibility with assignments (on projects, domains, and system) for services to make these decisions with context objects and updated policy check strings, as opposed to hard-coded checks and overloading role names to protect APIs like this. If me know if you need help groking the system-scope concept, or if you choose to pursue that route. > > Can anyone think of any potential problems with doing this? I'd like > to be able to remove it so that operators are able use policy to allow > non-admin users with appropriately configured roles to run server > actions. > > Cheers, > -melanie > > [1] > https://blueprints.launchpad.net/nova/+spec/nova-api-policy-final-part > [2] > https://review.openstack.org/#/q/topic:bp/nova-api-policy-final-part+(status:open+OR+status:merged) > [3] https://review.openstack.org/637010 > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lbragstad at gmail.com Thu Feb 14 18:51:19 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 12:51:19 -0600 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> Message-ID: Updating the thread since we talked about this quite a bit in the -tc channel, too [0] (sorry for duplicating across communication mediums!) TL;DR the usefulness of job descriptions is still a thing. To kick start that, I proposed an example to the current help wanted list to kick start what we want our "job descriptions" to look like [1], if we were to have them. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-14.log.html#t2019-02-14T16:53:55 [1] https://review.openstack.org/#/c/637025/ On 2/14/19 7:29 AM, Thierry Carrez wrote: > Colleen Murphy wrote: >> I feel like there is a bit of a disconnect between what the TC is >> asking for >> and what the current mentoring organizations are designed to provide. 
>> Thierry >> framed this as a "peer-mentoring offered" list, but mentoring doesn't >> quite >> capture everything that's needed. >> >> Mentorship programs like Outreachy, cohort mentoring, and the First >> Contact SIG >> are oriented around helping new people quickstart into the community, >> getting >> them up to speed on basics and helping them feel good about >> themselves and >> their contributions. The hope is that happy first-timers eventually >> become >> happy regular contributors which will eventually be a benefit to the >> projects, >> but the benefit to the projects is not the main focus. >> >> The way I see it, the TC Help Wanted list, as well as the new thing, >> is not >> necessarily oriented around newcomers but is instead advocating for the >> projects and meant to help project teams thrive by getting committed >> long-term >> maintainers involved and invested in solving longstanding technical >> debt that >> in some cases requires deep tribal knowledge to solve. It's not a >> thing for a >> newbie to step into lightly and it's not something that can be solved >> by a >> FC-liaison pointing at the contributor docs. Instead what's needed >> are mentors >> who are willing to walk through that tribal knowledge with a new >> contributor >> until they are equipped enough to help with the harder problems. >> >> For that reason I think neither the FC SIG or the mentoring cohort >> group, in >> their current incarnations, are the right groups to be managing this. >> The FC >> SIG's mission is "To provide a place for new contributors to come for >> information and advice" which does not fit the long-term goal of the >> help >> wanted list, and cohort mentoring's four topics ("your first patch", >> "first >> CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term >> and deeply >> technical requirements that a project-specific mentorship offering >> needs. >> Either of those groups could be rescoped to fit with this new >> mission, and >> there is certainly a lot of overlap, but my feeling is that this >> needs to be an >> effort conducted by the TC because the TC is the group that advocates >> for the >> projects. >> >> It's moreover not a thing that can be solved by another list of >> names. In addition >> to naming someone willing to do the several hours per week of mentoring, >> project teams that want help should be forced to come up with a specific >> description of 1) what the project is, 2) what kind of person >> (experience or >> interests) would be a good fit for the project, 3) specific work >> items with >> completion criteria that needs to be done - and it can be extremely >> challenging >> to reframe a project's longstanding issues in such concrete ways that >> make it >> clear what steps are needed to tackle the problem. It should >> basically be an >> advertisement that makes the project sound interesting and >> challenging and >> do-able, because the current help-wanted list and liaison lists and >> mentoring >> topics are too vague to entice anyone to step up. > > Well said. I think we need to use another term for this program, to > avoid colliding with other forms of mentoring or on-boarding help. 
> > On the #openstack-tc channel, I half-jokingly suggested to call this > the 'Padawan' program, but now that I'm sober, I feel like it might > actually capture what we are trying to do here: > > - Padawans are 1:1 trained by a dedicated, experienced team member > - Padawans feel the Force, they just need help and perspective to > master it > - Padawans ultimately join the team* and may have a padawan of their own > - Bonus geek credit for using Star Wars references > > * unless they turn to the Dark Side, always a possibility > >> Finally, I rather disagree that this should be something maintained >> as a page in >> individual projects' contributor guides, although we should certainly be >> encouraging teams to keep those guides up to date. It should be >> compiled by the >> TC and regularly updated by the project liaisons within the TC. A >> link to a >> contributor guide on docs.openstack.org doesn't give anyone an idea >> of what >> projects need the most help nor does it empower people to believe >> they can help >> by giving them an understanding of what the "job" entails. > > I think we need a single list. I guess it could be sourced from > several repositories, but at least for the start I would not > over-engineer it, just put it out there as a replacement for the > help-most-needed list and see if it flies. > > As a next step, I propose to document the concept on a TC page, then > reach out to the currently-listed teams on help-most-wanted to see if > there would be a volunteer interested in offering Padawan training and > bootstrap the new list, before we start to promote it more actively. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From james.slagle at gmail.com Thu Feb 14 19:02:26 2019 From: james.slagle at gmail.com (James Slagle) Date: Thu, 14 Feb 2019 14:02:26 -0500 Subject: [tripleo] Plan around switching Podman to default In-Reply-To: References: Message-ID: On Thu, Feb 14, 2019 at 11:59 AM Emilien Macchi wrote: > > Sorry I forgot to mention Standalone, but it's in the same situation as Overcloud. > Let's keep Docker by default on both until CentOS8 is out. > > On Thu, Feb 14, 2019 at 11:48 AM Emilien Macchi wrote: >> >> Pacemaker provided by CentOS7 doesn't work with Podman, and only works with Docker. >> >> Podman is already the default on the Undercloud, and this is fine, as we don't deploy Pacemaker on this node. >> However for the Overcloud, it causes problem as upstream is tested on CentOS7 and downstream is being tested on RHEL8. >> >> With that said, I propose that we: >> - Keep Docker as the default on the Overcloud until CentOS8 is out. >> - Switch downstream to use Podman on the Overcloud (since we run RHEL8 it's fine). >> - Switch all CI jobs except OVB to NOT deploy Pacemaker and switch to Podman. >> - Once CentOS8 is out, we revert the downstream only patch and land it upstream. I'd prefer to keep some gating jobs (non-OVB) that deploy pacemaker, so that we have coverage of pacemaker in the gate. I'm not sure if that's what your followup comment about standalone was implying. If it was then the plan sounds ok to me, otherwise I'd be concerned about the loss of pacemaker coverage. 
-- -- James Slagle -- From hjensas at redhat.com Thu Feb 14 20:05:54 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Thu, 14 Feb 2019 21:05:54 +0100 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Wed, 2019-02-13 at 13:48 +0000, NANTHINI A A wrote: > Hi , > As per your suggested change ,I am able to create network > A1,network A2 ; in second iteration network b1,network b2 .But I want > to reduce number of lines of variable params.hence tried using repeat > function .But it is not working .Can you please let me know what is > wrong here . > > I am getting following error . > root at cic-1:~# heat stack-create test2 -f main.yaml > WARNING (shell) "heat stack-create" is deprecated, please use > "openstack stack create" instead > ERROR: AttributeError: : resources.rg: : 'NoneType' object has no > attribute 'parameters' > > root at cic-1:~# cat main.yaml > heat_template_version: 2015-04-30 > > description: Shows how to look up list/map values by group index > > parameters: > sets: > type: comma_delimited_list > label: sets > default: "A,B,C" > net_names: > type: json > default: > repeat: > for each: > <%set%>: {get_param: sets} > template: > - network1: Network<%set>1 > network2: Network<%set>2 > I don't think you can use the repeat function in the parameters section. You could try using a OS::Heat::Value resource in the resources section below to iterate over the sets parameter. Then use get_attr to read the result of the heat value and pass that as names to nested.yaml. > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 3 > resource_def: > type: nested.yaml > properties: > # Note you have to pass the index and the entire list into > the > # nested template, resolving via %index% doesn't work > directly > # in the get_param here > index: "%index%" > names: {get_param: net_names} Alternatively you could put the repeat function here? names: repeat: for each: [ ... ] > > outputs: > all_values: > value: {get_attr: [rg, value]} > root at cic-1:~# > > > Thanks in advance. > > > Regards, > A.Nanthini > > From: Rabi Mishra [mailto:ramishra at redhat.com] > Sent: Wednesday, February 13, 2019 9:07 AM > To: NANTHINI A A > Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat > api > > > On Tue, Feb 12, 2019 at 7:48 PM NANTHINI A A < > nanthini.a.a at ericsson.com> wrote: > > Hi , > > I followed the example given in random.yaml .But getting below > > error .Can you please tell me what is wrong here . 
> > > > root at cic-1:~# heat stack-create test -f main.yaml > > WARNING (shell) "heat stack-create" is deprecated, please use > > "openstack stack create" instead > > ERROR: Property error: : > > resources.rg.resources[0].properties: : Unknown > > Property names > > root at cic-1:~# cat main.yaml > > heat_template_version: 2015-04-30 > > > > description: Shows how to look up list/map values by group index > > > > parameters: > > net_names: > > type: json > > default: > > - network1: NetworkA1 > > network2: NetworkA2 > > - network1: NetworkB1 > > network2: NetworkB2 > > > > > > resources: > > rg: > > type: OS::Heat::ResourceGroup > > properties: > > count: 3 > > resource_def: > > type: nested.yaml > > properties: > > # Note you have to pass the index and the entire list > > into the > > # nested template, resolving via %index% doesn't work > > directly > > # in the get_param here > > index: "%index%" > > > names: {get_param: net_names} > > property name should be same as parameter name in you nested.yaml > > > > outputs: > > all_values: > > value: {get_attr: [rg, value]} > > root at cic-1:~# cat nested.yaml > > heat_template_version: 2013-05-23 > > description: > > This is the template for I&V R6.1 base configuration to create > > neutron resources other than sg and vm for vyos vms > > parameters: > > net_names: > > changing this to 'names' should fix your error. > > type: json > > index: > > type: number > > resources: > > neutron_Network_1: > > type: OS::Neutron::Net > > properties: > > name: {get_param: [names, {get_param: index}, network1]} > > > > > > Thanks, > > A.Nanthini > > > > From: Rabi Mishra [mailto:ramishra at redhat.com] > > Sent: Tuesday, February 12, 2019 6:34 PM > > To: NANTHINI A A > > Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org > > Subject: Re: [Heat] Reg accessing variables of resource group heat > > api > > > > On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A < > > nanthini.a.a at ericsson.com> wrote: > > > Hi , > > > May I know in the following example given > > > > > > parameters: > > > resource_name_map: > > > - network1: foo_custom_name_net1 > > > network2: foo_custom_name_net2 > > > - network1: bar_custom_name_net1 > > > network2: bar_custom_name_net2 > > > what is the parameter type ? > > > > > > json > > > > > -- > Regards, > Rabi Mishra > From emilien at redhat.com Thu Feb 14 21:14:04 2019 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 14 Feb 2019 16:14:04 -0500 Subject: [tripleo] Plan around switching Podman to default In-Reply-To: References: Message-ID: yes let's keep some jobs with pacemaker too. My goal is to switch the majority of CI and keep small/minimum coverage for HA (even if deployed with Docker). On Thu, Feb 14, 2019 at 2:02 PM James Slagle wrote: > On Thu, Feb 14, 2019 at 11:59 AM Emilien Macchi > wrote: > > > > Sorry I forgot to mention Standalone, but it's in the same situation as > Overcloud. > > Let's keep Docker by default on both until CentOS8 is out. > > > > On Thu, Feb 14, 2019 at 11:48 AM Emilien Macchi > wrote: > >> > >> Pacemaker provided by CentOS7 doesn't work with Podman, and only works > with Docker. > >> > >> Podman is already the default on the Undercloud, and this is fine, as > we don't deploy Pacemaker on this node. > >> However for the Overcloud, it causes problem as upstream is tested on > CentOS7 and downstream is being tested on RHEL8. > >> > >> With that said, I propose that we: > >> - Keep Docker as the default on the Overcloud until CentOS8 is out. 
> >> - Switch downstream to use Podman on the Overcloud (since we run RHEL8 > it's fine). > >> - Switch all CI jobs except OVB to NOT deploy Pacemaker and switch to > Podman. > >> - Once CentOS8 is out, we revert the downstream only patch and land it > upstream. > > I'd prefer to keep some gating jobs (non-OVB) that deploy pacemaker, > so that we have coverage of pacemaker in the gate. I'm not sure if > that's what your followup comment about standalone was implying. If it > was then the plan sounds ok to me, otherwise I'd be concerned about > the loss of pacemaker coverage. > > -- > -- James Slagle > -- > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Feb 14 21:29:02 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 15 Feb 2019 08:29:02 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: References: <20190214024541.GE12795@thor.bakeyournoodle.com> Message-ID: <20190214212901.GI12795@thor.bakeyournoodle.com> On Thu, Feb 14, 2019 at 12:28:33PM -0500, Doug Hellmann wrote: > Could we do it with redirects, instead of publishing copies of files? I think we could. The only small complication would be handling eol tags. Part of the reason this cam up was because when we EOL'd liberty? the branch disappeared in git. I know with extended maintenance we're doing that way less often but we're still doing it. I don't know if we'd publish one .htaccess file per series or a single file with all the redirections in it. I suspect the former would require most of the same infrastructure work. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dtroyer at gmail.com Thu Feb 14 21:30:44 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 14 Feb 2019 15:30:44 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> Message-ID: On Thu, Feb 14, 2019 at 9:29 AM Lance Bragstad wrote: > On 1/31/19 9:59 AM, Lance Bragstad wrote: > Moving legacy clients to python-openstackclient > > Artem has done quite a bit of pre-work here [2], which has been useful in understanding the volume of work required to complete this goal in its entirety. I suggest we look for seams where we can break this into more consumable pieces of work for a given release. > > For example, one possible goal would be to work on parity with python-openstackclient and openstacksdk. A follow-on goal would be to move the legacy clients. Alternatively, we could start to move all the project clients logic into python-openstackclient, and then have another goal to implement the common logic gaps into openstacksdk. Arriving at the same place but using different paths. The approach still has to be discussed and proposed. I do think it is apparent that we'll need to break this up, however. > > Artem's call for help is still open [0]. Artem, has anyone reached out to you about co-championing the goal? Do you have suggestions for how you'd like to break up the work to make the goal more achievable, especially if you're the only one championing the initiative? I'll outline my thoughts on how to break these down in that etherpad. 
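(Aside for readers following the goal discussion: a minimal sketch of the kind of call-site change this consolidation implies. The auth URL, credentials and cloud name below are placeholders, not taken from this thread; this is illustrative only, not a definitive migration recipe.)

---
# Sketch only: a legacy python-novaclient call next to its openstacksdk
# equivalent. All credentials/cloud names here are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session as ks_session

auth = v3.Password(auth_url='https://keystone.example.org/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = ks_session.Session(auth=auth)

# Today: one client library per service.
from novaclient import client as nova_client
nova = nova_client.Client('2.1', session=sess)
legacy_servers = nova.servers.list()

# Target: a single SDK entry point for all services.
import openstack
conn = openstack.connect(cloud='example')  # an entry in clouds.yaml
sdk_servers = list(conn.compute.servers())
---

The hard part of the goal is less this mechanical swap and more the long tail of calls where the SDK does not yet have parity, which is why breaking the work into per-service pieces matters.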
Fortunately there are a lot of semi-independent parts here depending on how we want to slice the work (ie, do everything for a small number of projects or do one part for all projects). I am planning to scale back some involvement in StarlingX in 2019 to free up some time for this sort of thing and am willing to co-champion this with Artem. I'm likely to be involved anyway. :) dt -- Dean Troyer dtroyer at gmail.com From doug at doughellmann.com Thu Feb 14 21:33:19 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Feb 2019 16:33:19 -0500 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: <20190214212901.GI12795@thor.bakeyournoodle.com> References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> Message-ID: Tony Breeds writes: > On Thu, Feb 14, 2019 at 12:28:33PM -0500, Doug Hellmann wrote: > >> Could we do it with redirects, instead of publishing copies of files? > > I think we could. The only small complication would be handling eol > tags. Part of the reason this cam up was because when we EOL'd liberty? > the branch disappeared in git. I know with extended maintenance we're > doing that way less often but we're still doing it. > > I don't know if we'd publish one .htaccess file per series or a single > file with all the redirections in it. I suspect the former would > require most of the same infrastructure work. > > Yours Tony. What if we just had one .htaccess file in the releases (or requirements) repo, and we updated it when we closed branches? -- Doug From lbragstad at gmail.com Thu Feb 14 21:46:32 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 14 Feb 2019 15:46:32 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> Message-ID: <99fd20b3-caa6-4bdf-c5b8-129513f8a7d8@gmail.com> On 2/14/19 3:30 PM, Dean Troyer wrote: > On Thu, Feb 14, 2019 at 9:29 AM Lance Bragstad wrote: >> On 1/31/19 9:59 AM, Lance Bragstad wrote: >> Moving legacy clients to python-openstackclient >> >> Artem has done quite a bit of pre-work here [2], which has been useful in understanding the volume of work required to complete this goal in its entirety. I suggest we look for seams where we can break this into more consumable pieces of work for a given release. >> >> For example, one possible goal would be to work on parity with python-openstackclient and openstacksdk. A follow-on goal would be to move the legacy clients. Alternatively, we could start to move all the project clients logic into python-openstackclient, and then have another goal to implement the common logic gaps into openstacksdk. Arriving at the same place but using different paths. The approach still has to be discussed and proposed. I do think it is apparent that we'll need to break this up, however. >> >> Artem's call for help is still open [0]. Artem, has anyone reached out to you about co-championing the goal? Do you have suggestions for how you'd like to break up the work to make the goal more achievable, especially if you're the only one championing the initiative? > I'll outline my thoughts on how to break these down in that etherpad. > Fortunately there are a lot of semi-independent parts here depending > on how we want to slice the work (ie, do everything for a small number > of projects or do one part for all projects). 
> > I am planning to scale back some involvement in StarlingX in 2019 to > free up some time for this sort of thing and am willing to co-champion > this with Artem. I'm likely to be involved anyway. :) Awesome - that'll be a huge help, Dean! If you need help getting things proposed as a goal, just let me or JP know. > > dt > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mranga at gmail.com Thu Feb 14 22:30:49 2019 From: mranga at gmail.com (M. Ranganathan) Date: Thu, 14 Feb 2019 17:30:49 -0500 Subject: octavia : diskimage-create.sh ignores -s flag ? Message-ID: Hello, I have been struggling to create a diskimage from an existing qcow2 image. I find that diskimage-builder does the following: 1. apparently ignores DIB_LOCAL_IMAGE environment setting and 2. ignores the -s flag. No matter what -s I supply, it creates a 2gb image. My flags are as follows: bash diskimage-create.sh -s 10 Not sure what I am doing wrong. Any help appreciated. Thanks -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Feb 14 22:37:09 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 14 Feb 2019 14:37:09 -0800 Subject: octavia : diskimage-create.sh ignores -s flag ? In-Reply-To: References: Message-ID: <1550183829.447568.1658247240.0B2024DE@webmail.messagingengine.com> On Thu, Feb 14, 2019, at 2:30 PM, M. Ranganathan wrote: > Hello, > > I have been struggling to create a diskimage from an existing qcow2 image. > > I find that diskimage-builder does the following: > > 1. apparently ignores DIB_LOCAL_IMAGE environment setting and > 2. ignores the -s flag. No matter what -s I supply, it creates a 2gb image. > > My flags are as follows: > > > bash diskimage-create.sh -s 10 Can you clarify where the diskimage-create.sh script comes from? Diskimage-builder's command to build an image is disk-image-create and it takes a --image-size flag (-s isn't a valid argument). > > > Not sure what I am doing wrong. Any help appreciated. > > Thanks > > -- > M. Ranganathan From mranga at gmail.com Thu Feb 14 22:57:54 2019 From: mranga at gmail.com (M. Ranganathan) Date: Thu, 14 Feb 2019 17:57:54 -0500 Subject: octavia : diskimage-create.sh ignores -s flag ? In-Reply-To: <1550183829.447568.1658247240.0B2024DE@webmail.messagingengine.com> References: <1550183829.447568.1658247240.0B2024DE@webmail.messagingengine.com> Message-ID: Hello, I am using octavia to build the image https://github.com/openstack/octavia/tree/master/diskimage-create I want to run it as a loadbalanced server. Do I need to run diskimage-builder prior to running disk-image-create on the image? My original image ran under virt-manager (imported an ISO into virt manager). Thanks, Ranga On Thu, Feb 14, 2019 at 5:43 PM Clark Boylan wrote: > On Thu, Feb 14, 2019, at 2:30 PM, M. Ranganathan wrote: > > Hello, > > > > I have been struggling to create a diskimage from an existing qcow2 > image. > > > > I find that diskimage-builder does the following: > > > > 1. apparently ignores DIB_LOCAL_IMAGE environment setting and > > 2. ignores the -s flag. No matter what -s I supply, it creates a 2gb > image. > > > > My flags are as follows: > > > > > > bash diskimage-create.sh -s 10 > > Can you clarify where the diskimage-create.sh script comes from? 
> Diskimage-builder's command to build an image is disk-image-create and it > takes a --image-size flag (-s isn't a valid argument). > > > > > > > Not sure what I am doing wrong. Any help appreciated. > > > > Thanks > > > > -- > > M. Ranganathan > > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Feb 14 23:32:34 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 14 Feb 2019 15:32:34 -0800 Subject: octavia : diskimage-create.sh ignores -s flag ? In-Reply-To: References: <1550183829.447568.1658247240.0B2024DE@webmail.messagingengine.com> Message-ID: Hi there. I have not used the "DIB_LOCAL_IMAGE" setting. Are you following the example in the README here?:https://github.com/openstack/octavia/tree/master/diskimage-create#using-distribution-packages-for-amphora-agent The variable will need to be exported. As for the -s setting for the image size, I do know this works. We use this as part of our gate test jobs since the centos image must be 3GB in size. The only way -s wouldn't be honored would be if you have a "AMP_IMAGESIZE" environment variable defined. That will override the -s setting. Also note, that the default "qcow2" format is compressed, the .qcow2 file output will not be the same size as you requested with -s. Only when it is booted will it expand out to the full 10GB you specified with -s. Michael On Thu, Feb 14, 2019 at 3:05 PM M. Ranganathan wrote: > > Hello, > > I am using octavia to build the image > > https://github.com/openstack/octavia/tree/master/diskimage-create > > I want to run it as a loadbalanced server. Do I need to run diskimage-builder prior to running disk-image-create on the image? My original image ran under virt-manager (imported an ISO into virt manager). > > Thanks, > > Ranga > > > > > On Thu, Feb 14, 2019 at 5:43 PM Clark Boylan wrote: >> >> On Thu, Feb 14, 2019, at 2:30 PM, M. Ranganathan wrote: >> > Hello, >> > >> > I have been struggling to create a diskimage from an existing qcow2 image. >> > >> > I find that diskimage-builder does the following: >> > >> > 1. apparently ignores DIB_LOCAL_IMAGE environment setting and >> > 2. ignores the -s flag. No matter what -s I supply, it creates a 2gb image. >> > >> > My flags are as follows: >> > >> > >> > bash diskimage-create.sh -s 10 >> >> Can you clarify where the diskimage-create.sh script comes from? Diskimage-builder's command to build an image is disk-image-create and it takes a --image-size flag (-s isn't a valid argument). >> >> > >> > >> > Not sure what I am doing wrong. Any help appreciated. >> > >> > Thanks >> > >> > -- >> > M. Ranganathan >> > > > -- > M. Ranganathan > From yongle.li at gmail.com Fri Feb 15 00:26:06 2019 From: yongle.li at gmail.com (Fred Li) Date: Fri, 15 Feb 2019 08:26:06 +0800 Subject: [OpenStack Marketing] [OpenStack Foundation] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> Message-ID: Hi Ashlee, May I have a question about the schedule? According to [1] I got that the price increase late February. I am wondering whether the selection of presentations will be finished before that? My questions are, 1. when will the presentation selection finish? 2. will the contributors whose presentations get selected get a free summit ticket as before? 3. 
will the contributors who attended the previous PTG get a discount for PTG tickets? [1] https://www.openstack.org/summit/denver-2019/faq/ Regards Fred On Tue, Feb 5, 2019 at 2:34 AM Ashlee Ferguson wrote: > Hi everyone, > > Just under 12 hours left to vote for the sessions > you’d > like to see at the Denver Open Infrastructure Summit > ! > > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > On Jan 31, 2019, at 12:29 PM, Ashlee Ferguson > wrote: > > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, the > volunteer Programming Committee members will receive the presentations to > review and determine the final selections for Summit schedule. While > community votes are meant to help inform the decision, Programming > Committee members are expected to exercise judgment in their area of > expertise and help ensure diversity of sessions and speakers. View full > details of the session selection process here > > . > > In order to vote, you need an OSF community membership. If you do not have > an account, please create one by going to openstack.org/join. If you need > to reset your password, you can do that here > . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, > February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > > > _______________________________________________ > Marketing mailing list > Marketing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Fri Feb 15 00:32:32 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 15 Feb 2019 11:32:32 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> Message-ID: <20190215003231.GJ12795@thor.bakeyournoodle.com> On Thu, Feb 14, 2019 at 04:33:19PM -0500, Doug Hellmann wrote: > What if we just had one .htaccess file in the releases (or requirements) > repo, and we updated it when we closed branches? You mean a completely static file? That could work. I'll have a play and see if a single file and work for multiple paths. I admit it's been a very long time since I played with apache like this. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hongbin034 at gmail.com Fri Feb 15 01:55:02 2019 From: hongbin034 at gmail.com (Hongbin Lu) Date: Thu, 14 Feb 2019 20:55:02 -0500 Subject: Zun : Error in "docker network create" with kuryr as driver, as required by Zun In-Reply-To: References: Message-ID: Hi Nehal, It seems your docker daemon cannot connect to the kuryr-libnetwork process. There are several things you might want to verify: * If the kuryr-libnetwork process is running? (sudo systemctl start kuryr-libnetwork) * If there is any error in the kuryr-libnetwork log? (sudo journalctl -u kuryr-libnetwork) * If docker daemon has the correct endpoint of the kuryr-libnetwork? (check /usr/lib/docker/plugins/kuryr/kuryr.spec) If above doesn't help, we need the following items for further trouble-shooting. * The kuryr-libnetwork log * The docker daemon log * The kuryr-libnetwork config file (/etc/kuryr/kuryr.conf) Best regards, Hongbin On Thu, Feb 14, 2019 at 12:15 PM Nehal Bansal wrote: > Hi, > > I have installed OpenStack Queens release. I wish to run docker containers > on it as first class residents like VMs therefore, I installed Zun. Zun > requires Kuryr-libnetwork on the compute node. Everything got installed > correctly but verifying the installation with > > docker network create --driver kuryr --ipam-driver kuryr --subnet > 192.168.4.0/24 --gateway=192.168.4.1 test_net > > gives the following error: > > Error response from daemon: legacy plugin: Plugin.Activate: > {"message":"page not found"} > > The /var/log/syslog file gives this error: > > Feb 13 09:25:01 compute dockerd[27830]: > time="2019-02-13T09:25:01.006618155+05:30" level=error msg="Handler for > POST /v1.39/networks/create returned error: legacy plugin: Plugin.Activate: > {\"message\":\"page not found\"}\n". > > I have asked the question on ask.openstack.org too but have received no > answers. > I am new to both OpenStack and Docker. > > Please let me know if you need any more information. > Thank you. > > Regards, > Nehal Bansal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Feb 15 02:24:12 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 14 Feb 2019 19:24:12 -0700 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: Message-ID: Thanks Daniel ! 
On Thu, Feb 14, 2019 at 6:41 AM Daniel Alvarez Sanchez wrote: > We should be fine now :) > > On Thu, Feb 14, 2019 at 11:25 AM Daniel Alvarez Sanchez > wrote: > > > > Hi folks, > > > > A new DPDK version landed in CentOS which is not compatible with the > > current Open vSwitch version that we have in RDO (error below). > > > > RDOfolks++ are working on it to make a new OVS version available > > without DPDK support so that we can unblock our jobs until we get a > > proper fix. Please, avoid rechecks in the next ~3 hours or so as no > > tests are expected to pass. > > > > Once [0] is merged, we'll need to wait around 30 more minutes for it > > to be available in CI jobs. > > > > Thanks! > > > > > > [0] https://review.rdoproject.org/r/#/c/18853 > > > > 2019-02-14 07:35:06.464494 | primary | 2019-02-14 07:35:05 | Error: > > Package: 1:openvswitch-2.10.1-1.el7.x86_64 (delorean-master-deps) > > 2019-02-14 07:35:06.464603 | primary | 2019-02-14 07:35:05 | > > Requires: librte_table.so.3()(64bit) > > 2019-02-14 07:35:06.464711 | primary | 2019-02-14 07:35:05 | > > Available: dpdk-17.11-13.el7.x86_64 (quickstart-centos-extras) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Fri Feb 15 02:25:19 2019 From: mranga at gmail.com (M. Ranganathan) Date: Thu, 14 Feb 2019 21:25:19 -0500 Subject: octavia : diskimage-create.sh ignores -s flag ? In-Reply-To: References: <1550183829.447568.1658247240.0B2024DE@webmail.messagingengine.com> Message-ID: Yes I am following the example in https://github.com/openstack/octavia/tree/master/diskimage-create#using-distribution-packages-for-amphora-agent I did export it. However, I think the problem was that I was invoking the script with bash diskimage-create.sh .... Instead of directly invoking it using ./diskimage-create.sh ... Not sure if it made the difference. I also removed /root/.cache/image-create and re-ran everything. I can now see the files I added to the image when I explore it using guestfish. Thank you for your help. Ranga On Thu, Feb 14, 2019 at 6:32 PM Michael Johnson wrote: > Hi there. > > I have not used the "DIB_LOCAL_IMAGE" setting. Are you following the > example in the README > here?: > https://github.com/openstack/octavia/tree/master/diskimage-create#using-distribution-packages-for-amphora-agent > The variable will need to be exported. > > As for the -s setting for the image size, I do know this works. We use > this as part of our gate test jobs since the centos image must be 3GB > in size. > The only way -s wouldn't be honored would be if you have a > "AMP_IMAGESIZE" environment variable defined. That will override the > -s setting. > > Also note, that the default "qcow2" format is compressed, the .qcow2 > file output will not be the same size as you requested with -s. Only > when it is booted will it expand out to the full 10GB you specified > with -s. > > Michael > > On Thu, Feb 14, 2019 at 3:05 PM M. Ranganathan wrote: > > > > Hello, > > > > I am using octavia to build the image > > > > https://github.com/openstack/octavia/tree/master/diskimage-create > > > > I want to run it as a loadbalanced server. Do I need to run > diskimage-builder prior to running disk-image-create on the image? My > original image ran under virt-manager (imported an ISO into virt manager). > > > > Thanks, > > > > Ranga > > > > > > > > > > On Thu, Feb 14, 2019 at 5:43 PM Clark Boylan > wrote: > >> > >> On Thu, Feb 14, 2019, at 2:30 PM, M. 
Ranganathan wrote: > >> > Hello, > >> > > >> > I have been struggling to create a diskimage from an existing qcow2 > image. > >> > > >> > I find that diskimage-builder does the following: > >> > > >> > 1. apparently ignores DIB_LOCAL_IMAGE environment setting and > >> > 2. ignores the -s flag. No matter what -s I supply, it creates a 2gb > image. > >> > > >> > My flags are as follows: > >> > > >> > > >> > bash diskimage-create.sh -s 10 > >> > >> Can you clarify where the diskimage-create.sh script comes from? > Diskimage-builder's command to build an image is disk-image-create and it > takes a --image-size flag (-s isn't a valid argument). > >> > >> > > >> > > >> > Not sure what I am doing wrong. Any help appreciated. > >> > > >> > Thanks > >> > > >> > -- > >> > M. Ranganathan > >> > > > > > > -- > > M. Ranganathan > > > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 15 03:32:18 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 15 Feb 2019 14:32:18 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: <20190215003231.GJ12795@thor.bakeyournoodle.com> References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> <20190215003231.GJ12795@thor.bakeyournoodle.com> Message-ID: <20190215033217.GK12795@thor.bakeyournoodle.com> On Fri, Feb 15, 2019 at 11:32:32AM +1100, Tony Breeds wrote: > On Thu, Feb 14, 2019 at 04:33:19PM -0500, Doug Hellmann wrote: > > > What if we just had one .htaccess file in the releases (or requirements) > > repo, and we updated it when we closed branches? > > You mean a completely static file? That could work. I'll have a play > and see if a single file and work for multiple paths. I admit it's been > a very long time since I played with apache like this. 
Yup putting the following in /constraints/.htaccess[1] --- RewriteEngine On RewriteBase "/constraints/" RewriteRule "^upper/master" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" RewriteRule "^upper/train" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" RewriteRule "^upper/stein" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" RewriteRule "^upper/rocky" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky" RewriteRule "^upper/queens" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens" RewriteRule "^upper/pike" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/pike" RewriteRule "^upper/ocata" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata" RewriteRule "^upper/newton" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/newton" RewriteRule "^upper/juno" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=juno-eol" RewriteRule "^upper/kilo" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=kilo-eol" RewriteRule "^upper/liberty" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=liberty-eol" RewriteRule "^upper/mitaka" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=mitaka-eol" RewriteRule "^upper/newton" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=newton-eol" --- Seems to do pretty much exactly what we need. It wont be trivial to make sure we get that right but it also wont be too hard to automate. We could add that as static content to releases.openstack.org (from openstack/releases) today (well next week) while we work out exactly how and when we publish that to ensure it's always in sync. Thanks Doug! Thoughts? Objections? Yours Tony. [1] We might need an apache config tweak to ensure the .htaccess file works but IIRC we're doign somethign similar on docs.o.o -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From doug at doughellmann.com Fri Feb 15 03:51:31 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Feb 2019 22:51:31 -0500 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: <20190215033217.GK12795@thor.bakeyournoodle.com> References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> <20190215003231.GJ12795@thor.bakeyournoodle.com> <20190215033217.GK12795@thor.bakeyournoodle.com> Message-ID: Tony Breeds writes: > On Fri, Feb 15, 2019 at 11:32:32AM +1100, Tony Breeds wrote: >> On Thu, Feb 14, 2019 at 04:33:19PM -0500, Doug Hellmann wrote: >> >> > What if we just had one .htaccess file in the releases (or requirements) >> > repo, and we updated it when we closed branches? >> >> You mean a completely static file? That could work. I'll have a play >> and see if a single file and work for multiple paths. I admit it's been >> a very long time since I played with apache like this. 
> > Yup putting the following in /constraints/.htaccess[1] > > --- > RewriteEngine On > RewriteBase "/constraints/" > RewriteRule "^upper/master" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" > RewriteRule "^upper/train" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" > RewriteRule "^upper/stein" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=master" > RewriteRule "^upper/rocky" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky" > RewriteRule "^upper/queens" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens" > RewriteRule "^upper/pike" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/pike" > RewriteRule "^upper/ocata" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata" > RewriteRule "^upper/newton" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/newton" > RewriteRule "^upper/juno" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=juno-eol" > RewriteRule "^upper/kilo" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=kilo-eol" > RewriteRule "^upper/liberty" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=liberty-eol" > RewriteRule "^upper/mitaka" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=mitaka-eol" > RewriteRule "^upper/newton" "http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=newton-eol" > --- > > Seems to do pretty much exactly what we need. It wont be trivial to > make sure we get that right but it also wont be too hard to automate. > > We could add that as static content to releases.openstack.org (from > openstack/releases) today (well next week) while we work out exactly > how and when we publish that to ensure it's always in sync. We should be able to automate building the list of rules using a template in the releases repo, since we already have a list of all of the releases and their status there in deliverables/series_status.yaml. It may require adding something to source/conf.py to load that data to make it available to the template. > Thanks Doug! > > Thoughts? Objections? > > Yours Tony. > > [1] We might need an apache config tweak to ensure the .htaccess file > works but IIRC we're doign somethign similar on docs.o.o Yeah, we should make sure redirects are enabled. I think we made that a blanket change when we did the docs redirect work, but possibly not. -- Doug From renat.akhmerov at gmail.com Fri Feb 15 05:28:54 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 15 Feb 2019 12:28:54 +0700 Subject: [requirements][mistral] Add yamlloader to global requirements In-Reply-To: References: <20190214161115.2virevrqttkf74ra@mthode.org> Message-ID: This looks like a solution, yes. Thanks. @Oleg, please take a look. Maybe we really decided to add a new yaml lib too early. Renat On 15 Feb 2019, 00:16 +0700, Ben Nemec , wrote: > > > On 2/14/19 10:46 AM, Oleg Ovcharuk wrote: > > Matthew, we use not only load, but also dump. We can't use custom > > constructor and default representer - the output will be terrible. > > This custom constructor contains about 50 lines of code, representer > > would have a similar count. 
Also, we should think about compatibility > > with Python 2.7, 3.x and about it's performance. > > Summary, we would have about 150 lines of code, which is just copy-paste > > from `yamlloader` library. > > IMHO, it is better to use existing solutions. > > You don't need a complex representer to dump OrderedDicts. It can be > done in about three lines: > https://github.com/cybertron/tripleo-scripts/blob/105381d4f080394e68a40327c398d32eb9f4f580/net_processing.py#L302 > > That's the code I used when I wanted to dump dicts in a particular > order. Once you add the representer OrderedDicts are handled as you > would expect. > > > > > чт, 14 февр. 2019 г. в 19:14, Matthew Thode > >: > > > > On 19-02-14 16:58:49, Oleg Ovcharuk wrote: > > > Hi! Can you please add yamlloader library to global requirements? > > > https://pypi.org/project/yamlloader/ > > > > > > It provides ability to preserve key order in dicts, it supports > > either > > > python 2.7 and python 3.x, it provides better performance than > > built-in > > > functions. > > > Thank you. > > > > I'd like to know a little more about why we need this, yaml as a spec > > itself doesn't guarantee order so order should be stored somewhere else. > > > > If all you need is ordereddict support something like this may be better > > then adding yet another lib. > > > > https://gist.github.com/enaeseth/844388 > > > > -- > > Matthew Thode > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 15 05:37:29 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 15 Feb 2019 16:37:29 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> <20190215003231.GJ12795@thor.bakeyournoodle.com> <20190215033217.GK12795@thor.bakeyournoodle.com> Message-ID: <20190215053728.GN12795@thor.bakeyournoodle.com> On Thu, Feb 14, 2019 at 10:51:31PM -0500, Doug Hellmann wrote: > We should be able to automate building the list of rules using a > template in the releases repo, since we already have a list of all of > the releases and their status there in > deliverables/series_status.yaml. It may require adding something to > source/conf.py to load that data to make it available to the template. I think it might be a little harder than that as we want /constraints/upper/stein to switch from 'master' to stable/stein pretty soon after the branch exists in openstack/requirements. Likewise we want to switch from from the 'stable/newton' to newton-eol once that exists (the redirect rules for newton are wrong). So we might need to extract the data from the raw delieverable files themselves. I'll try coding that up next week. Expect sphinx questions ;P > Yeah, we should make sure redirects are enabled. I think we made that a > blanket change when we did the docs redirect work, but possibly not. So I used Rewrite rather then Redirect but I think for this I can switch to the latter. If I read system-config correctly[1,2,3] we don't enable Redirect on releases.o.o but we could by switching to [4] but that has other implications. Currently http://releases.o.o/ 302's to https://... If we switched to [4] that wouldn't happen. So we might need a new puppet template to combine them *or* we could allow .htaccess to Override Redirect* Yours Tony. 
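(Aside: to illustrate the "generate the rules rather than hand-maintain them" idea discussed in this thread, a rough sketch along the following lines could emit the .htaccess content from a series-to-ref map. This is illustrative only, not the actual openstack/releases tooling; in practice the map would be built from the deliverable/series data rather than written by hand.)

---
# Illustrative sketch only -- not the releases repo tooling.
BASE = ("http://git.openstack.org/cgit/openstack/requirements/plain/"
        "upper-constraints.txt?h=%s")

# Hypothetical series -> git ref map; real data would come from the
# deliverable files in openstack/releases.
SERIES_REFS = [
    ('master', 'master'),
    ('stein', 'master'),        # until stable/stein is branched
    ('rocky', 'stable/rocky'),
    ('queens', 'stable/queens'),
    ('newton', 'newton-eol'),   # EOL'd series point at the -eol tag
]


def rewrite_rules(series_refs):
    lines = ['RewriteEngine On', 'RewriteBase "/constraints/"']
    for series, ref in series_refs:
        lines.append('RewriteRule "^upper/%s" "%s"' % (series, BASE % ref))
    return '\n'.join(lines)


print(rewrite_rules(SERIES_REFS))
---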
[1] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp#n459 [2] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp#n488 [3] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/static-https-redirect.vhost.erb#n38 [4] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/static-http-and-https.vhost.erb -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mthode at mthode.org Fri Feb 15 07:27:49 2019 From: mthode at mthode.org (Matthew Thode) Date: Fri, 15 Feb 2019 01:27:49 -0600 Subject: [requirements][requests] security update for requests in stable branches Message-ID: <20190215072749.k34tdrnapanietk5@mthode.org> Recently it was reported to us that requests had a recent release that addressed a CVE (CVE-2018-18074). Requests has no stable branches so the only way to update openstack stable branches is to update to 2.20.1 in this case. I wanted to pass this by people as requests is generally a nasty library with nasty surprises. It's passed our cross and dvsm gating though (for rocky) so indications look good. What I'm asking you for is anything that could go wrong with updating (rocky in this case, but possibly back to newton, depending on co-installability). Please let me know any blockers to to update (in the review preferably). https://review.openstack.org/637124 Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ellorent at redhat.com Fri Feb 15 07:37:04 2019 From: ellorent at redhat.com (Felix Enrique Llorente Pastora) Date: Fri, 15 Feb 2019 08:37:04 +0100 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: Message-ID: Rocky still broken, be prepare. On Fri, Feb 15, 2019 at 3:26 AM Wesley Hayutin wrote: > Thanks Daniel ! > > On Thu, Feb 14, 2019 at 6:41 AM Daniel Alvarez Sanchez < > dalvarez at redhat.com> wrote: > >> We should be fine now :) >> >> On Thu, Feb 14, 2019 at 11:25 AM Daniel Alvarez Sanchez >> wrote: >> > >> > Hi folks, >> > >> > A new DPDK version landed in CentOS which is not compatible with the >> > current Open vSwitch version that we have in RDO (error below). >> > >> > RDOfolks++ are working on it to make a new OVS version available >> > without DPDK support so that we can unblock our jobs until we get a >> > proper fix. Please, avoid rechecks in the next ~3 hours or so as no >> > tests are expected to pass. >> > >> > Once [0] is merged, we'll need to wait around 30 more minutes for it >> > to be available in CI jobs. >> > >> > Thanks! >> > >> > >> > [0] https://review.rdoproject.org/r/#/c/18853 >> > >> > 2019-02-14 07:35:06.464494 | primary | 2019-02-14 07:35:05 | Error: >> > Package: 1:openvswitch-2.10.1-1.el7.x86_64 (delorean-master-deps) >> > 2019-02-14 07:35:06.464603 | primary | 2019-02-14 07:35:05 | >> > Requires: librte_table.so.3()(64bit) >> > 2019-02-14 07:35:06.464711 | primary | 2019-02-14 07:35:05 | >> > Available: dpdk-17.11-13.el7.x86_64 (quickstart-centos-extras) >> >> -- Quique Llorente Openstack TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ssbarnea at redhat.com Fri Feb 15 08:23:31 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Fri, 15 Feb 2019 08:23:31 +0000 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: Message-ID: <0DBA3B1F-E2BB-4B1D-94A2-8ADC9E9D1D23@redhat.com> Is there something we can do to prevent this in the future? Unrelated to openvswitch itself, it happened with other packages too and will happen again. -- sorin > On 15 Feb 2019, at 07:37, Felix Enrique Llorente Pastora wrote: > > Rocky still broken, be prepare. > >> On Fri, Feb 15, 2019 at 3:26 AM Wesley Hayutin wrote: >> Thanks Daniel ! >> >>> On Thu, Feb 14, 2019 at 6:41 AM Daniel Alvarez Sanchez wrote: >>> We should be fine now :) >>> >>> On Thu, Feb 14, 2019 at 11:25 AM Daniel Alvarez Sanchez >>> wrote: >>> > >>> > Hi folks, >>> > >>> > A new DPDK version landed in CentOS which is not compatible with the >>> > current Open vSwitch version that we have in RDO (error below). >>> > >>> > RDOfolks++ are working on it to make a new OVS version available >>> > without DPDK support so that we can unblock our jobs until we get a >>> > proper fix. Please, avoid rechecks in the next ~3 hours or so as no >>> > tests are expected to pass. >>> > >>> > Once [0] is merged, we'll need to wait around 30 more minutes for it >>> > to be available in CI jobs. >>> > >>> > Thanks! >>> > >>> > >>> > [0] https://review.rdoproject.org/r/#/c/18853 >>> > >>> > 2019-02-14 07:35:06.464494 | primary | 2019-02-14 07:35:05 | Error: >>> > Package: 1:openvswitch-2.10.1-1.el7.x86_64 (delorean-master-deps) >>> > 2019-02-14 07:35:06.464603 | primary | 2019-02-14 07:35:05 | >>> > Requires: librte_table.so.3()(64bit) >>> > 2019-02-14 07:35:06.464711 | primary | 2019-02-14 07:35:05 | >>> > Available: dpdk-17.11-13.el7.x86_64 (quickstart-centos-extras) >>> > > > -- > Quique Llorente > > Openstack TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Fri Feb 15 08:35:08 2019 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 15 Feb 2019 09:35:08 +0100 Subject: [tc][all] Train Community Goals In-Reply-To: <99fd20b3-caa6-4bdf-c5b8-129513f8a7d8@gmail.com> References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> <99fd20b3-caa6-4bdf-c5b8-129513f8a7d8@gmail.com> Message-ID: thanks Dean. I have seen also Matt also joined etherpad - thanks as well. Monty has promised a R1.0 soon ;-), but there are still some issues to be covered before. That's why I wanted to try probably complete couple of services before R1.0 (Object and Image). I am also currently trying to bring DNS into the SDK (when I would have at least 30 hours in a day would be faster) and I have a skeleton of the CLI binding for it as well (would like to upstream it from downstream). Dean, are you ok in receiving changes for switch to SDK before we get R1.0? Let us really just focus on few services as a target and then hopefully achieve more. What do you think? 
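(Aside: for readers less familiar with the SDK, a minimal sketch of the calls that cover those two services -- the 'example' cloud name below is a placeholder entry in clouds.yaml, not from this thread.)

---
# Minimal sketch, assuming an 'example' entry in clouds.yaml.
import openstack

conn = openstack.connect(cloud='example')

# Image service -- the ground python-glanceclient covers today.
for image in conn.image.images():
    print(image.name, image.status)

# Object store -- the ground python-swiftclient covers today.
for container in conn.object_store.containers():
    print(container.name)
---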
My suggestion would be to focus on: - novaclient - glanceclient - swiftclient Regards, Artem On Thu, Feb 14, 2019 at 10:49 PM Lance Bragstad wrote: > > > On 2/14/19 3:30 PM, Dean Troyer wrote: > > On Thu, Feb 14, 2019 at 9:29 AM Lance Bragstad > wrote: > >> On 1/31/19 9:59 AM, Lance Bragstad wrote: > >> Moving legacy clients to python-openstackclient > >> > >> Artem has done quite a bit of pre-work here [2], which has been useful > in understanding the volume of work required to complete this goal in its > entirety. I suggest we look for seams where we can break this into more > consumable pieces of work for a given release. > >> > >> For example, one possible goal would be to work on parity with > python-openstackclient and openstacksdk. A follow-on goal would be to move > the legacy clients. Alternatively, we could start to move all the project > clients logic into python-openstackclient, and then have another goal to > implement the common logic gaps into openstacksdk. Arriving at the same > place but using different paths. The approach still has to be discussed and > proposed. I do think it is apparent that we'll need to break this up, > however. > >> > >> Artem's call for help is still open [0]. Artem, has anyone reached out > to you about co-championing the goal? Do you have suggestions for how you'd > like to break up the work to make the goal more achievable, especially if > you're the only one championing the initiative? > > I'll outline my thoughts on how to break these down in that etherpad. > > Fortunately there are a lot of semi-independent parts here depending > > on how we want to slice the work (ie, do everything for a small number > > of projects or do one part for all projects). > > > > I am planning to scale back some involvement in StarlingX in 2019 to > > free up some time for this sort of thing and am willing to co-champion > > this with Artem. I'm likely to be involved anyway. :) > > Awesome - that'll be a huge help, Dean! If you need help getting things > proposed as a goal, just let me or JP know. > > > > > dt > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Feb 15 09:06:59 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Feb 2019 18:06:59 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> <5b651d3d-ac42-e46d-c52b-9e9b280d2af3@openstack.org> Message-ID: <168f068b2ec.11067b9ff22320.9152656680917331929@ghanshyammann.com> Having my FC SIG hat on: To summarize about having the 'Help wanted list' under FC SIG, we discussed that in FC meeting this week[1] and planned to have it under TC as first option and if there is no candidate to own it then we re-discuss it to have under FC SIG. After seeing reply from ttx and IRC chat, it seems we are going to give it another chance under TC. So FC SIG is all ok to help/advertise or direct new contributor to that list or contact owner. Having my TC hat on: I agree with ttx idea of 1:1 mapping and I feel that is much needed to make it a success. 
But please choose some simple name so that people do not need to search or have a hard time to understand it :). Help/Mentor/Pending/Volunteer can be very simple word to understand it. As ttx mentioned about next step, I am listing it in more detail: - Have template to request the items to be added in this list. I prefer it via gerrit and TC review that and approve accordingly. - Add Job Description section also in that list which can vary in term or skill needed per item. lance already started that - https://review.openstack.org/#/c/637025/1 - Clean up the old list and ask for a new list from the community. Or ask old list requester to continue or re-submit the request. - Assign a TC member as Owner to this work. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2019-02-13.log.html#t2019-02-13T07:23:06 -gmann ---- On Fri, 15 Feb 2019 03:51:19 +0900 Lance Bragstad wrote ---- > Updating the thread since we talked about this quite a bit in the -tc channel, too [0] (sorry for duplicating across communication mediums!) > > TL;DR the usefulness of job descriptions is still a thing. To kick start that, I proposed an example to the current help wanted list to kick start what we want our "job descriptions" to look like [1], if we were to have them. > > [0] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-14.log.html#t2019-02-14T16:53:55 > [1] https://review.openstack.org/#/c/637025/ > > On 2/14/19 7:29 AM, Thierry Carrez wrote: > Colleen Murphy wrote: > I feel like there is a bit of a disconnect between what the TC is asking for > and what the current mentoring organizations are designed to provide. Thierry > framed this as a "peer-mentoring offered" list, but mentoring doesn't quite > capture everything that's needed. > > Mentorship programs like Outreachy, cohort mentoring, and the First Contact SIG > are oriented around helping new people quickstart into the community, getting > them up to speed on basics and helping them feel good about themselves and > their contributions. The hope is that happy first-timers eventually become > happy regular contributors which will eventually be a benefit to the projects, > but the benefit to the projects is not the main focus. > > The way I see it, the TC Help Wanted list, as well as the new thing, is not > necessarily oriented around newcomers but is instead advocating for the > projects and meant to help project teams thrive by getting committed long-term > maintainers involved and invested in solving longstanding technical debt that > in some cases requires deep tribal knowledge to solve. It's not a thing for a > newbie to step into lightly and it's not something that can be solved by a > FC-liaison pointing at the contributor docs. Instead what's needed are mentors > who are willing to walk through that tribal knowledge with a new contributor > until they are equipped enough to help with the harder problems. > > For that reason I think neither the FC SIG or the mentoring cohort group, in > their current incarnations, are the right groups to be managing this. The FC > SIG's mission is "To provide a place for new contributors to come for > information and advice" which does not fit the long-term goal of the help > wanted list, and cohort mentoring's four topics ("your first patch", "first > CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term and deeply > technical requirements that a project-specific mentorship offering needs. 
> Either of those groups could be rescoped to fit with this new mission, and > there is certainly a lot of overlap, but my feeling is that this needs to be an > effort conducted by the TC because the TC is the group that advocates for the > projects. > > It's moreover not a thing that can be solved by another list of names. In addition > to naming someone willing to do the several hours per week of mentoring, > project teams that want help should be forced to come up with a specific > description of 1) what the project is, 2) what kind of person (experience or > interests) would be a good fit for the project, 3) specific work items with > completion criteria that needs to be done - and it can be extremely challenging > to reframe a project's longstanding issues in such concrete ways that make it > clear what steps are needed to tackle the problem. It should basically be an > advertisement that makes the project sound interesting and challenging and > do-able, because the current help-wanted list and liaison lists and mentoring > topics are too vague to entice anyone to step up. > > Well said. I think we need to use another term for this program, to avoid colliding with other forms of mentoring or on-boarding help. > > On the #openstack-tc channel, I half-jokingly suggested to call this the 'Padawan' program, but now that I'm sober, I feel like it might actually capture what we are trying to do here: > > - Padawans are 1:1 trained by a dedicated, experienced team member > - Padawans feel the Force, they just need help and perspective to master it > - Padawans ultimately join the team* and may have a padawan of their own > - Bonus geek credit for using Star Wars references > > * unless they turn to the Dark Side, always a possibility > > Finally, I rather disagree that this should be something maintained as a page in > individual projects' contributor guides, although we should certainly be > encouraging teams to keep those guides up to date. It should be compiled by the > TC and regularly updated by the project liaisons within the TC. A link to a > contributor guide on docs.openstack.org doesn't give anyone an idea of what > projects need the most help nor does it empower people to believe they can help > by giving them an understanding of what the "job" entails. > > I think we need a single list. I guess it could be sourced from several repositories, but at least for the start I would not over-engineer it, just put it out there as a replacement for the help-most-needed list and see if it flies. > > As a next step, I propose to document the concept on a TC page, then reach out to the currently-listed teams on help-most-wanted to see if there would be a volunteer interested in offering Padawan training and bootstrap the new list, before we start to promote it more actively. > > > From artem.goncharov at gmail.com Fri Feb 15 09:41:40 2019 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 15 Feb 2019 10:41:40 +0100 Subject: [keystone][service-catalog] Region "*" for identity in service catalog Message-ID: Hi all, In a public cloud I am using there is currently one region, but should be multiple in future. Endpoint for each service has a region set. However the only exception is identity, which has an empty region (actually "*"). If I do not specify region_name during connection (with diverse tools) everything works fine. Some "admin" operations, however, really require region to be set. But if I set in (i.e. 
in clouds.yaml) I can't connect to the cloud, since identity in this region has no explicit endpoint (keystoneauth1 is not OK with that, and neither is gophercloud).

I was not able to find any requirements or conventions for how such a setup should really be treated. On https://wiki.openstack.org/wiki/API_Special_Interest_Group/Current_Design/Service_Catalog there are service catalogs for diverse clouds, and in cases where a cloud has multiple regions, there are multiple entries for keystone pointing to the same endpoint. Basically, each time the region is properly set. In Keystone v2 the region was mandatory, but in v3 it is not anymore ( https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/endpoint.html#endpoint-create). I guess you would normally not even be able to configure region "*", but it was somehow done.

While there is only one region the problem is not that big, but as soon as a second region is added it becomes a problem. Does anyone know whether this is an "allowed" setup (in which case tools should be adapted to treat it properly), or whether it is not an "allowed" configuration (in which case I would like to see some docs to refer to)? I would personally prefer the second way, fixing the catalog, to avoid fixes in diverse tools, but I really need an authoritative reference for that.

Thanks a lot in advance,
Artem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dabarren at gmail.com Fri Feb 15 10:13:21 2019
From: dabarren at gmail.com (Eduardo Gonzalez)
Date: Fri, 15 Feb 2019 11:13:21 +0100
Subject: [kolla] Proposing Michal Nasiadka to the core team
Message-ID: 

Hi, it is my pleasure to propose Michal Nasiadka for the core team in kolla-ansible.

Michal has been an active reviewer in the last releases ( https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), has been keeping an eye on the bugs and has been an active help on IRC. He has also made efforts in community interactions in the Rocky and Stein releases, including PTG attendance.

His main interest is NFV and Edge clouds, and he brings a valuable couple of years of experience as an OpenStack/Kolla operator with good knowledge of the Kolla code base. He is planning to work on extending Kolla CI scenarios, Edge use cases and improving the ease of deployment of NFV-related functions.

Consider this email as my +1 vote. The vote ends in 7 days (22 Feb 2019).

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dev.faz at gmail.com Fri Feb 15 10:32:13 2019
From: dev.faz at gmail.com (Fabian Zimmermann)
Date: Fri, 15 Feb 2019 11:32:13 +0100
Subject: [keystone] adfs SingleSignOn with CLI/API?
In-Reply-To: 
References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com>
Message-ID: <829fb374-b834-b868-d429-99d02629f3a1@gmail.com>

Hi,

thanks for your reply, but

On 14.02.19 at 14:15, Brandon Sawyers wrote:
> You should be able to configure keystone to authenticate against "ldap"
> using your active directory.

this is not an option, because our customers don't want to share their passwords with us ;)

Fabian

From dev.faz at gmail.com Fri Feb 15 10:36:47 2019
From: dev.faz at gmail.com (Fabian Zimmermann)
Date: Fri, 15 Feb 2019 11:36:47 +0100
Subject: [keystone] adfs SingleSignOn with CLI/API?
In-Reply-To: <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com>
References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> <1550140334.3146579.1657835168.35187945@webmail.messagingengine.com>
Message-ID: <46393fe5-d372-a3c1-3f29-f8731ed0553a@gmail.com>

Hi Colleen,

On 14.02.19 at 11:32, Colleen Murphy wrote:
> I'm more interested in what you were seeing, both the output from the client and the output from the keystone server if you have access to it.

I will configure the ADFS connection again and send you the logs.

>
> Unfortunately that seems to still be a valid bug that we'll need to address. You could try using the python keystoneauth library directly and see if the issue appears there[1][2].
>
> [1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html
> [2] https://docs.openstack.org/keystoneauth/latest/plugin-options.html#v3oidcpassword

I was missing the --os-client-id parameter, but I didn't get any hint that it is required, so it took a while to find it. With --os-client-id and --os-client-secret I'm now able to reach my Keycloak. I already found some settings on Keycloak I had to change. (Hopefully) I will be able to continue my work next week.

>
> I found that too. The in-development documentation has already been fixed[3] but we didn't backport that to the Rocky documentation because it was part of a large series of rewrites and reorgs.
>
> [3] https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#configure-mod-auth-openidc

Great - thanks a lot, I will fix my settings.

Fabian

From james.page at canonical.com Fri Feb 15 12:35:58 2019
From: james.page at canonical.com (James Page)
Date: Fri, 15 Feb 2019 12:35:58 +0000
Subject: [sig][upgrades] IRC meeting Monday 18th->25th
Message-ID: 

Hi All

In a feat of random planning weirdness, both the last and the upcoming Upgrades SIG IRC meeting days have landed on a US holiday. As the last one was a bit of a flop due to this coincidence, I'm proposing we bump back one week and move the IRC meetings from the 18th->25th February. They will be at 0900 and 1600 UTC as usual to cover as many participants as possible!

If you're based in the US - have a lovely Presidents Day!

Cheers

James
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Fri Feb 15 14:19:38 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 15 Feb 2019 09:19:38 -0500
Subject: [release][heat][tripleo][kolla][monsca] need PTL/liaison ACK on pending releases
Message-ID: 

We have a few pending releases that were proposed by someone who is not listed as the PTL or Liaison for the associated project. That's fine, but in those cases we need the PTL or Liaison to acknowledge and approve the release before we process it.
https://review.openstack.org/#/c/636285/ heat-translator 1.3.0 (stein) https://review.openstack.org/#/c/635569/ tripleo-heat-templates 8.3.0 (queens) https://review.openstack.org/#/c/635536/ kolla 6.1.1 (queens) https://review.openstack.org/#/c/637163/ python-monscaclient 1.13.0 (stein) -- Doug From doug at doughellmann.com Fri Feb 15 14:22:21 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Feb 2019 09:22:21 -0500 Subject: [Release-job-failures][kayobe] Pre-release of openstack/kayobe failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. > > - release-openstack-python http://logs.openstack.org/3b/3ba003efaf249ac62ae588310f6ad3279d65f337/pre-release/release-openstack-python/149c10d/ : SUCCESS in 4m 25s > - announce-release http://logs.openstack.org/3b/3ba003efaf249ac62ae588310f6ad3279d65f337/pre-release/announce-release/1d49bad/ : SUCCESS in 4m 40s > - propose-update-constraints http://logs.openstack.org/3b/3ba003efaf249ac62ae588310f6ad3279d65f337/pre-release/propose-update-constraints/a1cf53a/ : SUCCESS in 4m 14s > - trigger-readthedocs-webhook http://logs.openstack.org/3b/3ba003efaf249ac62ae588310f6ad3279d65f337/pre-release/trigger-readthedocs-webhook/2c23362/ : FAILURE in 1m 58s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures It looks like maybe the kayobe project integration with readthedocs is broken. The log output [1] is suppressed, so I can't actually see the error to provide more details. Please work with the infra team to address this issue. [1] http://logs.openstack.org/3b/3ba003efaf249ac62ae588310f6ad3279d65f337/pre-release/trigger-readthedocs-webhook/2c23362/job-output.txt.gz#_2019-02-15_10_18_46_933867 -- Doug From mthode at mthode.org Fri Feb 15 14:35:31 2019 From: mthode at mthode.org (Matthew Thode) Date: Fri, 15 Feb 2019 08:35:31 -0600 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: References: <20190215072749.k34tdrnapanietk5@mthode.org> Message-ID: <20190215143531.qhwxbttue7t72wpn@mthode.org> On 19-02-15 06:51:20, Boden Russell wrote: > Just to confirm; the best way to test with this change is to submit a > dummy patch that depends on 637124 in the respective project's > stable/rocky branch? > > > On 2/15/19 12:27 AM, Matthew Thode wrote: > > Recently it was reported to us that requests had a recent release that > > addressed a CVE (CVE-2018-18074). Requests has no stable branches so > > the only way to update openstack stable branches is to update to 2.20.1 > > in this case. I wanted to pass this by people as requests is generally > > a nasty library with nasty surprises. It's passed our cross and dvsm > > gating though (for rocky) so indications look good. What I'm asking you > > for is anything that could go wrong with updating (rocky in this case, > > but possibly back to newton, depending on co-installability). Please > > let me know any blockers to to update (in the review preferably). > > > > https://review.openstack.org/637124 > > > > Thanks, > > Yes -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Fri Feb 15 14:37:09 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Feb 2019 09:37:09 -0500 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: <20190215053728.GN12795@thor.bakeyournoodle.com> References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> <20190215003231.GJ12795@thor.bakeyournoodle.com> <20190215033217.GK12795@thor.bakeyournoodle.com> <20190215053728.GN12795@thor.bakeyournoodle.com> Message-ID: Tony Breeds writes: > On Thu, Feb 14, 2019 at 10:51:31PM -0500, Doug Hellmann wrote: > >> We should be able to automate building the list of rules using a >> template in the releases repo, since we already have a list of all of >> the releases and their status there in >> deliverables/series_status.yaml. It may require adding something to >> source/conf.py to load that data to make it available to the template. > > I think it might be a little harder than that as we want > /constraints/upper/stein to switch from 'master' to stable/stein pretty > soon after the branch exists in openstack/requirements. Likewise we want > to switch from from the 'stable/newton' to newton-eol once that exists > (the redirect rules for newton are wrong). > > So we might need to extract the data from the raw delieverable files > themselves. That should also be possible to integrate with sphinx. > I'll try coding that up next week. Expect sphinx questions ;P Yep, I'll try to help. >> Yeah, we should make sure redirects are enabled. I think we made that a >> blanket change when we did the docs redirect work, but possibly not. > > So I used Rewrite rather then Redirect but I think for this I can switch > to the latter. I don't know the difference, so I don't know if it matters. We're using redirects elsewhere for docs, but we should just do whatever works for this case. > > If I read system-config correctly[1,2,3] we don't enable Redirect on > releases.o.o but we could by switching to [4] but that has other > implications. Currently http://releases.o.o/ 302's to https://... > If we switched to [4] that wouldn't happen. So we might need a new > puppet template to combine them *or* we could allow .htaccess to > Override Redirect* > > Yours Tony. > > [1] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp#n459 > [2] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp#n488 > [3] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/static-https-redirect.vhost.erb#n38 > [4] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/static-http-and-https.vhost.erb -- Doug From rico.lin.guanyu at gmail.com Fri Feb 15 15:19:10 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 15 Feb 2019 23:19:10 +0800 Subject: [release][heat][tripleo][kolla][monsca] need PTL/liaison ACK on pending releases In-Reply-To: References: Message-ID: Done from Heat part Doug Hellmann 於 2019年2月15日 週五,下午10:21寫道: > > We have a few pending releases that were proposed by someone who is not > listed as the PTL or Liaison for the associated project. That's fine, > but in those cases we need the PTL or Liaison to acknowledge and approve > the release before we process it. 
Please take a minute today or Monday > to look at these and indicate whether it is OK to release them. > > https://review.openstack.org/#/c/636285/ heat-translator 1.3.0 (stein) > https://review.openstack.org/#/c/635569/ tripleo-heat-templates 8.3.0 > (queens) > https://review.openstack.org/#/c/635536/ kolla 6.1.1 (queens) > https://review.openstack.org/#/c/637163/ python-monscaclient 1.13.0 > (stein) > > -- > Doug > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Feb 15 15:22:53 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Feb 2019 10:22:53 -0500 Subject: [release][heat][tripleo][kolla][monsca] need PTL/liaison ACK on pending releases In-Reply-To: References: Message-ID: Thanks, Rico! Rico Lin writes: > Done from Heat part > > Doug Hellmann 於 2019年2月15日 週五,下午10:21寫道: > >> >> We have a few pending releases that were proposed by someone who is not >> listed as the PTL or Liaison for the associated project. That's fine, >> but in those cases we need the PTL or Liaison to acknowledge and approve >> the release before we process it. Please take a minute today or Monday >> to look at these and indicate whether it is OK to release them. >> >> https://review.openstack.org/#/c/636285/ heat-translator 1.3.0 (stein) >> https://review.openstack.org/#/c/635569/ tripleo-heat-templates 8.3.0 >> (queens) >> https://review.openstack.org/#/c/635536/ kolla 6.1.1 (queens) >> https://review.openstack.org/#/c/637163/ python-monscaclient 1.13.0 >> (stein) >> >> -- >> Doug >> >> -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin -- Doug From doug at stackhpc.com Fri Feb 15 15:40:23 2019 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 15 Feb 2019 15:40:23 +0000 Subject: [release][heat][tripleo][kolla][monsca] need PTL/liaison ACK on pending releases [monasca] In-Reply-To: References: Message-ID: <1491c4d0-c352-5c8a-2952-850f25fd7525@stackhpc.com> + [monasca] On 15/02/2019 14:19, Doug Hellmann wrote: > We have a few pending releases that were proposed by someone who is not > listed as the PTL or Liaison for the associated project. That's fine, > but in those cases we need the PTL or Liaison to acknowledge and approve > the release before we process it. Please take a minute today or Monday > to look at these and indicate whether it is OK to release them. Thanks Doug. > > https://review.openstack.org/#/c/636285/ heat-translator 1.3.0 (stein) > https://review.openstack.org/#/c/635569/ tripleo-heat-templates 8.3.0 (queens) > https://review.openstack.org/#/c/635536/ kolla 6.1.1 (queens) > https://review.openstack.org/#/c/637163/ python-monscaclient 1.13.0 (stein) > From emccormick at cirrusseven.com Fri Feb 15 16:05:18 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 15 Feb 2019 11:05:18 -0500 Subject: [all][ops] Ops Meetup Agenda Planning - Berlin Edition Message-ID: Hello All, The time is rapidly approaching for the Ops Meetup in Berlin. In preparation, we need your help developing the agenda. i put an [all] tag on this because I'm hoping that anyone, not just ops, looking for discussion and feedback on particular items might join in and suggest sessions. It is not required that you attend the meetup to post session ideas. If there is sufficient interest, we will hold the session and provide feedback and etherpad links following the meetup. 
Please insert your session ideas into this etherpad, add subtopics to already proposed sessions, and +1 those that you are interested in. Also please put your name, and maybe some contact info, at the bottom. If you'd be willing to moderate a session, please add yourself to the moderators list. https://etherpad.openstack.org/p/BER-ops-meetup I'd like to give a big shout out to Deutsche Telekom for hosting us and providing the catering. I look forward to seeing many of you in Berlin! Cheers, Erik From luka.peschke at objectif-libre.com Fri Feb 15 16:11:27 2019 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Fri, 15 Feb 2019 17:11:27 +0100 Subject: [cloudkitty] March IRC meeting is cancelled Message-ID: <1d7ece1a.AM0AAC4h1G0AAAAAAAAAAAQR_QkAAAAAZtYAAAAAAAzbjABcZuSw@mailjet.com> Hello everybody, Due to various people not being available on march 1st, the cloudkitty IRC meeting that was planned at that date is cancelled. The next meeting will be held on april 5th at 15h UTC / 17h CET. Cheers, -- Luka Peschke From richwellum at gmail.com Fri Feb 15 16:40:06 2019 From: richwellum at gmail.com (Richard Wellum) Date: Fri, 15 Feb 2019 11:40:06 -0500 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: +1 On Fri, Feb 15, 2019 at 5:18 AM Eduardo Gonzalez wrote: > Hi, is my pleasure to propose Michal Nasiadka for the core team in > kolla-ansible. > > Michal has been active reviewer in the last relases ( > https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), has > been keeping an eye on the bugs and being active help on IRC. > He has also made efforts in community interactions in Rocky and Stein > releases, including PTG attendance. > > His main interest is NFV and Edge clouds and brings valuable couple of > years experience as OpenStack/Kolla operator with good knowledge of Kolla > code base. > > Planning to work on extending Kolla CI scenarios, Edge use cases and > improving NFV-related functions ease of deployment. > > Consider this email as my +1 vote. Vote ends in 7 days (22 feb 2019) > > Regards > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Feb 15 16:58:28 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 15 Feb 2019 08:58:28 -0800 Subject: [nova][dev] 3 weeks until feature freeze Message-ID: <7cc06a1f-8eb9-cd1b-2fcb-cecbe7649910@gmail.com> Howdy all, We've about 3 weeks left until feature freeze milestone s-3 on March 7: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule Non-client library freeze is in 2 weeks February 28, so if you need changes released in Stein for os-vif, os-traits, or os-resource-classes, they need to be merged by Feb 28 and the releases will be proposed on Feb 28. Ping us if you need review. I've updated the blueprint status tracking etherpad: https://etherpad.openstack.org/p/nova-stein-blueprint-status For our Cycle Themes: Multi-cell operational enhancements: The patch series for the API microversion for handling of down cells on the nova side has all been approved as of today. Only the python-novaclient change remains. Counting quota usage from placement is still an active WIP. Cross-cell resize is still making good progress with active code review. Compute nodes able to upgrade and exist with nested resource providers for multiple vGPU types: The libvirt driver reshaper patch has been updated today but needs some fixes for unit test failures. 
Volume-backed user experience and API improvement: The detach boot volume and volume-backed server rebuild patches are active WIP. If you are the owner of an approved blueprint, please: * Add the blueprint if I've missed it * Update the status if it is not accurate * If your blueprint is in the "Wayward changes" section, please upload and update patches as soon as you can, to allow maximum time for review * If your patches are noted as Merge Conflict or WIP or needing an update, please update them and update the status on the etherpad * Add a note under your blueprint if you're no longer able to work on it this cycle Let us know if you have any questions or need assistance with your blueprint. Cheers, -melanie From colleen at gazlene.net Fri Feb 15 17:23:04 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 15 Feb 2019 12:23:04 -0500 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 Message-ID: # Keystone Team Update - Week of 11 February 2019 ## News ### Blueprints Overhaul After someone inquired about a very old Launchpad blueprint that was not aligned with reality, we started a campaign to clean up all of the keystone blueprints. Lance described the proposed plan[1] which is to stop using Launchpad blueprints for tracking feature work and to consolidate everything into specs and RFE bug reports. The plan is also in discussion in our spec template[2]. Once we have our backlog cleaned up, it will hopefully be a little more straightforward to port all of our tracked work into Storyboard when the time comes. There are also quite a few open blueprints that we need to discuss as a team to reaffirm whether they are moving in the right direction and still worthwhile[3]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002672.html [2] https://review.openstack.org/625282 [3] https://etherpad.openstack.org/p/keystone-blueprint-cleanup ### OpenStack User Survey Since the Foundation is working on this year's user survey, we talked about what we want included on it[4]. We've found the current keystone question is not really detailed enough to give us concrete feedback, but unfortunately we're limited to two questions. We decided to add one additional question[5] and the Foundation has also offered to help collect more fine-grained feedback via a Surveymonkey survey. [4] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-02-12-16.00.log.html#l-13 [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-14.log.html#t2019-02-14T19:16:38 ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 16 changes this week. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 68 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 6 new bugs and closed 3. 
Bugs opened (6) Bug #1815539 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/181553 Bug #1815771 (keystone:Medium) opened by Jose Castro Leon https://bugs.launchpad.net/keystone/+bug/1815771 Bug #1815810 (keystone:Low) opened by Drew Freiberger https://bugs.launchpad.net/keystone/+bug/1815810 Bug #1815966 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1815966 Bug #1815971 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1815971 Bug #1815972 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1815972 Bugs fixed (3) Bug #1804446 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804446 Bug #1805372 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805372 Bug #1813739 (keystonemiddleware:Undecided) fixed by Yang Youseok https://bugs.launchpad.net/keystonemiddleware/+bug/1813739 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html Feature freeze as well as final client release are both in 3 weeks. Non-client release deadline is in two weeks, which means changes needed for keystonemiddleware, keystoneauth, and the oslo libraries need to be proposed and reviewed ASAP. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From jimmy at openstack.org Fri Feb 15 03:04:38 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 14 Feb 2019 21:04:38 -0600 Subject: [openstack-community] [OpenStack Marketing] [OpenStack Foundation] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> Message-ID: <5C662C46.7010700@openstack.org> Hi Fred, Please see below: > Fred Li > February 14, 2019 at 6:26 PM > Hi Ashlee, > > May I have a question about the schedule? According to [1] I got that > the price increase late February. I am wondering whether the selection > of presentations will be finished before that? Yes, the price increase will occur after the schedule announcement. > My questions are, > 1. when will the presentation selection finish? Expected February 20th. > 2. will the contributors whose presentations get selected get a free > summit ticket as before? Yes, for sure. Presenters and alternates will receive a complimentary ticket. > 3. will the contributors who attended the previous PTG get a discount > for PTG tickets? Yes, PTG attendees should have already received this discount. If you did not, please let us know at summitreg at openstack.org and we'll be happy to assist. Cheers, Jimmy > > [1] https://www.openstack.org/summit/denver-2019/faq/ > > Regards > Fred > > > > -- > Regards > Fred Li (李永乐) > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > Ashlee Ferguson > February 4, 2019 at 12:26 PM > Hi everyone, > > Just under 12 hours left to vote for the sessions > you’d > like to see at the Denver Open Infrastructure Summit > ! > > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > about > the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. 
Please submit your > applications > by 11:59pm > Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > >> On Jan 31, 2019, at 12:29 PM, Ashlee Ferguson > > wrote: >> >> Hi everyone, >> >> Community voting for the Open Infrastructure Summit Denver sessions >> is open! >> >> You can VOTE HERE >> , but >> what does that mean? >> >> Now that the Call for Presentations has closed, all submissions are >> available for community vote and input. After community voting >> closes, the volunteer Programming Committee members will receive the >> presentations to review and determine the final selections for Summit >> schedule. While community votes are meant to help inform the >> decision, Programming Committee members are expected to exercise >> judgment in their area of expertise and help ensure diversity of >> sessions and speakers. View full details of the session selection >> process here >> . >> >> In order to vote, you need an OSF community membership. If you do not >> have an account, please create one by going to openstack.org/join >> . If you need to reset your password, you >> can do that here . >> >> Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time >> (Tuesday, February 5 at 7:59 UTC). >> >> Continue to visit https://www.openstack.org/summit/denver-2019for all >> Summit-related information. >> >> REGISTER >> Register for the Summit >> before prices >> increase in late February! >> >> VISA APPLICATION PROCESS >> Make sure to secure your Visa soon. More information >> about >> the Visa application process. >> >> TRAVEL SUPPORT PROGRAM >> February 27 is the last day to submit applications. Please submit >> your applications >> by >> 11:59pm Pacific Time (February 28 at 7:59am UTC). >> >> If you have any questions, please email summit at openstack.org >> . >> >> Cheers, >> Ashlee >> >> >> Ashlee Ferguson >> OpenStack Foundation >> ashlee at openstack.org >> >> >> >> >> _______________________________________________ >> Foundation mailing list >> Foundation at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > Ashlee Ferguson > January 31, 2019 at 12:29 PM > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, > the volunteer Programming Committee members will receive the > presentations to review and determine the final selections for Summit > schedule. While community votes are meant to help inform the decision, > Programming Committee members are expected to exercise judgment in > their area of expertise and help ensure diversity of sessions and > speakers. View full details of the session selection process here > . > > In order to vote, you need an OSF community membership. If you do not > have an account, please create one by going to openstack.org/join > . If you need to reset your password, you > can do that here . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time > (Tuesday, February 5 at 7:59 UTC). 
> > Continue to visit https://www.openstack.org/summit/denver-2019for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > about > the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by 11:59pm > Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongle.li at gmail.com Fri Feb 15 06:41:41 2019 From: yongle.li at gmail.com (Fred Li) Date: Fri, 15 Feb 2019 14:41:41 +0800 Subject: [openstack-community] [OpenStack Marketing] [OpenStack Foundation] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <5C662C46.7010700@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> <5C662C46.7010700@openstack.org> Message-ID: Thanks for your reply. On Fri, Feb 15, 2019 at 11:04 AM Jimmy McArthur wrote: > Hi Fred, > > Please see below: > > Fred Li > February 14, 2019 at 6:26 PM > Hi Ashlee, > > May I have a question about the schedule? According to [1] I got that the > price increase late February. I am wondering whether the selection of > presentations will be finished before that? > > Yes, the price increase will occur after the schedule announcement. > > My questions are, > 1. when will the presentation selection finish? > > Expected February 20th. > > 2. will the contributors whose presentations get selected get a free > summit ticket as before? > > Yes, for sure. Presenters and alternates will receive a complimentary > ticket. > > 3. will the contributors who attended the previous PTG get a discount for > PTG tickets? > > Yes, PTG attendees should have already received this discount. If you did > not, please let us know at summitreg at openstack.org and we'll be happy to > assist. > > Cheers, > Jimmy > > > [1] https://www.openstack.org/summit/denver-2019/faq/ > > Regards > Fred > > > > -- > Regards > Fred Li (李永乐) > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > Ashlee Ferguson > February 4, 2019 at 12:26 PM > Hi everyone, > > Just under 12 hours left to vote for the sessions > you’d > like to see at the Denver Open Infrastructure Summit > ! > > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . 
> > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > On Jan 31, 2019, at 12:29 PM, Ashlee Ferguson > wrote: > > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, the > volunteer Programming Committee members will receive the presentations to > review and determine the final selections for Summit schedule. While > community votes are meant to help inform the decision, Programming > Committee members are expected to exercise judgment in their area of > expertise and help ensure diversity of sessions and speakers. View full > details of the session selection process here > > . > > In order to vote, you need an OSF community membership. If you do not have > an account, please create one by going to openstack.org/join. If you need > to reset your password, you can do that here > . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, > February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > > > _______________________________________________ > Foundation mailing listFoundation at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > > Ashlee Ferguson > January 31, 2019 at 12:29 PM > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, the > volunteer Programming Committee members will receive the presentations to > review and determine the final selections for Summit schedule. While > community votes are meant to help inform the decision, Programming > Committee members are expected to exercise judgment in their area of > expertise and help ensure diversity of sessions and speakers. View full > details of the session selection process here > > . > > In order to vote, you need an OSF community membership. If you do not have > an account, please create one by going to openstack.org/join. If you need > to reset your password, you can do that here > . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, > February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. 
More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation > > > -- Regards Fred Li (李永乐) -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Fri Feb 15 10:27:05 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 15 Feb 2019 10:27:05 -0000 Subject: kayobe 4.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for kayobe for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kayobe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: https://git.openstack.org/cgit/openstack/kayobe/log/?h=stable/queens Release notes for kayobe can be found at: https://docs.openstack.org/releasenotes/kayobe/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/openstack/kayobe and tag it *queens-rc-potential* to bring it to the kayobe release crew's attention. From no-reply at openstack.org Fri Feb 15 11:32:15 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 15 Feb 2019 11:32:15 -0000 Subject: kayobe 5.0.0.0rc1 (rocky) Message-ID: Hello everyone, A new release candidate for kayobe for the end of the Rocky cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kayobe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Rocky release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/rocky release branch at: https://git.openstack.org/cgit/openstack/kayobe/log/?h=stable/rocky Release notes for kayobe can be found at: https://docs.openstack.org/releasenotes/kayobe/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/openstack/kayobe and tag it *rocky-rc-potential* to bring it to the kayobe release crew's attention. From florian.engelmann at everyware.ch Fri Feb 15 14:42:48 2019 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Fri, 15 Feb 2019 15:42:48 +0100 Subject: [Openstack][Heat] service times out 504 Message-ID: <8ad92313-3653-f2d3-e1af-34849e20065e@everyware.ch> Hi all, - Version: heat-base-archive-stable-rocky - Commit: Ica99cec6765d22d7ee2262e2d402b2e98cb5bd5e I have a fresh openstack deployment (kolla-Ansible). Everything but Heat is working fine. When I do a webrequest (either horizon or curl) on the openstack heat endpoint (internal or public), I just get nothing and after a while, it times out with a 500 http error. 
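(For comparison with the raw curl output below: a minimal authenticated check through the service catalog with keystoneauth1. This is only a sketch - the auth URL and credentials here are placeholders, not values from this deployment - but it requests the stack list for a scoped project, which helps tell a proxy/haproxy timeout apart from an error returned by heat-api itself.)

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials - substitute the real ones for the affected cloud.
    auth = v3.Password(
        auth_url="http://10.10.10.10:5000/v3",
        username="admin",
        password="secret",
        project_name="admin",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # Resolve the orchestration endpoint from the catalog and list stacks with
    # a valid token; "/stacks" is appended to the versioned heat endpoint.
    resp = sess.get("/stacks", endpoint_filter={"service_type": "orchestration"})
    print(resp.status_code, resp.text[:200])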
root at xxxx-kolla-xxxx:~# curl -vvv http://10.10.10.10:8004/v1/e7f405fb2b7b4b029dfc48e06920eb92 * Trying 10.x.y.z * Connected to heat.xxxxxxxx (10.x.x.x.) port 8004 (#0) > GET /v1/e7f405fb2b7b4b029dfc48e06920eb92 HTTP/1.1 > Host: heat.xxxxxxxxx:8004 > User-Agent: curl/7.47.0 > Accept: */* > < HTTP/1.1 500 Internal Server Error < Content-Type: application/json < Content-Length: 4338 < X-Openstack-Request-Id: req-46afc474-682b-4938-8777-b3b4b6fcb973 < Date: Fri, 15 Feb 2019 13:08:04 GMT < {"explanation": "The server has either erred or is incapable of performing the requested operation.", "code": 500, In the heat_api log I see the request coming through, but it seems that there is just no reply. 2019-02-15 14:16:40.047 25 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET / Accept: process_request /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 2019-02-15 14:16:40.048 25 INFO eventlet.wsgi.server [-] 10.x.y.z - - [15/Feb/2019 14:16:40] "GET / HTTP/1.0" 300 327 0.001106 Should the api give some output when I do a http request? Any hints? Thanks a lot, its quite urgent.. Built 11.0.0 and current rocky-stable (11.0.0.1dev), same on both versions. ## with a horizon request (just click on Project -> Compute -> Orchestration -> Stacks 2019-02-15 14:22:22.250 22 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET /v1/beb568af3781471d94c3623805946ca3/stacks Accept: application/json process_request /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 2019-02-15 14:22:22.250 22 DEBUG heat.api.middleware.version_negotiation [-] Matched versioned URI. Version: 1.0 process_request /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:65 2019-02-15 14:22:23.003 22 DEBUG eventlet.wsgi.server [req-663d17e3-7e41-4c9e-a30a-a43ef18cf056 - - - - -] (22) accepted ('10.xxx.xxx.xxx', 51084) server /var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/wsgi.py:883 2019-02-15 14:22:23.004 22 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET / Accept: process_request /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 Best regards, Flo -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From mark at stackhpc.com Fri Feb 15 17:43:21 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 15 Feb 2019 17:43:21 +0000 Subject: [kayobe] Kayobe 5.0.0 released Message-ID: Hi, I'm pleased to announce the release of Kayobe 5.0.0. This release supports deployment of OpenStack Rocky. Lots of new features and fixes, see [1]. Thanks to everyone who contributed. Now onto Stein - we're catching up! [1] https://kayobe-release-notes.readthedocs.io/en/latest/rocky.html Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 15 17:46:34 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Feb 2019 17:46:34 +0000 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: <0DBA3B1F-E2BB-4B1D-94A2-8ADC9E9D1D23@redhat.com> References: <0DBA3B1F-E2BB-4B1D-94A2-8ADC9E9D1D23@redhat.com> Message-ID: <20190215174634.5lzq6tkfhgjcnekf@yuggoth.org> On 2019-02-15 08:23:31 +0000 (+0000), Sorin Sbarnea wrote: > Is there something we can do to prevent this in the future? 
> > Unrelated to openvswitch itself, it happened with other packages > too and will happen again. [...] As in how to prevent distros from updating their packages in ways which require some adjustments in our software? That's a big part of why our CI system works the way it does: so we know as soon as possible when we need to make modifications to keep our software compatible with distributions we care about. Hiding or deferring such failures just means that we get to spend more time ignoring our users who are getting regular updates from their operating system and are suddenly unable to use our software on it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Feb 15 18:01:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Feb 2019 18:01:16 +0000 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215072749.k34tdrnapanietk5@mthode.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> Message-ID: <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> On 2019-02-15 01:27:49 -0600 (-0600), Matthew Thode wrote: > Recently it was reported to us that requests had a recent release that > addressed a CVE (CVE-2018-18074). Requests has no stable branches so > the only way to update openstack stable branches is to update to 2.20.1 > in this case. [...] In the past we've assumed that folks consuming stable branches are doing so on distributions which are backporting security fixes for our dependencies anyway, so treating requirements for stable branches as a snapshot in time (even if that snapshot includes versions of dependencies with known vulnerabilities) is acceptable. If we need to start worrying about vulnerable dependencies on stable branches now, this implies quite a bit of extra work. I don't personally see any special need to make an exception for the requests library in this case. Will, e.g., CentOS or Ubuntu be replacing their LTS python-requests packages with 2.20.1 rather than just backporting a fix to the package versions they currently have? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jim at jimrollenhagen.com Fri Feb 15 18:06:21 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 15 Feb 2019 13:06:21 -0500 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> Message-ID: On Fri, Feb 15, 2019 at 1:02 PM Jeremy Stanley wrote: > On 2019-02-15 01:27:49 -0600 (-0600), Matthew Thode wrote: > > Recently it was reported to us that requests had a recent release that > > addressed a CVE (CVE-2018-18074). Requests has no stable branches so > > the only way to update openstack stable branches is to update to 2.20.1 > > in this case. > [...] > > In the past we've assumed that folks consuming stable branches are > doing so on distributions which are backporting security fixes for > our dependencies anyway, so treating requirements for stable > branches as a snapshot in time (even if that snapshot includes > versions of dependencies with known vulnerabilities) is acceptable. > Interesting, I didn't realize this. 
I know openstack-ansible and kolla both (optionally?) deploy from source, so maybe it's time to start talking about it. Or should those projects handle security fixes themselves when deploying from source? // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 15 18:17:12 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Feb 2019 18:17:12 +0000 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> Message-ID: <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> On 2019-02-15 13:06:21 -0500 (-0500), Jim Rollenhagen wrote: [...] > I know openstack-ansible and kolla both (optionally?) deploy from source, > so maybe it's time to start talking about it. Or should those projects > handle security fixes themselves when deploying from source? If they're aggregating non-OpenStack software (that is, acting as a full software distribution) then they ought to be tracking and managing vulnerabilities in that software. I don't see that as being the job of the Requirements team to manage it for them. This is especially true in cases where the output is something like server or container images which include plenty of other software not even tracked by the requirements repository at all, any of which could have security vulnerabilities as well. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From davanum at gmail.com Fri Feb 15 18:28:35 2019 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 15 Feb 2019 13:28:35 -0500 Subject: [tc] dims non-nomination for TC Message-ID: Folks, I will not be running in the upcoming TC election. It's been a great experience being on the TC and working with all of you. I am / will be still around working on OpenStack related activities, so don't hesitate to ping me if you need need me. Thanks, Dims -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Feb 15 18:32:34 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 15 Feb 2019 13:32:34 -0500 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> Message-ID: On Fri, Feb 15, 2019 at 1:18 PM Jeremy Stanley wrote: > On 2019-02-15 13:06:21 -0500 (-0500), Jim Rollenhagen wrote: > [...] > > I know openstack-ansible and kolla both (optionally?) deploy from source, > > so maybe it's time to start talking about it. Or should those projects > > handle security fixes themselves when deploying from source? > > If they're aggregating non-OpenStack software (that is, acting as a > full software distribution) then they ought to be tracking and > managing vulnerabilities in that software. I don't see that as being > the job of the Requirements team to manage it for them. This is > especially true in cases where the output is something like server > or container images which include plenty of other software not even > tracked by the requirements repository at all, any of which could > have security vulnerabilities as well. 
> That's fair - I had to ask, given I believe they just take what the requirements.txt file gives them. Hopefully those projects are aware of this policy already. :) // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Fri Feb 15 18:37:19 2019 From: mthode at mthode.org (Matthew Thode) Date: Fri, 15 Feb 2019 12:37:19 -0600 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> Message-ID: <20190215183719.niheo2y2ji3sqks3@mthode.org> On 19-02-15 13:32:34, Jim Rollenhagen wrote: > On Fri, Feb 15, 2019 at 1:18 PM Jeremy Stanley wrote: > > > On 2019-02-15 13:06:21 -0500 (-0500), Jim Rollenhagen wrote: > > [...] > > > I know openstack-ansible and kolla both (optionally?) deploy from source, > > > so maybe it's time to start talking about it. Or should those projects > > > handle security fixes themselves when deploying from source? > > > > If they're aggregating non-OpenStack software (that is, acting as a > > full software distribution) then they ought to be tracking and > > managing vulnerabilities in that software. I don't see that as being > > the job of the Requirements team to manage it for them. This is > > especially true in cases where the output is something like server > > or container images which include plenty of other software not even > > tracked by the requirements repository at all, any of which could > > have security vulnerabilities as well. > > > > That's fair - I had to ask, given I believe they just take what the > requirements.txt file gives them. Hopefully those projects are > aware of this policy already. :) > I bugged OSA about it. What I'd like to do is to do updates on a best-effort basis (in this case a user reported the bug to us). You can't rely on requirements to monitor upper-constraints for security issues. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dtroyer at gmail.com Fri Feb 15 18:45:33 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 15 Feb 2019 12:45:33 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <36bf8876-b9bf-27c5-ee5a-387ce8f6768b@gmail.com> <99fd20b3-caa6-4bdf-c5b8-129513f8a7d8@gmail.com> Message-ID: On Fri, Feb 15, 2019 at 2:42 AM Artem Goncharov wrote: > Dean, are you ok in receiving changes for switch to SDK before we get R1.0? I suppose we should just go ahead and do that. We've been burnt twice since we started doing the Network commands by changes, there was a lot of compatibility stuff on both sides for the last one, I don't want either project to have to do that again. Monty told me a while back that he didn't expect any more compat-impacting changes, if that is still true I'd say we should start... > Let us really just focus on few services as a target and then hopefully achieve more. What do you think? > My suggestion would be to focus on: > - novaclient > - glanceclient > - swiftclient I had been planning to do glance first for a number of reasons, one being the number of unique dependencies glanceclient bring in to OSC. 
Swift is a special case, we don't use swiftclient at all, I used what I had originally proposed to the SDK (a really long time ago now) for a low-level API and copied the useful functions directly from swift (this was even before swiftclient was a thing). The core of what is in OSC for swift is swift code, way out of date now, which is one reason so much of that API is not implemented. Both of those have notiecable returns to do first although it could also be argued that due to the above history of swift support in OSC not a lot of users are relying on that. For this to make sense as a community goal, I want to support projects that want to do similar things themselves too, although much of that is in plugins. Being around for spiritual support and getting the SDK updated would be the need from my point of view here. dt -- Dean Troyer dtroyer at gmail.com From jesse at odyssey4.me Fri Feb 15 18:57:31 2019 From: jesse at odyssey4.me (Jesse Pretorius) Date: Fri, 15 Feb 2019 18:57:31 +0000 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> Message-ID: On 2/15/19, 6:20 PM, "Jeremy Stanley" wrote: On 2019-02-15 13:06:21 -0500 (-0500), Jim Rollenhagen wrote: [...] > I know openstack-ansible and kolla both (optionally?) deploy from source, > so maybe it's time to start talking about it. Or should those projects > handle security fixes themselves when deploying from source? If I read the situation correctly, requests posted a CVE. Given that requests is a non-OpenStack python library , while it is part of our ecosystem, it is not directly curated by the OpenStack community. From the OSA standpoint, as long as upper-constraints updates the version to include the fix, we inherit it. I think that packagers and us, along with ansible-helm and kolla, all rely on that mechanism - however, if the stance is that non-OpenStack libraries are not something managed through the requirements team then we (OSA) can work around it because we have our own override mechanisms... but those are meant to only be for temporary purposes. Any OSA community member should be proposing changes to the requirements repo if something like this comes up. I would also hope that generally devstack tests would desire would be to test with the same thing that everyone is using to validate whether those new library versions might break things. Personally, I think a 'best effort' approach is good enough. If CVE's are discovered in the community, then ideally we should cater to test with the updated libraries as far up the chain as possible. We should all be making the effort, however, to adhere to https://governance.openstack.org/tc/reference/principles.html#openstack-first-project-team-second-company-third - improving OpenStack for the greater good of the community. From lauren at openstack.org Fri Feb 15 19:30:47 2019 From: lauren at openstack.org (Lauren Sell) Date: Fri, 15 Feb 2019 13:30:47 -0600 Subject: Why COA exam is being retired? 
In-Reply-To: <268F8E4B-0DBA-464A-B44C-A4023634EF94@openstack.org> References: <25c27f7e-80ec-2eb5-6b88-5627bc9f1f01@admin.grnet.gr> <16640d78-1124-a21d-8658-b7d9b2d50509@gmail.com> <5077d9dc-c4af-8736-0db3-2e05cbc1e992@gmail.com> <20190125152713.dxbxgkzoevzw35f2@csail.mit.edu> <1688640cbe0.27a5.eb5fa01e01bf15c6e0d805bdb1ad935e@jbryce.com> <268F8E4B-0DBA-464A-B44C-A4023634EF94@openstack.org> Message-ID: <9F7655EE-2B42-4AA2-B92E-C3FB368FF265@openstack.org> We had a very good discussion on the community call today and received some honest feedback about the impact of losing a vendor neutral certification exam. We also discussed why the current model is not sustainable from a resource perspective based on the current level of demand. We brainstormed some ideas for potential paths forward and have some follow up actions to gather more information. As a next step, we’ll schedule a follow up call in a few weeks to continue the conversation once we have more information. Notes in this etherpad: https://etherpad.openstack.org/p/coa-community-meeting > On Feb 5, 2019, at 7:23 AM, Lauren Sell wrote: > > Hi everyone, > > I had a few direct responses to my email, so I’m scheduling a community call for anyone who wants to discuss the COA and options going forward. > > Friday, February 15 @ 10:00 am CT / 15:00 UTC > > Zoom meeting: https://zoom.us/j/361542002 > Find your local number: https://zoom.us/u/akLt1CD2H > > For those who cannot attend, we will take notes in an etherpad and share back with the list. > > Best, > Lauren > > >> On Jan 25, 2019, at 12:34 PM, Lauren Sell wrote: >> >> Thanks very much for the feedback. When we launched the COA, the commercial market for OpenStack was much more crowded (read: fragmented), and the availability of individuals with OpenStack experience was more scarce. That indicated a need for a vendor neutral certification to test baseline OpenStack proficiency, and to help provide a target for training curriculum being developed by companies in the ecosystem. >> >> Three years on, the commercial ecosystem has become easier to navigate, and there are a few thousand professionals who have taken the COA and had on-the-job experience. As those conditions have changed, we've been trying to evaluate the best ways to use the Foundation's resources and time to support the current needs for education and certification. The COA in its current form is pretty resource intensive, because it’s a hands-on exam that runs in a virtual OpenStack environment. To maintain the exam (including keeping it current to OpenStack releases) would require a pretty significant investment in terms of time and money this year. From the data and demand we’re seeing, the COA did not seem to be a top priority compared to our investments in programs that push knowledge and training into the ecosystem like Upstream Institute, supporting OpenStack training partners, mentoring, and sponsoring internship programs like Outreachy and Google Summer of Code. >> >> That said, we’ve honestly been surprised by the response from training partners and the community as plans have been trickling out these past few weeks, and are open to discussing it. If there are people and companies who are willing to invest time and resources into a neutral certification exam, we could investigate alternative paths. It's very helpful to hear which education activities you find most valuable, and if you'd like to have a deeper discussion or volunteer to help, let me know and we can schedule a community call next week. 
>> >> Regardless of the future of the COA exam, we will of course continue to maintain the training marketplace at openstack.org to promote commercial training partners and certifications. There are also some great books and resources developed by community members listed alongside the community training. >> >> >>> From: Jay Bryant jungleboyj at gmail.com >>> Date: January 25, 2019 07:42:55 >>> Subject: Re: Why COA exam is being retired? >>> To: openstack-discuss at lists.openstack.org >>> >>>> On 1/25/2019 9:27 AM, Jonathan Proulx wrote: >>>>> On Fri, Jan 25, 2019 at 10:09:04AM -0500, Jay Pipes wrote: >>>>> :On 01/25/2019 09:09 AM, Erik McCormick wrote: >>>>> :> On Fri, Jan 25, 2019, 8:58 AM Jay Bryant >>>> >>>>> :> That's sad. I really appreciated having a non-vendory, ubiased, >>>>> :> community-driven option. >>>>> : >>>>> :+10 >>>>> : >>>>> :> If a vendor folds or moves on from Openstack, your certification >>>>> :> becomes worthless. Presumably, so long as there is Openstack, there >>>>> :> will be the foundation at its core. I hope they might reconsider. >>>>> : >>>>> :+100 >>>>> >>>>> So to clarify is the COA certifiaction going away or is the Foundation >>>>> just no longer administerign the exam? >>>>> >>>>> It would be a shame to loose a standard unbiased certification, but if >>>>> this is a transition away from directly providing the training and >>>>> only providing the exam specification that may be reasonable. >>>>> >>>>> -Jon >>>> >>>> When Allison e-mailed me last week they said they were having meetings >>>> to figure out how to go forward with the COA. The foundations partners >>>> were going to be offering the exam through September and they were >>>> working on communicating the status of things to the community. >>>> >>>> So, probably best to not jump to conclusions and wait for the official >>>> word from the community. >>>> >>>> - Jay >>> >>> >>> >> > From aschultz at redhat.com Fri Feb 15 20:14:43 2019 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 15 Feb 2019 13:14:43 -0700 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: <20190215174634.5lzq6tkfhgjcnekf@yuggoth.org> References: <0DBA3B1F-E2BB-4B1D-94A2-8ADC9E9D1D23@redhat.com> <20190215174634.5lzq6tkfhgjcnekf@yuggoth.org> Message-ID: On Fri, Feb 15, 2019 at 10:49 AM Jeremy Stanley wrote: > > On 2019-02-15 08:23:31 +0000 (+0000), Sorin Sbarnea wrote: > > Is there something we can do to prevent this in the future? > > > > Unrelated to openvswitch itself, it happened with other packages > > too and will happen again. > [...] > > As in how to prevent distros from updating their packages in ways > which require some adjustments in our software? That's a big part of > why our CI system works the way it does: so we know as soon as > possible when we need to make modifications to keep our software > compatible with distributions we care about. Hiding or deferring > such failures just means that we get to spend more time ignoring our > users who are getting regular updates from their operating system > and are suddenly unable to use our software on it. So it's not necessarily hiding it if you can be notified a head of time and it doesn't disrupt the world. Yes we need to fix it, no we shouldn't completely break the world on updates if possible. Being able to track these changes earlier in testing is one way that we can get ahead of the upcoming changes and get fixes in sooner. 
I know in tripleo we use the centos continous release repo + periodic to try and find these things before it breaks the world. I'm not sure of the specifics of this change and as to why the continuous release repository didn't help in this instance. As a practice I believe we generally don't like pinning things on master specifically for this reason however we do need to be aware of the risks to the system as a whole and how can we mitigate the potential breakages to allow development to continue while still allowing updates to function as intended. Thanks, -Alex > -- > Jeremy Stanley From fungi at yuggoth.org Fri Feb 15 20:24:41 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Feb 2019 20:24:41 +0000 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> Message-ID: <20190215202439.lestrzhp3vlryway@yuggoth.org> On 2019-02-15 18:57:31 +0000 (+0000), Jesse Pretorius wrote: [...] > I would also hope that generally devstack tests would desire would > be to test with the same thing that everyone is using to validate > whether those new library versions might break things. [...] Continuing to test the frozen set of stable branch dependencies most closely approximates, typically, the state of frozen contemporary packaged versions on LTS distros which are backporting select security fixes to the versions they already ship. By testing our release under development (master branch) with latest versions of our dependencies, we attempt to ensure that we work with the versions most likely to be present in upcoming distro releases. Updating dependencies on stable branches makes for a moving target, and further destabilizes testing on releases which have a hard time getting maintainers to keep their testing viable at all. We don't recommend running our stable branch source with the exact source code represented by the dependencies we froze at the time of release. It's expected they will be run within the scope of distributions which separately keep track of and patch security vulnerabilities in their contemporary forks of our dependencies as a small part of the overall running system. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Feb 15 20:35:19 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Feb 2019 20:35:19 +0000 Subject: [tc] dims non-nomination for TC In-Reply-To: References: Message-ID: <20190215203518.hqqfaao745o46bnx@yuggoth.org> On 2019-02-15 13:28:35 -0500 (-0500), Davanum Srinivas wrote: > I will not be running in the upcoming TC election. It's been a great > experience being on the TC and working with all of you. I am / will be > still around working on OpenStack related activities, so don't hesitate to > ping me if you need need me. It's been an honor to serve along side you on the TC, as well as to be represented by you. I'm pleased we'll still have you around to lean on; we benefit greatly from your expertise and viewpoints. Thanks for your service! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Feb 15 21:07:03 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 15 Feb 2019 15:07:03 -0600 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215202439.lestrzhp3vlryway@yuggoth.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> <20190215202439.lestrzhp3vlryway@yuggoth.org> Message-ID: <20190215210703.GA14654@sm-workstation> > > Updating dependencies on stable branches makes for a moving target, > and further destabilizes testing on releases which have a hard time > getting maintainers to keep their testing viable at all. We don't > recommend running our stable branch source with the exact source > code represented by the dependencies we froze at the time of > release. It's expected they will be run within the scope of > distributions which separately keep track of and patch security > vulnerabilities in their contemporary forks of our dependencies as a > small part of the overall running system. > -- > Jeremy Stanley It's sounding like we have two target audiences that have conflicting needs. This makes a lot of sense for distros, and I think for the most part, our policies so far have been in keeping with the needs of distro maintainers. It's also less burden on upstream requirements management, which I think is very important. The second group of folks are the deployment tools that are part of the community that attempt to use pure upstream source as much as possible to deploy stable versions of OpenStack services. My impressions is, due to lack of understanding (due to lack of communication (due to lack of knowing there was a need for communication)), most of these deployment projects expected the defined requirements and constraints to be maintained and accurate to get a decent installation of a given project. I have no suggests for how to improve this, but I thought it worth pointing out the issue. Sean From whayutin at redhat.com Fri Feb 15 21:27:35 2019 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 15 Feb 2019 14:27:35 -0700 Subject: [TripleO] openvswitch is broken - avoid rechecks in the next couple hours In-Reply-To: References: <0DBA3B1F-E2BB-4B1D-94A2-8ADC9E9D1D23@redhat.com> <20190215174634.5lzq6tkfhgjcnekf@yuggoth.org> Message-ID: On Fri, Feb 15, 2019 at 1:21 PM Alex Schultz wrote: > On Fri, Feb 15, 2019 at 10:49 AM Jeremy Stanley wrote: > > > > On 2019-02-15 08:23:31 +0000 (+0000), Sorin Sbarnea wrote: > > > Is there something we can do to prevent this in the future? > > > > > > Unrelated to openvswitch itself, it happened with other packages > > > too and will happen again. > > [...] > > > > As in how to prevent distros from updating their packages in ways > > which require some adjustments in our software? That's a big part of > > why our CI system works the way it does: so we know as soon as > > possible when we need to make modifications to keep our software > > compatible with distributions we care about. Hiding or deferring > > such failures just means that we get to spend more time ignoring our > > users who are getting regular updates from their operating system > > and are suddenly unable to use our software on it. > > So it's not necessarily hiding it if you can be notified a head of > time and it doesn't disrupt the world. 
Yes we need to fix it, no we > shouldn't completely break the world on updates if possible. Being > able to track these changes earlier in testing is one way that we can > get ahead of the upcoming changes and get fixes in sooner. I know in > tripleo we use the centos continous release repo + periodic to try and > find these things before it breaks the world. I'm not sure of the > specifics of this change and as to why the continuous release > repository didn't help in this instance. As a practice I believe we > generally don't like pinning things on master specifically for this > reason however we do need to be aware of the risks to the system as a > whole and how can we mitigate the potential breakages to allow > development to continue while still allowing updates to function as > intended. > > Thanks, > -Alex > It's my observation, not a fact that the rt repo is not a staging area for every update. The rt repo is well populated in advance of a minor update, but for updates in the same release I think it's very much hit or miss. http://mirror.centos.org/centos/7/rt/x86_64/Packages/?C=M;O=D > > > -- > > Jeremy Stanley > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Feb 15 22:19:40 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 15 Feb 2019 16:19:40 -0600 Subject: [dev][keystone] Launchpad blueprint reckoning In-Reply-To: <07f1042a-1f84-87a5-1505-38ce1705429c@gmail.com> References: <72413deb-161a-04a9-bdb7-b3e9f745ba7c@gmail.com> <1550142425.3159728.1657851088.24E20D91@webmail.messagingengine.com> <520cb398-f286-04fc-2e72-ac28a2dba125@gmail.com> <1550160032.3273902.1658015488.111ADCBA@webmail.messagingengine.com> <07f1042a-1f84-87a5-1505-38ce1705429c@gmail.com> Message-ID: Updating this based on everything that's happened in the last day or two. At this point, every blueprint that *isn't* targeted to Stein has been ported to an RFE bug report [0]. Each one should contain links to any relevant information that lived in the blueprint. Only stein-specific blueprints are left [1], which will be completed at the end of this release. There is a patch up to the contributor guide that describes the process for requesting new features [2]. Please have a look and let me know if anything is missing. The etherpad should be completely up-to-date [3] with pointers to all blueprints we touched, in case you want to see why we classified a particular blueprint a certain way. Most of the RFE bugs still require some investigation, but we can use our usual process for validating or invalidating them, with justification in comments. Don't hesitate to ask if you have questions about this work. [0] https://bugs.launchpad.net/keystone/+bugs?field.tag=rfe [1] https://blueprints.launchpad.net/keystone [2] https://review.openstack.org/#/c/637311/ [3] https://etherpad.openstack.org/p/keystone-blueprint-cleanup On 2/14/19 10:24 AM, Lance Bragstad wrote: > Sounds good to me. We should probably find a home for this information. > Somewhere in our contributor guide, perhaps? > > On 2/14/19 10:00 AM, Colleen Murphy wrote: >> On Thu, Feb 14, 2019, at 4:50 PM, Morgan Fainberg wrote: >>> I think a `git blame` or history of the deprecated release note is nice, it >>> centralizes out tracking of removed/deprecated items to the git log itself >>> rather than some external tracker that may or may not be available forever. >>> This way as long as the git repo is maintained, our tracking for a given >>> release is also tracked. 
>>> >>> Specs and bugs are nice, but the deprecated bug # for a given release is >>> fairly opaque. Other bugs might have more context in the bug, but if it's >>> just a list of commits, I don't see a huge win. >> I'm also +1 on just keeping it in the release notes. >> >>> On Thu, Feb 14, 2019, 10:28 Lance Bragstad >> >>>>> On Thu, Feb 14, 2019, 06:07 Colleen Murphy >>>>> What should we do about tracking "deprecated-as-of-*" and >>>>>> "removed-as-of-*" work? I never liked how this was done with blueprints but >>>>>> I'm not sure how we would do it with bugs. One tracking bug for all >>>>>> deprecated things in a cycle? One bug for each? A Trello/Storyboard board >>>>>> or etherpad? Do we even need to track it with an external tool - perhaps we >>>>>> can just keep a running list in a release note that we add to over the >>>>>> cycle? >>>>>> >>>> I agree. The solution that is jumping out at me is to track one bug for >>>> deprecated things and one for removed things per release, so similar to >>>> what we do now with blueprints. We would have to make sure we tag commits >>>> properly, so they are all tracked in the bug report. Creating a bug for >>>> everything that is deprecated or removed would be nice for capturing >>>> specific details, but it also feels like it will introduce more churn to >>>> the process. >>>> >>>> I guess I'm assuming there are users that like to read every commit that >>>> has deprecated something or removed something in a release. If we don't >>>> need to operate under that assumption, then a release note would do just >>>> fine and I'm all for simplifying the process. >>>> >> I think the reason we have release notes is so people *don't* have to read every commit. >> >> Colleen >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aschultz at redhat.com Fri Feb 15 23:28:17 2019 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 15 Feb 2019 16:28:17 -0700 Subject: [kolla][tripleo] python3 support for RHEL based systems Message-ID: Ahoy Kolla folks, In TripleO/RDO we've been working towards getting python3 support ready when the next CentOS is released. We've been working towards getting the packaging all ready and have a basic setup working on fedora 28. We would like to get a head start in Kolla and I've proposed two possible solutions to support the python2/python3 transition in the container builds. I am looking for input from the Kolla folks on which of the two methods would be preferred so we can move forward. The first method is to handle the package names in the Dockerfiles themselves. https://review.openstack.org/#/c/632156/ (kolla python3 packages) https://review.openstack.org/#/c/624838/ (kolla WIP for fedora support (not expected to merge)) https://review.openstack.org/#/c/629679/ (tripleo specific overrides) IMHO think this method allows for a better transition between the python2 and python3 packages as it just keys off the distro_python3 setting that we added as part of https://review.openstack.org/#/c/631091/. It was mentioned that this might be more complex to follow and verbose due to the if/elses being added into the Docker files. I think the if/else complexity goes away once CentOS7 support is dropped so it's a minor transition (as I believe the target is Train for full python3 support). 
The second method is to add in a new configuration option that allows for package names to be overrideable/replaced. See https://review.openstack.org/#/c/636403/ (kolla package-replace option) https://review.openstack.org/#/c/636457/ (kolla python3 specifics) https://review.openstack.org/#/c/624838/ (kolla WIP for fedora support would need to be supplied to test) https://review.openstack.org/#/c/636472/ (tripleo specific overrides) IMHO I actually like the package-replace option for a few package overrides. I don't think this is a cleaner implementation for replacing all the packages because it becomes less obvious what packages would actually be used and it becomes more complex to manage. This one feels more like it's pushing the complexity to the end user and would still be problematic when CentOS8 comes out all the packages would still need to be replaced in all the Docker files (in the next cycle?). My personal preference would be to proceed with the first method and maybe merge the package-replace functionality if other would find it beneficial. I would be interested to hear if other think that the second option would be better in the long term. Thanks, -Alex From dangtrinhnt at gmail.com Sat Feb 16 00:08:26 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 16 Feb 2019 09:08:26 +0900 Subject: [Tacker][Searchlight] Implementing Tacker plugin for Searchlight Message-ID: Hi Tacker team, The Searchlight team is trying to implement a Tacker plugin for Searchlight that has two features: - Index Tacker resource info & events into ElasticSearch - Create an event trigger engine to provide self-healing if needed There's nothing needed to be changed at the Tacker side but it would be great if you guys can help to suggest what resource info we should get and where we can get those. The proposed spec of the feature is here [1]. My initial plan is to use the tacker client to list the resources and the oslo_messeging bus to get the events/notifications. [1] https://review.openstack.org/#/c/636856/ Any concerns or questions let me know. Many thanks, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Sat Feb 16 07:21:57 2019 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Sat, 16 Feb 2019 08:21:57 +0100 Subject: [User-committee] UC Candidacy Message-ID: Greetings, I'd like to announce my candidacy for the OpenStack User Committee (UC). I'm involved with OpenStack since 2011 when CERN started to investigate a Cloud Infrastructure solution. I work in the design/deployment/operation of the CERN OpenStack Cloud (from 0 to +300k cores). My particular interest has been how to scale and operate large OpenStack deployments. I was an active member of the Large Deployment Team (LDT), a group of operators that shared ideas on how to scale OpenStack. I had the privilege to learn from a talented group of Operators on how they manage/run their infrastructures. I'm fortunate enough to have attended all OpenStack Summits since San Francisco (2012). I'm a regular speaker talking about my experience operating OpenStack at scale. Also, I serve regularly as a track chair, participated in few OpenStack Days, Ops Meetups and wrote several blog posts. I believe that the Operators/User community should have an active role in the design of the next features for the OpenStack/Open Infrastructure projects. 
If elected I will focus to engage the Operator/Users community with the different Development teams. It would be an honor to be an advocate of this great community and promote its mission. Thank you for your consideration, Belmiro Moreira -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sun Feb 17 06:09:42 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 17 Feb 2019 14:09:42 +0800 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> <355BD2CB-B1F9-43B1-943C-66553E90050F@gmail.com> <20190213122451.nyyllx555smf2mwy@pacific.linksys.moosehall> Message-ID: I definitely like the idea of a pop-up team in the house. I also tend to agree with the opinion about visibility for a pop-up team. We definitely need something with quick workflow. And allow people directly start communicate and working on the goal. Here's what I think we should do in advance. 1. To clear ways that we have to trigger cross-project development. To clear the path and to document. People have been trying a few formats to work on cross-project spec. After time, some prove useful and gaining momentum on the way. And some getting less. But we never try to stop at a point and declare what's the way we should do cross-project. And I'm pretty sure that's very confused for a lot of people who might not always as a part of this community. To give official recognition for ways that we think will cover the patch, and to officially announce the unofficial for methods (which we might already done with in ML/Meeting, but it's worth to bringing it to a more visible place.). And most importantly to have it documented in our develop guidelines, so message we send will be crystal clear for all of us all the time. If we can keep broadcast around and tell people how they can set-up for their requirement, than we actually get better chance to have more people trying the right way of engage with us and give really useful feedback. IMO, to use SIG as long-term (and repository demanded) path, and have a pop-up team for short term trace sounds like a fully covered for me. 2. Remove annoying channels. I like the idea if only use a single irc channel to communicate for multiple purpose I feel the lack of time to work through multiple channels and in multiple communities. Which is really annoying. So if we can have a general irc channel like openstack-dev, and allow popup-teams move to specific irc channel if they see fit. To consider have better visibility, a single channel for multiple team cores and leads to be awarded of some pop-up works will definitely a benefit. I mean, how many people actually capable to trace that much channel dally and happen to have time to join development? Isn't that's why we comes up with `openstack-discussion` for most mail to be in place? The down side is that we didn't have tag filter for irc. but that's also a good thing to actually know about works for pop-up teams. 3. 
Gaining visibility and official blessing As agree on comments about gaining visibility, a session for all SIGs and pop-up teams to shout out and raise their own most wanted help out. We can do it in one single session and give 3-5 minutes for each team. Any one also feel this is a good idea too? On Thu, Feb 14, 2019 at 9:15 PM Thierry Carrez wrote: > Adam Spiers wrote: > > Ildiko Vancsa wrote: > >>> On 2019. Feb 11., at 23:26, Adam Spiers wrote: > >>> [snip…] > >>> > >>>> To help with all this I would start the experiment with wiki pages > >>>> and etherpads as these are all materials you can point to without > >>>> too much formality to follow so the goals, drivers, supporters and > >>>> progress are visible to everyone who’s interested and to the TC to > >>>> follow-up on. > >>>> Do we expect an approval process to help with or even drive either > >>>> of the crucial steps I listed above? > >>> > >>> I'm not sure if it would help. But I agree that visibility is > >>> important, and by extension also discoverability. To that end I > >>> think it would be worth hosting a central list of popup initiatives > >>> somewhere which links to the available materials for each initiative. > >>> Maybe it doesn't matter too much whether that central list is simply > >>> a wiki page or a static web page managed by Gerrit under a governance > >>> repo or similar. > >> > >> I would start with a wiki page as it stores history as well and it’s > >> easier to edit. Later on if we feel the need to be more formal we can > >> move to a static web page and use Gerrit. > > > > Sounds good to me. Do we already have some popup teams? If so we could > > set this up straight away. > > To continue this discussion, I just set up a basic page with an example > team at: > > https://wiki.openstack.org/wiki/Popup_Teams > > Feel free to improve the description and example entry. > > -- > Thierry Carrez (ttx) > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Sun Feb 17 18:59:42 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 17 Feb 2019 13:59:42 -0500 Subject: [all][TC] 'Train' Technical Committee Nominations Open In-Reply-To: References: Message-ID: Kendall Nelson writes: > Hello All, > > Nominations for the Technical Committee positions (7 positions) > are now open and will remain open until Feb 19, 2019 23:45 UTC. > > All nominations must be submitted as a text file to the > openstack/election repository as explained on the election website[1]. > > Please note that the name of the file should match an email > address in the foundation member profile of the candidate. > > Also for TC candidates election officials refer to the community > member profiles at [2] please take this opportunity to ensure that > your profile contains current information. > > Candidates for the Technical Committee Positions: Any Foundation > individual member can propose their candidacy for an available, > directly-elected TC seat. > > The election will be held from Feb 26, 2019 23:45 UTC through to Mar 05, > 2019 23:45 UTC. The electorate are the Foundation individual members that > are also committers for one of the official teams[3] over the Feb 09, 2018 > 00:00 UTC - Feb 19, 2019 00:00 UTC timeframe (Rocky to > Stein), as well as the extra-ATCs who are acknowledged by the TC[4]. > > Please see the website[5] for additional details about this election. 
> Please find below the timeline: > > TC nomination starts @ Feb 12, 2019 23:45 UTC > TC nomination ends @ Feb 19, 2019 23:45 UTC > TC campaigning starts @ Feb 19, 2019 23:45 UTC > TC campaigning ends @ Feb 26, 2019 23:45 UTC > TC elections starts @ Feb 26, 2019 23:45 UTC > TC elections ends @ Mar 05, 2019 23:45 UTC > > If you have any questions please be sure to either ask them on the > mailing list or to the elections officials[6]. > > Thank you, > > -Kendall Nelson (diablo_rojo) > > [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy > [2] http://www.openstack.org/community/members/ > [3] https://governance.openstack.org/tc/reference/projects/ > [4] https://releases.openstack.org/stein/schedule.html#p-extra-atcs > [5] https://governance.openstack.org/election/ > [6] http://governance.openstack.org/election/#election-officials The deadline for nominations is approaching quickly, and I haven't seen any candidate threads started or nominations in the election repo. If you've been holding back, waiting to see who else might be running, I think it's fair to say you've waited long enough. Don't wait too long. -- Doug From mnaser at vexxhost.com Sun Feb 17 19:28:25 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 17 Feb 2019 14:28:25 -0500 Subject: [tc][elections] TC candidacy Message-ID: Hi everyone: First of all, thank you very much for giving me the opportunity to be part of the technical committee over the past term within the OpenStack project governance. I’ve also taken the vice chair role which I have had to serve for sometime during the time that our chair (Doug) was out of the office. In my candidacy email for my last term, there was a few things that I brought up which I think are still very important, as well as a few others which I think we’ve made great progress in. I’d like to start talking about those first. I still believe that it’s really important to have a contact with the users and deployers, something that we’re slowly getting better at. I have found that having some of those very large operators that sit in our technical committee meetings at the PTG super productive, because they bring an important perspective to the technical committee. As I help manage a public cloud and several private clouds all over the world, I’ve seen a lot of stories about OpenStack experiences, shortcomings and seeing how users consume OpenStack. It’s a very eye opening experience and it’s built up a strong basis to be able to formulate technical decisions and understanding the impact it has on all of our different users. I think this information is a really strong asset. I mentioned that we needed to work on improving our bridges with other communities such as Kubernetes. I’ve helped add and provide resources for OpenLab to help bring CI for the Kubernetes OpenStack cloud provider, worked on Magnum changes to help better integrate and test the project and even added resources which allowed Magnum to run full functional Kubernetes tests in it’s gate (with a work in progress of adding conformance tests). While the TC is about governance, I also think it’s important for us to get some work that gets the critical moving pieces running done. I’ve also worked with a few teams in order to help and somehow mediate a discussion to facilitate the split of the placement service and provided operator feedback, participated in the split meetings and helped to push things in the direction to make it happen. 
However, learning a few lessons in the way about how we might go about doing something like this in the future. I’ve also increased my engagement with our APAC community by speaking more often with them over WeChat. There is a tremendous amount of knowledge and wonderful community of people who want to participate. I’ve tried my best to also build a ‘bridge’ by sharing things that might make sense for them to see from our mailing lists (for example, the most recent one about our upcoming release name). For the upcoming term, I think that we should work on increasing our engagement with other communities. In addition, providing guidance and leadership for groups and projects that need help to merge their features, even if it involves finding the right person and asking them for a review. I’d like to personally have a more “hands-on” approach and try to work more closely with the teams, help shape and provide input into the direction of their project (while looking at the overall picture). I’d also like us to try and be more engaged with the other OSF projects such as Kata containers, StarlingX, Airship and Zuul. While the last one is our darling that was created out of our use, I think providing a bridge with the others would provide a lot of value as well. Thank you for giving me the opportunity to serve as part of your technical committee, I hope to be able to continue to help over the next term. Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From zbitter at redhat.com Sun Feb 17 22:53:34 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 18 Feb 2019 11:53:34 +1300 Subject: [tc][election] TC Candidacy Message-ID: <40d2a109-5435-a77c-e50c-d76dd292790f@redhat.com> Hello again friends, I'm running again for a second term on the Technical Committee. (For the record, I don't plan to seek a third term next year.) I've been part of the OpenStack community since 2012, and as well as a TC member I am also a core reviewer for Heat and (since very recently) Oslo. I think of the TC as effectively the 'core reviewer' team for a larger group of folks who participate in the governance of OpenStack (a group that I think we should be aiming to expand even further). I'm deeply grateful to the community for giving me the opportunity to work with what is a fantastic team of people. Here's what I've been up to in the past year on the TC: - I supported Julia's initiative to spread constructive code-review techniques by distilling some of our annual endless threads on code-review etiquette into a linkable page in the Project Teams Guide.[1] A number of people, in one case an entire team, told me that they'd tweaked their approach to code review after getting ideas from this document. (This feedback is *much* appreciated by the way, because from the TC perspective it can be very hard to tell the difference between achieving lazy consensus and shouting into the void.) - I wrote the draft of and edited contributions to what became the Vision for OpenStack Clouds,[2] contacted every affected team to explain what it meant for them individually, and presented it to the OSF Board in Berlin for their feedback as well. - I helped drive the definition of a process for determining which versions of Python3 should be tested in a release.[3] That should help us make the transitions smoothly in future, though it unfortunately started too late for Stein. 
- I've been actively engaged with members of the OSF Board on the topic of the process for adding new Open Infrastructure projects to the Foundation, by passing on feedback from foundation members and from the TC's own experience with evaluating project applications, and trying to publicise the board's position in the community.[4] It's hard to imagine being able to get any of those done without being a TC member. As I've written elsewhere,[5] because the TC is the only project-wide elected body, leading the community to all move in the same direction is something that cannot happen without the TC. I plan to continue trying to do that, and encouraging others to do the same. Thanks for your consideration. cheers, Zane. [1] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html [2] https://governance.openstack.org/tc/reference/technical-vision.html [3] https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html [4] https://www.zerobanana.com/archive/2018/06/14#osf-expansion [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001841.html From tony at bakeyournoodle.com Mon Feb 18 04:50:20 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Feb 2019 15:50:20 +1100 Subject: [tc][elections] TC candidacy In-Reply-To: References: Message-ID: <20190218045020.GA23039@thor.bakeyournoodle.com> On Sun, Feb 17, 2019 at 02:28:25PM -0500, Mohammed Naser wrote: > Hi everyone: > > First of all, thank you very much for giving me the opportunity to be part of > the technical committee over the past term within the OpenStack project > governance. I’ve also taken the vice chair role which I have had to serve for > sometime during the time that our chair (Doug) was out of the office. I apologise if I missed it but I don't see your candidacy proposed via the election repo[1]. I have proposed https://review.openstack.org/637452 Add TC candidacy for Mohammed Naser from mailing list. Please check that this is what you intended and either +1 if I got it right or take whatever corrective action is required. Yours Tony. [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mnaser at vexxhost.com Mon Feb 18 04:51:47 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 17 Feb 2019 23:51:47 -0500 Subject: [tc][elections] TC candidacy In-Reply-To: <20190218045020.GA23039@thor.bakeyournoodle.com> References: <20190218045020.GA23039@thor.bakeyournoodle.com> Message-ID: <74B1E408-C14A-4251-A14B-55E6A0E8E857@vexxhost.com> Hi Tony, I had already pushed it up here: https://review.openstack.org/#/c/637434/ Thanks Mohammed Sent from my iPhone > On Feb 17, 2019, at 11:50 PM, Tony Breeds wrote: > >> On Sun, Feb 17, 2019 at 02:28:25PM -0500, Mohammed Naser wrote: >> Hi everyone: >> >> First of all, thank you very much for giving me the opportunity to be part of >> the technical committee over the past term within the OpenStack project >> governance. I’ve also taken the vice chair role which I have had to serve for >> sometime during the time that our chair (Doug) was out of the office. > > I apologise if I missed it but I don't see your candidacy > proposed via the election repo[1]. > > I have proposed https://review.openstack.org/637452 > Add TC candidacy for Mohammed Naser from mailing list. 
Please check > that this is what you intended and either +1 if I got it right or take > whatever corrective action is required. > > Yours Tony. > > [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Feb 18 04:55:10 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Feb 2019 15:55:10 +1100 Subject: [tc][election] TC Candidacy In-Reply-To: <40d2a109-5435-a77c-e50c-d76dd292790f@redhat.com> References: <40d2a109-5435-a77c-e50c-d76dd292790f@redhat.com> Message-ID: <20190218045505.GB23039@thor.bakeyournoodle.com> On Mon, Feb 18, 2019 at 11:53:34AM +1300, Zane Bitter wrote: > Hello again friends, > > I'm running again for a second term on the Technical Committee. (For the > record, I don't plan to seek a third term next year.) I've been part of the > OpenStack community since 2012, and as well as a TC member I am also a core > reviewer for Heat and (since very recently) Oslo. I apologise if I missed it but I don't see your candidacy proposed via the election repo[1]. I have proposed https://review.openstack.org/637453 Add TC candidacy for Zane Bitter from mailing list. Please check that this is what you intended and either +1 if I got it right or take whatever corrective action is required. Yours Tony. [1] https://governance.openstack.org/election/#how-to-submit-a-candidacy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Feb 18 04:57:32 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Feb 2019 15:57:32 +1100 Subject: [tc][elections] TC candidacy In-Reply-To: <74B1E408-C14A-4251-A14B-55E6A0E8E857@vexxhost.com> References: <20190218045020.GA23039@thor.bakeyournoodle.com> <74B1E408-C14A-4251-A14B-55E6A0E8E857@vexxhost.com> Message-ID: <20190218045732.GC23039@thor.bakeyournoodle.com> On Sun, Feb 17, 2019 at 11:51:47PM -0500, Mohammed Naser wrote: > Hi Tony, > > I had already pushed it up here: > > https://review.openstack.org/#/c/637434/ /me wipes egg from face. I really don't know how I missed that. Sorry. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Feb 18 05:05:10 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Feb 2019 16:05:10 +1100 Subject: [tc][election] TC Candidacy In-Reply-To: <20190218045505.GB23039@thor.bakeyournoodle.com> References: <40d2a109-5435-a77c-e50c-d76dd292790f@redhat.com> <20190218045505.GB23039@thor.bakeyournoodle.com> Message-ID: <20190218050509.GD23039@thor.bakeyournoodle.com> On Mon, Feb 18, 2019 at 03:55:05PM +1100, Tony Breeds wrote: > On Mon, Feb 18, 2019 at 11:53:34AM +1300, Zane Bitter wrote: > > Hello again friends, > > > > I'm running again for a second term on the Technical Committee. (For the > > record, I don't plan to seek a third term next year.) I've been part of the > > OpenStack community since 2012, and as well as a TC member I am also a core > > reviewer for Heat and (since very recently) Oslo. > > I apologise if I missed it but I don't see your candidacy > proposed via the election repo[1]. > > I have proposed https://review.openstack.org/637453 > Add TC candidacy for Zane Bitter from mailing list. 
Please check > that this is what you intended and either +1 if I got it right or take > whatever corrective action is required. As with Mohammed's I see I missed your own review. https://review.openstack.org/#/c/637439/ I'm sorry. I have abandoned my review, and approved 637439. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From soulxu at gmail.com Mon Feb 18 05:16:38 2019 From: soulxu at gmail.com (Alex Xu) Date: Mon, 18 Feb 2019 13:16:38 +0800 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> References: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> Message-ID: Add the maillist back, I missed from the previous reply... melanie witt 于2019年2月16日周六 上午12:22写道: > Thanks for the reply, Alex. Response is inline. > > On Fri, 15 Feb 2019 15:49:19 +0800, He Jie Xu wrote: > > We need to ensure all the APIs check the policy with real instance's > > project and user id, like this > > > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/evacuate.py#L85-L88 > > Otherwise, any user can get any other tenant's instance. > > > > Some of APIs doesn't check like this, for example: > > > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/attach_interfaces.py#L55 > > While I agree with this (policy check instance project/user instead of > request context project/user), I'm not sure it's related to the > project_only=True at the database layer. The project_only=True at the > database layer only enforces request context project, which is what the > second example here will also do, if policy contains > 'project_id:%(project_id)s'. It seems to me that if we remove > project_only=True from the database layer, we will get proper > enforcement of request context project/user by the existing policy > checks, if the policy is configured as such. And we default policy to > either rule:admin_api or rule:admin_or_owner: > > https://docs.openstack.org/nova/latest/configuration/sample-policy.html > > So, it seems to me that changing policy enforcement from request context > project/user => instance project/user would be a separate change. Please > let me know if I'm misunderstanding you. > > One thing I do notice though, in your first example, is that the > get_instance is done _before_ the policy check, which would need to be > moved after the policy check, in the same change that would remove > project_only=True. So I'm glad you pointed that out. > Emm...no, the first example is the right example. project_only=True will ensure db call return the instance belong to the project in the request context. If project_only=False, the db call may return other project's instance. Then when the rule is 'project_id:%(project_id)s' and the target is instance's project_id, the policy enforcement will ensure the request context's project id match the instance's project_id, then the user won't get other project's instance. > > > I'm trying to memory why we didn't do that in the beginning, sounds like > > we refused to support user-id based policy. But > > in the end for the backward-campatible for the user like CERN, we > > support user-id based policy for few APIs. 
> > > https://review.openstack.org/#/q/topic:bp/user_id_based_policy_enforcement+(status:open+OR+status:merged) > > > > This is probably why some APIs checks policy with real instance's > > project id and user id and some APIs not. > > > > melanie witt > 于2019年2 > > 月15日周五 上午1:23写道: > > > > Hey all, > > > > Recently, we had a customer try the following command as a non-admin > > with a policy role granted in policy.json to allow live migrate: > > > > "os_compute_api:os-migrate-server:migrate_live": "rule:admin_api > or > > role:Operator" > > > > The scenario is that they have a server in project A and a user in > > project B with role:Operator and the user makes a call to live > migrate > > the server. > > > > But when they call the API, they get the following error response: > > > > {"itemNotFound": {"message": "Instance could not > be > > found.", "code": 404}} > > > > A superficial look through the code shows that the live migrate > should > > work, because we have appropriate policy checks in the API, and the > > request makes it past those checks because the policy.json has been > set > > correctly. > > > > A common pattern in our APIs is that we first compute_api.get() the > > instance object and then we call the server action (live migrate, > stop, > > start, etc) with it after we retrieve it. In this scenario, the > > compute_api.get() fails with NotFound. > > > > And the reason it fails with NotFound is because, much lower level, > at > > the DB layer, we have a keyword arg called 'project_only' which, when > > True, will scope a database query to the RequestContext.project_id > > only. > > We have hard-coded 'project_only=True' for the instance get query. > > > > So, when the user in project B with role:Operator tries to retrieve > the > > instance record in project A, with appropriate policy rules set, it > > will > > fail because 'project_only=True' and the request context is project > B, > > while the instance is in project A. > > > > My question is: can we get rid of the hard-coded 'project_only=True' > at > > the database layer? This seems like something that should be > > enforced at > > the API layer and not at the database layer. It reminded me of an > > effort > > we had a few years ago where we removed other hard-coded policy > > enforcement from the database layer [1][2]. I've uploaded a WIP > > patch to > > demonstrate the proposed change [3]. > > > > Can anyone think of any potential problems with doing this? I'd like > to > > be able to remove it so that operators are able use policy to allow > > non-admin users with appropriately configured roles to run server > > actions. > > > > Cheers, > > -melanie > > > > [1] > > > https://blueprints.launchpad.net/nova/+spec/nova-api-policy-final-part > > [2] > > > https://review.openstack.org/#/q/topic:bp/nova-api-policy-final-part+(status:open+OR+status:merged) > > [3] https://review.openstack.org/637010 > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Feb 18 08:16:00 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 18 Feb 2019 09:16:00 +0100 Subject: [tc][election] TC candidacy Message-ID: Hi everyone, I've been on the OpenStack Technical Committee forever, and my original plan was to not run for re-election this cycle. Thanks to name recognition, incumbents have traditionally easily been reelected. 
Stepping down is the only way to make room for new leaders to emerge, a process that is necessary to keep alignment between the Technical Committee members and the active contributors that are their constituency. So, having room for new members to be elected is necessary. At the same time, there is value in historical perspective, and experienced members are useful to help newcomers into their role. We also need to be careful about the message that too many people stepping down would send. So, it's a mixed bag -- we need enough, but not too many people stepping down. Over the past cycles, we introduced about 3 new people at every election. I think renewing 3 or 4 people every 6 months in a committee of 13 members is the right balance. We already have three experienced members who announced they would not be running again this cycle, so I feel like it might not be the best moment for me to step down. With the OSF shifting from being solely about OpenStack to more generally tackling the open infrastructure space, my focus has certainly expanded lately. But I don't want to feed the trolls arguing that we are leaving OpenStack behind. I'm not done with the OpenStack TC yet, and would like to advance two more things before stepping down. OpenStack is in the middle of a transition -- from hyped project driven by startups and big service providers to a more stable project led by people running it or having a business depending on it. A lot of the systems and processes that I helped put in place to cope with explosive growth are now holding us back. We need to adapt them to a new era, and I feel like I can help bringing the original perspective of why those systems were put in place, so hopefully we do not end up throwing the baby with the bath water. I feel like there is also a lot of work to do to better present what we produce. The work on the OpenStack map and the "software" pages on the openstack.org website is far from complete. OpenStack is still much harder to navigate and understand for a newcomer than it should be. I want to continue that work for one more year. So I am announcing my candidacy for a position on the OpenStack Technical Committee in the upcoming election. Thank you for your consideration ! -- Thierry Carrez (ttx) From mark at stackhpc.com Mon Feb 18 08:52:56 2019 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 18 Feb 2019 08:52:56 +0000 Subject: [kolla][tripleo] python3 support for RHEL based systems In-Reply-To: References: Message-ID: On Fri, 15 Feb 2019 at 23:29, Alex Schultz wrote: > Ahoy Kolla folks, > > In TripleO/RDO we've been working towards getting python3 support > ready when the next CentOS is released. We've been working towards > getting the packaging all ready and have a basic setup working on > fedora 28. We would like to get a head start in Kolla and I've > proposed two possible solutions to support the python2/python3 > transition in the container builds. I am looking for input from the > Kolla folks on which of the two methods would be preferred so we can > move forward. > > The first method is to handle the package names in the Dockerfiles > themselves. 
> > https://review.openstack.org/#/c/632156/ (kolla python3 packages) > https://review.openstack.org/#/c/624838/ (kolla WIP for fedora support > (not expected to merge)) > https://review.openstack.org/#/c/629679/ (tripleo specific overrides) > > IMHO think this method allows for a better transition between the > python2 and python3 packages as it just keys off the distro_python3 > setting that we added as part of > https://review.openstack.org/#/c/631091/. It was mentioned that this > might be more complex to follow and verbose due to the if/elses being > added into the Docker files. I think the if/else complexity goes away > once CentOS7 support is dropped so it's a minor transition (as I > believe the target is Train for full python3 support). > > The second method is to add in a new configuration option that allows > for package names to be overrideable/replaced. See > > https://review.openstack.org/#/c/636403/ (kolla package-replace option) > https://review.openstack.org/#/c/636457/ (kolla python3 specifics) > https://review.openstack.org/#/c/624838/ (kolla WIP for fedora support > would need to be supplied to test) > https://review.openstack.org/#/c/636472/ (tripleo specific overrides) > > IMHO I actually like the package-replace option for a few package > overrides. I don't think this is a cleaner implementation for > replacing all the packages because it becomes less obvious what > packages would actually be used and it becomes more complex to manage. > This one feels more like it's pushing the complexity to the end user > and would still be problematic when CentOS8 comes out all the packages > would still need to be replaced in all the Docker files (in the next > cycle?). > > My personal preference would be to proceed with the first method and > maybe merge the package-replace functionality if other would find it > beneficial. I would be interested to hear if other think that the > second option would be better in the long term. > > Thanks, > -Alex > Thanks for bringing this up Alex. I tend to agree with your thinking that although the package-replace option looks cleaner, it adds hidden complexity. The if/else approach is a bit of an eyesore now, but should not be present for long. My vote is for the first option. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Feb 18 09:59:56 2019 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Mon, 18 Feb 2019 09:59:56 +0000 Subject: [oslo] How to properly cleanup FakeExchangeManager between test cases Message-ID: <1550483993.10501.6@smtp.office365.com> Hi, Nova has tests that are using the oslo.messaging FakeDriver implementation by providing the 'fake://' transport url. The FakeExchangeManager keeps a class level dict of FakeExchange objects keyed by the communication topic[1]. This can cause that an RPC message sent by a test case is received by a later test case running in the same process. I did not find any proper way to clean up the FakeExchangeManager at the end of each test case to prevent this. To fix the problem in the nova test I did a hackish cleanup by overwriting FakeExchangeManager._exchanges directly during test case cleanup [2]. Is there a better way to do this cleanup? 
Cheers, gibi [1] https://github.com/openstack/oslo.messaging/blob/0a784d260465bc7ba878bedeb5c1f184e5ff6e2e/oslo_messaging/_drivers/impl_fake.py#L149 [2] https://review.openstack.org/#/c/637233/1/nova/tests/fixtures.py From rtnair at gmail.com Sun Feb 17 02:18:09 2019 From: rtnair at gmail.com (Raja T) Date: Sun, 17 Feb 2019 07:48:09 +0530 Subject: Help Needed - Mirantis - All Services Down in 2 Controllers Message-ID: Hello All, Using Mirantis - Mitaka on Ubuntu 14.04 Currently I'm facing this issue: almost all services are down on 2 controllers, and are up only in one node. Pl find below pcs status output. Any help to recover this is highly appreciated. root at node-11:/var/log# pcs status Cluster name: WARNING: corosync and pacemaker node names do not match (IPs used in setup?) Last updated: Sun Feb 17 02:12:59 2019 Last change: Sat Feb 16 08:56:52 2019 by root via crm_attribute on node-1.mydomain.com Stack: corosync Current DC: node-10.mydomain.com (version 1.1.14-70404b0) - partition with quorum 3 nodes and 46 resources configured Online: [ node-1.mydomain.com node-10.mydomain.com node-11.mydomain.com ] Full list of resources: Clone Set: clone_p_vrouter [p_vrouter] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] vip__management (ocf::fuel:ns_IPaddr2): Started node-1.mydomain.com vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-1.mydomain.com vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-1.mydomain.com vip__public (ocf::fuel:ns_IPaddr2): Started node-1.mydomain.com Clone Set: clone_p_haproxy [p_haproxy] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_p_mysqld [p_mysqld] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Master/Slave Set: master_p_rabbitmq-server [p_rabbitmq-server] Masters: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_neutron-openvswitch-agent [neutron-openvswitch-agent] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_neutron-l3-agent [neutron-l3-agent] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_neutron-metadata-agent [neutron-metadata-agent] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_neutron-dhcp-agent [neutron-dhcp-agent] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_p_heat-engine [p_heat-engine] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] sysinfo_node-1.mydomain.com (ocf::pacemaker:SysInfo): Started node-1.mydomain.com Clone Set: clone_p_dns [p_dns] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Master/Slave Set: master_p_conntrackd [p_conntrackd] Masters: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_p_ntp [p_ntp] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] Clone Set: clone_ping_vip__public [ping_vip__public] Started: [ node-1.mydomain.com ] Stopped: [ node-10.mydomain.com node-11.mydomain.com ] sysinfo_node-10.mydomain.com (ocf::pacemaker:SysInfo): Stopped sysinfo_node-11.mydomain.com (ocf::pacemaker:SysInfo): Stopped PCSD Status: node-1.mydomain.com member (172.17.6.24): Offline node-10.mydomain.com member (172.17.6.32): Offline node-11.mydomain.com member 
(172.17.6.33): Offline Thanks! Raja. -- :^) -------------- next part -------------- An HTML attachment was scrubbed... URL: From m2elsakha at gmail.com Sun Feb 17 18:30:41 2019 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Sun, 17 Feb 2019 13:30:41 -0500 Subject: [User-committee] UC Candidacy In-Reply-To: References: Message-ID: Thank you. Your candidacy is confirmed On Sat, Feb 16, 2019 at 2:24 AM Belmiro Moreira < moreira.belmiro.email.lists at gmail.com> wrote: > Greetings, > I'd like to announce my candidacy for the OpenStack User Committee (UC). > > I'm involved with OpenStack since 2011 when CERN started to investigate a > Cloud Infrastructure solution. > I work in the design/deployment/operation of the CERN OpenStack Cloud > (from 0 to +300k cores). > > My particular interest has been how to scale and operate large OpenStack > deployments. > I was an active member of the Large Deployment Team (LDT), a group of > operators that shared ideas on how to scale OpenStack. I had the privilege > to learn from a talented group of Operators on how they manage/run their > infrastructures. > > I'm fortunate enough to have attended all OpenStack Summits since San > Francisco (2012). I'm a regular speaker talking about my experience > operating OpenStack at scale. > Also, I serve regularly as a track chair, participated in few OpenStack > Days, Ops Meetups and wrote several blog posts. > > I believe that the Operators/User community should have an active role in > the design of the next features for the OpenStack/Open Infrastructure > projects. > If elected I will focus to engage the Operator/Users community with the > different Development teams. > It would be an honor to be an advocate of this great community and promote > its mission. > > Thank you for your consideration, > Belmiro Moreira > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Mon Feb 18 04:46:44 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Mon, 18 Feb 2019 04:46:44 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , To resolve previous mentioned issue I am trying to use list_join function in nested.yaml.Stack is created successfully .but the resources are not created .Can you please let me know what might be the issue here with list_join. 
root at cic-1:~# cat nested.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: names: type: comma_delimited_list index: type: number resources: neutron_Network_1: type: OS::Neutron::Net properties: name: list_join: [",",['Network','{get_param: [names, {get_param: index}]','1']] # str_replace: # template: net%-set%-number% # params: # "net%": "Network" # "set%": "A" # "number%": "1" neutron_Network_2: type: OS::Neutron::Net properties: name: list_join: [",",['Network','{get_param: [names, {get_param: index}]','1']] # str_replace: # template: net%-set%-number% # params: # "net%": "Network" # "set%": "A" #"number%": "2" root at cic-1:~# root at cic-1:~# cat main.yaml heat_template_version: 2015-04-30 description: Shows how to look up list/map values by group index parameters: sets: type: comma_delimited_list label: sets default: "A,B,C" net_names: type: json default: repeat: for each: <%set%>: {get_param: sets} template: - network1: Network<%set>1 network2: Network<%set>2 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: nested.yaml properties: # Note you have to pass the index and the entire list into the # nested template, resolving via %index% doesn't work directly # in the get_param here index: "%index%" names: {get_param: sets} outputs: all_values: value: {get_attr: [rg, value]} root at cic-1:~# -----Original Message----- From: Harald Jensås [mailto:hjensas at redhat.com] Sent: Friday, February 15, 2019 1:36 AM To: NANTHINI A A Cc: openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Wed, 2019-02-13 at 13:48 +0000, NANTHINI A A wrote: > Hi , > As per your suggested change ,I am able to create network > A1,network A2 ; in second iteration network b1,network b2 .But I want > to reduce number of lines of variable params.hence tried using repeat > function .But it is not working .Can you please let me know what is > wrong here . > > I am getting following error . > root at cic-1:~# heat stack-create test2 -f main.yaml WARNING (shell) > "heat stack-create" is deprecated, please use "openstack stack create" > instead > ERROR: AttributeError: : resources.rg: : 'NoneType' object has no > attribute 'parameters' > > root at cic-1:~# cat main.yaml > heat_template_version: 2015-04-30 > > description: Shows how to look up list/map values by group index > > parameters: > sets: > type: comma_delimited_list > label: sets > default: "A,B,C" > net_names: > type: json > default: > repeat: > for each: > <%set%>: {get_param: sets} > template: > - network1: Network<%set>1 > network2: Network<%set>2 > I don't think you can use the repeat function in the parameters section. You could try using a OS::Heat::Value resource in the resources section below to iterate over the sets parameter. Then use get_attr to read the result of the heat value and pass that as names to nested.yaml. > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 3 > resource_def: > type: nested.yaml > properties: > # Note you have to pass the index and the entire list into > the > # nested template, resolving via %index% doesn't work > directly > # in the get_param here > index: "%index%" > names: {get_param: net_names} Alternatively you could put the repeat function here? names: repeat: for each: [ ... 
] > > outputs: > all_values: > value: {get_attr: [rg, value]} > root at cic-1:~# > > > Thanks in advance. > > > Regards, > A.Nanthini > > From: Rabi Mishra [mailto:ramishra at redhat.com] > Sent: Wednesday, February 13, 2019 9:07 AM > To: NANTHINI A A > Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat api > > > On Tue, Feb 12, 2019 at 7:48 PM NANTHINI A A < > nanthini.a.a at ericsson.com> wrote: > > Hi , > > I followed the example given in random.yaml .But getting below > > error .Can you please tell me what is wrong here . > > > > root at cic-1:~# heat stack-create test -f main.yaml WARNING (shell) > > "heat stack-create" is deprecated, please use "openstack stack > > create" instead > > ERROR: Property error: : > > resources.rg.resources[0].properties: : Unknown > > Property names root at cic-1:~# cat main.yaml > > heat_template_version: 2015-04-30 > > > > description: Shows how to look up list/map values by group index > > > > parameters: > > net_names: > > type: json > > default: > > - network1: NetworkA1 > > network2: NetworkA2 > > - network1: NetworkB1 > > network2: NetworkB2 > > > > > > resources: > > rg: > > type: OS::Heat::ResourceGroup > > properties: > > count: 3 > > resource_def: > > type: nested.yaml > > properties: > > # Note you have to pass the index and the entire list into > > the > > # nested template, resolving via %index% doesn't work > > directly > > # in the get_param here > > index: "%index%" > > > names: {get_param: net_names} > > property name should be same as parameter name in you nested.yaml > > > > outputs: > > all_values: > > value: {get_attr: [rg, value]} > > root at cic-1:~# cat nested.yaml > > heat_template_version: 2013-05-23 > > description: > > This is the template for I&V R6.1 base configuration to create > > neutron resources other than sg and vm for vyos vms > > parameters: > > net_names: > > changing this to 'names' should fix your error. > > type: json > > index: > > type: number > > resources: > > neutron_Network_1: > > type: OS::Neutron::Net > > properties: > > name: {get_param: [names, {get_param: index}, network1]} > > > > > > Thanks, > > A.Nanthini > > > > From: Rabi Mishra [mailto:ramishra at redhat.com] > > Sent: Tuesday, February 12, 2019 6:34 PM > > To: NANTHINI A A > > Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org > > Subject: Re: [Heat] Reg accessing variables of resource group heat > > api > > > > On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A < > > nanthini.a.a at ericsson.com> wrote: > > > Hi , > > > May I know in the following example given > > > > > > parameters: > > > resource_name_map: > > > - network1: foo_custom_name_net1 > > > network2: foo_custom_name_net2 > > > - network1: bar_custom_name_net1 > > > network2: bar_custom_name_net2 > > > what is the parameter type ? > > > > > > json > > > > > -- > Regards, > Rabi Mishra > From shyambiradarsggsit at gmail.com Mon Feb 18 12:06:31 2019 From: shyambiradarsggsit at gmail.com (Shyam Biradar) Date: Mon, 18 Feb 2019 17:36:31 +0530 Subject: Rocky undercloud/director installation getting stuck -- keepalived container issue Message-ID: Hi, Undercloud installation getting stuck due to following issue. In short, keepalived container is not starting, hence installation going in loop. https://bugs.launchpad.net/tripleo/+bug/1816349 Any comments/ideas are welcome. Thanks & Regards, Shyam Biradar, Email: shyambiradarsggsit at gmail.com, Contact: +91 8600266938. 
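Pulling the Heat ResourceGroup thread above together: the main reason the list_join variant produces unusable names is that '{get_param: [names, {get_param: index}]}' is written inside quotes, so Heat treats it as a literal string rather than an intrinsic function. Two smaller issues: the ',' delimiter would yield 'Network,A,1' instead of 'NetworkA1', and (if memory serves) list_join is only available from heat_template_version 2014-10-16 onward, while the nested template declares 2013-05-23. A sketch of the nested template using the function nesting already suggested in the thread -- illustrative and untested:

heat_template_version: 2014-10-16
description: sketch of the nested template discussed above
parameters:
  names:
    type: comma_delimited_list
  index:
    type: number
resources:
  neutron_Network_1:
    type: OS::Neutron::Net
    properties:
      # builds e.g. "NetworkA1" when names is "A,B,C" and index is 0
      name:
        list_join:
          - ''
          - - Network
            - {get_param: [names, {get_param: index}]}
            - '1'
  neutron_Network_2:
    type: OS::Neutron::Net
    properties:
      name:
        list_join:
          - ''
          - - Network
            - {get_param: [names, {get_param: index}]}
            - '2'

As Rabi points out above, the property passed from the ResourceGroup must use the same name ('names') as the nested template's parameter.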
-------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Feb 18 13:47:11 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 18 Feb 2019 08:47:11 -0500 Subject: [oslo] How to properly cleanup FakeExchangeManager between test cases In-Reply-To: <1550483993.10501.6@smtp.office365.com> References: <1550483993.10501.6@smtp.office365.com> Message-ID: Balázs Gibizer writes: > Hi, > > Nova has tests that are using the oslo.messaging FakeDriver > implementation by providing the 'fake://' transport url. The > FakeExchangeManager keeps a class level dict of FakeExchange objects > keyed by the communication topic[1]. This can cause that an RPC message > sent by a test case is received by a later test case running in the > same process. I did not find any proper way to clean up the > FakeExchangeManager at the end of each test case to prevent this. To > fix the problem in the nova test I did a hackish cleanup by overwriting > FakeExchangeManager._exchanges directly during test case cleanup [2]. > Is there a better way to do this cleanup? > > Cheers, > gibi > > [1] > https://github.com/openstack/oslo.messaging/blob/0a784d260465bc7ba878bedeb5c1f184e5ff6e2e/oslo_messaging/_drivers/impl_fake.py#L149 > [2] https://review.openstack.org/#/c/637233/1/nova/tests/fixtures.py It sounds like we need a test fixture to manage that, added to oslo.messaging so if the internal implementation of the fake driver changes we can update the fixture without breaking consumers of the library (and where it would be deemed "safe" to modify private properties of the class). I'm sure the Oslo team would be happy to review patches to create that. -- Doug From openstack at nemebean.com Mon Feb 18 16:07:22 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 Feb 2019 10:07:22 -0600 Subject: [oslo] Feature freeze this week Message-ID: <80ee1143-fd93-b0b1-1e15-dcce10661e01@nemebean.com> Just a reminder that Oslo feature freeze happens this week. Yes, it's earlier than everyone else, and if you're wondering why, we have a policy[1] that discusses it. The main thing is that if you have features you want to get into Oslo libraries this cycle, please make sure they merge by Friday. After that we'll need to go through the FFE process and there's no guarantee we can land them. Feel free to ping us on IRC if you need reviews. Thanks. -Ben 1: http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html From jpetrini at coredial.com Mon Feb 18 16:22:33 2019 From: jpetrini at coredial.com (John Petrini) Date: Mon, 18 Feb 2019 11:22:33 -0500 Subject: Help Needed - Mirantis - All Services Down in 2 Controllers In-Reply-To: References: Message-ID: You should try to determine what caused this condition. In all likelihood you ran out of resources on these nodes (memory is a likely culprit). Restarting pacemaker on the nodes where services are no longer running should bring them back up but you probably want to check that the nodes are back in a good state before you do so. You can also reboot the nodes but keep in mind that if you're running ceph and your ceph mons live on your controllers you can only have one ceph mon offline at a time. Any less than two active monitors and your storage will go offline. 
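A rough checklist along the lines John describes, for anyone in the same situation (Fuel/Mitaka-era commands on Ubuntu 14.04; a sketch rather than an exact runbook, adjust to your environment):

free -m                                    # check whether node-10/node-11 ran out of memory
grep -i 'killed process' /var/log/syslog   # look for OOM-killer activity
service pacemaker restart                  # restart pacemaker on an affected node
pcs status                                 # confirm resources are coming back
pcs resource cleanup                       # clear lingering failed-resource state if needed
ceph -s                                    # if the ceph mons live on these controllers, verify
                                           # quorum before taking a second node down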
From mjturek at linux.vnet.ibm.com Mon Feb 18 17:04:04 2019 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 18 Feb 2019 12:04:04 -0500 Subject: [ironic] Ironic Stein Mid-Cycle Notes Message-ID: <80d090fb-1733-c82c-6cb0-6686109bb088@linux.vnet.ibm.com> Ironic had it's virtual mid-cycle call on January 21st and 22nd. Discussions were around how far along we were with priorities this cycle, making the project more container friendly, Smart NIC support, bug triaging, and more. If you are interested in seeing more about these discussions, we suggest that you check out the etherpad where we kept notes. https://etherpad.openstack.org/p/ironic-stein-midcycle If you have anything you'd like to bring up about the notes or the mid-cycle, please reach out here or on #openstack-ironic Thanks! From openstack at nemebean.com Mon Feb 18 17:11:22 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 Feb 2019 11:11:22 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 In-Reply-To: References: Message-ID: On 2/15/19 11:23 AM, Colleen Murphy wrote: > ## Milestone Outlook > > https://releases.openstack.org/stein/schedule.html > > Feature freeze as well as final client release are both in 3 weeks. Non-client release deadline is in two weeks, which means changes needed for keystonemiddleware, keystoneauth, and the oslo libraries need to be proposed and reviewed ASAP. Reading this, it occurred to me that we probably shouldn't be applying Oslo feature freeze to co-owned libraries. Most of those function more as non-client libraries than the oslo.* libs, so there's no need to freeze early. I've proposed https://review.openstack.org/637588 to reflect that in the Oslo policy. If you have an opinion on it please vote! -Ben From mnaser at vexxhost.com Mon Feb 18 18:40:49 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 18 Feb 2019 13:40:49 -0500 Subject: [keystone][service-catalog] Region "*" for identity in service catalog In-Reply-To: References: Message-ID: Hi Artem, You bring up a great point. While I don't have the answer, I think this is something that we share as an operator in terms of finding the solution for. I'd be happy to work together to bring up the ideal solution for this. Thanks, Mohammed On Fri, Feb 15, 2019 at 4:45 AM Artem Goncharov wrote: > > Hi all, > > In a public cloud I am using there is currently one region, but should be multiple in future. Endpoint for each service has a region set. However the only exception is identity, which has an empty region (actually "*"). If I do not specify region_name during connection (with diverse tools) everything works fine. Some "admin" operations, however, really require region to be set. But if I set in (i.e. in clouds.yaml) I can't connect to cloud, since identity in this region has no explicit endpoint (keystoneauth1 is not ok with that, gophercloud as well) > > I was not able to find any requirements/conventions, how such setup should be really treated. On https://wiki.openstack.org/wiki/API_Special_Interest_Group/Current_Design/Service_Catalog there are service catalogs for diverse clouds, and in case where a cloud has multiple region, there are multiple entries for keystone pointing to the same endpoint. Basically each time there is region properly set. > > In Keystone.v2 region was mandatory, but in v3 it is not anymore (https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/endpoint.html#endpoint-create). 
I guess in a normal way you would not be even able to configure region "*", but it was somehow done. > > While there is only one region the problem is not that big, but as soon as second region is added it becomes problem. Does anyone knows if that is an "allowed" setup (but then tools should be adapted to treat it properly), or this is not an "allowed" configuration (in this case I would like to see some docs to refer properly). I would personally prefer second way of fixing catalog to avoid fixes in diverse tools, but I really need a weightful reference for that. > > > Thanks a lot in advance, > Artem -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Mon Feb 18 18:46:48 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 18 Feb 2019 13:46:48 -0500 Subject: [nova] boot from volume with root_gb=0 flavors is not allowed now Message-ID: Hi everyone, Just wanted to give a heads up about the change that recently merged in Nova which will affect Train (and current master, for deployment projects). https://review.openstack.org/#/c/603910/ As of that change, Nova will no longer allow you to to create a VM when the root_gb of a flavor is set to 0. This is to avoid a reported security issue which we've found a while ago: https://launchpad.net/bugs/1739646 If you're using devstack, you'll probably be okay as the following change has already fixed it: https://review.openstack.org/#/c/619319/ However, deployment projects will need to make that adjustment as well as deployers need to plan accordingly (everyone is going to upgrade to Train the day we release, right?) -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From tpb at dyncloud.net Mon Feb 18 18:57:49 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 18 Feb 2019 13:57:49 -0500 Subject: [manila] Fw: [nova] boot from volume with root_gb=0 flavors is not allowed now Message-ID: <20190218185749.jfdcjg2ykwben6qn@barron.net> Some of the jobs in manila gate were booting service VMs with root_gb=0 and got caught by this but were fixed here [1]. That's a devstack plugin fix, so as Mohammed warns, your distribution tests downstream may require their own adjustment. [1] https://review.openstack.org/#/c/637176 -------------- next part -------------- An embedded message was scrubbed... From: Mohammed Naser Subject: [nova] boot from volume with root_gb=0 flavors is not allowed now Date: Mon, 18 Feb 2019 13:46:48 -0500 Size: 8950 URL: From mriedemos at gmail.com Mon Feb 18 20:07:12 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 18 Feb 2019 14:07:12 -0600 Subject: [nova] boot from volume with root_gb=0 flavors is not allowed now In-Reply-To: References: Message-ID: <6a20e439-88e1-a543-1399-6c780ff99ac6@gmail.com> On 2/18/2019 12:46 PM, Mohammed Naser wrote: > As of that change, Nova will no longer allow you to to create a VM > when the root_gb of a flavor is set to 0. Slight correction - nova will no longer allow this *by default* (it's configurable in policy if you need the old unsafe behavior) for *non-volume-backed* servers. If you're doing boot-from-volume then you can still use flavors with root_gb=0 since that doesn't apply to the root volume. 
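For operators who do need the old behaviour, the override Matt mentions is a one-line policy change on the nova-api nodes, roughly (double-check the rule name against the policy reference for your release):

# policy.json
"os_compute_api:servers:create:zero_disk_flavor": "rule:admin_or_owner"

The new default is rule:admin_api, so with stock policy only admins can boot non-volume-backed servers from a root_gb=0 flavor.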
-- Thanks, Matt From haleyb.dev at gmail.com Mon Feb 18 20:22:23 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 18 Feb 2019 15:22:23 -0500 Subject: [neutron] Bug deputy report week of February 11th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. -Brian Critical bugs ------------- None High bugs --------- * https://bugs.launchpad.net/neutron/+bug/1815585 - Floating IP status failed to transition to DOWN in neutron-tempest-plugin-scenario-linuxbridge - intermittent gate failure * https://bugs.launchpad.net/neutron/+bug/1815618 - cannot update qos rule - Bence took ownership, triaging * https://bugs.launchpad.net/neutron/+bug/1815629 - api and rpc worker defaults are problematic - Fix proposed - https://review.openstack.org/636363 * https://bugs.launchpad.net/neutron/+bug/1815758 - Error in ip_lib.get_devices_info() retrieving veth interface info - Fix proposed - https://review.openstack.org/#/c/636652/ * https://bugs.launchpad.net/neutron/+bug/1815797 - rpc_response_max_timeout" configuration variable not present in fullstack tests - Fix proposed - https://review.openstack.org/#/c/636719/ - Duplicate - https://bugs.launchpad.net/neutron/+bug/1816443 * https://bugs.launchpad.net/neutron/+bug/1815912 - [OVS] exception message when retrieving bridge-id is not present - Fix proposed - https://review.openstack.org/#/c/636963/ * https://bugs.launchpad.net/neutron/+bug/1815913 - DVR can not work with multiple routers on single network. - Fix proposed - https://review.openstack.org/#/c/636953/ * https://bugs.launchpad.net/bugs/1816239 - Functional test test_router_processing_pool_size failing - Fix proposed - https://review.openstack.org/#/c/637544/ Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1815609 - [fwaas] devstack plugin fails if ML2/OVS is not use - Fix proposed - https://review.openstack.org/#/c/636340/ * https://bugs.launchpad.net/neutron/+bug/1815871 - neutron-server api don't shutdown gracefully - Fix proposed - https://review.openstack.org/#/c/636855/ Low bugs -------- * https://bugs.launchpad.net/neutron/+bug/1815600 - "tags" listed in POST in api-ref - bug in docs * https://bugs.launchpad.net/neutron/+bug/1816395 - L2 Networking with SR-IOV enabled NICs in neutron - bug in docs Wishlist bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1815498 - Use pyroute2 to check vlan/vxlan in use - Fix proposed - https://review.openstack.org/#/c/636296/ * https://bugs.launchpad.net/neutron/+bug/1815933 - [RFE] Allow bulk-tagging of resources - Discussed at drivers meeting 2/15 Invalid bugs ------------ None Further triage required ----------------------- * https://bugs.launchpad.net/neutron/+bug/1815728 - Can delete port attached to VM without Nova noticing - The nova_notifier code should have done this, asked for more info, no answer yet * https://bugs.launchpad.net/neutron/+bug/1815989 - OVS drops RARP packets by QEMU upon live-migration causes up to 40s ping pause in Rocky - Asked for clarification on issue, Sean Mooney indicated it was os-vif related so added the component. 
From colleen at gazlene.net Mon Feb 18 21:19:37 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 18 Feb 2019 16:19:37 -0500 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 In-Reply-To: References: Message-ID: <59ad3d82-a945-4648-b8a6-3eb53d3e8851@www.fastmail.com> On Mon, Feb 18, 2019, at 6:11 PM, Ben Nemec wrote: > > > On 2/15/19 11:23 AM, Colleen Murphy wrote: > > ## Milestone Outlook > > > > https://releases.openstack.org/stein/schedule.html > > > > Feature freeze as well as final client release are both in 3 weeks. Non-client release deadline is in two weeks, which means changes needed for keystonemiddleware, keystoneauth, and the oslo libraries need to be proposed and reviewed ASAP. > > Reading this, it occurred to me that we probably shouldn't be applying > Oslo feature freeze to co-owned libraries. Most of those function more > as non-client libraries than the oslo.* libs, so there's no need to > freeze early. > > I've proposed https://review.openstack.org/637588 to reflect that in the > Oslo policy. If you have an opinion on it please vote! > > -Ben > Thanks for mentioning that, I didn't even notice that Oslo had an earlier freeze date. Arguably oslo.policy should be held to the same standard as the other Oslo libraries since it has such far reach. Oslo.limit not so much yet. Colleen From openstack at nemebean.com Mon Feb 18 21:38:59 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 Feb 2019 15:38:59 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 In-Reply-To: <59ad3d82-a945-4648-b8a6-3eb53d3e8851@www.fastmail.com> References: <59ad3d82-a945-4648-b8a6-3eb53d3e8851@www.fastmail.com> Message-ID: <7f1a680c-04f7-0ced-6803-4807d70b2679@nemebean.com> On 2/18/19 3:19 PM, Colleen Murphy wrote: > > > On Mon, Feb 18, 2019, at 6:11 PM, Ben Nemec wrote: >> >> >> On 2/15/19 11:23 AM, Colleen Murphy wrote: >>> ## Milestone Outlook >>> >>> https://releases.openstack.org/stein/schedule.html >>> >>> Feature freeze as well as final client release are both in 3 weeks. Non-client release deadline is in two weeks, which means changes needed for keystonemiddleware, keystoneauth, and the oslo libraries need to be proposed and reviewed ASAP. >> >> Reading this, it occurred to me that we probably shouldn't be applying >> Oslo feature freeze to co-owned libraries. Most of those function more >> as non-client libraries than the oslo.* libs, so there's no need to >> freeze early. >> >> I've proposed https://review.openstack.org/637588 to reflect that in the >> Oslo policy. If you have an opinion on it please vote! >> >> -Ben >> > Thanks for mentioning that, I didn't even notice that Oslo had an earlier freeze date. Arguably oslo.policy should be held to the same standard as the other Oslo libraries since it has such far reach. Oslo.limit not so much yet. Oh, hmm. I forgot we had oslo.* libraries that were co-owned too. I agree those should probably be kept to the Oslo feature freeze date. I'll update the policy change to reflect that, and maybe make a note that the policy is a guideline, not a hard and fast rule. If it makes sense to freeze a library earlier then we should do that regardless of what the policy says. I'm just trying to avoid stepping on other teams' toes unnecessarily. Note that oslo.limit falls under the "not released yet" exception (unfortunately) so feature freeze doesn't apply to it at all. 
> > Colleen > From lbragstad at gmail.com Mon Feb 18 22:28:57 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 18 Feb 2019 16:28:57 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 In-Reply-To: <7f1a680c-04f7-0ced-6803-4807d70b2679@nemebean.com> References: <59ad3d82-a945-4648-b8a6-3eb53d3e8851@www.fastmail.com> <7f1a680c-04f7-0ced-6803-4807d70b2679@nemebean.com> Message-ID: <25d99f39-c197-b30b-e12c-22c17c8737aa@gmail.com> On 2/18/19 3:38 PM, Ben Nemec wrote: > > > On 2/18/19 3:19 PM, Colleen Murphy wrote: >> >> >> On Mon, Feb 18, 2019, at 6:11 PM, Ben Nemec wrote: >>> >>> >>> On 2/15/19 11:23 AM, Colleen Murphy wrote: >>>> ## Milestone Outlook >>>> >>>> https://releases.openstack.org/stein/schedule.html >>>> >>>> Feature freeze as well as final client release are both in 3 weeks. >>>> Non-client release deadline is in two weeks, which means changes >>>> needed for keystonemiddleware, keystoneauth, and the oslo libraries >>>> need to be proposed and reviewed ASAP. >>> >>> Reading this, it occurred to me that we probably shouldn't be applying >>> Oslo feature freeze to co-owned libraries. Most of those function more >>> as non-client libraries than the oslo.* libs, so there's no need to >>> freeze early. >>> >>> I've proposed https://review.openstack.org/637588 to reflect that in >>> the >>> Oslo policy. If you have an opinion on it please vote! >>> >>> -Ben >>> >> Thanks for mentioning that, I didn't even notice that Oslo had an >> earlier freeze date. Arguably oslo.policy should be held to the same >> standard as the other Oslo libraries since it has such far reach. >> Oslo.limit not so much yet. > > Oh, hmm. I forgot we had oslo.* libraries that were co-owned too. I > agree those should probably be kept to the Oslo feature freeze date. > I'll update the policy change to reflect that, and maybe make a note > that the policy is a guideline, not a hard and fast rule. If it makes > sense to freeze a library earlier then we should do that regardless of > what the policy says. I'm just trying to avoid stepping on other > teams' toes unnecessarily. > > Note that oslo.limit falls under the "not released yet" exception > (unfortunately) so feature freeze doesn't apply to it at all. ++ This will apply once we get a 1.0 release, I think? We could release oslo.limit after feature freeze and still make changes to it up until we release a 1.0... then we're are the point of no return (dun dun dun!). > >> >> Colleen >> > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From openstack at nemebean.com Mon Feb 18 23:18:47 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 18 Feb 2019 17:18:47 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 11 February 2019 In-Reply-To: <25d99f39-c197-b30b-e12c-22c17c8737aa@gmail.com> References: <59ad3d82-a945-4648-b8a6-3eb53d3e8851@www.fastmail.com> <7f1a680c-04f7-0ced-6803-4807d70b2679@nemebean.com> <25d99f39-c197-b30b-e12c-22c17c8737aa@gmail.com> Message-ID: <67cd75fc-b28d-61f3-cec8-e2c093b353b6@nemebean.com> On 2/18/19 4:28 PM, Lance Bragstad wrote: > > > On 2/18/19 3:38 PM, Ben Nemec wrote: >> >> >> On 2/18/19 3:19 PM, Colleen Murphy wrote: >>> >>> >>> On Mon, Feb 18, 2019, at 6:11 PM, Ben Nemec wrote: >>>> >>>> >>>> On 2/15/19 11:23 AM, Colleen Murphy wrote: >>>>> ## Milestone Outlook >>>>> >>>>> https://releases.openstack.org/stein/schedule.html >>>>> >>>>> Feature freeze as well as final client release are both in 3 weeks. >>>>> Non-client release deadline is in two weeks, which means changes >>>>> needed for keystonemiddleware, keystoneauth, and the oslo libraries >>>>> need to be proposed and reviewed ASAP. >>>> >>>> Reading this, it occurred to me that we probably shouldn't be applying >>>> Oslo feature freeze to co-owned libraries. Most of those function more >>>> as non-client libraries than the oslo.* libs, so there's no need to >>>> freeze early. >>>> >>>> I've proposed https://review.openstack.org/637588 to reflect that in >>>> the >>>> Oslo policy. If you have an opinion on it please vote! >>>> >>>> -Ben >>>> >>> Thanks for mentioning that, I didn't even notice that Oslo had an >>> earlier freeze date. Arguably oslo.policy should be held to the same >>> standard as the other Oslo libraries since it has such far reach. >>> Oslo.limit not so much yet. >> >> Oh, hmm. I forgot we had oslo.* libraries that were co-owned too. I >> agree those should probably be kept to the Oslo feature freeze date. >> I'll update the policy change to reflect that, and maybe make a note >> that the policy is a guideline, not a hard and fast rule. If it makes >> sense to freeze a library earlier then we should do that regardless of >> what the policy says. I'm just trying to avoid stepping on other >> teams' toes unnecessarily. >> >> Note that oslo.limit falls under the "not released yet" exception >> (unfortunately) so feature freeze doesn't apply to it at all. > > ++ > > This will apply once we get a 1.0 release, I think? We could release > oslo.limit after feature freeze and still make changes to it up until we > release a 1.0... then we're are the point of no return (dun dun dun!). I think it's more about when a library has consumers than a strict version. Castellan technically just went 1.0 this cycle, but we've observed feature freeze for it because it had consumers (and really should have been 1.0). Ideally we want "has consumers" and "declared 1.0" to happen fairly close together so we aren't breaking people, but if the latter doesn't happen in a timely fashion we still need to not break people. :-) And just to tie a bow on the policy change discussion, I'm planning to abandon it. I discovered that both my examples were used in other non-client libraries, which means the rationale for freezing them early still applies. Maybe there is an Oslo library that doesn't need to, but we have the FFE process precisely for such exceptional cases. 
It seems I jumped the gun a bit on wanting to change the policy. Sorry for subjecting everyone to the noise of my train of thought, but hey, we are going back to Denver! ;-) > >> >>> >>> Colleen >>> >> > > From melwittt at gmail.com Tue Feb 19 02:22:46 2019 From: melwittt at gmail.com (melanie witt) Date: Mon, 18 Feb 2019 18:22:46 -0800 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: References: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> Message-ID: On Mon, 18 Feb 2019 13:16:38 +0800, He Jie Xu wrote: > Add the maillist back, I missed from the previous reply... > > melanie witt > 于2019年2 > 月16日周六 上午12:22写道: > > Thanks for the reply, Alex. Response is inline. > > On Fri, 15 Feb 2019 15:49:19 +0800, He Jie Xu > wrote: > > We need to ensure all the APIs check the policy with real instance's > > project and user id, like this > > > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/evacuate.py#L85-L88 > > Otherwise, any user can get any other tenant's instance. > > > > Some of APIs doesn't check like this, for example: > > > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/attach_interfaces.py#L55 > > While I agree with this (policy check instance project/user instead of > request context project/user), I'm not sure it's related to the > project_only=True at the database layer. The project_only=True at the > database layer only enforces request context project, which is what the > second example here will also do, if policy contains > 'project_id:%(project_id)s'. It seems to me that if we remove > project_only=True from the database layer, we will get proper > enforcement of request context project/user by the existing policy > checks, if the policy is configured as such. And we default policy to > either rule:admin_api or rule:admin_or_owner: > > https://docs.openstack.org/nova/latest/configuration/sample-policy.html > > So, it seems to me that changing policy enforcement from request > context > project/user => instance project/user would be a separate change. > Please > let me know if I'm misunderstanding you. > > One thing I do notice though, in your first example, is that the > get_instance is done _before_ the policy check, which would need to be > moved after the policy check, in the same change that would remove > project_only=True. So I'm glad you pointed that out. > > > Emm...no, the first example is the right example. > > project_only=True will ensure db call return the instance belong to the > project > in the request context. If project_only=False, the db call may return > other project's > instance. > > Then when the rule is 'project_id:%(project_id)s' and the target is > instance's project_id, > the policy enforcement will ensure the request context's project id > match the instance's project_id, > then the user won't get other project's instance. Right, that is the proposal in this email. That we should remove project_only=True and let the API policy check handle whether or not the user from a different project is allowed to get the instance. Otherwise, users are not able to use policy to control the behavior because it is hard-coded in the database layer. I was trying to say that adding the instance.project_id target seems unrelated to the issue about removing project_only=True. > > I'm trying to memory why we didn't do that in the beginning, > sounds like > > we refused to support user-id based policy. 
But > > in the end for the backward-campatible for the user like CERN, we > > support user-id based policy for few APIs. > > > https://review.openstack.org/#/q/topic:bp/user_id_based_policy_enforcement+(status:open+OR+status:merged) > > > > This is probably why some APIs checks policy with real instance's > > project id and user id and some APIs not. > > > > melanie witt > >> 于2019年2 > > 月15日周五 上午1:23写道: > > > >     Hey all, > > > >     Recently, we had a customer try the following command as a > non-admin > >     with a policy role granted in policy.json to allow live migrate: > > > >         "os_compute_api:os-migrate-server:migrate_live": > "rule:admin_api or > >     role:Operator" > > > >     The scenario is that they have a server in project A and a > user in > >     project B with role:Operator and the user makes a call to > live migrate > >     the server. > > > >     But when they call the API, they get the following error > response: > > > >         {"itemNotFound": {"message": "Instance > could not be > >     found.", "code": 404}} > > > >     A superficial look through the code shows that the live > migrate should > >     work, because we have appropriate policy checks in the API, > and the > >     request makes it past those checks because the policy.json > has been set > >     correctly. > > > >     A common pattern in our APIs is that we first > compute_api.get() the > >     instance object and then we call the server action (live > migrate, stop, > >     start, etc) with it after we retrieve it. In this scenario, the > >     compute_api.get() fails with NotFound. > > > >     And the reason it fails with NotFound is because, much lower > level, at > >     the DB layer, we have a keyword arg called 'project_only' > which, when > >     True, will scope a database query to the > RequestContext.project_id > >     only. > >     We have hard-coded 'project_only=True' for the instance get > query. > > > >     So, when the user in project B with role:Operator tries to > retrieve the > >     instance record in project A, with appropriate policy rules > set, it > >     will > >     fail because 'project_only=True' and the request context is > project B, > >     while the instance is in project A. > > > >     My question is: can we get rid of the hard-coded > 'project_only=True' at > >     the database layer? This seems like something that should be > >     enforced at > >     the API layer and not at the database layer. It reminded me of an > >     effort > >     we had a few years ago where we removed other hard-coded policy > >     enforcement from the database layer [1][2]. I've uploaded a WIP > >     patch to > >     demonstrate the proposed change [3]. > > > >     Can anyone think of any potential problems with doing this? > I'd like to > >     be able to remove it so that operators are able use policy to > allow > >     non-admin users with appropriately configured roles to run server > >     actions. 
> > > >     Cheers, > >     -melanie > > > >     [1] > > > https://blueprints.launchpad.net/nova/+spec/nova-api-policy-final-part > >     [2] > > > https://review.openstack.org/#/q/topic:bp/nova-api-policy-final-part+(status:open+OR+status:merged) > >     [3] https://review.openstack.org/637010 > > > > > > From fungi at yuggoth.org Tue Feb 19 02:58:17 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Feb 2019 02:58:17 +0000 Subject: Last day for TC candidate nominations Message-ID: <20190219025817.t5sammeofwltvxbn@yuggoth.org> A quick reminder that we are in the last hours for TC candidate announcements. Nominations are open until Feb 19, 2019 23:45 UTC. If you want to stand for TC, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Thank you, [1] http://governance.openstack.org/election/#how-to-submit-a-candidacy -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dabarren at gmail.com Tue Feb 19 07:25:25 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Tue, 19 Feb 2019 08:25:25 +0100 Subject: [openstack-community] Help on OpenStack-Kolla In-Reply-To: References: Message-ID: Openstack-dicuss is the ML, what area need help? Regards El mar., 19 feb. 2019 a las 8:24, Vlad Blando () escribió: > Hi, > > Is there a mailing list that tackles issues specific to openstack-kolla? > > Regards, > > /Vlad > ᐧ > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir.blando at gmail.com Tue Feb 19 07:29:53 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Tue, 19 Feb 2019 15:29:53 +0800 Subject: openstack-kolla precheck failure Message-ID: Hi, I have a newly installed node running on CentOS 7 with 2 NICs, my precheck failed and I can't figure it out. I'm trying out multinode with 1 node for controller and the other for compute --- begin paste --- TASK [glance : Checking free port for Glance Registry] ****************************************************************************************************************************************** fatal: [10.150.7.102]: FAILED! => {"msg": "The conditional check 'inventory_hostname in groups[glance_services['glance-registry']['group']]' failed. 
The error was: error while evaluating conditional (inventory_hostname in groups[glance_services['glance-registry']['group']]): Unable to look up a name or access an attribute in template string ({% if inventory_hostname in groups[glance_services['glance-registry']['group']] %} True {% else %} False {% endif %}).\nMake sure your variable name does not contain invalid characters like '-': argument of type 'StrictUndefined' is not iterable\n\nThe error appears to have been in '/usr/share/kolla-ansible/ansible/roles/glance/tasks/precheck.yml': line 18, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Checking free port for Glance Registry\n ^ here\n"} to retry, use: --limit @/usr/share/kolla-ansible/ansible/site.retry PLAY RECAP ************************************************************************************************************************************************************************************** 10.150.7.102 : ok=68 changed=0 unreachable=0 failed=1 10.150.7.103 : ok=15 changed=0 unreachable=0 failed=0 localhost --- - Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yury.Kulazhenkov at dell.com Tue Feb 19 07:31:56 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Tue, 19 Feb 2019 07:31:56 +0000 Subject: [cinder] questions about release cycles Message-ID: Hi all, I'm currently maintain VxFlex OS(ScaleIO) cinder driver. Driver require some changes to support future VxFlex OS release. We want to add this changes to Stein release and if it possible backport them to old supported releases. I have couple questions: 1. Is it still possible to submit patches which extend driver functionality in Stein release cycle? If such changes are still possible, then I have another question: I already submitted patch that renames ScaleIO driver to VxFlex OS (https://review.openstack.org/#/c/634397/). Is it possible that this patch will be merged during Stein release cycle? In other words, I'm interesting, should I prepare "new feature" patch based on this "renaming" patch (as patch chain) or it will be better to prepare separate patch based on master branch state? 2. Is it possible to add new driver features (changes for compatibility, to be more accurate) for currently supported Openstack releases? Any rules, policy or special workflow for that? Thanks, Yury From e0ne at e0ne.info Tue Feb 19 07:46:01 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 19 Feb 2019 09:46:01 +0200 Subject: [cinder] questions about release cycles In-Reply-To: References: Message-ID: Hi Yury, Please, see my comments inline. On Tue, Feb 19, 2019 at 9:32 AM Kulazhenkov, Yury wrote: > Hi all, > > I'm currently maintain VxFlex OS(ScaleIO) cinder driver. Driver require > some changes to support future VxFlex OS release. > We want to add this changes to Stein release and if it possible backport > them to old supported releases. > I have couple questions: > 1. Is it still possible to submit patches which extend driver > functionality in Stein release cycle? > If such changes are still possible, then I have another question: > I already submitted patch that renames ScaleIO driver to VxFlex > OS (https://review.openstack.org/#/c/634397/). > Is it possible that this patch will be merged during Stein > release cycle? 
> In other words, I'm interesting, should I prepare "new feature" > patch based on this "renaming" patch (as patch chain) > or it will be better to prepare separate patch based on master > branch state? > We're about two weeks before Stien-3 milestone [1]. It's a feature freeze milestone, so it means all features should be merged before it. If you miss this deadline, these patches will be merged in Train. > 2. Is it possible to add new driver features (changes for compatibility, > to be more accurate) for currently supported Openstack releases? > Any rules, policy or special workflow for that?Cinder code with > drivers should stable policy [1] > Cinder code with drivers should stable policy [2], so we can't backport any driver features to stable branches. > Thanks, > > Yury > > [1] https://releases.openstack.org/stein/schedule.html [2] https://docs.openstack.org/project-team-guide/stable-branches.html Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Feb 19 07:54:36 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 19 Feb 2019 20:54:36 +1300 Subject: [Openstack][Heat] service times out 504 In-Reply-To: <8ad92313-3653-f2d3-e1af-34849e20065e@everyware.ch> References: <8ad92313-3653-f2d3-e1af-34849e20065e@everyware.ch> Message-ID: On 16/02/19 3:42 AM, Florian Engelmann wrote: > Hi all, > > - Version: heat-base-archive-stable-rocky > - Commit: Ica99cec6765d22d7ee2262e2d402b2e98cb5bd5e > > > I have a fresh openstack deployment (kolla-Ansible). Everything but Heat > is working fine. > > > When I do a webrequest (either horizon or curl)  on the openstack heat > endpoint (internal or public), I just get nothing and after a while, it > times out with a 500 http error. Hold up, 504 is a gateway timeout (presumably from HAProxy). But this isn't a 504, it's a 500. And it's not clear if it's heat-api or HAProxy that's generating the 500 response. FWIW, it always pays to set your HAProxy timeout longer than the RPC timeout in heat-api, so that if a message gets dropped you'll see that reported by heat-api rather than HAProxy. > root at xxxx-kolla-xxxx:~# curl -vvv > http://10.10.10.10:8004/v1/e7f405fb2b7b4b029dfc48e06920eb92 > *   Trying 10.x.y.z > * Connected to heat.xxxxxxxx (10.x.x.x.) port 8004 (#0) > > GET /v1/e7f405fb2b7b4b029dfc48e06920eb92 HTTP/1.1 > > Host: heat.xxxxxxxxx:8004 > > User-Agent: curl/7.47.0 > > Accept: */* > > > > < HTTP/1.1 500 Internal Server Error > < Content-Type: application/json > < Content-Length: 4338 > < X-Openstack-Request-Id: req-46afc474-682b-4938-8777-b3b4b6fcb973 > < Date: Fri, 15 Feb 2019 13:08:04 GMT > < > {"explanation": "The server has either erred or is incapable of > performing the requested operation.", "code": 500, > > > In the heat_api log I see the request coming through, but it seems that > there is just no reply. >> 2019-02-15 14:16:40.047 25 DEBUG heat.api.middleware.version_negotiation > [-] Processing request: GET / Accept: > process_request > /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 > > 2019-02-15 14:16:40.048 25 INFO eventlet.wsgi.server [-] 10.x.y.z - - > [15/Feb/2019 14:16:40] "GET / HTTP/1.0" 300 327 0.001106 This *is* logging a response though - 300 Multiple Choices, 327 bytes in 0.001106s. Which is the correct response for "GET /" (it should return the version negotiation doc). > Should the api give some output when I do a http request? Generally speaking, yes. > > Any hints? 
> > > Thanks a lot, its quite urgent.. > > Built 11.0.0 and current rocky-stable (11.0.0.1dev), same on both versions. > > > > ## > > with a horizon request (just click on Project -> Compute -> > Orchestration -> Stacks > > 2019-02-15 14:22:22.250 22 DEBUG heat.api.middleware.version_negotiation > [-] Processing request: GET /v1/beb568af3781471d94c3623805946ca3/stacks > Accept: application/json process_request > /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 > > 2019-02-15 14:22:22.250 22 DEBUG heat.api.middleware.version_negotiation > [-] Matched versioned URI. Version: 1.0 process_request > /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:65 > > 2019-02-15 14:22:23.003 22 DEBUG eventlet.wsgi.server > [req-663d17e3-7e41-4c9e-a30a-a43ef18cf056 - - - - -] (22) accepted > ('10.xxx.xxx.xxx', 51084) server > /var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/wsgi.py:883 > 2019-02-15 14:22:23.004 22 DEBUG heat.api.middleware.version_negotiation > [-] Processing request: GET / Accept: > process_request > /var/lib/kolla/venv/lib/python2.7/site-packages/heat/api/middleware/version_negotiation.py:50 There are indeed no responses logged here. Have you checked the heat-engine log to see if the cause of the delay appears there? Is heat-engine alive? Is RabbitMQ working? - ZB From vladimir.blando at gmail.com Tue Feb 19 07:59:16 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Tue, 19 Feb 2019 15:59:16 +0800 Subject: [openstack-ansible] setup-infrastructure failure Message-ID: Node running on CentOS 7 with 2 NICs, try to run openstack-ansible all-in-one and ran into a problem with setup-infrastructure command: openstack-ansible setup-infrastructure.yml --- TASK [Get list of repo packages] **************************************************************************************************************************************************************** fatal: [aio1_utility_container-9fa7b0be]: FAILED! 
=> {"changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": " http://172.29.236.100:8181/os-releases/18.1.3/centos-7.6-x86_64/requirements_absolute_requirements.txt "} NO MORE HOSTS LEFT ****************************************************************************************************************************************************************************** PLAY RECAP ************************************************************************************************************************************************************************************** aio1 : ok=46 changed=0 unreachable=0 failed=0 aio1_aodh_container-646c3256 : ok=6 changed=0 unreachable=0 failed=0 aio1_ceilometer_central_container-715d011d : ok=6 changed=0 unreachable=0 failed=0 aio1_cinder_api_container-3bd7a6b4 : ok=6 changed=0 unreachable=0 failed=0 aio1_galera_container-64731bca : ok=6 changed=0 unreachable=0 failed=0 aio1_glance_container-d07a3aa4 : ok=6 changed=0 unreachable=0 failed=0 aio1_gnocchi_container-4377a0b7 : ok=6 changed=0 unreachable=0 failed=0 aio1_horizon_container-edb3edf9 : ok=6 changed=0 unreachable=0 failed=0 aio1_keystone_container-e3a4bf34 : ok=6 changed=0 unreachable=0 failed=0 aio1_memcached_container-f94f23e1 : ok=6 changed=0 unreachable=0 failed=0 aio1_neutron_server_container-00f0e572 : ok=6 changed=0 unreachable=0 failed=0 aio1_nova_api_container-bf866cf2 : ok=6 changed=0 unreachable=0 failed=0 aio1_rabbit_mq_container-19320598 : ok=6 changed=0 unreachable=0 failed=0 aio1_repo_container-b335cfef : ok=83 changed=3 unreachable=0 failed=0 aio1_utility_container-9fa7b0be : ok=29 changed=2 unreachable=0 failed=1 localhost : ok=1 changed=0 unreachable=0 failed=0 EXIT NOTICE [Playbook execution failure] ************************************** =============================================================================== [root at openstack-ansible playbooks]# --- the container is running and I can see that url is working --- [root at openstack-ansible playbooks]# lxc-ls -f |grep 9fa7b0be aio1_utility_container-9fa7b0be RUNNING 1 onboot, openstack 10.255.255.176, 172.29.237.17 - [root at openstack-ansible playbooks]# --- --- [root at openstack-ansible playbooks]# curl http://172.29.236.100:8181/os-releases/18.1.3/centos-7.6-x86_64/requirements_absolute_requirements.txt networking_sfc==7.0.1.dev4 urllib3==1.23 alabaster==0.7.11 restructuredtext_lint==1.1.3 pylxd==2.2.7 sphinxcontrib_seqdiag==0.8.5 oslo.cache==1.30.1 tooz==1.62.0 pytz==2018.5 pysaml2==4.5.0 pathtools==0.1.2 appdirs==1.4.3 ... --- - Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Tue Feb 19 09:17:18 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 19 Feb 2019 09:17:18 +0000 Subject: how to define multiple aliases Message-ID: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> Dear openstack community, I am trying to setup pci passthrough for multiple devices types but I am getting an error while creating a new vm. 
This is my pci section in nova.conf compute node [pci] passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } alias = { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" } I also tried this [pci] passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] alias = [{ "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" }, { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" }] And this is the error I am getting when I try to create a new vm: # openstack server create --flavor sriov.small --image centos7.5-image --availability-zone nova:zeus-59.localdomain vm-sriov-test PCI alias mlnx_connectx4 is not defined (HTTP 400) (Request-ID: req-aebb2d05-a557-428b-b2c6-090c16dc7c76) how can I setup multiple aliases for different devices? Thank you very much NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Tue Feb 19 12:20:15 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 19 Feb 2019 07:20:15 -0500 Subject: [openstack-ansible] setup-infrastructure failure In-Reply-To: References: Message-ID: You should try to curl that link from within your `aio1_utility_container-9fa7b0be` container. It seems like a networking configuration issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Tue Feb 19 12:32:55 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 19 Feb 2019 18:02:55 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update XI - Feb 19, 2019 Message-ID: Hello, Here is the 11th update (Feb 13 to Feb 19, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: This week was still a calm week. we unblocked the os_heat CI, thanks to mnaser, odyssey4me and guilhermesp. In os_tempest we can also disable the router ping and run mistral tempest tests. Default zero disk flavor to RULE_ADMIN_API in Stein [https://review.openstack.org/#/c/603910/] patch in nova broke python-tempestconf, but os_tempest was working fine as it has already using DISK=1. 
Now everything is fixed Things got merged: os_tempest * Add tempest_service_available_mistral with distro packages - https://review.openstack.org/635180 * Add option to disable router ping - https://review.openstack.org/636211 python-tempestconf * Update image flavor to have some disk - https://review.openstack.org/637679 os_heat * Fixed the egg name of heat to openstack_heat - https://review.openstack.org/635518 Things in progress: os_tempest * Ensure stackviz wheel build is isolated - https://review.openstack.org/637503 * Added tempest.conf for heat_plugin - https://review.openstack.org/632021 * Use the correct heat tests - https://review.openstack.org/630695 * Added dependency of os_tempest role - https://review.openstack.org/632726 * Revert "Only init a workspace if doesn't exists" - https://review.openstack.org/637801 TripleO: * Reuse the validate-tempest skip list in os_tempest - https://review.openstack.org/634380 Note: os_tempest tripleo CI got broken, we are working on fixing this bug https://bugs.launchpad.net/tripleo/+bug/1816552 Upcoming week: * Complete heat support in os_tempest Here is the 10th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002608.html Thanks, Chandan Kumar From vladimir.blando at gmail.com Tue Feb 19 12:40:16 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Tue, 19 Feb 2019 20:40:16 +0800 Subject: [openstack-ansible] setup-infrastructure failure In-Reply-To: References: Message-ID: I did, and it works - [root at openstack-ansible playbooks]# curl http://172.29.236.100:8181/os-releases/18.1.3/centos-7.6-x86_64/requirements_absolute_requirements.txt networking_sfc==7.0.1.dev4 urllib3==1.23 alabaster==0.7.11 restructuredtext_lint==1.1.3 pylxd==2.2.7 sphinxcontrib_seqdiag==0.8.5 oslo.cache==1.30.1 tooz==1.62.0 pytz==2018.5 pysaml2==4.5.0 pathtools==0.1.2 appdirs==1.4.3 ... - Vlad ᐧ On Tue, Feb 19, 2019 at 8:30 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > You should try to curl that link from within your > `aio1_utility_container-9fa7b0be` container. > It seems like a networking configuration issue. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Feb 19 13:53:16 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 Feb 2019 13:53:16 +0000 Subject: how to define multiple aliases In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <2c17e513f56fc447353b076a3bda8ab6fa2439dd.camel@redhat.com> On Tue, 2019-02-19 at 09:17 +0000, Manuel Sopena Ballesteros wrote: > Dear openstack community, > > I am trying to setup pci passthrough for multiple devices types but I am getting an error while creating a new vm. 
> > > This is my pci section in nova.conf compute node you need to set the alais on both the controller ( i belive specically the nova api config) and the compute node see https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-nova-api-controller and https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-pci-devices-compute > > [pci] > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] > alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } > alias = { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" } this is the correct format. it is documented here https://docs.openstack.org/nova/latest/configuration/config.html#pci.alias > > I also tried this > > [pci] > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] > alias = [{ "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" }, { > "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" }] while the json list format is supported for the passthough_whiltelist it is not supported for aliases. > > > And this is the error I am getting when I try to create a new vm: > > # openstack server create --flavor sriov.small --image centos7.5-image --availability-zone nova:zeus- > 59.localdomain vm-sriov-test > PCI alias mlnx_connectx4 is not defined (HTTP 400) (Request-ID: req-aebb2d05-a557-428b-b2c6-090c16dc7c76) > > how can I setup multiple aliases for different devices? > > Thank you very much > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the > addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended > recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this > message in error please notify us at once by return email and then delete both messages. We accept no liability for > the distribution of viruses or similar in electronic communications. This notice should not be removed. From openstack at fried.cc Tue Feb 19 14:05:22 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 19 Feb 2019 08:05:22 -0600 Subject: how to define multiple aliases In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <5319FCE4-43DC-4549-8370-6FFAF393F7E0@fried.cc> Manuel- The aliases need to be defined in your conf files on both the conductor and the compute nodes. If that's the case let me know and I'll take a closer look when I get to my desk. Eric Fried > On Feb 19, 2019, at 03:17, Manuel Sopena Ballesteros wrote: > > Dear openstack community, > > I am trying to setup pci passthrough for multiple devices types but I am getting an error while creating a new vm. 
> > > This is my pci section in nova.conf compute node > > [pci] > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] > alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } > alias = { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" } > > I also tried this > > [pci] > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }] > alias = [{ "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" }, { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" }] > > > And this is the error I am getting when I try to create a new vm: > > # openstack server create --flavor sriov.small --image centos7.5-image --availability-zone nova:zeus-59.localdomain vm-sriov-test > PCI alias mlnx_connectx4 is not defined (HTTP 400) (Request-ID: req-aebb2d05-a557-428b-b2c6-090c16dc7c76) > > how can I setup multiple aliases for different devices? > > Thank you very much > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 19 14:28:54 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Feb 2019 14:28:54 +0000 Subject: [all] Last day for TC candidate nominations In-Reply-To: <20190219025817.t5sammeofwltvxbn@yuggoth.org> References: <20190219025817.t5sammeofwltvxbn@yuggoth.org> Message-ID: <20190219142853.hxy7xoorqsnd2kcb@yuggoth.org> Here's another reminder that we are in the last hours no nominate yourself as a candidate for one of the open seats on the OpenStack Technical Committee. Nominations are open until 23:45 UTC today (Feb 19, 2019). So far we have 4 candidates for 7 seats, and therefore need at least 3 more nominees (4 or more will trigger a runoff election). If you want to stand for TC, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Thank you, [1] http://governance.openstack.org/election/#how-to-submit-a-candidacy -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Tue Feb 19 14:36:54 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 Feb 2019 14:36:54 +0000 Subject: how to define multiple aliases In-Reply-To: <5319FCE4-43DC-4549-8370-6FFAF393F7E0@fried.cc> References: <9D8A2486E35F0941A60430473E29F15B017E850B7E@MXDB2.ad.garvan.unsw.edu.au> <5319FCE4-43DC-4549-8370-6FFAF393F7E0@fried.cc> Message-ID: On Tue, 2019-02-19 at 08:05 -0600, Eric Fried wrote: > Manuel- > > The aliases need to be defined in your conf files on both the conductor and the compute nodes. 
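For anyone hitting the same error, a minimal sketch of what that ends up looking like with the device IDs from the original mail (the alias option is simply repeated once per device rather than written as a JSON list, and the same alias lines have to appear in the nova.conf read by nova-api as well as in the one on the compute node):

# nova.conf on the compute node: whitelist the devices and define the aliases
[pci]
passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" }]
alias = { "vendor_id": "10de", "product_id": "1db1", "device_type": "type-PCI", "name": "nv_v100" }
alias = { "vendor_id": "15b3", "product_id": "1016", "device_type": "type-VF", "name": "mlnx_connectx4" }

# nova.conf on the controller running nova-api: repeat only the alias lines
[pci]
alias = { "vendor_id": "10de", "product_id": "1db1", "device_type": "type-PCI", "name": "nv_v100" }
alias = { "vendor_id": "15b3", "product_id": "1016", "device_type": "type-VF", "name": "mlnx_connectx4" }

The flavor side (a pci_passthrough:alias extra spec such as "mlnx_connectx4:1") is assumed to already be in place, since the 400 error above names the alias.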
If that's the case let > me know and I'll take a closer look when I get to my desk. it needs to be defined on the nova api server config and the nova compute config due to how the numa afinity policies work. at least i think that is the reason. stephen dug into this while trying to fix https://bugs.launchpad.net/nova/+bug/1805891 https://review.openstack.org/#/c/624444/2 its documented in the docs i linked in my previous reply https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-nova-api-controller and https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-pci-devices-compute stephen has another patch up to make it even more explicit https://review.openstack.org/#/c/624445/ the conductor should not need the alias defiend. > > Eric Fried > > On Feb 19, 2019, at 03:17, Manuel Sopena Ballesteros wrote: > > > Dear openstack community, > > > > I am trying to setup pci passthrough for multiple devices types but I am getting an error while creating a new vm. > > > > > > This is my pci section in nova.conf compute node > > > > [pci] > > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" > > }] > > alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } > > alias = { "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" } > > > > I also tried this > > > > [pci] > > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1016" > > }] > > alias = [{ "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" }, { > > "vendor_id":"15b3", "product_id":"1016", "device_type":"type-VF", "name":"mlnx_connectx4" }] > > > > > > And this is the error I am getting when I try to create a new vm: > > > > # openstack server create --flavor sriov.small --image centos7.5-image --availability-zone nova:zeus- > > 59.localdomain vm-sriov-test > > PCI alias mlnx_connectx4 is not defined (HTTP 400) (Request-ID: req-aebb2d05-a557-428b-b2c6-090c16dc7c76) > > > > how can I setup multiple aliases for different devices? > > > > Thank you very much > > NOTICE > > Please consider the environment before printing this email. This message and any attachments are intended for the > > addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended > > recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this > > message in error please notify us at once by return email and then delete both messages. We accept no liability for > > the distribution of viruses or similar in electronic communications. This notice should not be removed. From bluejay.ahn at gmail.com Tue Feb 19 14:46:07 2019 From: bluejay.ahn at gmail.com (Jaesuk Ahn) Date: Tue, 19 Feb 2019 23:46:07 +0900 Subject: [openstack-helm] would like to discuss review turnaround time In-Reply-To: References: Message-ID: It has been several weeks without further feedback or response from openstack-helm project members. I really want to hear what others think, and start a discussion on how we can improve together. On Thu, Jan 31, 2019 at 4:35 PM Jaesuk Ahn wrote: > Thank you for thoughtful reply. > > I was able to quickly add my opinion on some of your feedback, not all. > please see inline. > I will get back with more thought and idea. 
pls note that we have big > holiday next week (lunar new year holiday), therefore, it might take some > time. :) > > > On Wed, Jan 30, 2019 at 9:26 PM Jean-Philippe Evrard < > jean-philippe at evrard.me> wrote: > >> Hello, >> >> Thank you for bringing that topic. Let me answer inline. >> Please note, this is my personal opinion. >> (No company or TC hat here. I realise that, as one of the TC members >> following the health of the osh project, this is a concerning mail, and >> I will report appropriately if further steps need to be taken). >> >> On Wed, 2019-01-30 at 13:15 +0900, Jaesuk Ahn wrote: >> > Dear all, >> > >> > There has been several patch sets getting sparse reviews. >> > Since some of authors wrote these patch sets are difficult to join >> > IRC >> > meeting due to time and language constraints, I would like to pass >> > some of >> > their voice, and get more detail feedback from core reviewers and >> > other >> > devs via ML. >> > >> > I fully understand core reviewers are quite busy and believe they are >> > doing >> > their best efforts. period! >> >> We can only hope for best effort of everyone :) >> I have no doubt here. I also believe the team is very busy. >> >> So here is my opinion: Any review is valuable. Core reviewers should >> not be the only ones to review patches >> The more people will review in all of the involved companies, the more >> they will get trusted in their reviews. That follows up with earned >> trust by the core reviewers, with eventually leads to becoming core >> reviewer. >> > > This is a very good point. I really need to encourage developers to at > least cross-review each other's patch set. > I will discuss with other team members how we can achieve this, we might > need to introduce "half-a-day review only" schedule. > Once my team had tried to review more in general, however it failed > because of very limited time allowed to do so. > At least, we can try to cross-review each other on patch sets, and > explicitly assign time to do so. > THIS will be our important homework to do. > > > >> >> I believe we can make a difference by reviewing more, so that the >> existing core team could get extended. Just a highlight: at the moment, >> more than 90% of reviews are AT&T sponsored (counting independents >> working for at&t. See also >> https://www.stackalytics.com/?module=openstack-helm-group). That's very >> high. >> >> I believe extending the core team geographically/with different >> companies is a solution for the listed pain points. >> > > I really would like to have that as well, however, efforts and time to > become a candidate with "good enough" history seems very difficult. > Matching the level (or amount of works) with what the current core > reviewers does is not an easy thing to achieve. > Frankly speaking, motivating someone to put that much effort is also > challenging, especially with their reluctance (hesitant?) to do so due to > language and time barrier. > > > >> >> > However, I sometimes feel that turnaround time for some of patch sets >> > are >> > really long. I would like to hear opinion from others and suggestions >> > on >> > how to improve this. It can be either/both something each patch set >> > owner >> > need to do more, or/and it could be something we as a openstack-helm >> > project can improve. For instance, it could be influenced by time >> > differences, lack of irc presence, or anything else. etc. I really >> > would >> > like to find out there are anything we can improve together. 
>> >> I had the same impression myself: the turnaround time is big for a >> deployment project. >> >> The problem is not simple, and here are a few explanations I could >> think of: >> 1) most core reviewers are from a single company, and emergencies in >> their company are most likely to get prioritized over the community >> work. That leaves some reviews pending. >> 2) most core reviewers are from the same timezone in US, which means, >> in the best case, an asian contributor will have to wait a full day >> before seeing his work merged. If a core reviewer doesn't review this >> on his day work due to an emergency, you're putting the turnaround to >> two days at best. >> 3) most core reviewers are working in the same location: it's maybe >> hard for them to scale the conversation from their internal habits to a >> community driven project. Communication is a very important part of a >> community, and if that doesn't work, it is _very_ concerning to me. We >> raised the points of lack of (IRC presence|reviews) in previous >> community meetings. > > > 2-1) other active developers are on the opposite side of the earth, which > make more difficult to sync with core reviewers. No one wanted, but it > somehow creates an invisible barrier. > > I do agree that "Communication" is a very important part of a community. > Language and time differences are adding more difficulties on this as > well. I am trying my best to be a good liaison, but never enough. > There will be no clear solution. However, I will have a discussion again > with team members to gather some ideas. > > >> >> > >> > I would like to get any kind of advise on the following. >> > - sometimes, it is really difficult to get core reviewers' comments >> > or >> > reviews. I routinely put the list of patch sets on irc meeting >> > agenda, >> > however, there still be a long turnaround time between comments. As a >> > result, it usually takes a long time to process a patch set, does >> > sometimes >> > cause rebase as well. >> >> I thank our testing system auto rebases a lot :) >> The bigger problem is when you're working on something which eventually >> conflicts with some AT&T work that was prioritized internally. >> >> For that, I asked a clear list of what the priorities are. >> ( https://storyboard.openstack.org/#!/worklist/341 ) >> >> Anything outside that should IMO raise a little flag in our heads :) >> >> But it's up to the core reviewers to work with this in focus, and to >> the PTL to give directions. >> >> >> > - Having said that, I would like to have any advise on what we need >> > to do >> > more, for instance, do we need to be in irc directly asking each >> > patch set >> > to core reviewers? do we need to put core reviewers' name when we >> > push >> > patch set? etc. >> >> I believe that we should leverage IRC more for reviews. We are doing it >> in OSA, and it works fine. Of course core developers have their habits >> and a review dashboard, but fast/emergency reviews need to be >> socialized to get prioritized. There are other attempts in the >> community (like have a review priority in gerrit), but I am not >> entirely sold on bringing a technical solution to something that should >> be solved with more communication. >> >> > - Some of patch sets are being reviewed and merged quickly, and some >> > of >> > patch sets are not. I would like to know what makes this difference >> > so that >> > I can tell my developers how to do better job writing and >> > communicating >> > patch sets. 
>> > >> > There are just some example patch sets currently under review stage. >> > >> > 1. https://review.openstack.org/#/c/603971/ >> this ps has been >> > discussed >> > for its contents and scope. Cloud you please add if there is anything >> > else >> > we need to do other than wrapping some of commit message? >> > >> > 2. https://review.openstack.org/#/c/633456/ >> this is simple fix. >> > how can >> > we make core reviewer notice this patch set so that they can quickly >> > view? >> > >> > 3. https://review.openstack.org/#/c/625803/ >> we have been getting >> > feedbacks and questions on this patch set, that has been good. but >> > round-trip time for the recent comments takes a week or more. because >> > of >> > that delay (?), the owner of this patch set needed to rebase this one >> > often. Will this kind of case be improved if author engages more on >> > irc >> > channel or via mailing list to get feedback rather than relying on >> > gerrit >> > reviews? >> >> To me, the last one is more controversial than others (I don't believe >> we should give the opportunity to do that myself until we've done a >> security impact analysis). This change is also bigger than others, >> which is harder to both write and review. As far as I know, there was >> no spec that preceeded this work, so we couldn't discuss the approach >> before the code was written. >> >> I don't mind not having specs for changes to be honest, but it makes >> sense to have one if the subject is more controversial/harder, because >> people will have a tendency to put hard job aside. >> >> This review is the typical review that needs to be discussed in the >> community meeting, advocating for or against it until a decision is >> taken (merge or abandon). >> > > I do agree on your analysis on this one. but, One thing the author really > wanted to have was feedback, that can be either negative or positive. it > could be something to ask to abandon, or rewrite. > but lack of comments with a long turnaround time between comments (that > means author waits days and weeks to see any additional comments) was the > problem. > It felt like somewhat abandoned without any strong reason. > > > >> >> > >> > Frankly speaking, I don't know if this is a real issue or just way it >> > is. I >> > just want to pass some of voice from our developers, and really would >> > like >> > to hear what others think and find a better way to communicate. >> >> It doesn't matter if "it's a real issue" or "just the way it is". >> If there is a feeling of burden/pain, we should tackle the issue. >> >> So, yes, it's very important to raise the issue you feel! >> If you don't do it, nothing will change, the morale of developers will >> fall, and the health of the project will suffer. >> Transparency is key here. >> >> Thanks for voicing your opinion. >> >> > >> > >> > Thanks you. >> > >> > >> >> I would say my key take-aways are: >> 1) We need to review more >> 2) We need to communicate/socialize more on patchsets and issues. Let's >> be more active on IRC outside meetings. >> > > Just one small note here: developers in my team prefer email communication > sometime, where they can have time to think how to write their opinion on > English. > > >> 3) The priority list need to be updated to be accurate. I am not sure >> this list is complete (there is no mention of docs image building >> there). >> > > I really want this happen. Things are often suddenly showed up on patch > set and merged. 
> It is a bit difficult to follow what is exactly happening on > openstack-helm community. Of course, this required everyone's efforts. > > >> 4) We need to extend the core team in different geographical regions >> and companies as soon as possible >> >> But of course it's only my analysis. I would be happy to see Pete >> answer here. >> >> Regards, >> Jeam-Philippe Evrard (evrardjp) >> >> >> > A bit unrelated with topic, but I really want to say this. > I DO REALLY appreciate openstack-helm community's effort to accept > non-English documents as official one. (although it is slowly progressing > ^^) > I think this move is real diversity effort than any other move > (recognizing there is a good value community need to bring in "as-is", even > though that is non-English information) > > Cheers, > > > -- > *Jaesuk Ahn*, Ph.D. > Software R&D Center, SK Telecom > -- *Jaesuk Ahn*, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramshaazeemi2 at gmail.com Tue Feb 19 15:09:46 2019 From: ramshaazeemi2 at gmail.com (Ramsha Azeemi) Date: Tue, 19 Feb 2019 20:09:46 +0500 Subject: outreachy candidate Message-ID: hi ! I am an applicant , and i want to contribute in "OpenStack Manila Integration with OpenStack CLI (OSC)" project but i couldnt find a way tosetup environment , contribute , find newcomer friendly issues , or codes to fix etc . Kindly guide me . -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Feb 19 15:19:15 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 19 Feb 2019 09:19:15 -0600 Subject: outreachy candidate In-Reply-To: References: Message-ID: Ramsha, Welcome to the community!  A good place to start is with the contributor guide: https://docs.openstack.org/manila/latest/contributor/index.html We also have a lot of information about getting started in the OpenStack Upstream Institute: https://docs.openstack.org/upstream-training/  There will be an Upstream Institute before the Denver Summit if you are able to attend in person. https://www.openstack.org/summit/denver-2019?gclid=CjwKCAiA767jBRBqEiwAGdAOr8USf8TDJ3Gq45BPthBDikdyaA41J0XCIOpI2Im0jsrF8h825c11DBoCMwcQAvD_BwE Hope this information helps! Thanks! Jay IRC: jungleboyj On 2/19/2019 9:09 AM, Ramsha Azeemi wrote: > hi ! I am an applicant , and i want to contribute in "OpenStack Manila > Integration with OpenStack CLI (OSC)" project but i couldnt find a way > tosetup environment , contribute , find newcomer friendly issues , or > codes to fix etc . Kindly guide me . -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 19 15:31:30 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 19 Feb 2019 09:31:30 -0600 Subject: [oslo] Single approve py37 patches Message-ID: As the subject says, let's single approve the py37 job addition patches. They're machine-generated and part of a broader effort that has already been agreed upon, so as long as they're passing CI they should be fine. -Ben From openstack at nemebean.com Tue Feb 19 16:03:39 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 19 Feb 2019 10:03:39 -0600 Subject: [infra] Patch submitters can remove core reviewer votes? Message-ID: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> See https://review.openstack.org/#/c/637703/ I'm assuming in that case it was an accident, but it doesn't seem like this should be possible at all. 
For example, if I had -2'd would it still have allowed the removal? Maybe you can only remove positive votes? Anyway, seemed weird to me so I thought I would bring it up. -Ben From fungi at yuggoth.org Tue Feb 19 16:13:30 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Feb 2019 16:13:30 +0000 Subject: [infra] Patch submitters can remove core reviewer votes? In-Reply-To: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> References: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> Message-ID: <20190219161330.h7hfqdza7jqltafz@yuggoth.org> On 2019-02-19 10:03:39 -0600 (-0600), Ben Nemec wrote: > See https://review.openstack.org/#/c/637703/ > > I'm assuming in that case it was an accident, but it doesn't seem like this > should be possible at all. For example, if I had -2'd would it still have > allowed the removal? Maybe you can only remove positive votes? The latter, yes: https://review.openstack.org/Documentation/access-control.html#category_remove_reviewer > Anyway, seemed weird to me so I thought I would bring it up. Thanks for doing so. It's a sometimes surprising behavior worth keeping in mind! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Tue Feb 19 16:13:42 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 19 Feb 2019 11:13:42 -0500 Subject: [infra] Patch submitters can remove core reviewer votes? In-Reply-To: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> References: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> Message-ID: <2b73aa64-d966-4a5f-a540-1af466604682@www.fastmail.com> On Tue, Feb 19, 2019, at 8:03 AM, Ben Nemec wrote: > See https://review.openstack.org/#/c/637703/ > > I'm assuming in that case it was an accident, but it doesn't seem like > this should be possible at all. For example, if I had -2'd would it > still have allowed the removal? Maybe you can only remove positive votes? > > Anyway, seemed weird to me so I thought I would bring it up. I want to say we tested this as a behavior noticed during upgrade prep a year and a half ago. And as you suspect it is only allowed for positive votes. However, you should double check us on that. Can you see if you are able to remove either of the -1's on https://review.openstack.org/#/c/571321/ ? (you own that change so should be a reasonable reproduction case). I'd do it myself but unfortunately my account (and other Gerrit admin accounts) can always do this for reasons... Clark From openstack at nemebean.com Tue Feb 19 16:33:21 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 19 Feb 2019 10:33:21 -0600 Subject: [infra] Patch submitters can remove core reviewer votes? In-Reply-To: <2b73aa64-d966-4a5f-a540-1af466604682@www.fastmail.com> References: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> <2b73aa64-d966-4a5f-a540-1af466604682@www.fastmail.com> Message-ID: On 2/19/19 10:13 AM, Clark Boylan wrote: > On Tue, Feb 19, 2019, at 8:03 AM, Ben Nemec wrote: >> See https://review.openstack.org/#/c/637703/ >> >> I'm assuming in that case it was an accident, but it doesn't seem like >> this should be possible at all. For example, if I had -2'd would it >> still have allowed the removal? Maybe you can only remove positive votes? >> >> Anyway, seemed weird to me so I thought I would bring it up. > > I want to say we tested this as a behavior noticed during upgrade prep a year and a half ago. 
And as you suspect it is only allowed for positive votes. However, you should double check us on that. Can you see if you are able to remove either of the -1's on https://review.openstack.org/#/c/571321/ ? (you own that change so should be a reasonable reproduction case). No, I can't, so looks like this is working as intended. Thanks! > > I'd do it myself but unfortunately my account (and other Gerrit admin accounts) can always do this for reasons... > > Clark > From km.giuseppesannino at gmail.com Tue Feb 19 16:35:15 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 19 Feb 2019 17:35:15 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." Message-ID: Hi all, need an help. I deployed an AIO via Kolla on a baremetal node. Here some information about the deployment: --------------- kolla-ansible: 7.0.1 openstack_release: Rocky kolla_base_distro: centos kolla_install_type: source TLS: disabled --------------- VMs spawn without issue but I can't make the "Kubernetes cluster creation" successfully. It fails due to "Time out" I managed to log into Kuber Master and from the cloud-init-output.log I can see: + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ curl --silent http://127.0.0.1:8080/healthz + '[' ok = '' ']' + sleep 5 Checking via systemctl and journalctl I see: [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status kube-apiserver ● kube-apiserver.service - kubernetes-apiserver Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; 45min ago Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run kube-apiserver (code=exited, status=1/FAILURE) Main PID: 3796 (code=exited, status=1/FAILURE) Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 6. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Start request repeated too quickly. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Failed to start kubernetes-apiserver. [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u kube-apiserver -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 16:17:00 UTC. -- Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Started kubernetes-apiserver. Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version. Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-port has been deprecated, This flag will be removed in a future version. 
Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied : : : Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1. May I ask for an help on this ? Many thanks /Giuseppe -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 19 16:36:00 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 19 Feb 2019 10:36:00 -0600 Subject: [infra] Patch submitters can remove core reviewer votes? In-Reply-To: <20190219161330.h7hfqdza7jqltafz@yuggoth.org> References: <9da20649-1066-4de2-bb8f-7fc9d24a1d43@nemebean.com> <20190219161330.h7hfqdza7jqltafz@yuggoth.org> Message-ID: On 2/19/19 10:13 AM, Jeremy Stanley wrote: > On 2019-02-19 10:03:39 -0600 (-0600), Ben Nemec wrote: >> See https://review.openstack.org/#/c/637703/ >> >> I'm assuming in that case it was an accident, but it doesn't seem like this >> should be possible at all. For example, if I had -2'd would it still have >> allowed the removal? Maybe you can only remove positive votes? > > The latter, yes: > > https://review.openstack.org/Documentation/access-control.html#category_remove_reviewer > >> Anyway, seemed weird to me so I thought I would bring it up. > > Thanks for doing so. It's a sometimes surprising behavior worth > keeping in mind! > Yeah, definitely made me o.O when I saw it. :-) Thanks! From mriedemos at gmail.com Tue Feb 19 16:42:32 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 19 Feb 2019 10:42:32 -0600 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: References: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> Message-ID: <47bf561e-439b-1642-1aa7-7bf48adca64a@gmail.com> On 2/18/2019 8:22 PM, melanie witt wrote: > Right, that is the proposal in this email. That we should remove > project_only=True and let the API policy check handle whether or not the > user from a different project is allowed to get the instance. Otherwise, > users are not able to use policy to control the behavior because it is > hard-coded in the database layer. I think this has always been the long-term goal and I remember a spec from John about it [1] but having said that, the spec was fairly complicated (to me at least) and sounds like there would be a fair bit of auditing of the API code we'd need to do before we can remove the DB API check, which means it's likely not something we can complete at this point in Stein. For example, I think we have a lot of APIs that run the policy check on the context (project_id and user_id) as the target before even pulling the resource from the database, and the resource itself should be the target, right? 
[1] https://review.openstack.org/#/c/433037/ -- Thanks, Matt From sean.mcginnis at gmx.com Tue Feb 19 16:46:42 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 19 Feb 2019 10:46:42 -0600 Subject: [all] Single approve py37 patches In-Reply-To: References: Message-ID: <20190219164642.GA29222@sm-workstation> On Tue, Feb 19, 2019 at 09:31:30AM -0600, Ben Nemec wrote: > As the subject says, let's single approve the py37 job addition patches. > They're machine-generated and part of a broader effort that has already been > agreed upon, so as long as they're passing CI they should be fine. > > -Ben > Somewhat related - I submitted this patch today: https://review.openstack.org/#/c/637866/ The idea with this would be to control which Python versions would be run through a set of templates that makes it clear per-release cycle what is expected to be run. It's maybe a little late to get all folks using this for stein, but I've added a template for it just in case we want to use it once we have stable/stein branches. We can also get some projects switched over before then on a best effort basis. We would try to do some automation in Train to get everyone on that template right from the start. For Stein, I have added py37 as a non-voting job since that was not our declared runtime. But we probably want to make sure there are no surprises once we switch to Train and it becomes the required Python 3 runtime. It probably makes sense, especially for common libraries like oslo, to get these jobs now so we know they are ready ahead of Train. If/when we switch to using these templates we can clean up any now-redundant individual jobs. Sean From km.giuseppesannino at gmail.com Tue Feb 19 17:43:44 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 19 Feb 2019 18:43:44 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: Message-ID: Hi all...again, I managed to get over the previous issue by "not disabling" the TLS in the cluster template. >From the cloud-init-output.log I see: Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 +0000. Up 38.08 seconds. Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. Datasource DataSourceEc2. Up 607.13 seconds But the cluster creation keeps on failing. >From the journalctl -f I see a possible issue: Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: publicURL endpoint for orchestration service in null region not found Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: Source [heat] Unavailable. Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping anyone familiar with this problem ? Thanks as usual. /Giuseppe On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino wrote: > Hi all, > need an help. > I deployed an AIO via Kolla on a baremetal node. Here some information > about the deployment: > --------------- > kolla-ansible: 7.0.1 > openstack_release: Rocky > kolla_base_distro: centos > kolla_install_type: source > TLS: disabled > --------------- > > > VMs spawn without issue but I can't make the "Kubernetes cluster creation" > successfully. It fails due to "Time out" > > I managed to log into Kuber Master and from the cloud-init-output.log I > can see: > + echo 'Waiting for Kubernetes API...' > Waiting for Kubernetes API... 
> ++ curl --silent http://127.0.0.1:8080/healthz > + '[' ok = '' ']' > + sleep 5 > > > Checking via systemctl and journalctl I see: > [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status > kube-apiserver > ● kube-apiserver.service - kubernetes-apiserver > Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; > vendor preset: disabled) > Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; > 45min ago > Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run > kube-apiserver (code=exited, status=1/FAILURE) > Main PID: 3796 (code=exited, status=1/FAILURE) > > Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE > Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Service RestartSec=100ms expired, scheduling > restart. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Scheduled restart job, restart counter is at 6. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > Stopped kubernetes-apiserver. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Start request repeated too quickly. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > Failed to start kubernetes-apiserver. > > [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u > kube-apiserver > -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 > 16:17:00 UTC. -- > Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > Started kubernetes-apiserver. > Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: > Flag --insecure-bind-address has been deprecated, This flag will be removed > in a future version. > Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: > Flag --insecure-port has been deprecated, This flag will be removed in a > future version. > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: > Error: error creating self-signed certificates: open > /var/run/kubernetes/apiserver.crt: permission denied > : > : > : > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: > error: error creating self-signed certificates: open > /var/run/kubernetes/apiserver.crt: permission denied > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Service RestartSec=100ms expired, scheduling > restart. > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: > kube-apiserver.service: Scheduled restart job, restart counter is at 1. > > > May I ask for an help on this ? > > Many thanks > /Giuseppe > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Feb 19 18:02:33 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 Feb 2019 18:02:33 +0000 Subject: [all] Single approve py37 patches In-Reply-To: <20190219164642.GA29222@sm-workstation> References: <20190219164642.GA29222@sm-workstation> Message-ID: <88b3ee43c194ebba85b9b4f3ff733de35dbcd81a.camel@redhat.com> On Tue, 2019-02-19 at 10:46 -0600, Sean McGinnis wrote: > On Tue, Feb 19, 2019 at 09:31:30AM -0600, Ben Nemec wrote: > > As the subject says, let's single approve the py37 job addition patches. > > They're machine-generated and part of a broader effort that has already been > > agreed upon, so as long as they're passing CI they should be fine. > > > > -Ben > > > > Somewhat related - I submitted this patch today: > > https://review.openstack.org/#/c/637866/ > > The idea with this would be to control which Python versions would be run > through a set of templates that makes it clear per-release cycle what is > expected to be run. > > It's maybe a little late to get all folks using this for stein, but I've added > a template for it just in case we want to use it once we have stable/stein > branches. We can also get some projects switched over before then on a best > effort basis. > > We would try to do some automation in Train to get everyone on that template > right from the start. > > For Stein, I have added py37 as a non-voting job since that was not our > declared runtime. But we probably want to make sure there are no surprises > once we switch to Train and it becomes the required Python 3 runtime. > > It probably makes sense, especially for common libraries like oslo, to get > these jobs now so we know they are ready ahead of Train. If/when we switch to > using these templates we can clean up any now-redundant individual jobs. so ill approve https://review.openstack.org/#/c/610068/ once it passes ci to swap over os-vif to enable the python 3.7 jobs. but the question i have is the non clinet lib fereeze is next week. should i quickly sub a patch to swap to the stein template once https://review.openstack.org/#/c/637866/ merges? i can also submit a follow up to swap over to the the train template and -w it until RC1. is that the intention of https://review.openstack.org/#/c/637866/. or should i wait for automated proposals to use the templates? > > Sean > From sbauza at redhat.com Tue Feb 19 18:40:12 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 19 Feb 2019 19:40:12 +0100 Subject: [tc][election] TC candidacy Message-ID: Howdy folks, I wasn't thinking to provide my name for the TC election but unfortunately I only see a very few proposals that makes me a bit afraid. Given I love OpenStack and I think the Technical Comittee is more important than my personal values, I throw my hat now in the ring. In case you don't know me, I'm Sylvain Bauza (bauzas on IRC), working on OpenStack since... wow, 6 years already ? In 2013, I was an operator for a SME company when I wanted to use some cloud for our CI and our developers and I discovered OpenStack. After 6 months working on it as an operator, I knew it would be my new life for more than what I know. I moved to another company and became a developer creating a new project which was named Climate. You probably know about this project if I tell you the new name : Blazar. Yeah, Blazar is 6 years old too and I'm super happy to see this project be now important with new companies and developers on it. 
After 1 year on it, I changed again my position and became a Nova developer, eventually becoming nova-core. Time flies and now I'm still there, happy with what OpenStack became. Of course, it changed. Of course, we have less. But honestly, I haven't seen more operators using it previously than now, which means that we succeeded as a team to make OpenStack useful for our users. I will be honest and say that I now work more on downstream for our customers than upstream with the Nova community. If you see my upstream involment, it slowly decreased from the last cycles but don't think I'm out of the band. After all, that means that people use our code, right? Also, that doesn't necessarly mean that I'll stop working upstream, it's just a balance that needs to be challenged, and be sure that if I'm a TC member, I'll take care of this balance. Enough words about me. I guess you're more interested in knowing about what I think is important for a TC membership. Well, I have a few opinions. - first, OpenStack is used from startups with a few servers to large cloud providers with +200K hosts. That's where we succeeded as projects. I think it's very important to make sure that service projects run from 0 to X smoothly and make it a priority if not. - secondly, the user experience is very important when it comes to talk to service projects. Having consistent and versioned APIs is important, albeit also client usage. We have microversions in place, but the story isn't fully done yet. - thirdly, I think we won the complexity game. Deploying OpenStack is now smoothier than before, but there are still bumps on the road, and I think the TC is the place to arbitrate between the understandable will of having more and the reasonable concern about upgrades and deployment concerns. Those three items aren't exhaustive of course. We all have opinions and I'm just one of the many. By the way, if you read up to here, ask yourself : if you care about OpenStack, why wouldn't you propose your name too ? Thanks, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Feb 19 18:54:00 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 19 Feb 2019 12:54:00 -0600 Subject: [all] Single approve py37 patches In-Reply-To: <88b3ee43c194ebba85b9b4f3ff733de35dbcd81a.camel@redhat.com> References: <20190219164642.GA29222@sm-workstation> <88b3ee43c194ebba85b9b4f3ff733de35dbcd81a.camel@redhat.com> Message-ID: <20190219185400.GA8558@sm-workstation> > > > > Somewhat related - I submitted this patch today: > > > > https://review.openstack.org/#/c/637866/ > > > > > > It's maybe a little late to get all folks using this for stein, but I've added > > a template for it just in case we want to use it once we have stable/stein > > branches. We can also get some projects switched over before then on a best > > effort basis. > > > > We would try to do some automation in Train to get everyone on that template > > right from the start. > so ill approve https://review.openstack.org/#/c/610068/ once it passes ci to > swap over os-vif to enable the python 3.7 jobs. > but the question i have is the non clinet lib fereeze is next week. > should i quickly sub a patch to swap to the stein template once > https://review.openstack.org/#/c/637866/ merges? > i can also submit a follow up to swap over to the the train template and > -w > it until RC1. is that the intention of https://review.openstack.org/#/c/637866/. > or should i wait for automated proposals to use the templates? 
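For readers trying to picture what "swapping to the template" means for a project, it is only a line or two in the repo's .zuul.yaml. The template names below are a guess at what the proposal will end up calling them and could still change before it merges:

- project:
    templates:
      # explicit per-interpreter template added by the machine-generated patches
      - openstack-python37-jobs
      # per-cycle template from https://review.openstack.org/#/c/637866/ that
      # would pick the right python3 unit test jobs for the release
      - openstack-python3-stein-jobs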
> Great questions! I'm waiting to get more feedback from the community on the template approach. I (obviously) think it would be a good way to go, but I'd like to see that there's a little broader consensus on that approach. Which all means, nothing should be held up at this point waiting for that to be approved. If it gets approved quickly - great, we can start switching over to it if we still can and the teams want to yet in Stein. If it doesn't get approved, then hopefully there is no impact on any current in-flight plans. As far as the lib freeze and releasing goes, this shouldn't really have an impact on that. Since it is a non-functional change (as in, it does not change how any of the libraries work, it just impacts which tests are run against them) it could still be merged after the lib freeze and would not need another release to be done just for its own sake. Thanks! Sean From a.settle at outlook.com Tue Feb 19 18:54:42 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Tue, 19 Feb 2019 18:54:42 +0000 Subject: [tc][elections] TC candidacy Message-ID: Hi all, I am announcing my candidacy for a position on the OpenStack Technical Committee. I have been an active contributor to OpenStack manuals since the beginning of 2014, and have been a core member since early 2015. I am currently employed by SUSE to work on SUSE OpenStack Cloud and OpenStack. You may know me better as asettle on IRC. Over these last 3 years, I have been privileged to be a part of this community focusing primarily on OpenStack documentation and have been vocal about treating documentation as a first-class citizen [0]. My key achievement was working as the elected PTL for documentation in 2017 [1] where I initiated moving the documentation out of the OpenStack-manuals repository and into the individual project repositories. My announcement may be considered quite brazen, as I have just returned to the community after a year-long hiatus working on the virtualization side of the world in a product operations role, however I am keen to support and work for this community with a renewed sense of enthusiasm. My experience in the OpenStack sphere has revolved around documentation, making me a traditionally unlikely candidate for the position on the TC. My role as documentation core and PTL has provided me with a broad view of OpenStack projects, how they work, and who you all are. I believe the perspective I will bring to the TC will be unique and helpful, as my time with OpenStack has been primarily focused on the integration and communication of all projects. The technical committee is designed to provide technical leadership for OpenStack as a whole [2] and I believe this to be true. Over the years, there has been an unspoken consensus that we are all aiming for the success of OpenStack as free and open software, fully developed and used by a welcoming and supportive community. I hope to stand as a member of the TC and further promote this statement. Drawing on my past experience as a communicator, documentor, collaborator, breaker of (my own) feet, and all round cool person, I am aiming to focus on the following three things: breaking down barriers between projects (new and old) and contributors (new and old); the openness of the community and maintaining that focus;  and embracing change. 
Thank you for your consideration, Alex p.s - Tony: https://review.openstack.org/637972 [0] https://en.wikipedia.org/wiki/First-class_citizen [1] https://github.com/openstack/election/blob/master/candidates/pike/Documentation/asettle.txt [2] https://www.openstack.org/foundation/tech-committee/ From rico.lin.guanyu at gmail.com Tue Feb 19 20:02:14 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 20 Feb 2019 04:02:14 +0800 Subject: [tc][elections] TC candidacy Message-ID: Dear all, I'm announcing my candidacy for a position on the OpenStack Technical Committee. I'm Rico Lin. I have been in this community since 2014 and have been deeply involved in technical contributions [1]. I started by working on Heat, which allowed me to work on integrating and managing resources from multiple projects. I have served as Heat PTL for two years, which taught me how we can bring users' and operators' experiences and requirements into our development workflow and technical decision processes. Here are my major goals for this seat on the TC: * Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV) * Provide guidelines * Strengthen the structure of SIGs * Application Infra * Cross-cooperation between Users, Operators, and Developers * Diversity I'm already trying to put my goals into action, but I would really like to join the Technical Committee to bring more attention to those domains and get more actions taken, and of course to take full responsibility for all of the TC's duties beyond these goals. Thank you for your consideration. Best Regards, Rico Lin (ricolin) IRC: ricolin Twitter: @ricolintw https://www.openstack.org/community/members/profile/33346/rico-lin http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day Here I put some explanations for my goals: - Cross-community integrations (K8s, CloudFoundry, Ceph, OPNFV): This is a long-term goal for our community, but I would really like to see it gain more use-case scenarios and a clearer target for development. As we talk about Edge, AI, etc., it's essential to bring real use cases into this integration (for example, StarlingX brings cross-project requirements drawn from real use cases). On the other hand, the K8s SIG, Self-healing SIG, and FEMDC SIG are all good places for this kind of interaction and integration to happen. We also just started the Auto-scaling SIG, which will make another good candidate to carry such a mission. - Provide unified guidelines and features: We now have `Guidelines for Organisations Contributing to OpenStack` [4]. I believe this is quite important for showing how organizations can interact with the OpenStack community correctly, and I try to work toward the same goal from event to event as well (giving presentations like [5]). There are some other guidelines that need to be updated or renewed as well (most of us, who already read the ML and have worked in the community for a long time, may no longer need to read guidelines, but remember that whoever tries to join nowadays still needs an up-to-date guideline to give them hints). Having unified guidelines (and even features) allows whoever loses track of the community to come back easily. It is also confusing for users and ops when they try to figure out how to achieve a task (like auto-scaling) in OpenStack. I'm pretty sure it's a piece of cake for experienced ops, and just as sure that other ops are very likely to run into a lot of confusion. 
- Strengthen the structure of SIGs/WGs: As I noted in the two goals above, SIGs/WGs play some important roles. I would like to trigger discussions on how we can strengthen the structure of SIGs: make them more efficient and make them a place where users and ops can directly interact with developers. For real use cases, like an edge computing issue or an automatic healing service issue, I can't think of a better place than the FEMDC SIG and the Self-healing SIG to record and target those issues. We might be able to let ops report issues on a SIG's StoryBoard and ask project teams to connect with and review them. There might be multiple ways to do this, so I would really like to trigger this discussion. On the other hand, for other tasks that don't require much long-term tracking, we will definitely have some way for everyone to easily start their communication, tests, and development (like the `pop-up team` idea Thierry Carrez is currently proposing), which (IMO) will not have to deal with structure set-up at all. - Application Infra: We've updated our resolution with [3], saying we care about what applications need on top of OpenStack. Since only a few projects are taking on the role of thinking about what applications need, we should help by setting up community goals, making resolutions, or defining which applications are the top priority for us to focus on (it can be a short-term definition), then taking action items/guidelines and finding weaknesses, so that others from the community can follow (if they agree with the goals but have no idea how they can help, IMO this will be good stuff). - Cross-cooperation between Users, Operators, and Developers: We have been losing some communication across users, operators, and developers. It's never a good thing when users can share use cases, ops share experiences, and developers share code, but none of it reaches the others unless a user brings it to the developers themselves. In this case, work like StoryBoard should be our first priority. We need a more solid way to bring user feedback to developers, so we can actually learn what is and isn't working for each feature. It may also be worth strengthening the communication between the TC and the UC (User Committee). We have taken some steps toward this goal (like merging the PTG and the Ops meetup), but I believe we can make the interaction more active. - Diversity: I have a long-term question here. How can you encourage people from around the globe to vote, and to care about a technology that is relatively stable and old? It's not just that we're losing attractiveness (which is normal for any open source project in the end), but also that we're not as friendly as we think on the globalization issue. If a new non-native-speaking developer comes in, do we actually expect him/her to know what to vote for, where the resources are, or the best way to raise a cross-project idea? Trying to build bridges between user groups and the technical community across the globe is not just something the TC should be doing, but IMO the TC definitely has a responsibility and a role in it. As for the reason why I should join, well... the math is easy. [2] shows we have around one-third of users from Asia (with 75% of those users in China), and, IIRC, around the same percentage of developers. But we have 0 in the TC. The actual work is hard. We need to bring our technical guidelines to developers in Asia and provide chances to get more feedback from them, so we can produce better technical resolutions that can tie developers together. 
Which I think I'm a good candidate for this. [1] http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day [2] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf [3] https://review.openstack.org/#/c/447031/5/resolutions/20170317-cloud-applications-mission.rst [4] https://docs.openstack.org/contributors/organizations/index.html [5] https://www.slideshare.net/GuanYuLin1/embrace-community-embrace-a-better-life -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Tue Feb 19 20:14:40 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 19 Feb 2019 15:14:40 -0500 Subject: [tc][elections] TC candidacy Message-ID: Hi friends, I'm throwing my name out there for a position on the TC, if you'll have me. I've been around OpenStack since 2014, when I began working on a bare metal cloud powered by Ironic. Since then, I've been a core reviewer for Ironic and spent three cycles as PTL. I've also spent this time developing and operating OpenStack at large scale at Rackspace and Yahoo^WOath^WVerizon Media. I've never been a member of the TC, but have spent a lot of time in meetings, office hours, IRC channels, and face to face meetings with the TC, over various iterations of its members. I care deeply about OpenStack, and believe that I can help shape the future. I'll be honest, I don't have specific objectives I want to accomplish as a TC member, that I can platform on. Things I do care about doing are: Make/keep the OpenStack community a happy place to be (as happy as people can be while working, anyway). I believe that this community is somewhat family-like, and we all have each other's best interests in mind. We seem to be collaborating better than we have in a long time, and I'd like to keep improving on that. I'd love to continue the work that folks have started to encourage more cross-project feature collaboration. Encourage more part-time contributors. People like someone scratching an itch in their lab at home, a user getting curious about a bug, or an operator that finds an edge case. I think it's easier for these types of people to contribute today than it has been in the past, but I believe we can keep improving on this. Our onboarding process can continue to improve. We should have more people willing to walk a new contributor through their first patch (kudos to the people doing this already!). And folks shouldn't have to spend thousands of dollars attending a summit to gain influence in the community. On that note, I should be clear that I won't be at the Denver summit, and probably not Shanghai. Purely personal reasons, my employer still has my back with upstream contributions. I believe my experience as a downstream dev, upstream dev, operator, IRC addict, and wanna-be thought leaderer makes me a decent fit to help drive the community forward. Thanks for reading, and thanks in advance for voting, even if it's not for me. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at gmail.com Tue Feb 19 20:21:15 2019 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 20 Feb 2019 09:21:15 +1300 Subject: [scientific-sig] IRC meeting today 2100 UTC (in <1 hour): ISC BoF and Summit prep In-Reply-To: References: Message-ID: Hi all, A quick one today. Attempting to finalise a ISC'19 BoF proposal and looking for lightning talks at the Summit BoF. 
Cheers, b1airo -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 19 20:48:16 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 19 Feb 2019 15:48:16 -0500 Subject: [tc] Technical Committee Status report for February 19 Message-ID: This is a summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: * Retire puppet-stackalytics ** https://review.openstack.org/#/c/631820/ * Extra ATCs for I18n team ** https://review.openstack.org/#/c/633398/ * Add roles for managing HSM hardware to OpenStack-Ansible ** https://review.openstack.org/#/c/631324/ * Add os_mistral role to OpenStack-Ansible ** https://review.openstack.org/#/c/634817/ * Remove instack from TripleO ** https://review.openstack.org/#/c/635278/ Other updates: * Zane Clarified the meaning of design goals in the technical vision document ** https://review.openstack.org/#/c/631435/ * Ghanshyam added version-based feature discovery to the technical vision ** https://review.openstack.org/#/c/621516/ * I fixed the tool that reports on status of governance changes against various voting rules to deal with formal votes properly ** https://review.openstack.org/#/c/636418/ == TC Meetings == The most recent TC meeting was on 7 February. Logs were sent to the mailing list after the meeting. * http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002493.html The next TC meeting is scheduled for 7 March @ 1400 UTC in #openstack-tc. See http://eavesdrop.openstack.org/#Technical_Committee_Meeting for details == Ongoing Discussions == Jean-Philippe has proposed adding openSUSE to the list of supported platforms in the PTI. * https://review.openstack.org/#/c/633460/ Ed Leafe has proposed creating a Placement team * https://review.openstack.org/#/c/636416/ We have a few conversations around the "help most wanted" list. * Ivan is proposing to add Horizon to the existing list: https://review.openstack.org/#/c/633798/ * Lance is proposing to add a section about working on unified limits and quota management: https://review.openstack.org/#/c/637025/ * Thierry proposed creating a Padawan mentoring program, as part of a larger discussion around ways to replace the existing list: https://review.openstack.org/#/c/636956/ Thierry has also proposed a clarification to the technical goal setting role of the TC. * https://review.openstack.org/#/c/636948/ We are discussing whether it makes sense to use stackalytics reports when making governance decisions or providing details about team from a governance perspective. There is some question about the open nature of the tool and whether the data collection algorithms have changed recently with the updated implementation. * Linking to metrics from team pages on governance.o.o: https://review.openstack.org/#/c/636665/ * Jeremy's patch to remove the team fragility tool, which uses stackalytics data: https://review.openstack.org/#/c/636721/ Rico has proposed a patch to record the fact that the new auto-scaling SIG owns the auto-scaling-sig git repo. It isn't up to the TC to set ownership, but we record the information along with other ownership details so it can be used for building voter rolls. 
* https://review.openstack.org/#/c/637126/ Gorka has proposed a patch to add the cinderlib repository to the cinder team. * https://review.openstack.org/#/c/637614/ == TC member actions/focus/discussions for the coming week(s) == It's election time, so I expect most of us will be focused on the discussions associated with the campaign. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-discuss at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. -- Doug From doug at doughellmann.com Tue Feb 19 20:57:09 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 19 Feb 2019 15:57:09 -0500 Subject: [goal][python3] week R-7 update Message-ID: This is the regular update for the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == Current Status == After last week's several teams updated their status in the wiki, and at least one team changed the job they had running so that it is voting. The list of teams who have not completed this step is still longer than I would like, but everyone is either working on it or has put it on their roadmap. == Ongoing and Completed Work == We also have a fair number of tox update patches left open. I intend to abandon any of those that I submitted when we hit RC1 for Stein, because I plan to be working on other things next cycle. If you see your project name below and want to get your version of the patch landed before that deadline, look for the patch with topic tag 'python3-first'. 
+-------------------+--------------+---------+----------+---------+------------+-------+---------------+ | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ == Next Steps == == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 -- Doug From feilong at catalyst.net.nz Tue Feb 19 21:00:10 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Wed, 20 Feb 2019 10:00:10 +1300 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: Message-ID: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> Can you talk to the Heat API from your master node? On 20/02/19 6:43 AM, Giuseppe Sannino wrote: > Hi all...again, > I managed to get over the previous issue by "not disabling" the TLS in > the cluster template. > From the cloud-init-output.log I see: > Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 > 17:03:53 +0000. Up 38.08 seconds. > Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. > Datasource DataSourceEc2.  Up 607.13 seconds > > But the cluster creation keeps on failing. 
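(For anyone hitting the same thing: one rough way to answer the question above from the master VM is sketched below. It assumes the stock Fedora Atomic image used by the Magnum Kubernetes driver, where the parameters handed to the heat agents normally end up in /etc/sysconfig/heat-params; the controller hostname is only a placeholder, and the exact variable names may differ between releases.)

  $ grep -i -E 'region|auth_url' /etc/sysconfig/heat-params     # which region and auth URL the node was told to use
  $ curl -skI https://controller.example.com:8004/ | head -n 1  # substitute the orchestration endpoint from your own catalog

  # and from a host with admin credentials, confirm the catalog entry exists
  # in the region the cluster template points at:
  $ openstack endpoint list --service orchestration --interface public

A missing or mismatched region between what the node was given and what the catalog contains is one common reason the agents cannot reach Heat.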
> From the journalctl -f I see a possible issue: > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: publicURL endpoint for orchestration service in null > region not found > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: Source [heat] Unavailable. > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping > > anyone familiar with this problem ? > > Thanks as usual. > /Giuseppe > > > > > > > > On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino > > > wrote: > > Hi all, > need an help. > I deployed an AIO via Kolla on a baremetal node. Here some > information about the deployment: > --------------- > kolla-ansible: 7.0.1 > openstack_release: Rocky > kolla_base_distro: centos > kolla_install_type: source > TLS: disabled > ---------------   > > > VMs spawn without issue but I can't make the "Kubernetes cluster > creation" successfully. It fails due to "Time out" > > I managed to log into Kuber Master and from the > cloud-init-output.log I can see: > + echo 'Waiting for Kubernetes API...' > Waiting for Kubernetes API... > ++ curl --silent http://127.0.0.1:8080/healthz > + '[' ok = '' ']' > + sleep 5 > > > Checking via systemctl and journalctl I see: > [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status > kube-apiserver > ● kube-apiserver.service - kubernetes-apiserver >    Loaded: loaded (/etc/systemd/system/kube-apiserver.service; > enabled; vendor preset: disabled) >    Active: failed (Result: exit-code) since Tue 2019-02-19 > 15:31:41 UTC; 45min ago >   Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run > kube-apiserver (code=exited, status=1/FAILURE) >  Main PID: 3796 (code=exited, status=1/FAILURE) > > Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Main process exited, > code=exited, status=1/FAILURE > Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Service RestartSec=100ms > expired, scheduling restart. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Scheduled restart job, restart > counter is at 6. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: Stopped kubernetes-apiserver. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Start request repeated too > quickly. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: Failed to start kubernetes-apiserver. > > [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl > -u kube-apiserver > -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue > 2019-02-19 16:17:00 UTC. -- > Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: Started kubernetes-apiserver. > Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal > runc[2794]: Flag --insecure-bind-address has been deprecated, This > flag will be removed in a future version. > Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal > runc[2794]: Flag --insecure-port has been deprecated, This flag > will be removed in a future version. 
> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > runc[2794]: Error: error creating self-signed certificates: open > /var/run/kubernetes/apiserver.crt: permission denied > : > : > : > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > runc[2794]: error: error creating self-signed certificates: open > /var/run/kubernetes/apiserver.crt: permission denied > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Main process exited, > code=exited, status=1/FAILURE > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Service RestartSec=100ms > expired, scheduling restart. > Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal > systemd[1]: kube-apiserver.service: Scheduled restart job, restart > counter is at 1. > > > May I ask for an help on this ? > > Many thanks > /Giuseppe > > > > -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Feb 19 21:05:01 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 19 Feb 2019 22:05:01 +0100 Subject: [goal][python3] week R-7 update In-Reply-To: References: Message-ID: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> Hi, I’m from Neutron team. Can You explain maybe what exactly means „2/17” in „tox defaults” column and what is „1” failing job? Thx in advance for any info :) > Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 21:57: > > > This is the regular update for the "Run under Python 3 by default" goal > (https://governance.openstack.org/tc/goals/stein/python3-first.html). > > == Current Status == > > After last week's several teams updated their status in the wiki, and at > least one team changed the job they had running so that it is > voting. The list of teams who have not completed this step is still > longer than I would like, but everyone is either working on it or has > put it on their roadmap. > > == Ongoing and Completed Work == > > We also have a fair number of tox update patches left open. I intend to > abandon any of those that I submitted when we hit RC1 for Stein, because > I plan to be working on other things next cycle. If you see your project > name below and want to get your version of the patch landed before that > deadline, look for the patch with topic tag 'python3-first'. 
> > +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ > | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | > +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ > | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | > | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | > | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | > | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | > | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | > | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | > | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | > | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | > | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | > | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | > | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | > | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | > | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | > | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | > | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | > | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | > | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | > | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | > +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ > > == Next Steps == > > > == How can you help? == > > 1. Choose a patch that has failing tests and help fix > it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) > > 2. Review the patches for the zuul changes. Keep in mind that some of > those patches will be on the stable branches for projects. > > 3. Work on adding functional test jobs that run under Python 3. > > == How can you ask for help? == > > If you have any questions, please post them here to the openstack-dev > list with the topic tag [python3] in the subject line. Posting questions > to the mailing list will give the widest audience the chance to see the > answers. > > We are using the #openstack-dev IRC channel for discussion as well, but > I'm not sure how good our timezone coverage is so it's probably better > to use the mailing list. > > == Reference Material == > > Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html > Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open > Storyboard: https://storyboard.openstack.org/#!/board/104 > Zuul migration notes: https://etherpad.openstack.org/p/python3-first > Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 > Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 > > -- > Doug > — Slawek Kaplonski Senior software engineer Red Hat From doug at doughellmann.com Tue Feb 19 21:06:38 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 19 Feb 2019 16:06:38 -0500 Subject: [goal][python3] week R-7 update In-Reply-To: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> References: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> Message-ID: Those are good questions. I see that you replied to me privately, would you mind if I reply back to the list with the answers? Slawomir Kaplonski writes: > Hi, > > I’m from Neutron team. Can You explain maybe what exactly means „2/17” in „tox defaults” column and what is „1” failing job? 
> Thx in advance for any info :) > >> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 21:57: >> >> >> This is the regular update for the "Run under Python 3 by default" goal >> (https://governance.openstack.org/tc/goals/stein/python3-first.html). >> >> == Current Status == >> >> After last week's several teams updated their status in the wiki, and at >> least one team changed the job they had running so that it is >> voting. The list of teams who have not completed this step is still >> longer than I would like, but everyone is either working on it or has >> put it on their roadmap. >> >> == Ongoing and Completed Work == >> >> We also have a fair number of tox update patches left open. I intend to >> abandon any of those that I submitted when we hit RC1 for Stein, because >> I plan to be working on other things next cycle. If you see your project >> name below and want to get your version of the patch landed before that >> deadline, look for the patch with topic tag 'python3-first'. >> >> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >> | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | >> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >> | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | >> | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | >> | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | >> | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | >> | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | >> | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | >> | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | >> | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | >> | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | >> | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | >> | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | >> | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | >> | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | >> | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | >> | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | >> | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | >> | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | >> | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | >> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >> >> == Next Steps == >> >> >> == How can you help? == >> >> 1. Choose a patch that has failing tests and help fix >> it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) >> >> 2. Review the patches for the zuul changes. Keep in mind that some of >> those patches will be on the stable branches for projects. >> >> 3. Work on adding functional test jobs that run under Python 3. >> >> == How can you ask for help? == >> >> If you have any questions, please post them here to the openstack-dev >> list with the topic tag [python3] in the subject line. Posting questions >> to the mailing list will give the widest audience the chance to see the >> answers. >> >> We are using the #openstack-dev IRC channel for discussion as well, but >> I'm not sure how good our timezone coverage is so it's probably better >> to use the mailing list. 
>> >> == Reference Material == >> >> Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html >> Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open >> Storyboard: https://storyboard.openstack.org/#!/board/104 >> Zuul migration notes: https://etherpad.openstack.org/p/python3-first >> Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 >> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 >> >> -- >> Doug >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat > -- Doug From skaplons at redhat.com Tue Feb 19 21:11:12 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 19 Feb 2019 22:11:12 +0100 Subject: [goal][python3] week R-7 update In-Reply-To: References: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> Message-ID: <3E107091-655C-41C9-B52A-EA493A765351@redhat.com> Hi, Sure. I send my email to You and to the list because I used "reply to all” :) > Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 22:06: > > > Those are good questions. I see that you replied to me privately, would > you mind if I reply back to the list with the answers? > > Slawomir Kaplonski writes: > >> Hi, >> >> I’m from Neutron team. Can You explain maybe what exactly means „2/17” in „tox defaults” column and what is „1” failing job? >> Thx in advance for any info :) >> >>> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 21:57: >>> >>> >>> This is the regular update for the "Run under Python 3 by default" goal >>> (https://governance.openstack.org/tc/goals/stein/python3-first.html). >>> >>> == Current Status == >>> >>> After last week's several teams updated their status in the wiki, and at >>> least one team changed the job they had running so that it is >>> voting. The list of teams who have not completed this step is still >>> longer than I would like, but everyone is either working on it or has >>> put it on their roadmap. >>> >>> == Ongoing and Completed Work == >>> >>> We also have a fair number of tox update patches left open. I intend to >>> abandon any of those that I submitted when we hit RC1 for Stein, because >>> I plan to be working on other things next cycle. If you see your project >>> name below and want to get your version of the patch landed before that >>> deadline, look for the patch with topic tag 'python3-first'. 
>>> >>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>> | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | >>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>> | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | >>> | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | >>> | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | >>> | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | >>> | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | >>> | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | >>> | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | >>> | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | >>> | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | >>> | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | >>> | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | >>> | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | >>> | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | >>> | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | >>> | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | >>> | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | >>> | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | >>> | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | >>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>> >>> == Next Steps == >>> >>> >>> == How can you help? == >>> >>> 1. Choose a patch that has failing tests and help fix >>> it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) >>> >>> 2. Review the patches for the zuul changes. Keep in mind that some of >>> those patches will be on the stable branches for projects. >>> >>> 3. Work on adding functional test jobs that run under Python 3. >>> >>> == How can you ask for help? == >>> >>> If you have any questions, please post them here to the openstack-dev >>> list with the topic tag [python3] in the subject line. Posting questions >>> to the mailing list will give the widest audience the chance to see the >>> answers. >>> >>> We are using the #openstack-dev IRC channel for discussion as well, but >>> I'm not sure how good our timezone coverage is so it's probably better >>> to use the mailing list. >>> >>> == Reference Material == >>> >>> Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html >>> Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open >>> Storyboard: https://storyboard.openstack.org/#!/board/104 >>> Zuul migration notes: https://etherpad.openstack.org/p/python3-first >>> Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 >>> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 >>> >>> -- >>> Doug >>> >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > -- > Doug — Slawek Kaplonski Senior software engineer Red Hat From feilong at catalyst.net.nz Tue Feb 19 21:15:12 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Wed, 20 Feb 2019 10:15:12 +1300 Subject: [tc][elections] TC candidacy Message-ID: <97c4a44b-49cc-616f-6fae-04d569f5f61e@catalyst.net.nz> Hi everyone, I'm announcing my candidacy for a position on the OpenStack Technical Committee. For those of you who don't know me yet, I'm Feilong Wang, currently working for Catalyst Cloud as head of R&D. 
Catalyst Cloud is a public cloud running on OpenStack, based in New Zealand; before that I worked at the IBM Systems & Technology Lab on OpenStack upstream work. Upstream, I'm now a core contributor to OpenStack Magnum and actively involved in the integration between OpenStack and Kubernetes. I also served as the PTL of Zaqar (OpenStack Messaging Service) for several years, and before that I mainly worked on Glance (OpenStack Image Service), where I have been a core reviewer since Folsom in 2012. In my opinion, the role of the TC is currently more important than ever. Some large companies have already downsized their investment in OpenStack or are switching their focus to containers/K8s, but meanwhile many companies from APAC, especially China, are investing heavily in OpenStack. Although it's painful to lose great contributors, the companies with real requirements for OpenStack remain. It's a good time for us to think about how to define and care for OpenStack, for whom we're building the software, and how to build a collaborative, integrated community. As a TC member I want to bring focus to the areas below: #1 Integration and Collaboration As a distributed cloud platform, it's good to decouple services so that each one does one thing well. However, it seems most projects are focusing solely on their own offering and not enough projects are paying attention to the global impact. I would like to push for tighter collaboration between projects, which will obviously make integration easier and more efficient. Operators should be able to expect different projects to work together smoothly, just as if they were different parts of one project. As a maintainer of a public cloud running on OpenStack, I know the pain our ops feel when they try to migrate a service or add a new service to an existing cluster. So I'd like to see more interlock between PTLs and TC members to understand the gaps and fill them. #2 Users & Operators Listen closely to the voice of users and operators; this pretty much aligns with the mission of OpenStack: "To produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable." To implement a useful, stable cloud platform, we can't work behind closed doors; we have to work closely with the user and operator community. As far as I know, apart from the user survey, we don't have a more formal process for collecting feedback from our users and operators, though the operators mailing list and some forum sessions at the OpenStack Summit are helpful. And IMHO, we're currently mixing feedback collected from different perspectives. For example, most of the feedback from operators is about how to easily deploy and manage the cloud, while tenant users' and developers' requirements are more related to functionality, UX, etc. I can see we have put a lot of effort into addressing the pain of operators, but we clearly also need more work to make tenant users' and developers' lives easier. #3 Better UX This is the small part we sometimes skip, but I think it's important for us to put effort into it. For example, we could release a Docker image including all our OpenStack clients, so users don't have to deal with Python dependencies. Another example is enforcing stricter API consistency across different services. It would be an honor to be a member of your technical committee. Thanks for your consideration! 
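(To make the Docker image idea above concrete, usage could look something like the sketch below. The image name is purely hypothetical, no such published artifact is implied, and it assumes the usual clouds.yaml layout under ~/.config/openstack.)

  $ docker run --rm -it \
      -v "$HOME/.config/openstack:/root/.config/openstack:ro" \
      example.org/openstack/all-clients:latest \
      openstack --os-cloud mycloud server list

The point is simply that a single pre-built image would let a user run any client command without installing per-project Python packages locally.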
-- Feilong Wang -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From bharat at stackhpc.com Tue Feb 19 21:15:46 2019 From: bharat at stackhpc.com (Bharat Kunwar) Date: Tue, 19 Feb 2019 22:15:46 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> Message-ID: I have the same problem. Weird thing is /etc/sysconfig/heat-params has region_name specified in my case! Sent from my iPhone > On 19 Feb 2019, at 22:00, Feilong Wang wrote: > > Can you talk to the Heat API from your master node? > > > >> On 20/02/19 6:43 AM, Giuseppe Sannino wrote: >> Hi all...again, >> I managed to get over the previous issue by "not disabling" the TLS in the cluster template. >> From the cloud-init-output.log I see: >> Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 +0000. Up 38.08 seconds. >> Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. Datasource DataSourceEc2. Up 607.13 seconds >> >> But the cluster creation keeps on failing. >> From the journalctl -f I see a possible issue: >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: publicURL endpoint for orchestration service in null region not found >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: Source [heat] Unavailable. >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping >> >> anyone familiar with this problem ? >> >> Thanks as usual. >> /Giuseppe >> >> >> >> >> >> >> >>> On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino wrote: >>> Hi all, >>> need an help. >>> I deployed an AIO via Kolla on a baremetal node. Here some information about the deployment: >>> --------------- >>> kolla-ansible: 7.0.1 >>> openstack_release: Rocky >>> kolla_base_distro: centos >>> kolla_install_type: source >>> TLS: disabled >>> --------------- >>> >>> >>> VMs spawn without issue but I can't make the "Kubernetes cluster creation" successfully. It fails due to "Time out" >>> >>> I managed to log into Kuber Master and from the cloud-init-output.log I can see: >>> + echo 'Waiting for Kubernetes API...' >>> Waiting for Kubernetes API... >>> ++ curl --silent http://127.0.0.1:8080/healthz >>> + '[' ok = '' ']' >>> + sleep 5 >>> >>> >>> Checking via systemctl and journalctl I see: >>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status kube-apiserver >>> ● kube-apiserver.service - kubernetes-apiserver >>> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled) >>> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; 45min ago >>> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run kube-apiserver (code=exited, status=1/FAILURE) >>> Main PID: 3796 (code=exited, status=1/FAILURE) >>> >>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. 
>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 6. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Start request repeated too quickly. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Failed to start kubernetes-apiserver. >>> >>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u kube-apiserver >>> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 16:17:00 UTC. -- >>> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Started kubernetes-apiserver. >>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version. >>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-port has been deprecated, This flag will be removed in a future version. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>> : >>> : >>> : >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1. >>> >>> >>> May I ask for an help on this ? >>> >>> Many thanks >>> /Giuseppe >>> >>> >>> >>> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 19 21:17:13 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 19 Feb 2019 16:17:13 -0500 Subject: [goal][python3] week R-7 update In-Reply-To: <3E107091-655C-41C9-B52A-EA493A765351@redhat.com> References: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> <3E107091-655C-41C9-B52A-EA493A765351@redhat.com> Message-ID: Slawomir Kaplonski writes: > Hi, > > Sure. I send my email to You and to the list because I used "reply to > all” :) Oops, I completely missed the CC entry. Sorry about that! 
> >> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 22:06: >> >> >> Those are good questions. I see that you replied to me privately, would >> you mind if I reply back to the list with the answers? >> >> Slawomir Kaplonski writes: >> >>> Hi, >>> >>> I’m from Neutron team. Can You explain maybe what exactly means „2/17” in „tox defaults” column and what is „1” failing job? >>> Thx in advance for any info :) Each column shows the number of open and total patches for that category. So 2/17 means that the team has 2 remaining out of the 17 total patches to deal with. At this point, those patches may need to be approved (including fixes before that can happen), or they might be duplicates if someone else did the same work using a different patch. The failing job count shows the number of patches where zuul is reporting a job failure, out of all of the open patches in the row. In neutron's case, that means 1 of the 2 patches with tox changes is failing some job. Looking at the list of python3-first patches for all neutron team repositories, I see: $ .tox/venv/bin/python3-first patches list neutron +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ | Subject | Repo | Tests | Workflow | URL | Branch | Owner | +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ | fix tox python3 overrides | openstack/networking-bgpvpn | FAILED | REVIEWED | https://review.openstack.org/606671 | master | Doug Hellmann | | Cleanup Zuul project definition | openstack/networking-odl | FAILED | NEW | https://review.openstack.org/598086 | master | Michel Peterson | | Work around test failures on pike | openstack/networking-sfc | PASS | REVIEWED | https://review.openstack.org/608170 | stable/pike | Andreas Jaeger | | fix tox python3 overrides | openstack/neutron-fwaas | UNKNOWN | NEW | https://review.openstack.org/586271 | master | Vieri | | Switch functional and tempest jobs to python3 | openstack/ovsdbapp | UNKNOWN | NEW | https://review.openstack.org/637988 | master | Nate Johnston | +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ The two tox patches are in networking-bgpvpn and neutron-fwaas. >>> >>>> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 21:57: >>>> >>>> >>>> This is the regular update for the "Run under Python 3 by default" goal >>>> (https://governance.openstack.org/tc/goals/stein/python3-first.html). >>>> >>>> == Current Status == >>>> >>>> After last week's several teams updated their status in the wiki, and at >>>> least one team changed the job they had running so that it is >>>> voting. The list of teams who have not completed this step is still >>>> longer than I would like, but everyone is either working on it or has >>>> put it on their roadmap. >>>> >>>> == Ongoing and Completed Work == >>>> >>>> We also have a fair number of tox update patches left open. I intend to >>>> abandon any of those that I submitted when we hit RC1 for Stein, because >>>> I plan to be working on other things next cycle. If you see your project >>>> name below and want to get your version of the patch landed before that >>>> deadline, look for the patch with topic tag 'python3-first'. 
>>>> >>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>> | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | >>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>> | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | >>>> | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | >>>> | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | >>>> | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | >>>> | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | >>>> | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | >>>> | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | >>>> | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | >>>> | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | >>>> | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | >>>> | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | >>>> | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | >>>> | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | >>>> | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | >>>> | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | >>>> | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | >>>> | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | >>>> | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | >>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>> >>>> == Next Steps == >>>> >>>> >>>> == How can you help? == >>>> >>>> 1. Choose a patch that has failing tests and help fix >>>> it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) >>>> >>>> 2. Review the patches for the zuul changes. Keep in mind that some of >>>> those patches will be on the stable branches for projects. >>>> >>>> 3. Work on adding functional test jobs that run under Python 3. >>>> >>>> == How can you ask for help? == >>>> >>>> If you have any questions, please post them here to the openstack-dev >>>> list with the topic tag [python3] in the subject line. Posting questions >>>> to the mailing list will give the widest audience the chance to see the >>>> answers. >>>> >>>> We are using the #openstack-dev IRC channel for discussion as well, but >>>> I'm not sure how good our timezone coverage is so it's probably better >>>> to use the mailing list. 
>>>> >>>> == Reference Material == >>>> >>>> Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html >>>> Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open >>>> Storyboard: https://storyboard.openstack.org/#!/board/104 >>>> Zuul migration notes: https://etherpad.openstack.org/p/python3-first >>>> Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 >>>> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 >>>> >>>> -- >>>> Doug >>>> >>> >>> — >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >> >> -- >> Doug > > — > Slawek Kaplonski > Senior software engineer > Red Hat > -- Doug From skaplons at redhat.com Tue Feb 19 21:21:11 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 19 Feb 2019 22:21:11 +0100 Subject: [goal][python3] week R-7 update In-Reply-To: References: <2E0551A7-513E-42EA-B827-70E60C45610A@redhat.com> <3E107091-655C-41C9-B52A-EA493A765351@redhat.com> Message-ID: <0401AB10-BEAF-478B-AA7F-865680CF2451@redhat.com> > Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 22:17: > > Slawomir Kaplonski writes: > >> Hi, >> >> Sure. I send my email to You and to the list because I used "reply to >> all” :) > > Oops, I completely missed the CC entry. Sorry about that! No problem :) > >> >>> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 22:06: >>> >>> >>> Those are good questions. I see that you replied to me privately, would >>> you mind if I reply back to the list with the answers? >>> >>> Slawomir Kaplonski writes: >>> >>>> Hi, >>>> >>>> I’m from Neutron team. Can You explain maybe what exactly means „2/17” in „tox defaults” column and what is „1” failing job? >>>> Thx in advance for any info :) > > Each column shows the number of open and total patches for that > category. So 2/17 means that the team has 2 remaining out of the 17 > total patches to deal with. > > At this point, those patches may need to be approved (including fixes > before that can happen), or they might be duplicates if someone else did > the same work using a different patch. > > The failing job count shows the number of patches where zuul is > reporting a job failure, out of all of the open patches in the row. In > neutron's case, that means 1 of the 2 patches with tox changes is > failing some job. 
> > Looking at the list of python3-first patches for all neutron team > repositories, I see: > > $ .tox/venv/bin/python3-first patches list neutron > +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ > | Subject | Repo | Tests | Workflow | URL | Branch | Owner | > +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ > | fix tox python3 overrides | openstack/networking-bgpvpn | FAILED | REVIEWED | https://review.openstack.org/606671 | master | Doug Hellmann | > | Cleanup Zuul project definition | openstack/networking-odl | FAILED | NEW | https://review.openstack.org/598086 | master | Michel Peterson | > | Work around test failures on pike | openstack/networking-sfc | PASS | REVIEWED | https://review.openstack.org/608170 | stable/pike | Andreas Jaeger | > | fix tox python3 overrides | openstack/neutron-fwaas | UNKNOWN | NEW | https://review.openstack.org/586271 | master | Vieri | > | Switch functional and tempest jobs to python3 | openstack/ovsdbapp | UNKNOWN | NEW | https://review.openstack.org/637988 | master | Nate Johnston | > +-----------------------------------------------+-----------------------------+---------+----------+-------------------------------------+-------------+-----------------+ > > The two tox patches are in networking-bgpvpn and neutron-fwaas. Thx. Now I understand that :) I was thinking about neutron as „neutron only” and I forgot about stadium projects. That’s because I couldn’t figure out what it might means exactly. I will try to look at those patches this week and will try to help when I can. > >>>> >>>>> Wiadomość napisana przez Doug Hellmann w dniu 19.02.2019, o godz. 21:57: >>>>> >>>>> >>>>> This is the regular update for the "Run under Python 3 by default" goal >>>>> (https://governance.openstack.org/tc/goals/stein/python3-first.html). >>>>> >>>>> == Current Status == >>>>> >>>>> After last week's several teams updated their status in the wiki, and at >>>>> least one team changed the job they had running so that it is >>>>> voting. The list of teams who have not completed this step is still >>>>> longer than I would like, but everyone is either working on it or has >>>>> put it on their roadmap. >>>>> >>>>> == Ongoing and Completed Work == >>>>> >>>>> We also have a fair number of tox update patches left open. I intend to >>>>> abandon any of those that I submitted when we hit RC1 for Stein, because >>>>> I plan to be working on other things next cycle. If you see your project >>>>> name below and want to get your version of the patch landed before that >>>>> deadline, look for the patch with topic tag 'python3-first'. 
>>>>> >>>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>>> | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | >>>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>>> | adjutant | 1/ 1 | - | + | 0 | 0 | 2 | Doug Hellmann | >>>>> | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | >>>>> | heat | 1/ 8 | + | + | 0 | 0 | 21 | Doug Hellmann | >>>>> | InteropWG | 1/ 2 | + | + | 0 | 0 | 8 | Doug Hellmann | >>>>> | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | >>>>> | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | >>>>> | masakari | 1/ 4 | + | - | 0 | 0 | 5 | Nguyen Hai | >>>>> | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | >>>>> | OpenStack Charms | 7/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | >>>>> | Quality Assurance | 1/ 9 | + | + | 0 | 1 | 30 | Doug Hellmann | >>>>> | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | >>>>> | swift | 2/ 3 | + | + | 2 | 1 | 6 | Nguyen Hai | >>>>> | tacker | 1/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | >>>>> | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | >>>>> | tripleo | 1/ 53 | + | + | 0 | 1 | 89 | Doug Hellmann | >>>>> | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | >>>>> | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | >>>>> | | 45/ 61 | 56/ 57 | 55/ 55 | 13 | 11 | 1068 | | >>>>> +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ >>>>> >>>>> == Next Steps == >>>>> >>>>> >>>>> == How can you help? == >>>>> >>>>> 1. Choose a patch that has failing tests and help fix >>>>> it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) >>>>> >>>>> 2. Review the patches for the zuul changes. Keep in mind that some of >>>>> those patches will be on the stable branches for projects. >>>>> >>>>> 3. Work on adding functional test jobs that run under Python 3. >>>>> >>>>> == How can you ask for help? == >>>>> >>>>> If you have any questions, please post them here to the openstack-dev >>>>> list with the topic tag [python3] in the subject line. Posting questions >>>>> to the mailing list will give the widest audience the chance to see the >>>>> answers. >>>>> >>>>> We are using the #openstack-dev IRC channel for discussion as well, but >>>>> I'm not sure how good our timezone coverage is so it's probably better >>>>> to use the mailing list. >>>>> >>>>> == Reference Material == >>>>> >>>>> Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html >>>>> Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open >>>>> Storyboard: https://storyboard.openstack.org/#!/board/104 >>>>> Zuul migration notes: https://etherpad.openstack.org/p/python3-first >>>>> Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 >>>>> Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 >>>>> >>>>> -- >>>>> Doug >>>>> >>>> >>>> — >>>> Slawek Kaplonski >>>> Senior software engineer >>>> Red Hat >>>> >>> >>> -- >>> Doug >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > -- > Doug — Slawek Kaplonski Senior software engineer Red Hat From kennelson11 at gmail.com Tue Feb 19 23:45:20 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 19 Feb 2019 15:45:20 -0800 Subject: [all] [TC] 'Train' Technical Committee Election Nominations Closed Message-ID: TC Nomination period is now over. 
The official candidate list is available on the election website[0]. Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start Feb 26, 2019 23:45 UTC. Thank you, [0] http://governance.openstack.org/election/#train-tc-candidates -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 19 23:45:35 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 19 Feb 2019 15:45:35 -0800 Subject: [all] [TC] 'Train' Technical Committee Campaigning Begins! Message-ID: Developers, The TC Election Campaigning Period has now started [1]. During the next couple days, you are all encouraged to ask the candidates questions about their platforms [2], opinions on OpenStack, community governance, and anything else that will help you to better determine how you will vote. This is the time to raise any issues you wish the future TC to consider, and to evaluate the opinions of the nominees prior to their election. Candidates, Each of you has posted a platform [2], and announced your nomination to the developers. From this point, you are encouraged to ask each other questions about the posted platforms, and begin discussion of any points that you feel are particularly important during the next cycle. While you are not yet TC members, your voices and opinions about the issues raised in your platforms and questions raised by the wider community will help ensure that the future TC has the widest possible input on the matters of community concern, and the electorate has the best information to determine the ideal TC composition to address these and other issues that may arise. -Kendall (diablo_rojo) [1] https://governance.openstack.org/election/ [2] http://git.openstack.org/cgit/openstack/election/tree/candidates/train/TC -------------- next part -------------- An HTML attachment was scrubbed... URL: From petebirley+openstack-dev at gmail.com Wed Feb 20 01:13:41 2019 From: petebirley+openstack-dev at gmail.com (Pete Birley) Date: Tue, 19 Feb 2019 19:13:41 -0600 Subject: [openstack-helm] would like to discuss review turnaround time Message-ID: Hi, I must apologize for not responding to this sooner. I feel some irony in that I, and many of the core review team in OpenStack-Helm, never been that great at OpenStack mailing lists, but are very active on IRC. To address the email that Jay sent to start this thread I've attached some comments. > There are just some example patch sets currently under review stage. > > 1. https://review.openstack.org/#/c/603971/ >> this ps has been discussed > for its contents and scope. Cloud you please add if there is anything else > we need to do other than wrapping some of commit message? Last feedback was from myself on the 15th Jan, two weeks before the date of the email thread that started this conversation - I really like the PS, and see it has great value for operators of OSH, however, we really need to gate this, as we do all other components in OpenStack-Helm infra. Once we have a path there I see no reason not to merge, as my overly happy +1 wf finger has shown over the last few weeks. > 2. https://review.openstack.org/#/c/633456/ >> this is simple fix. how can > we make core reviewer notice this patch set so that they can quickly view? This was merged within 48 hours of the initial commit, so I hope had an acceptable turnaround. Please feel free to add a reviewer via gerrit to any PS you think would benefit from their eye. > 3. 
https://review.openstack.org/#/c/625803/ >> we have been getting > feedbacks and questions on this patch set, that has been good. but > round-trip time for the recent comments takes a week or more. because of > that delay (?), the owner of this patch set needed to rebase this one > often. Will this kind of case be improved if author engages more on irc > channel or via mailing list to get feedback rather than relying on gerrit > reviews? This PS has indeed suffered from review stagnation, though this also highlights that we need to rely on more than just the core reviewers to provide meaningful feedback on patchsets. This PS is quite tricky to digest, as it touches every chart in the repo, and has changed direction several times in its lifespan. Here I think both a gentle nudge via gerrit would provide some assistance. Moving forward I'll also include announcements of the meeting agenda, and links to the logs in a weekly email for the project, which should hopefully allow easier engagement for those that find IRC a less convenient communication mechanism. Cheers Pete On Tue, 19 Feb 2019 at 8:46 AM, Jaesuk Ahn wrote: > > It has been several weeks without further feedback or response from openstack-helm project members. > I really want to hear what others think, and start a discussion on how we can improve together. > > > > On Thu, Jan 31, 2019 at 4:35 PM Jaesuk Ahn wrote: >> >> Thank you for thoughtful reply. >> >> I was able to quickly add my opinion on some of your feedback, not all. please see inline. >> I will get back with more thought and idea. pls note that we have big holiday next week (lunar new year holiday), therefore, it might take some time. :) >> >> >> On Wed, Jan 30, 2019 at 9:26 PM Jean-Philippe Evrard wrote: >>> >>> Hello, >>> >>> Thank you for bringing that topic. Let me answer inline. >>> Please note, this is my personal opinion. >>> (No company or TC hat here. I realise that, as one of the TC members >>> following the health of the osh project, this is a concerning mail, and >>> I will report appropriately if further steps need to be taken). >>> >>> On Wed, 2019-01-30 at 13:15 +0900, Jaesuk Ahn wrote: >>> > Dear all, >>> > >>> > There has been several patch sets getting sparse reviews. >>> > Since some of authors wrote these patch sets are difficult to join >>> > IRC >>> > meeting due to time and language constraints, I would like to pass >>> > some of >>> > their voice, and get more detail feedback from core reviewers and >>> > other >>> > devs via ML. >>> > >>> > I fully understand core reviewers are quite busy and believe they are >>> > doing >>> > their best efforts. period! >>> >>> We can only hope for best effort of everyone :) >>> I have no doubt here. I also believe the team is very busy. >>> >>> So here is my opinion: Any review is valuable. Core reviewers should >>> not be the only ones to review patches >>> The more people will review in all of the involved companies, the more >>> they will get trusted in their reviews. That follows up with earned >>> trust by the core reviewers, with eventually leads to becoming core >>> reviewer. >> >> >> This is a very good point. I really need to encourage developers to at least cross-review each other's patch set. >> I will discuss with other team members how we can achieve this, we might need to introduce "half-a-day review only" schedule. >> Once my team had tried to review more in general, however it failed because of very limited time allowed to do so. 
>> At least, we can try to cross-review each other on patch sets, and explicitly assign time to do so. >> THIS will be our important homework to do. >> >> >>> >>> >>> I believe we can make a difference by reviewing more, so that the >>> existing core team could get extended. Just a highlight: at the moment, >>> more than 90% of reviews are AT&T sponsored (counting independents >>> working for at&t. See also >>> https://www.stackalytics.com/?module=openstack-helm-group). That's very >>> high. >>> >>> I believe extending the core team geographically/with different >>> companies is a solution for the listed pain points. >> >> >> I really would like to have that as well, however, efforts and time to become a candidate with "good enough" history seems very difficult. >> Matching the level (or amount of works) with what the current core reviewers does is not an easy thing to achieve. >> Frankly speaking, motivating someone to put that much effort is also challenging, especially with their reluctance (hesitant?) to do so due to language and time barrier. >> >> >>> >>> >>> > However, I sometimes feel that turnaround time for some of patch sets >>> > are >>> > really long. I would like to hear opinion from others and suggestions >>> > on >>> > how to improve this. It can be either/both something each patch set >>> > owner >>> > need to do more, or/and it could be something we as a openstack-helm >>> > project can improve. For instance, it could be influenced by time >>> > differences, lack of irc presence, or anything else. etc. I really >>> > would >>> > like to find out there are anything we can improve together. >>> >>> I had the same impression myself: the turnaround time is big for a >>> deployment project. >>> >>> The problem is not simple, and here are a few explanations I could >>> think of: >>> 1) most core reviewers are from a single company, and emergencies in >>> their company are most likely to get prioritized over the community >>> work. That leaves some reviews pending. >>> 2) most core reviewers are from the same timezone in US, which means, >>> in the best case, an asian contributor will have to wait a full day >>> before seeing his work merged. If a core reviewer doesn't review this >>> on his day work due to an emergency, you're putting the turnaround to >>> two days at best. >>> 3) most core reviewers are working in the same location: it's maybe >>> hard for them to scale the conversation from their internal habits to a >>> community driven project. Communication is a very important part of a >>> community, and if that doesn't work, it is _very_ concerning to me. We >>> raised the points of lack of (IRC presence|reviews) in previous >>> community meetings. >> >> >> 2-1) other active developers are on the opposite side of the earth, which make more difficult to sync with core reviewers. No one wanted, but it somehow creates an invisible barrier. >> >> I do agree that "Communication" is a very important part of a community. >> Language and time differences are adding more difficulties on this as well. I am trying my best to be a good liaison, but never enough. >> There will be no clear solution. However, I will have a discussion again with team members to gather some ideas. >> >>> >>> >>> > >>> > I would like to get any kind of advise on the following. >>> > - sometimes, it is really difficult to get core reviewers' comments >>> > or >>> > reviews. 
I routinely put the list of patch sets on irc meeting >>> > agenda, >>> > however, there still be a long turnaround time between comments. As a >>> > result, it usually takes a long time to process a patch set, does >>> > sometimes >>> > cause rebase as well. >>> >>> I thank our testing system auto rebases a lot :) >>> The bigger problem is when you're working on something which eventually >>> conflicts with some AT&T work that was prioritized internally. >>> >>> For that, I asked a clear list of what the priorities are. >>> ( https://storyboard.openstack.org/#!/worklist/341 ) >>> >>> Anything outside that should IMO raise a little flag in our heads :) >>> >>> But it's up to the core reviewers to work with this in focus, and to >>> the PTL to give directions. >>> >>> >>> > - Having said that, I would like to have any advise on what we need >>> > to do >>> > more, for instance, do we need to be in irc directly asking each >>> > patch set >>> > to core reviewers? do we need to put core reviewers' name when we >>> > push >>> > patch set? etc. >>> >>> I believe that we should leverage IRC more for reviews. We are doing it >>> in OSA, and it works fine. Of course core developers have their habits >>> and a review dashboard, but fast/emergency reviews need to be >>> socialized to get prioritized. There are other attempts in the >>> community (like have a review priority in gerrit), but I am not >>> entirely sold on bringing a technical solution to something that should >>> be solved with more communication. >>> >>> > - Some of patch sets are being reviewed and merged quickly, and some >>> > of >>> > patch sets are not. I would like to know what makes this difference >>> > so that >>> > I can tell my developers how to do better job writing and >>> > communicating >>> > patch sets. >>> > >>> > There are just some example patch sets currently under review stage. >>> > >>> > 1. https://review.openstack.org/#/c/603971/ >> this ps has been >>> > discussed >>> > for its contents and scope. Cloud you please add if there is anything >>> > else >>> > we need to do other than wrapping some of commit message? >>> > >>> > 2. https://review.openstack.org/#/c/633456/ >> this is simple fix. >>> > how can >>> > we make core reviewer notice this patch set so that they can quickly >>> > view? >>> > >>> > 3. https://review.openstack.org/#/c/625803/ >> we have been getting >>> > feedbacks and questions on this patch set, that has been good. but >>> > round-trip time for the recent comments takes a week or more. because >>> > of >>> > that delay (?), the owner of this patch set needed to rebase this one >>> > often. Will this kind of case be improved if author engages more on >>> > irc >>> > channel or via mailing list to get feedback rather than relying on >>> > gerrit >>> > reviews? >>> >>> To me, the last one is more controversial than others (I don't believe >>> we should give the opportunity to do that myself until we've done a >>> security impact analysis). This change is also bigger than others, >>> which is harder to both write and review. As far as I know, there was >>> no spec that preceeded this work, so we couldn't discuss the approach >>> before the code was written. >>> >>> I don't mind not having specs for changes to be honest, but it makes >>> sense to have one if the subject is more controversial/harder, because >>> people will have a tendency to put hard job aside. 
>>> >>> This review is the typical review that needs to be discussed in the >>> community meeting, advocating for or against it until a decision is >>> taken (merge or abandon). >> >> >> I do agree on your analysis on this one. but, One thing the author really wanted to have was feedback, that can be either negative or positive. it could be something to ask to abandon, or rewrite. >> but lack of comments with a long turnaround time between comments (that means author waits days and weeks to see any additional comments) was the problem. >> It felt like somewhat abandoned without any strong reason. >> >> >>> >>> >>> > >>> > Frankly speaking, I don't know if this is a real issue or just way it >>> > is. I >>> > just want to pass some of voice from our developers, and really would >>> > like >>> > to hear what others think and find a better way to communicate. >>> >>> It doesn't matter if "it's a real issue" or "just the way it is". >>> If there is a feeling of burden/pain, we should tackle the issue. >>> >>> So, yes, it's very important to raise the issue you feel! >>> If you don't do it, nothing will change, the morale of developers will >>> fall, and the health of the project will suffer. >>> Transparency is key here. >>> >>> Thanks for voicing your opinion. >>> >>> > >>> > >>> > Thanks you. >>> > >>> > >>> >>> I would say my key take-aways are: >>> 1) We need to review more >>> 2) We need to communicate/socialize more on patchsets and issues. Let's >>> be more active on IRC outside meetings. >> >> >> Just one small note here: developers in my team prefer email communication sometime, where they can have time to think how to write their opinion on English. >> >>> >>> 3) The priority list need to be updated to be accurate. I am not sure >>> this list is complete (there is no mention of docs image building >>> there). >> >> >> I really want this happen. Things are often suddenly showed up on patch set and merged. >> It is a bit difficult to follow what is exactly happening on openstack-helm community. Of course, this required everyone's efforts. >> >>> >>> 4) We need to extend the core team in different geographical regions >>> and companies as soon as possible >>> >>> But of course it's only my analysis. I would be happy to see Pete >>> answer here. >>> >>> Regards, >>> Jeam-Philippe Evrard (evrardjp) >>> >>> >> >> A bit unrelated with topic, but I really want to say this. >> I DO REALLY appreciate openstack-helm community's effort to accept non-English documents as official one. (although it is slowly progressing ^^) >> I think this move is real diversity effort than any other move (recognizing there is a good value community need to bring in "as-is", even though that is non-English information) >> >> Cheers, >> >> >> -- >> Jaesuk Ahn, Ph.D. >> Software R&D Center, SK Telecom > > > > -- > Jaesuk Ahn, Ph.D. > Software R&D Center, SK Telecom From eduardo.urbanomoreno at ndsu.edu Wed Feb 20 02:40:15 2019 From: eduardo.urbanomoreno at ndsu.edu (Urbano Moreno, Eduardo) Date: Wed, 20 Feb 2019 02:40:15 +0000 Subject: NDSU Capstone Introduction! Message-ID: Hello OpenStack community, I just wanted to go ahead and introduce myself, as I am a part of the NDSU Capstone group! My name is Eduardo Urbano and I am a Jr/Senior at NDSU. I am currently majoring in Computer Science, with no minor although that could change towards graduation. I am currently an intern at an electrical supply company here in Fargo, North Dakota known as Border States. 
I am an information security intern and I am enjoying it so far. I have learned many interesting security things and have also became a little paranoid of how easily someone can get hacked haha. Anyways, I am so excited to be on board and be working with OpenStack for this semester. So far I have learned many new things and I can't wait to continue on learning. Thank you! -Eduardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 20 02:41:02 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 19 Feb 2019 18:41:02 -0800 Subject: [all] Denver Forum Brainstorming Message-ID: Hello Everyone! Welcome to the topic selection process for our Forum in Denver! This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. For OpenStack Denver marks the beginning of Train’s release cycle, where ideas and requirements will be gathered. We should come armed with feedback from the upcoming Stein release if at all possible. We aim to ensure the broadest coverage of topics that will allow for multiple parts of the community getting together to discuss key areas within our community/projects. For OSF Projects (StarlingX, Zuul, Airship, Kata Containers) As a refresher, the idea is to gather ideas and requirements for your project’s upcoming release. Look to https://wiki.openstack.org/wiki/Forum for an idea of how to structure fishbowls and discussions for your project. The idea is to ensure the broadest coverage of topics, while allowing for the project community to discuss critical areas of concern. To make sure we are presenting the best topics for discussion, we have asked representatives of each of your projects to help us out in the Forum selection process. There are two stages to the brainstorming: 1.If you haven’t already, its encouraged that you set up an etherpad with your team and start discussing ideas you'd like to talk about at the Forum and work out which ones to submit. 2. On the 22nd of February, we will open up a more formal web-based tool for you to submit abstracts for the most popular sessions that came out of your brainstorming. Make an etherpad and add it to the list at: https://wiki.openstack.org/wiki/Forum/Denver2018 This is your opportunity to think outside the box and talk with other projects, groups, and individuals that you might not see during Summit sessions. Look for interested parties to collaborate with and share your ideas. 
Examples of typical sessions that make for a great Forum: - Strategic, whole-of-community discussions, to think about the big picture, including beyond just one release cycle and new technologies - eg Making OpenStack One Platform for containers/VMs/Bare Metal (Strategic session) the entire community congregates to share opinions on how to make OpenStack achieve its integration engine goal - Cross-project sessions, in a similar vein to what has happened at past forums, but with increased emphasis on issues that are of relevant to all areas of the community - eg Rolling Upgrades at Scale (Cross-Project session) – the Large Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues that come up with rolling upgrades when there’s a large number of machines. - Project-specific sessions, where community members most interested in a specific project can discuss their experience with the project over the last release and provide feedback, collaborate on priorities, and present or generate 'blue sky' ideas for the next release - eg Neutron Pain Points (Project-Specific session) – Co-organized by neutron developers and users. Neutron developers bring some specific questions about implementation and usage. Neutron users bring feedback from the latest release. All community members interested in Neutron discuss ideas about the future. Think about what kind of session ideas might end up as: Project-specific, cross-project or strategic/whole-of-community discussions. There'll be more slots for the latter two, so do try and think outside the box! This part of the process is where we gather broad community consensus - in theory the second part is just about fitting in as many of the good ideas into the schedule as we can. Further details about the forum can be found at: https://wiki.openstack.org/wiki/Forum Thanks all! Kendall Nelson, on behalf of the OpenStack Foundation, User Committee & Technical Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Wed Feb 20 03:11:11 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Wed, 20 Feb 2019 03:11:11 +0000 Subject: nova vs neutron SR-IOV Message-ID: <9D8A2486E35F0941A60430473E29F15B017E852EF0@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community, What are the differences between setting up SR-IOV in nova vs neutron? Thank you very much NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Wed Feb 20 04:15:55 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 19 Feb 2019 23:15:55 -0500 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled Message-ID: <20190220041555.54yc5diqviszvb6e@redhat.com> I'm using the tripleo standalone install to set up an Ironic test environment. With recent tripleo master, the deploy started failing because the DockerOvn*Image parameters weren't defined. 
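For reference, the overrides described in the numbered steps below, rolled together into one extra environment file passed with -e after the ovn and ironic environments, would look roughly like this (a sketch only -- the file name is made up and the exact template paths vary by release, so treat them as assumptions):

  # ironic-ovn-overrides.yaml (hypothetical name), passed as:
  #   -e ironic-ovn-overrides.yaml
  parameter_defaults:
    # ovn + baremetal instead of the openvswitch default from ironic.yaml
    NeutronMechanismDrivers: ['ovn', 'baremetal']
    # give baremetal nodes a route to 169.254.169.254 via the dhcp namespace
    NeutronEnableForceMetadata: true

  resource_registry:
    # re-enable the agents that neutron-ovn-standalone.yaml disables
    OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-dhcp-container-puppet.yaml
    OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-metadata-container-puppet.yaml
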
Here's what I did to get everything working: 1. I added to my deploy: -e /usr/share/tripleo-heat-templates/environment/services/neutron-ovn-standalone.yaml With this change, `openstack tripleo container image prep` correctly detected that ovn was enabled and generated the appropriate image parameters. 2. environments/services/ironic.yaml sets: NeutronMechanismDrivers: ['openvswitch', 'baremetal'] Since I didn't want openvswitch enabled in this deployment, I explicitly set the mechanism drivers in a subsequent environment file: NeutronMechanismDrivers: ['ovn', 'baremetal'] 3. The neutron-ovn-standalone.yaml environment explicitly disables the non-ovn neutron services. Ironic requires the services of the neutron_dhcp_agent, so I had to add: OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-dhcp-container-puppet.yaml With this in place, the ironic nodes were able to receive dhcp responses and were able to boot. 3. In order to provide the baremetal nodes with a route to the nova metadata service, I added the following to my deploy: NeutronEnableForceMetadata: true This provides the baremetal nodes with a route to 169.254.169.254 via the neutron dhcp namespace. 4. In order get the metadata service to respond correctly, I also had to enable the neutron metadata agent: OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-metadata-container-puppet.yaml This returned my Ironic deployment to a functioning state: I can successfully boot baremetal nodes and provide them with configuration information via the metadata service. I'm curious if this was the *correct* solution, or if there was a better method of getting things working. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From zbitter at redhat.com Wed Feb 20 06:40:34 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 20 Feb 2019 19:40:34 +1300 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> <1550007508.442544.1656696288.1CEB9AC9@webmail.messagingengine.com> Message-ID: <4b0913ee-86fe-aa8d-6a99-2fdc65aab9c9@redhat.com> On 13/02/19 10:38 AM, Colleen Murphy wrote: > I feel like there is a bit of a disconnect between what the TC is asking for > and what the current mentoring organizations are designed to provide. Thierry > framed this as a "peer-mentoring offered" list, but mentoring doesn't quite > capture everything that's needed. > > Mentorship programs like Outreachy, cohort mentoring, and the First Contact SIG > are oriented around helping new people quickstart into the community, getting > them up to speed on basics and helping them feel good about themselves and > their contributions. The hope is that happy first-timers eventually become > happy regular contributors which will eventually be a benefit to the projects, > but the benefit to the projects is not the main focus. 
> > The way I see it, the TC Help Wanted list, as well as the new thing, is not > necessarily oriented around newcomers but is instead advocating for the > projects and meant to help project teams thrive by getting committed long-term > maintainers involved and invested in solving longstanding technical debt that > in some cases requires deep tribal knowledge to solve. It's not a thing for a > newbie to step into lightly and it's not something that can be solved by a > FC-liaison pointing at the contributor docs. Instead what's needed are mentors > who are willing to walk through that tribal knowledge with a new contributor > until they are equipped enough to help with the harder problems. > > For that reason I think neither the FC SIG or the mentoring cohort group, in > their current incarnations, are the right groups to be managing this. The FC > SIG's mission is "To provide a place for new contributors to come for > information and advice" which does not fit the long-term goal of the help > wanted list, and cohort mentoring's four topics ("your first patch", "first > CFP", "first Cloud", and "COA"[1]) also don't fit with the long-term and deeply > technical requirements that a project-specific mentorship offering needs. > Either of those groups could be rescoped to fit with this new mission, and > there is certainly a lot of overlap, but my feeling is that this needs to be an > effort conducted by the TC because the TC is the group that advocates for the > projects. > > It's moreover not a thing that can be solved by another list of names. In addition > to naming someone willing to do the several hours per week of mentoring, > project teams that want help should be forced to come up with a specific > description of 1) what the project is, 2) what kind of person (experience or > interests) would be a good fit for the project, 3) specific work items with > completion criteria that needs to be done - and it can be extremely challenging > to reframe a project's longstanding issues in such concrete ways that make it > clear what steps are needed to tackle the problem. It should basically be an > advertisement that makes the project sound interesting and challenging and > do-able, because the current help-wanted list and liaison lists and mentoring > topics are too vague to entice anyone to step up. > > Finally, I rather disagree that this should be something maintained as a page in > individual projects' contributor guides, although we should certainly be > encouraging teams to keep those guides up to date. It should be compiled by the > TC and regularly updated by the project liaisons within the TC. A link to a > contributor guide on docs.openstack.org doesn't give anyone an idea of what > projects need the most help nor does it empower people to believe they can help > by giving them an understanding of what the "job" entails. > > [1] https://wiki.openstack.org/wiki/Mentoring#Cohort_Mentoring This one email has pretty much completely changed my mind from "surely the FC SIG would have much more expertise here" to "maybe the TC does need to play a role". I think if the TC had a strong idea of what a useful posting would look like, then the overhead of having the TC as gatekeeper and putting it in the governance repo (where docs go to die) would be worth it. Right now, I'm not sure we have that though. 
TBH I'm not sure we have any idea what we're doing - having so far only found a way that definitely does not work and indeed was so wide of the mark that it generated virtually no feedback about how to improve - but I'm willing to have another go :) Colleen offers some good ideas above: it should be interesting, challenging, do-able, not vague. We've talked on IRC about the board's suggestion to include a business case like a job req might; perhaps that should be in the mix. We should say exactly who we're targeting - is it folks new to the community or those who have been around for a while but are looking for a project to sink their teeth into? Is it folks employed to work full-time upstream, or OpenStack operators who have a little time to contribute to upstream, or hobbyists? If we could come up with a 'How to write an excellent mentoring offer" guide that could both explain to teams how to do it well and why we expect that to make a different as well as help the TC to review those proposals consistently, then I'd feel a lot more confident about making the TC the gatekeeper. cheers, Zane. From melwittt at gmail.com Wed Feb 20 08:13:35 2019 From: melwittt at gmail.com (melanie witt) Date: Wed, 20 Feb 2019 00:13:35 -0800 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: <47bf561e-439b-1642-1aa7-7bf48adca64a@gmail.com> References: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> <47bf561e-439b-1642-1aa7-7bf48adca64a@gmail.com> Message-ID: <707a1e0d-a270-b030-da5e-9e93e8920c24@gmail.com> On Tue, 19 Feb 2019 10:42:32 -0600, Matt Riedemann wrote: > On 2/18/2019 8:22 PM, melanie witt wrote: >> Right, that is the proposal in this email. That we should remove >> project_only=True and let the API policy check handle whether or not the >> user from a different project is allowed to get the instance. Otherwise, >> users are not able to use policy to control the behavior because it is >> hard-coded in the database layer. > > I think this has always been the long-term goal and I remember a spec > from John about it [1] but having said that, the spec was fairly > complicated (to me at least) and sounds like there would be a fair bit > of auditing of the API code we'd need to do before we can remove the DB > API check, which means it's likely not something we can complete at this > point in Stein. > > For example, I think we have a lot of APIs that run the policy check on > the context (project_id and user_id) as the target before even pulling > the resource from the database, and the resource itself should be the > target, right? > > [1] https://review.openstack.org/#/c/433037/ Thanks for the link -- I hadn't seen this spec yet. Yes, Alex just pinged me in #openstack-nova and now I finally understand his point that I kept missing before. He tried a test with my WIP patch and a user from project A was able to 'nova show' an instance from project B, even though the policy was set to 'rule:admin_or_owner'. The reason is because when the instance project/user isn't passed as a target to the policy check, the policy check for the request context project_id won't do anything. There's nothing for it to compare project_id with. This is interesting because it makes me wonder, what does a policy check like that [2] do then? It will take more learning on my part about the policy system to understand it. 
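To make that concrete, here is a minimal standalone sketch (not nova code -- the policy name, rule and IDs are made up for illustration) showing that an oslo.policy admin_or_owner rule only restricts anything when the owning project is passed as the policy target:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([], project='policy-demo')   # no config or policy file needed

    enforcer = policy.Enforcer(CONF)
    # Base rule plus a policy that references it, mirroring the
    # admin_or_owner pattern used in nova's defaults.
    enforcer.register_default(policy.RuleDefault(
        'admin_or_owner', 'is_admin:True or project_id:%(project_id)s'))
    enforcer.register_default(policy.RuleDefault(
        'demo:servers:show', 'rule:admin_or_owner'))

    # Request context: a non-admin user in project-a.
    creds = {'project_id': 'project-a', 'user_id': 'user-1', 'roles': ['member']}

    # Target built from the request context itself: the project_id
    # comparison is trivially true, so the check never blocks anything.
    print(enforcer.enforce('demo:servers:show',
                           {'project_id': creds['project_id']}, creds))  # True

    # Target built from the resource being fetched (owned by project-b):
    # now the very same rule denies the cross-project show.
    print(enforcer.enforce('demo:servers:show',
                           {'project_id': 'project-b'}, creds))          # False

So unless the instance's own project_id ends up in the target, removing project_only=True at the DB layer would let any authenticated user show any instance regardless of the admin_or_owner policy.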
-melanie [2] https://github.com/openstack/nova/blob/3548cf59217f62966a21ea65a8cb744606431bd6/nova/api/openstack/compute/servers.py#L425 From km.giuseppesannino at gmail.com Wed Feb 20 09:38:16 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Wed, 20 Feb 2019 10:38:16 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> Message-ID: Hi Feilong, Bharat, thanks for your answer. @Feilong, >From /etc/kolla/heat-engine/heat.conf I see: [clients_keystone] auth_uri = http://10.1.7.201:5000 This should map into auth_url within the k8s master. Within the k8s master in /etc/os-collect-config.conf I see: [heat] auth_url = http://10.1.7.201:5000/v3/ : : resource_name = kube-master region_name = null and from /etc/sysconfig/heat-params (among the others): : REGION_NAME="RegionOne" : AUTH_URL="http://10.1.7.201:5000/v3" This URL corresponds to the "public" Heat endpoint openstack endpoint list | grep heat | 3d5f58c43f6b44f6b54990d6fd9ff55d | RegionOne | heat | orchestration | True | internal | http://10.1.7.200:8004/v1/%(tenant_id)s | | 8c2492cb0ddc48ca94942a4a299a88dc | RegionOne | heat-cfn | cloudformation | True | internal | http://10.1.7.200:8000/v1 | | b164c4618a784da9ae14da75a6c764a3 | RegionOne | heat | orchestration | True | public | http://10.1.7.201:8004/v1/%(tenant_id)s | | da203f7d337b4587a0f5fc774c993390 | RegionOne | heat | orchestration | True | admin | http://10.1.7.200:8004/v1/%(tenant_id)s | | e0d3743e7c604e5c8aa4684df2d1ce53 | RegionOne | heat-cfn | cloudformation | True | public | http://10.1.7.201:8000/v1 | | efe0b8418aa24dfca33c243e7eed7e90 | RegionOne | heat-cfn | cloudformation | True | admin | http://10.1.7.200:8000/v1 | Connectivity tests: [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ ping 10.1.7.201 PING 10.1.7.201 (10.1.7.201) 56(84) bytes of data. 64 bytes from 10.1.7.201: icmp_seq=1 ttl=63 time=0.285 ms [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ curl http://10.1.7.201:5000/v3/ {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}} Apparently, I can reach such endpoint from within the k8s master @Bharat, that file seems to be properly conifugured to me as well. The problem pointed by "systemctl status heat-container-agent" is with: Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping Still no way forward from my side. /Giuseppe On Tue, 19 Feb 2019 at 22:16, Bharat Kunwar wrote: > I have the same problem. Weird thing is /etc/sysconfig/heat-params has > region_name specified in my case! 
> > Sent from my iPhone > > On 19 Feb 2019, at 22:00, Feilong Wang wrote: > > Can you talk to the Heat API from your master node? > > > On 20/02/19 6:43 AM, Giuseppe Sannino wrote: > > Hi all...again, > I managed to get over the previous issue by "not disabling" the TLS in the > cluster template. > From the cloud-init-output.log I see: > Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 > +0000. Up 38.08 seconds. > Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. Datasource > DataSourceEc2. Up 607.13 seconds > > But the cluster creation keeps on failing. > From the journalctl -f I see a possible issue: > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: publicURL endpoint for orchestration service in null region not > found > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: Source [heat] Unavailable. > Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal > runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping > > anyone familiar with this problem ? > > Thanks as usual. > /Giuseppe > > > > > > > > On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino < > km.giuseppesannino at gmail.com> wrote: > >> Hi all, >> need an help. >> I deployed an AIO via Kolla on a baremetal node. Here some information >> about the deployment: >> --------------- >> kolla-ansible: 7.0.1 >> openstack_release: Rocky >> kolla_base_distro: centos >> kolla_install_type: source >> TLS: disabled >> --------------- >> >> >> VMs spawn without issue but I can't make the "Kubernetes cluster >> creation" successfully. It fails due to "Time out" >> >> I managed to log into Kuber Master and from the cloud-init-output.log I >> can see: >> + echo 'Waiting for Kubernetes API...' >> Waiting for Kubernetes API... >> ++ curl --silent http://127.0.0.1:8080/healthz >> + '[' ok = '' ']' >> + sleep 5 >> >> >> Checking via systemctl and journalctl I see: >> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status >> kube-apiserver >> ● kube-apiserver.service - kubernetes-apiserver >> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; >> vendor preset: disabled) >> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; >> 45min ago >> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run >> kube-apiserver (code=exited, status=1/FAILURE) >> Main PID: 3796 (code=exited, status=1/FAILURE) >> >> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Failed with result 'exit-code'. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Service RestartSec=100ms expired, scheduling >> restart. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Scheduled restart job, restart counter is at 6. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> Stopped kubernetes-apiserver. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Start request repeated too quickly. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Failed with result 'exit-code'. >> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> Failed to start kubernetes-apiserver. 
>> >> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u >> kube-apiserver >> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 >> 16:17:00 UTC. -- >> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> Started kubernetes-apiserver. >> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >> Flag --insecure-bind-address has been deprecated, This flag will be removed >> in a future version. >> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >> Flag --insecure-port has been deprecated, This flag will be removed in a >> future version. >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >> Error: error creating self-signed certificates: open >> /var/run/kubernetes/apiserver.crt: permission denied >> : >> : >> : >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >> error: error creating self-signed certificates: open >> /var/run/kubernetes/apiserver.crt: permission denied >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Failed with result 'exit-code'. >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Service RestartSec=100ms expired, scheduling >> restart. >> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >> kube-apiserver.service: Scheduled restart job, restart counter is at 1. >> >> >> May I ask for an help on this ? >> >> Many thanks >> /Giuseppe >> >> >> >> >> -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramshaazeemi2 at gmail.com Wed Feb 20 10:31:49 2019 From: ramshaazeemi2 at gmail.com (Ramsha Azeemi) Date: Wed, 20 Feb 2019 15:31:49 +0500 Subject: No subject Message-ID: hi! i am windows user is it necessary to be a linux ubuntu user for contribution in openstack projects. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Feb 20 10:49:14 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 20 Feb 2019 23:49:14 +1300 Subject: NDSU Capstone Introduction! In-Reply-To: References: Message-ID: Welcome on board! Cheers, Lingxian Kong On Wed, Feb 20, 2019 at 3:43 PM Urbano Moreno, Eduardo < eduardo.urbanomoreno at ndsu.edu> wrote: > Hello OpenStack community, > > > > I just wanted to go ahead and introduce myself, as I am a part of the NDSU > Capstone group! > > > > My name is Eduardo Urbano and I am a Jr/Senior at NDSU. I am currently > majoring in Computer Science, with no minor although that could change > towards graduation. I am currently an intern at an electrical supply > company here in Fargo, North Dakota known as Border States. I am an > information security intern and I am enjoying it so far. I have learned > many interesting security things and have also became a little paranoid of > how easily someone can get hacked haha. 
Anyways, I am so excited to be on > board and be working with OpenStack for this semester. So far I have > learned many new things and I can’t wait to continue on learning. > > > > Thank you! > > > > > > -Eduardo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharat at stackhpc.com Wed Feb 20 11:04:06 2019 From: bharat at stackhpc.com (Bharat Kunwar) Date: Wed, 20 Feb 2019 12:04:06 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> Message-ID: <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Hi Giuseppe, What version of heat are you running? Can you check if you have this patch merged? https://review.openstack.org/579485 https://review.openstack.org/579485 Bharat Sent from my iPhone > On 20 Feb 2019, at 10:38, Giuseppe Sannino wrote: > > Hi Feilong, Bharat, > thanks for your answer. > > @Feilong, > From /etc/kolla/heat-engine/heat.conf I see: > [clients_keystone] > auth_uri = http://10.1.7.201:5000 > > This should map into auth_url within the k8s master. > Within the k8s master in /etc/os-collect-config.conf I see: > > [heat] > auth_url = http://10.1.7.201:5000/v3/ > : > : > resource_name = kube-master > region_name = null > > > and from /etc/sysconfig/heat-params (among the others): > : > REGION_NAME="RegionOne" > : > AUTH_URL="http://10.1.7.201:5000/v3" > > This URL corresponds to the "public" Heat endpoint > openstack endpoint list | grep heat > | 3d5f58c43f6b44f6b54990d6fd9ff55d | RegionOne | heat | orchestration | True | internal | http://10.1.7.200:8004/v1/%(tenant_id)s | > | 8c2492cb0ddc48ca94942a4a299a88dc | RegionOne | heat-cfn | cloudformation | True | internal | http://10.1.7.200:8000/v1 | > | b164c4618a784da9ae14da75a6c764a3 | RegionOne | heat | orchestration | True | public | http://10.1.7.201:8004/v1/%(tenant_id)s | > | da203f7d337b4587a0f5fc774c993390 | RegionOne | heat | orchestration | True | admin | http://10.1.7.200:8004/v1/%(tenant_id)s | > | e0d3743e7c604e5c8aa4684df2d1ce53 | RegionOne | heat-cfn | cloudformation | True | public | http://10.1.7.201:8000/v1 | > | efe0b8418aa24dfca33c243e7eed7e90 | RegionOne | heat-cfn | cloudformation | True | admin | http://10.1.7.200:8000/v1 | > > Connectivity tests: > [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ ping 10.1.7.201 > PING 10.1.7.201 (10.1.7.201) 56(84) bytes of data. > 64 bytes from 10.1.7.201: icmp_seq=1 ttl=63 time=0.285 ms > > [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ curl http://10.1.7.201:5000/v3/ > {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}} > > > Apparently, I can reach such endpoint from within the k8s master > > > @Bharat, > that file seems to be properly conifugured to me as well. > The problem pointed by "systemctl status heat-container-agent" is with: > > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. 
Skipping > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping > > > Still no way forward from my side. > > /Giuseppe > > > > > > > > > > > > > > > > > >> On Tue, 19 Feb 2019 at 22:16, Bharat Kunwar wrote: >> I have the same problem. Weird thing is /etc/sysconfig/heat-params has region_name specified in my case! >> >> Sent from my iPhone >> >>> On 19 Feb 2019, at 22:00, Feilong Wang wrote: >>> >>> Can you talk to the Heat API from your master node? >>> >>> >>> >>>> On 20/02/19 6:43 AM, Giuseppe Sannino wrote: >>>> Hi all...again, >>>> I managed to get over the previous issue by "not disabling" the TLS in the cluster template. >>>> From the cloud-init-output.log I see: >>>> Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 +0000. Up 38.08 seconds. >>>> Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. Datasource DataSourceEc2. Up 607.13 seconds >>>> >>>> But the cluster creation keeps on failing. >>>> From the journalctl -f I see a possible issue: >>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: publicURL endpoint for orchestration service in null region not found >>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: Source [heat] Unavailable. >>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping >>>> >>>> anyone familiar with this problem ? >>>> >>>> Thanks as usual. >>>> /Giuseppe >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>>> On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino wrote: >>>>> Hi all, >>>>> need an help. >>>>> I deployed an AIO via Kolla on a baremetal node. Here some information about the deployment: >>>>> --------------- >>>>> kolla-ansible: 7.0.1 >>>>> openstack_release: Rocky >>>>> kolla_base_distro: centos >>>>> kolla_install_type: source >>>>> TLS: disabled >>>>> --------------- >>>>> >>>>> >>>>> VMs spawn without issue but I can't make the "Kubernetes cluster creation" successfully. It fails due to "Time out" >>>>> >>>>> I managed to log into Kuber Master and from the cloud-init-output.log I can see: >>>>> + echo 'Waiting for Kubernetes API...' >>>>> Waiting for Kubernetes API... >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> >>>>> >>>>> Checking via systemctl and journalctl I see: >>>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status kube-apiserver >>>>> ● kube-apiserver.service - kubernetes-apiserver >>>>> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled) >>>>> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; 45min ago >>>>> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run kube-apiserver (code=exited, status=1/FAILURE) >>>>> Main PID: 3796 (code=exited, status=1/FAILURE) >>>>> >>>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. 
>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 6. >>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver. >>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Start request repeated too quickly. >>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Failed to start kubernetes-apiserver. >>>>> >>>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u kube-apiserver >>>>> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 16:17:00 UTC. -- >>>>> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Started kubernetes-apiserver. >>>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version. >>>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-port has been deprecated, This flag will be removed in a future version. >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>>>> : >>>>> : >>>>> : >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1. >>>>> >>>>> >>>>> May I ask for an help on this ? >>>>> >>>>> Many thanks >>>>> /Giuseppe >>>>> >>>>> >>>>> >>>>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Wed Feb 20 13:58:15 2019 From: openstack at medberry.net (David Medberry) Date: Wed, 20 Feb 2019 06:58:15 -0700 Subject: [all] Denver Forum Brainstorming In-Reply-To: References: Message-ID: Probably that should have been a link to: https://wiki.openstack.org/wiki/Forum/Denver2019 On Tue, Feb 19, 2019 at 7:41 PM Kendall Nelson wrote: > > Hello Everyone! > > > Welcome to the topic selection process for our Forum in Denver! 
This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. > > > For OpenStack > > Denver marks the beginning of Train’s release cycle, where ideas and requirements will be gathered. We should come armed with feedback from the upcoming Stein release if at all possible. We aim to ensure the broadest coverage of topics that will allow for multiple parts of the community getting together to discuss key areas within our community/projects. > > > For OSF Projects (StarlingX, Zuul, Airship, Kata Containers) > > As a refresher, the idea is to gather ideas and requirements for your project’s upcoming release. Look to https://wiki.openstack.org/wiki/Forum for an idea of how to structure fishbowls and discussions for your project. The idea is to ensure the broadest coverage of topics, while allowing for the project community to discuss critical areas of concern. To make sure we are presenting the best topics for discussion, we have asked representatives of each of your projects to help us out in the Forum selection process. > > > There are two stages to the brainstorming: > > > 1.If you haven’t already, its encouraged that you set up an etherpad with your team and start discussing ideas you'd like to talk about at the Forum and work out which ones to submit. > > > 2. On the 22nd of February, we will open up a more formal web-based tool for you to submit abstracts for the most popular sessions that came out of your brainstorming. > > > Make an etherpad and add it to the list at: https://wiki.openstack.org/wiki/Forum/Denver2018 > > > This is your opportunity to think outside the box and talk with other projects, groups, and individuals that you might not see during Summit sessions. Look for interested parties to collaborate with and share your ideas. > > > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big picture, including beyond just one release cycle and new technologies > > eg Making OpenStack One Platform for containers/VMs/Bare Metal (Strategic session) the entire community congregates to share opinions on how to make OpenStack achieve its integration engine goal > > Cross-project sessions, in a similar vein to what has happened at past forums, but with increased emphasis on issues that are of relevant to all areas of the community > > eg Rolling Upgrades at Scale (Cross-Project session) – the Large Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues that come up with rolling upgrades when there’s a large number of machines. > > Project-specific sessions, where community members most interested in a specific project can discuss their experience with the project over the last release and provide feedback, collaborate on priorities, and present or generate 'blue sky' ideas for the next release > > eg Neutron Pain Points (Project-Specific session) – Co-organized by neutron developers and users. Neutron developers bring some specific questions about implementation and usage. Neutron users bring feedback from the latest release. All community members interested in Neutron discuss ideas about the future. 
> > > Think about what kind of session ideas might end up as: Project-specific, cross-project or strategic/whole-of-community discussions. There'll be more slots for the latter two, so do try and think outside the box! > > > This part of the process is where we gather broad community consensus - in theory the second part is just about fitting in as many of the good ideas into the schedule as we can. > > > Further details about the forum can be found at: https://wiki.openstack.org/wiki/Forum > > > Thanks all! > > Kendall Nelson, on behalf of the OpenStack Foundation, User Committee & Technical Committee > > > > From aschultz at redhat.com Wed Feb 20 14:10:53 2019 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 20 Feb 2019 07:10:53 -0700 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled In-Reply-To: <20190220041555.54yc5diqviszvb6e@redhat.com> References: <20190220041555.54yc5diqviszvb6e@redhat.com> Message-ID: On Tue, Feb 19, 2019 at 9:19 PM Lars Kellogg-Stedman wrote: > > I'm using the tripleo standalone install to set up an Ironic test > environment. With recent tripleo master, the deploy started failing > because the DockerOvn*Image parameters weren't defined. Here's what I > did to get everything working: > > 1. I added to my deploy: > > -e /usr/share/tripleo-heat-templates/environment/services/neutron-ovn-standalone.yaml > > With this change, `openstack tripleo container image prep` > correctly detected that ovn was enabled and generated the > appropriate image parameters. > > 2. environments/services/ironic.yaml sets: > > NeutronMechanismDrivers: ['openvswitch', 'baremetal'] > > Since I didn't want openvswitch enabled in this deployment, I > explicitly set the mechanism drivers in a subsequent environment > file: > > NeutronMechanismDrivers: ['ovn', 'baremetal'] > > 3. The neutron-ovn-standalone.yaml environment explicitly disables > the non-ovn neutron services. Ironic requires the > services of the neutron_dhcp_agent, so I had to add: > > OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-dhcp-container-puppet.yaml > > With this in place, the ironic nodes were able to receive dhcp > responses and were able to boot. > > 3. In order to provide the baremetal nodes with a route to the nova > metadata service, I added the following to my deploy: > > NeutronEnableForceMetadata: true > > This provides the baremetal nodes with a route to 169.254.169.254 > via the neutron dhcp namespace. > > 4. In order get the metadata service to respond correctly, I also had > to enable the neutron metadata agent: > > OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-metadata-container-puppet.yaml > > This returned my Ironic deployment to a functioning state: I can > successfully boot baremetal nodes and provide them with configuration > information via the metadata service. > > I'm curious if this was the *correct* solution, or if there was a > better method of getting things working. > I think you're hitting https://bugs.launchpad.net/tripleo/+bug/1816663 Dan has proposed a patch https://review.openstack.org/#/c/637989/. 
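In the meantime, the workarounds you describe can be collapsed into one extra environment file, passed with -e after ironic.yaml and neutron-ovn-standalone.yaml. This is only a sketch based on your message (not something I have tested), and the template paths are the ones you quoted:

  resource_registry:
    # re-enable the agents that the ovn-standalone environment disables
    OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-dhcp-container-puppet.yaml
    OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-metadata-container-puppet.yaml

  parameter_defaults:
    # drop openvswitch, keep ovn + baremetal
    NeutronMechanismDrivers: ['ovn', 'baremetal']
    # give baremetal nodes a route to 169.254.169.254 via the dhcp namespace
    NeutronEnableForceMetadata: true
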
This is a side effect of switch to ovn by default i believe > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > From ignaziocassano at gmail.com Wed Feb 20 14:13:25 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 20 Feb 2019 15:13:25 +0100 Subject: [cinder]queens][vmax3] FC issue Message-ID: Hello All. I'am trying to configure cinder driver for vmax3 fc using the rest api driver. I am facing issues creating volumes from images because masking view errors: The error message received was Bad or unexpected response from the storage volume backend API: Error retrieving masking group On unisphere (8.4) log files I found that moveVolumeToStorageGroup is not found, while the openstack dell_emc rest-py is using it in its payload. Anyone could help us with this problem ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Wed Feb 20 14:23:01 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Wed, 20 Feb 2019 09:23:01 -0500 Subject: [oslo] How to properly cleanup FakeExchangeManager between test cases In-Reply-To: References: <1550483993.10501.6@smtp.office365.com> Message-ID: Opened a launchpad to track this: https://bugs.launchpad.net/oslo.messaging/+bug/1816769 On 2/18/19, Doug Hellmann wrote: > Balázs Gibizer writes: > >> Hi, >> >> Nova has tests that are using the oslo.messaging FakeDriver >> implementation by providing the 'fake://' transport url. The >> FakeExchangeManager keeps a class level dict of FakeExchange objects >> keyed by the communication topic[1]. This can cause that an RPC message >> sent by a test case is received by a later test case running in the >> same process. I did not find any proper way to clean up the >> FakeExchangeManager at the end of each test case to prevent this. To >> fix the problem in the nova test I did a hackish cleanup by overwriting >> FakeExchangeManager._exchanges directly during test case cleanup [2]. >> Is there a better way to do this cleanup? >> >> Cheers, >> gibi >> >> [1] >> https://github.com/openstack/oslo.messaging/blob/0a784d260465bc7ba878bedeb5c1f184e5ff6e2e/oslo_messaging/_drivers/impl_fake.py#L149 >> [2] https://review.openstack.org/#/c/637233/1/nova/tests/fixtures.py > > It sounds like we need a test fixture to manage that, added to > oslo.messaging so if the internal implementation of the fake driver > changes we can update the fixture without breaking consumers of the > library (and where it would be deemed "safe" to modify private > properties of the class). > > I'm sure the Oslo team would be happy to review patches to create that. > > -- > Doug > > -- Ken Giusti (kgiusti at gmail.com) From mnaser at vexxhost.com Wed Feb 20 14:33:11 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 20 Feb 2019 09:33:11 -0500 Subject: [openstack-ansible] setup-infrastructure failure In-Reply-To: References: Message-ID: Hi, There was an issue that affected CentOS that was fixed a few days ago. Can you checkout master again and try again? 
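Something along these lines should pick up the fix (a rough sketch, assuming the usual /opt/openstack-ansible checkout):

  cd /opt/openstack-ansible
  git fetch origin
  git checkout master && git pull
  # re-bootstrap so the updated roles/plugins are used
  scripts/bootstrap-ansible.sh
  cd playbooks
  openstack-ansible setup-infrastructure.yml
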
Mohammed On Tue, Feb 19, 2019 at 7:42 AM vladimir franciz blando < vladimir.blando at gmail.com> wrote: > I did, and it works > > - > [root at openstack-ansible playbooks]# curl > http://172.29.236.100:8181/os-releases/18.1.3/centos-7.6-x86_64/requirements_absolute_requirements.txt > > networking_sfc==7.0.1.dev4 > urllib3==1.23 > alabaster==0.7.11 > restructuredtext_lint==1.1.3 > pylxd==2.2.7 > sphinxcontrib_seqdiag==0.8.5 > oslo.cache==1.30.1 > tooz==1.62.0 > pytz==2018.5 > pysaml2==4.5.0 > pathtools==0.1.2 > appdirs==1.4.3 > ... > > - Vlad > > ᐧ > > On Tue, Feb 19, 2019 at 8:30 PM Jean-Philippe Evrard < > jean-philippe at evrard.me> wrote: > >> You should try to curl that link from within your >> `aio1_utility_container-9fa7b0be` container. >> It seems like a networking configuration issue. >> > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Feb 20 14:46:13 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 20 Feb 2019 14:46:13 +0000 (GMT) Subject: [tc] Questions for TC Candidates Message-ID: It's the Campaigning slot of the TC election process, where members of the community (including the candidates) are encouraged to ask the candidates questions and witness some debate. I have some questions. First off, I'd like to thank all the candidates for running and being willing to commit some of their time. I'd also like to that group as a whole for being large enough to force an election. A representative body that is not the result of an election would not be very representing nor have much of a mandate. The questions follow. Don't feel obliged to answer all of these. The point here is to inspire some conversation that flows to many places. I hope other people will ask in the areas I've chosen to skip. If you have a lot to say, it might make sense to create a different message for each response. Beware, you might be judged on your email etiquette and attention to good email technique! * How do you account for the low number of candidates? Do you consider this a problem? Why or why not? * Compare and contrast the role of the TC now to 4 years ago. If you weren't around 4 years ago, comment on the changes you've seen over the time you have been around. In either case: What do you think the TC role should be now? * What, to you, is the single most important thing the OpenStack community needs to do to ensure that packagers, deployers, and hobbyist users of OpenStack are willing to consistently upstream their fixes and have a positive experience when they do? What is the TC's role in helping make that "important thing" happen? * If you had a magic wand and could inspire and make a single sweeping architectural or software change across the services, what would it be? For now, ignore legacy or upgrade concerns. What role should the TC have in inspiring and driving such changes? * What can the TC do to make sure that the community (in its many dimensions) is informed of and engaged in the discussions and decisions of the TC? * How do you counter people who assert the TC is not relevant? (Presumably you think it is, otherwise you would not have run. If you don't, why did you run?) That's probably more than enough. Thanks for your attention. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mnaser at vexxhost.com Wed Feb 20 15:24:16 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 20 Feb 2019 10:24:16 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: Hi Chris, Thanks for kicking this off. I've added my replies in-line. Thank you for your past term as well. Regards, Mohammed On Wed, Feb 20, 2019 at 9:49 AM Chris Dent wrote: > > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you > consider this a problem? Why or why not? Just for context, I wanted to share the following numbers to formulate my response: Ocata candidates: 21 Pike candidates: 14 Queens candidates: 16 Rocky candidates: 10 We're indeed seeing the numbers grow cycle over cycle. However, a lot of the candidates are people that most seem to have ran once and upon not being elected, they didn't take a chance to go again. I think perhaps we should encourage reaching out to those previous candidates, especially those who are still parts of the community still to nominate themselves again. I do however think that with the fact that our software is becoming more stable and having less overall contributors than before, it might be a good time to evaluate the size of the TC, but that could be a really interesting challenge to deal with and I'm not quite so sure yet about how we can approach that. I don't think it's a problem, we had a really quiet start but then a lot of people put their names in. I think if the first candidate had come in a bit earlier, we would have seen more candidates because I get this feeling no one wants to go "first". > * Compare and contrast the role of the TC now to 4 years ago. If you > weren't around 4 years ago, comment on the changes you've seen > over the time you have been around. In either case: What do you > think the TC role should be now? 4 years ago, we were probably around the Kilo release cycle at that time and things were a lot different in the ecosystem. At the time, I think the TC had more of a role of governing as the projects had plenty of traction and things were moving. As OpenStack seems to come closer to delivering most of the value that you need, without needing as much effort, I think it's important for us to try and envision how we can better place OpenStack in the overall infrastructure ecosystem and focus on marketing it. 
I speak a lot to users and deployers daily and I find out a lot of things about current impressions of OpenStack, once I explain it to them, they are all super impressed by it so I think we need to do a better job at educating people. Also, I think the APAC region is one that is a big growing user and community of OpenStack that we usually don't put as much thought into. We need to make sure that we invest more time into the community there. > * What, to you, is the single most important thing the OpenStack > community needs to do to ensure that packagers, deployers, and > hobbyist users of OpenStack are willing to consistently upstream > their fixes and have a positive experience when they do? What is > the TC's role in helping make that "important thing" happen? I think our tooling is hard to use. I really love it, but it's really not straightforward for most new comers. The majority of users are familiar with the GitHub workflow, the Gerrit one is definitely one that needs a bit of a learning curve. I think this introduces a really weird situation where if I'm not familiar with all of that and I want to submit a patch that's a simple change, it will take me more work to get setup on Gerrit than it does to make the fix. I think most people give up and just don't want to bother at that point, perhaps a few more might be more inclined to get through it but it's really a lot of work to allow pushing a simple patch. > * If you had a magic wand and could inspire and make a single > sweeping architectural or software change across the services, > what would it be? For now, ignore legacy or upgrade concerns. > What role should the TC have in inspiring and driving such > changes? Oh. - Stop using RabbitMQ as an RPC, it's the worst most painful component to run in an entire OpenStack deployment. It's always broken. Switch into something that uses HTTP + service registration to find endpoints. - Use Keystone as an authoritative service catalog, stop having to configure URLs for services inside configuration files. It's confusing and unreliable and causes a lot of breakages often. - SSL first. For most services, the overhead is so small, I don't see why we wouldn't ever have all services to run SSL only. - Single unified client, we're already moving towards this with the OpenStack client but it's probably been one of our biggest weaknesses that have not been completed and fully cleared out. Those are a few that come to mind right now, I'm sure I could come up with so much more. > * What can the TC do to make sure that the community (in its many > dimensions) is informed of and engaged in the discussions and > decisions of the TC? We need to follow the mailing lists and keep up to date at what users are trying to use OpenStack for. There's emerging use cases such as using it for edge deployments, the increase of bare-metal deployments (ironic) and thinking about how it can benefit end users, all of this can be seen by following mailing list discussions, Twitter-verse, and other avenues. I've also found amazing value in being part of WeChat communities which bring a lot of insight from the APAC region. > * How do you counter people who assert the TC is not relevant? > (Presumably you think it is, otherwise you would not have run. If > you don't, why did you run?) This is a tough one. That's something we need to work and change, I think that historically the involvement of the TC and projects have been very hands-off because of the velocity that projects moved at. 
Now that we're a bit slower, I think that having the TC involved in the projects can be very interesting. It provides access to a group of diverse and highly technical individuals from different backgrounds (operators, developers -- but maybe not as much users) to chime in on certain directions of the projects. > That's probably more than enough. Thanks for your attention. Thank you for starting this. > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Wed Feb 20 15:50:21 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Feb 2019 15:50:21 +0000 Subject: [first-contact-sig] Contributing from Windows In-Reply-To: References: Message-ID: <20190220155020.k7nhpqgu5mjkfvw3@yuggoth.org> On 2019-02-20 15:31:49 +0500 (+0500), Ramsha Azeemi wrote: > hi! i am windows user is it necessary to be a linux ubuntu user > for contribution in openstack projects. [I've added a subject to your message and tagged it for our "First Contact" special interest group, for better visibility.] I think it really depends on what sort of contributions you want to make, as far as how easy that would be without learning to make use of common Unix/Linux tools and commands. There are a number of ways to contribute to the community, many of which can be found outlined here: https://www.openstack.org/community/ That said, it's hard to know what you mean by "windows user" or "linux ubuntu user" in your question. Are you worried about your ability to use command-line tools, or is there some deeper problem you're concerned with there? For example, if you are interested in contributing by improving the software which makes up OpenStack, then using a Linux environment will make you far more effective at that in the long run. To be frank, OpenStack is complicated software, and learning to use a Linux command-line environment is unlikely to be one of the greater challenges you'll face as a contributor. I gather we have quite a few contributors whose desktop environment is MS Windows but who do development work in a local virtual machine or even over the Internet in remote VM instances in public service providers. Also, I'm led to believe Windows now provides a Linux-like command shell with emulated support for Ubuntu packages (I was talking to a new contributor just last week who was using that to propose source code changes for review). So to summarize, I recommend first contemplating what manner of contribution most excites you. Expect to have to learn lots of new things (not just new tools and workflows, those are only the beginning of the journey), and most of all have patience with the process. We're a friendly bunch and eager to help newcomers turn into productive members of our community. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ashlee at openstack.org Wed Feb 20 16:00:16 2019 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 20 Feb 2019 10:00:16 -0600 Subject: Denver Summit Schedule Live Message-ID: <5874F7CF-43AD-4928-B4C9-220FCC0D8E19@openstack.org> Hi everyone, The agenda for the Open Infrastructure Summit (formerly the OpenStack Summit) is now live! 
If you need a reason to join the Summit in Denver, April 29-May 1, here’s what you can expect: Breakout sessions spanning 30+ open source projects from technical community leaders and organizations including ARM, AT&T, China Mobile, Baidu, Boeing, Blizzard Entertainment, Haitong Securities Company, NASA, and more. Project updates and onboarding from OSF projects: Airship, Kata Containers, OpenStack, StarlingX, and Zuul. Join collaborative sessions at the Forum , where open infrastructure operators and upstream developers will gather to jointly chart the future of open source infrastructure, discussing topics ranging from upgrades to networking models and how to get started contributing. Get hands on training around open source technologies directly from the developers and operators building the software. Now what? Register before prices increase on February 27 at 11:59pm PT (February 28 at 7:59am UTC) Then, book a room at the official Summit hotel while rooms are still available! Questions? Reach out to summit at openstack.org Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Wed Feb 20 16:21:25 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Wed, 20 Feb 2019 11:21:25 -0500 Subject: [first-contact-sig] Contributing from Windows In-Reply-To: <20190220155020.k7nhpqgu5mjkfvw3@yuggoth.org> References: <20190220155020.k7nhpqgu5mjkfvw3@yuggoth.org> Message-ID: <20190220162125.4mqwvgjy2v4ti67m@csail.mit.edu> On Wed, Feb 20, 2019 at 03:50:21PM +0000, Jeremy Stanley wrote: :On 2019-02-20 15:31:49 +0500 (+0500), Ramsha Azeemi wrote: :> hi! i am windows user is it necessary to be a linux ubuntu user :> for contribution in openstack projects. Welcome, It's a big community with many different things that neeed doing so whatever skills and resources you bring there is liekly a use for them! Jeremy's response was pretty extensive, just to undeline one of his points, it depends how you want to contribute. Most of OpenStack runs on Linux so would require some interaction with Linux either as a VM or remote resource for testing. Some parts however like Documentation, Translation, and the Commandline Client are not tied to a particular operating system and may be easier to work on directly in a non-Linux environment. -Jon :[I've added a subject to your message and tagged it for our "First :Contact" special interest group, for better visibility.] : :I think it really depends on what sort of contributions you want to :make, as far as how easy that would be without learning to make use :of common Unix/Linux tools and commands. There are a number of ways :to contribute to the community, many of which can be found outlined :here: https://www.openstack.org/community/ : :That said, it's hard to know what you mean by "windows user" or :"linux ubuntu user" in your question. Are you worried about your :ability to use command-line tools, or is there some deeper problem :you're concerned with there? For example, if you are interested in :contributing by improving the software which makes up OpenStack, :then using a Linux environment will make you far more effective at :that in the long run. To be frank, OpenStack is complicated :software, and learning to use a Linux command-line environment is :unlikely to be one of the greater challenges you'll face as a :contributor. 
: :I gather we have quite a few contributors whose desktop environment :is MS Windows but who do development work in a local virtual machine :or even over the Internet in remote VM instances in public service :providers. Also, I'm led to believe Windows now provides a :Linux-like command shell with emulated support for Ubuntu packages :(I was talking to a new contributor just last week who was using :that to propose source code changes for review). : :So to summarize, I recommend first contemplating what manner of :contribution most excites you. Expect to have to learn lots of new :things (not just new tools and workflows, those are only the :beginning of the journey), and most of all have patience with the :process. We're a friendly bunch and eager to help newcomers turn :into productive members of our community. :-- :Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From miguel at mlavalle.com Wed Feb 20 16:21:55 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 20 Feb 2019 10:21:55 -0600 Subject: [openstack-dev] [neutron] Change of day and time for the L3 sub-team meeting Message-ID: Hi Stackers, In an effort to make the L3 sub-team meeting schedule friendlier to Far East developers, we have moved it to Wednesdays at 1400 UTC. We are also welcoming Liu Yulong as one of the co-chairs of the meeting: https://review.openstack.org/#/c/637900/. Please update your calendars accordingly. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Wed Feb 20 17:20:05 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Wed, 20 Feb 2019 17:20:05 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: Hi Chris, Thanks for getting us started :) replies inline below. > * How do you account for the low number of candidates? Do you >   consider this a problem? Why or why not? Change is inevitable, and in the last 3 years (distinctly since the Boston summit) there have been massive changes to our community. OpenStack went from being the new hotness to becoming a stable, secure open source project that is well respected in the open source community and that was reflected by the amount of people contributing to the projects. I see the low number of candidates for the Train TC election as a direct reflection these changes. While a position on the TC still garners a high level of respect from peers and employers, I have seen a distinct decline in the push for leadership positions from large-scale investors, due to OpenStack's stability and the resulting decline for large changes to be made to OpenStack as a product. Is this a problem? No. Not if we view this as a new start. Defining who are are now as a stable open source product will set us apart from communities that wish to continue to ride the success high. > * Compare and contrast the role of the TC now to 4 years ago. If you >   weren't around 4 years ago, comment on the changes you've seen >   over the time you have been around. In either case: What do you >   think the TC role should be now? The TC has always been a position of governance that is (debatably) clearly defined as an elected group that provides technical leadership for OpenStack as a whole. I believe it is not the question of what the TC is, but rather, what is the "technical guidance" that the TC provides, and how that has evolved. 
Four (4) years ago, OpenStack was in a different place in the product life cycle. The hype was high, and we had buy-in from a wide range of investors coming from all different parts of the technology industry. My experience with the TC then was more of a "governing" body helping to shape an incredibly fast growing community and product, whereas now I see it more as a mediation, communication, and community platform that has a focus on technical issues. > * What, to you, is the single most important thing the OpenStack >   community needs to do to ensure that packagers, deployers, and >   hobbyist users of OpenStack are willing to consistently upstream >   their fixes and have a positive experience when they do? What is >   the TC's role in helping make that "important thing" happen? As my experience stems directly from documentation, I genuinely believe that this is part of the ticket. Speaking as a communicator and a collaborator, I believe we have along the way lost touch with defining a minimum barrier to entry. When I started in 2014, I was not particularly technically minded. I had worked at Red Hat for 2 years, and my experience had enabled me to understand "Cloud" and XML. I was hired by Rackspace with the proviso that, "We need good writers, we can teach you the technology." As a result, (and with some help) I found it easy and accessible to begin contributing and I've been here since. To be able to build documentation today, as a new comer, I would need to install package dependencies for each repository. I have to read at least 2 different contributor guides to get started. And we still are yet to really open our world to our Windows user friends. Installing a package dep isn't hard or it is time consuming, but understanding and knowing what they. I believe we need to be mindful of the time people have. Whether or not they are working with OpenStack as a part of their employment, or if it is in their spare time. > * What can the TC do to make sure that the community (in its many >   dimensions) is informed of and engaged in the discussions and >   decisions of the TC? This is tough, because there are already so many ways that the TC engages with community and I think that's brilliant. The strong presence on the discuss ML, the ability to join the IRC channels, and the genuine interest each TC member has in open discussion. By most standards, this is an incredibly active and public group and I do not wish to criticise it, only encourage what we have - and if new ways of communicating are requested, that we actively seek adoption to ensure we are including everyone. > * How do you counter people who assert the TC is not relevant? >   (Presumably you think it is, otherwise you would not have run. If >   you don't, why did you run?) This relates to my answer to your first question, OpenStack has undergone massive changes not only in community but in focus. It is common that when things change, governing bodies fail to change quickly enough. I can understand how people would come to this conclusion. However, the TC is changing and that is evident by the large number of TC members who stood down this election and the number of those that have elected to stand for Train. I find the challenge of ensuring that the TC remains relevant to be an important part of what I stand for as a candidate. This means adapting to more changes in the future. > > That's probably more than enough. Thanks for your attention. I'd say that's probably more than enough from me too. 
Thanks for your questions, hopefully my answers are equally insightful :) Cheers, Alex From sbauza at redhat.com Wed Feb 20 17:23:50 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 20 Feb 2019 18:23:50 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: Thanks Chris for asking us questions so we can clarify our opinions. On Wed, Feb 20, 2019 at 3:52 PM Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > I agree with you on this point. It's important for OpenStack to have time to discuss about mandates. The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you > consider this a problem? Why or why not? > > Yes, again, I agree and to be honest, when I only saw we were only having 4 candidates 8 hours before the deadline, I said to myself "OK, you love OpenStack. You think the TC is important. But then, why aren't you then throwing your hat ?" We all have opinions, right ? But then, why people don't want to be in the TC ? Because we don't have a lot of time for it ? Or because people think the TC isn't important ? I don't want to discuss about politics here. But I somehow see a parallel in between what the TC is and what the European Union is : both are governances not fully decision-makers but are there for sharing same rules and vision. If we stop having the TC, what would become OpenStack ? Just a set of parallel projects with no common guidance ? The fact that a large number of candidacies went very late (including me) is a bit concerning to me. How can we become better ? I have no idea but saying that probably given the time investment it requires, most of the candidacies were probably holding some management acceptance before people would propose their names. Probably worth thinking about how the investment it requires, in particular given we have less full-time contributors that can dedicate large time for governance. * Compare and contrast the role of the TC now to 4 years ago. If you > weren't around 4 years ago, comment on the changes you've seen > over the time you have been around. In either case: What do you > think the TC role should be now? > > 4 years ago, we were in the Kilo timeframe. That's fun you mention this period, because at that exact time of the year, the TC voted on one of the probably most important decisions that impacted OpenStack : The Big Tent reform [1] Taking a look at this time, I remember frustration and hard talks but also people committed to change things. This decision hasn't changed a lot the existing service projects that were before the Big Tent, but it actually created a whole new ecosystem for developers. 
It had challenges but it never required to be abandoned, which means the program is a success. Now the buzz is gone and the number of projects stable, the TC necessarly has to mutate to a role of making sure all the projects sustain the same pace and reliability. Most of the challenges for the TC is now about defining and applying criterias for ensuring that all our projects have a reasonable state for production. If you see my candidacy letter, two of my main drivers for my nomination are about upgradability and scalability concerns. * What, to you, is the single most important thing the OpenStack > community needs to do to ensure that packagers, deployers, and > hobbyist users of OpenStack are willing to consistently upstream > their fixes and have a positive experience when they do? What is > the TC's role in helping make that "important thing" happen? > > There are two very distinct reasons when a company decides to downstream-only : either by choice or because of technical reasons. I don't think a lot of companies decide to manage technical debt on their own by choice. OpenStack is nearly 9 years old and most of the users know the price it is. Consequently, I assume that the reasons are technical : 1/ they're running an old version and haven't upgraded (yet). We have good user stories of large cloud providers that invested in upgrades (for example OVH) and see the direct benefit of it. Maybe we can educate more on the benefits of upgrading frequently. 2/ they think upstreaming is difficult. I'm all open to hear the barriers they have. For what it's worth, OpenStack invested a lot in mentoring with the FirstContact SIG, documentation and Upstream Institute. There will probably also be a new program about peer-mentoring and recognition [2] if the community agrees with the idea. Honestly, I don't know what do do more. If you really can't upstream but care about your production, just take a service contract I guess. > * If you had a magic wand and could inspire and make a single > sweeping architectural or software change across the services, > what would it be? For now, ignore legacy or upgrade concerns. > What role should the TC have in inspiring and driving such > changes? > > Take me as a fool but I don't think the role of the TC is to drive architectural decision between projects. The TC can help two projects to discuss, the TC can (somehow) help moderate between two teams about some architectural concern but certainly not be the driver of such change. That doesn't mean the TC can't be technical. We have goals, for example. But in order to have well defined goals that are understandable by project contributors, we also need to have the projects be the drivers of such architectural changes. > * What can the TC do to make sure that the community (in its many > dimensions) is informed of and engaged in the discussions and > decisions of the TC? > > You made a very good job in providing TC feedback. I surely think the TC has to make sure that a regular weekly feedback is provided. For decisions that impact projects, I don't really see how TC members can vote without getting feedback from the project contributors, so here I see communication (thru Gerrit at least). > * How do you counter people who assert the TC is not relevant? > (Presumably you think it is, otherwise you would not have run. If > you don't, why did you run?) > Again, I think that is a matter of considering the TC responsibilities. 
We somehow need to clarify what are those responsibilities and I think I voiced on that above. > That's probably more than enough. Thanks for your attention. > > I totally appreciate you challenging us. That's very important that people vote based on opinions rather than popularity. -Sylvain [1] https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html [2] https://review.openstack.org/#/c/636956/ > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From preetpalok123 at gmail.com Wed Feb 20 17:56:24 2019 From: preetpalok123 at gmail.com (Preetpal Kaur) Date: Wed, 20 Feb 2019 23:26:24 +0530 Subject: Outreachy contribution Message-ID: Hi! I am Preetpal Kaur new in open source. I want to contribute to open source with the help of outreachy. I choose this project to contribute to.OpenStack Manila Integration with OpenStack CLI (OSC) So @Sofia Enriquez Can you please guide me on how to start -- Preetpal Kaur https://preetpalk.wordpress.com/ https://github.com/Preetpalkaur3701 From doug at doughellmann.com Wed Feb 20 17:58:47 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Feb 2019 12:58:47 -0500 Subject: [tc][election] campaign question: team approval criteria Message-ID: One of the key responsibilities of the Technical Committee is still evaluating projects and teams that want to become official OpenStack projects. The Foundation Open Infrastructure Project approval process has recently produced a different set of criteria for the Board to use for approving projects [1] than the TC uses for approving teams [2]. What parts, if any, of the OIP approval criteria do you think should apply to OpenStack teams? What other changes, if any, would you propose to the official team approval process or criteria? Are we asking the right questions and setting the minimum requirements high enough? Are there any criteria that are too hard to meet? How would you apply those rule changes to existing teams? [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html -- Doug From sbauza at redhat.com Wed Feb 20 18:36:28 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 20 Feb 2019 19:36:28 +0100 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: On Wed, Feb 20, 2019 at 7:00 PM Doug Hellmann wrote: > > One of the key responsibilities of the Technical Committee is still > evaluating projects and teams that want to become official OpenStack > projects. The Foundation Open Infrastructure Project approval process > has recently produced a different set of criteria for the Board to use > for approving projects [1] than the TC uses for approving teams [2]. > > Yup, I was a subscriber of the -tc ML so I noted this draft from ttx's email in December. TBH, I haven't forged any clear opinion yet until you asked about it, so I'll try to give an answer that'll probably evolve over the course of my thinkings. What parts, if any, of the OIP approval criteria do you think should > apply to OpenStack teams? 
> > I think the set of criterias necessarly has to be a bit divergent for one reason : the OIP approval has to be driven by the Board (which includes other criterias which can not necessarly be purely technical) while our criterias are managed by the OpenStack *Technical* Committee, meaning that those criterias are necessarly measured by technical key points. For example, discussing about strategic focus is totally understandable from a Board point of view but is debatable from a technical point of view. The other difference is about user adoption. As we follow a technical guidance, we don't challenge it. We rather just care of the development ecosystem but we even don't take it as a requirement, since we accept a maintenance mode. Probably worth thinking, but maybe we could investigate the idea to have user-defined tags (thanks to the UC) that would help giving a better view of the user adoptation of each project. That being said, most of the criterias I'm seeing on the OIP etherpad look similar to the ones we have for the OpenStack TC : - the project has to follow the 4 Opens. - it has to communicate well with other Foundation projects - it somehow shares same technical best practices The last item is interesting, because the OIP draft at the moment shows more technical requirements than the Foundation ones. For example, VMT is - at the moment I'm writing those lines - quoted as a common best practice, which is something we don't ask for our projects. That's actually a good food for thoughts : security is crucial and shouldn't be just a tag [3]. OpenStack is mature and it's our responsibility to care about CVEs. What other changes, if any, would you propose to the official team > approval process or criteria? Are we asking the right questions and > setting the minimum requirements high enough? Are there any criteria > that are too hard to meet? > We have minimum requirements that are expressed with [2] but there are also tags that are expressed by [4] One tag I feel is missing is about scalability: we had tenets in the past [5] but I don't see them transcribed into tags. That was one of my three items I mentioned in my candidacy email, but I'd love to see us be better on challenging projects on their scalability model. How would you apply those rule changes to existing teams? > I'm more in a favor of an iterative approach with tags first, so that we are able to capture the current problems, and then tackle the problems thru common goals that are well accepted and discussed by all the project teams. -Sylvain > > [1] > http://lists.openstack.org/pipermail/foundation/2019-February/002708.html > [2] > https://governance.openstack.org/tc/reference/new-projects-requirements.html > -- > Doug > > [3] https://governance.openstack.org/tc/reference/tags/index.html#vulnerability-management-tags [4] https://governance.openstack.org/tc/reference/tags/index.html [5] https://wiki.openstack.org/wiki/BasicDesignTenets -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Wed Feb 20 18:40:13 2019 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 20 Feb 2019 18:40:13 +0000 Subject: [heat] keystone endpoint configuration Message-ID: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> In openstack-ansible we are trying to help a number of our end users with their heat deployments, some of them in conjunction with magnum. 
There is some uncertainty with how the following heat.conf sections should be configured: [clients_keystone] auth_uri = ... [keystone_authtoken] www_authenticate_uri = ... It does not appear to be possible to define a set of internal or external keystone endpoints in heat.conf which allow the following: * The orchestration panels being functional in horizon * Deployers isolating internal openstack from external networks * Deployers using self signed/company cert on the external endpoint * Magnum deployments completing * Heat delivering an external endpoint at [1] * Heat delivering an external endpoint at [2] There are a number of related bugs: https://bugs.launchpad.net/openstack-ansible/+bug/1814909 https://bugs.launchpad.net/openstack-ansible/+bug/1811086 https://storyboard.openstack.org/#!/story/2004808 https://storyboard.openstack.org/#!/story/2004524 Any help we could get from the heat team to try to understand the root cause of these issues would be really helpful. Jon. [1] https://github.com/openstack/heat/blob/master/heat/engine/resources/server_base.py#L87 [2] https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 From mark at stackhpc.com Wed Feb 20 19:08:34 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 20 Feb 2019 19:08:34 +0000 Subject: openstack-kolla precheck failure In-Reply-To: References: Message-ID: On Tue, 19 Feb 2019 at 07:30, vladimir franciz blando < vladimir.blando at gmail.com> wrote: > Hi, > > I have a newly installed node running on CentOS 7 with 2 NICs, my precheck > failed and I can't figure it out. I'm trying out multinode with 1 node for > controller and the other for compute > > --- begin paste --- > TASK [glance : Checking free port for Glance Registry] > ****************************************************************************************************************************************** > fatal: [10.150.7.102]: FAILED! => {"msg": "The conditional check > 'inventory_hostname in groups[glance_services['glance-registry']['group']]' > failed. The error was: error while evaluating conditional > (inventory_hostname in > groups[glance_services['glance-registry']['group']]): Unable to look up a > name or access an attribute in template string ({% if inventory_hostname in > groups[glance_services['glance-registry']['group']] %} True {% else %} > False {% endif %}).\nMake sure your variable name does not contain invalid > characters like '-': argument of type 'StrictUndefined' is not > iterable\n\nThe error appears to have been in > '/usr/share/kolla-ansible/ansible/roles/glance/tasks/precheck.yml': line > 18, column 3, but may\nbe elsewhere in the file depending on the exact > syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Checking > free port for Glance Registry\n ^ here\n"} > to retry, use: --limit @/usr/share/kolla-ansible/ansible/site.retry > > PLAY RECAP > ************************************************************************************************************************************************************************************** > 10.150.7.102 : ok=68 changed=0 unreachable=0 failed=1 > 10.150.7.103 : ok=15 changed=0 unreachable=0 failed=0 > localhost > --- > > > - Vlad > Hi Vlad, What version of kolla-ansible are you using? We are removing the glance-registry group from the inventory during this release cycle, but if you use an official release of kolla-ansible (Rocky, 7.0.0, is the latest), it should be there. 
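If you are carrying a copied or trimmed-down inventory, check that the glance groups are still present. From memory, the Rocky-era multinode inventory contains something like this (a sketch only, group names as I recall them):

  [glance:children]
  control

  [glance-api:children]
  glance

  [glance-registry:children]
  glance

The precheck error you pasted is what you get when the glance-registry group is missing from the inventory entirely.
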
Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Wed Feb 20 19:13:25 2019 From: chris at openstack.org (Chris Hoge) Date: Wed, 20 Feb 2019 11:13:25 -0800 Subject: [baremetal-sig][ironic] Bare Metal SIG First Steps In-Reply-To: <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> Message-ID: <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> Monday the patch for the creation of the Baremetal-SIG was approved by the TC and UC [1]. It's exciting to see the level of interest we've already seen in the planning etherpad [2], and it's time to start kicking off our first initiatives. I'd like to begin by addressing some of the comments in the patch. * Wiki vs Etherpad. My own personal preference is to start with the Etherpad as we get our feet underneath us. As more artifacts and begin to materialize, I think a Wiki would be an excellent location for hosting the information. My primary concern with Wikis is their tendency (from my point of view) to become out of date with the goals of a group. So, to begin with, unless there are any strong objections, we can do initial planning on the Etherpad and graduate to more permanent and resilient landing pages later. * Addressing operational aspects of Ironic. I see this as an absolutely critical aspect of the SIG. We already have organization devoted mostly to development, the Ironic team itself. SIGs are meant to be a collaborative effort between developers, operators, and users. We can send a patch up to clarify that in the governance document. If you are an operator, please use this [baremetal-sig] subject heading to start discussions and organize shared experiences and documentation. * The SIG is focused on all aspects of running bare-metal and Ironic, whether it be as a driver to Nova, a stand-alone service, or built into another project as a component. One of the amazing things about Ironic is its flexibility and versatility. We want to highlight that there's more than one way to do things with Ironic. * Chairs. I would very much like for this to be a community experience, and welcome nominations for co-chairs. I've found in the past that 2-3 co-chairs makes for a good balance, and given the number of people who have expressed interest in the SIG in general I think we should go ahead and appoint two extra people to co-lead the SIG. If this interests you, please self-nominate here and we can use lazy consensus to round out the rest of the leadership. If we have several people step up, we can consider a stronger form of voting using the systems available to us. First goals: I think that an important first goal is in the publication of a whitepaper outlining the motivation, deployment methods, and case studies surrounding OpenStack bare metal, similar to what we did with the containers whitepaper last year. A goal would be to publish at the Denver Open Infrastructure summit. Some initial thoughts and rough schedule can be found here [3], and also linked from the planning etherpad. One of the nice things about working on the whitepaper is we can also generate a bunch of other expanded content based on that work. In particular, I'd very much like to highlight deployment scenarios and case studies. I'm thinking of the whitepaper as a seed from which multiple authors demonstrate their experience and expertise to the benefit of the entire community. 
Another goal we've talked about at the Foundation is the creation of a new bare metal logo program. Distinct from something like the OpenStack Powered Trademark, which focuses on interoperability between OpenStack products with an emphasis on interoperability, this program would be devoted to highlighting products that are shipping with Ironic as a key component of their bare metal management strategy. This could be in many different configurations, and is focused on the shipping of code that solves particular problems, whether Ironic is user-facing or not. We're very much in the planning stages of a program like this, and it's important to get community feedback early on about if you would find it useful and what features you would like to see a program like this have. A few items that we're very interested in getting early feedback on are: * The Pixie Boots mascot has been an important part of the Ironic project, and we would like to apply it to highlight Ironic usage within the logo program. * If you're a public cloud, sell a distribution, provide installation services, or otherwise have some product that uses Ironic, what is your interest in participating in a logo program? * In addition to the logo, would you find collaboration to build content on how Ironic is being used in projects and products in our ecosystem useful? Finally, we have the goals of producing and highlighting content for using and operating Ironic. A list of possible use-cases is included in the SIG etherpad. We're also thinking about setting up a demo booth with a small set of server hardware to demonstrate Ironic at the Open Infrastructure summit. On all of those items, your feedback and collaboration is essential. Please respond to this mailing list if you have thoughts or want to volunteer for any of these items, and also contribute to the etherpad to help organize efforts and add any resources you might have available. Thanks to everyone, and I'll be following up soon with more information and updates. -Chris [1] https://review.openstack.org/#/c/634824/ [2] https://etherpad.openstack.org/p/bare-metal-sig [3] https://etherpad.openstack.org/p/bare-metal-whitepaper From mark at stackhpc.com Wed Feb 20 19:15:48 2019 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 20 Feb 2019 19:15:48 +0000 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Message-ID: Hi, I think we've hit this, and John Garbutt has added the following configuration for Kolla Ansible in /etc/kolla/config/heat.conf: [DEFAULT] region_name_for_services=RegionOne We'll need a patch in kolla ansible to do that without custom config changes. Mark On Wed, 20 Feb 2019 at 11:05, Bharat Kunwar wrote: > Hi Giuseppe, > > What version of heat are you running? > > Can you check if you have this patch merged? > https://review.openstack.org/579485 > > https://review.openstack.org/579485 > > Bharat > > Sent from my iPhone > > On 20 Feb 2019, at 10:38, Giuseppe Sannino > wrote: > > Hi Feilong, Bharat, > thanks for your answer. > > @Feilong, > From /etc/kolla/heat-engine/heat.conf I see: > [clients_keystone] > auth_uri = http://10.1.7.201:5000 > > This should map into auth_url within the k8s master. 
> Within the k8s master in /etc/os-collect-config.conf I see: > > [heat] > auth_url = http://10.1.7.201:5000/v3/ > : > : > resource_name = kube-master > region_name = null > > > and from /etc/sysconfig/heat-params (among the others): > : > REGION_NAME="RegionOne" > : > AUTH_URL="http://10.1.7.201:5000/v3" > > This URL corresponds to the "public" Heat endpoint > openstack endpoint list | grep heat > | 3d5f58c43f6b44f6b54990d6fd9ff55d | RegionOne | heat | > orchestration | True | internal | > http://10.1.7.200:8004/v1/%(tenant_id)s | > | 8c2492cb0ddc48ca94942a4a299a88dc | RegionOne | heat-cfn | > cloudformation | True | internal | http://10.1.7.200:8000/v1 > | > | b164c4618a784da9ae14da75a6c764a3 | RegionOne | heat | > orchestration | True | public | > http://10.1.7.201:8004/v1/%(tenant_id)s | > | da203f7d337b4587a0f5fc774c993390 | RegionOne | heat | > orchestration | True | admin | > http://10.1.7.200:8004/v1/%(tenant_id)s | > | e0d3743e7c604e5c8aa4684df2d1ce53 | RegionOne | heat-cfn | > cloudformation | True | public | http://10.1.7.201:8000/v1 > | > | efe0b8418aa24dfca33c243e7eed7e90 | RegionOne | heat-cfn | > cloudformation | True | admin | http://10.1.7.200:8000/v1 > | > > Connectivity tests: > [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ ping 10.1.7.201 > PING 10.1.7.201 (10.1.7.201) 56(84) bytes of data. > 64 bytes from 10.1.7.201: icmp_seq=1 ttl=63 time=0.285 ms > > [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ curl > http://10.1.7.201:5000/v3/ > {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", > "media-types": [{"base": "application/json", "type": > "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": > [{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}} > > > Apparently, I can reach such endpoint from within the k8s master > > > @Bharat, > that file seems to be properly conifugured to me as well. > The problem pointed by "systemctl status heat-container-agent" is with: > > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: publicURL endpoint for orchestration service in null region not > found > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: Source [heat] Unavailable. > Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: publicURL endpoint for orchestration service in null region not > found > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: Source [heat] Unavailable. > Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal > runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping > > > Still no way forward from my side. > > /Giuseppe > > > > > > > > > > > > > > > > > > On Tue, 19 Feb 2019 at 22:16, Bharat Kunwar wrote: > >> I have the same problem. Weird thing is /etc/sysconfig/heat-params has >> region_name specified in my case! >> >> Sent from my iPhone >> >> On 19 Feb 2019, at 22:00, Feilong Wang wrote: >> >> Can you talk to the Heat API from your master node? >> >> >> On 20/02/19 6:43 AM, Giuseppe Sannino wrote: >> >> Hi all...again, >> I managed to get over the previous issue by "not disabling" the TLS in >> the cluster template. >> From the cloud-init-output.log I see: >> Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 >> +0000. Up 38.08 seconds. >> Cloud-init v. 
17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. >> Datasource DataSourceEc2. Up 607.13 seconds >> >> But the cluster creation keeps on failing. >> From the journalctl -f I see a possible issue: >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >> runc[2723]: publicURL endpoint for orchestration service in null region not >> found >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >> runc[2723]: Source [heat] Unavailable. >> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >> runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping >> >> anyone familiar with this problem ? >> >> Thanks as usual. >> /Giuseppe >> >> >> >> >> >> >> >> On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino < >> km.giuseppesannino at gmail.com> wrote: >> >>> Hi all, >>> need an help. >>> I deployed an AIO via Kolla on a baremetal node. Here some information >>> about the deployment: >>> --------------- >>> kolla-ansible: 7.0.1 >>> openstack_release: Rocky >>> kolla_base_distro: centos >>> kolla_install_type: source >>> TLS: disabled >>> --------------- >>> >>> >>> VMs spawn without issue but I can't make the "Kubernetes cluster >>> creation" successfully. It fails due to "Time out" >>> >>> I managed to log into Kuber Master and from the cloud-init-output.log I >>> can see: >>> + echo 'Waiting for Kubernetes API...' >>> Waiting for Kubernetes API... >>> ++ curl --silent http://127.0.0.1:8080/healthz >>> + '[' ok = '' ']' >>> + sleep 5 >>> >>> >>> Checking via systemctl and journalctl I see: >>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status >>> kube-apiserver >>> ● kube-apiserver.service - kubernetes-apiserver >>> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; >>> vendor preset: disabled) >>> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; >>> 45min ago >>> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run >>> kube-apiserver (code=exited, status=1/FAILURE) >>> Main PID: 3796 (code=exited, status=1/FAILURE) >>> >>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Failed with result 'exit-code'. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Service RestartSec=100ms expired, scheduling >>> restart. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Scheduled restart job, restart counter is at 6. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> Stopped kubernetes-apiserver. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Start request repeated too quickly. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Failed with result 'exit-code'. >>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> Failed to start kubernetes-apiserver. >>> >>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u >>> kube-apiserver >>> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 >>> 16:17:00 UTC. -- >>> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> Started kubernetes-apiserver. 
>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >>> Flag --insecure-bind-address has been deprecated, This flag will be removed >>> in a future version. >>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >>> Flag --insecure-port has been deprecated, This flag will be removed in a >>> future version. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >>> Error: error creating self-signed certificates: open >>> /var/run/kubernetes/apiserver.crt: permission denied >>> : >>> : >>> : >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: >>> error: error creating self-signed certificates: open >>> /var/run/kubernetes/apiserver.crt: permission denied >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Failed with result 'exit-code'. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Service RestartSec=100ms expired, scheduling >>> restart. >>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: >>> kube-apiserver.service: Scheduled restart job, restart counter is at 1. >>> >>> >>> May I ask for an help on this ? >>> >>> Many thanks >>> /Giuseppe >>> >>> >>> >>> >>> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Feb 20 19:27:12 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 20 Feb 2019 14:27:12 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On Wed, Feb 20, 2019 at 9:52 AM Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > I considered top-posting just because you said this :P > * How do you account for the low number of candidates? Do you > consider this a problem? Why or why not? > I suspect that this is a combination of a few things: * A decline in contributors who have enough time dedicated to spend contributing to the TC. 
We're far down the hill of the hype cycle, and not as many people can get time from their employers to do the "softer" things in the community. I'm not sure if this is a problem or not - does the current TC feel overloaded with work? If not, maybe we don't need so many people. * A decline in contributors who think being on the TC can help further their goals. Most contributors have focused technical goals, and being a part of the TC usually doesn't accelerate those. This seems fine to me, though I'd love to have more people with larger technical goals (more on this later). * The change in the perceived role of the TC. I'll dig into this more in the next question. * Compare and contrast the role of the TC now to 4 years ago. If you > weren't around 4 years ago, comment on the changes you've seen > over the time you have been around. In either case: What do you > think the TC role should be now? > As Sylvain mentioned, this is near the time of the big tent reform. Until then, the TC was getting into technical details within/between projects. The big tent reform was an admission that the TC overseeing technical bits doesn't scale to something OpenStack-sized. As such, the role of the TC has become far less technical, instead becoming stewards of the technical community. In some ways, this is a good thing, as it gives projects autonomy to do what is right for their project's users. However, this means that there are few people driving for larger OpenStack-wide changes to improve the experience for deployers, operators, and users. There are some awesome people (you know who you are) making smaller improvements that improve the story, but nothing like the architectural changes we need to really fix some of our underlying issues. I believe the TC needs to drive more of these big-picture changes, rather than only focusing on governance. I'm not sure if that means doing the research to decide what to focus on, writing POCs and doing performance tests, doing the actual implementation, or just herding the right cats to get it done. I'm also unsure how much time TC members would be able to commit to this. But, I think without the TC driving things, it will never get done. * What, to you, is the single most important thing the OpenStack > community needs to do to ensure that packagers, deployers, and > hobbyist users of OpenStack are willing to consistently upstream > their fixes and have a positive experience when they do? What is > the TC's role in helping make that "important thing" happen? > Mohammed mentioned our tooling not being great here, and I agree. But we've also decided time and time again that's not changing, so. I think what the community needs to be doing is to be willing to spend the time mentoring these people, and holding their hand while they stumble through gerrit or writing complex tests. We should also be willing to take a patch from a contributor (whether by gerrit, email, or bar napkin), and finish it for them. For example, An operator that knows just enough python to find and fix an off-by-one error probably isn't going to be able to fix the unit tests or think through upgrade concerns. I actually think we do a pretty good job with this today. Of course, it can always improve, so I'd like to see us continue that. As far as the TC's role, I think they should continue to encourage this behavior, and maybe make some sort of push to communicating the fact that we're willing to help outside of our usual channels. 
Busy users probably aren't reading this mailing list much - we should find some more accessible ways to call for these contributions. > * If you had a magic wand and could inspire and make a single > sweeping architectural or software change across the services, > what would it be? For now, ignore legacy or upgrade concerns. > What role should the TC have in inspiring and driving such > changes? > The fun question! Note: these are strong opinions, held loosely. If someone can prove that these changes won't improve OpenStack, I'm happy to drop them. :) I agree with Mohammed, using rabbitmq for RPC isn't working out. I'd like us to be using HTTP and service discovery. I also think that running more than one agent on a hypervisor isn't productive. These agents are fairly tightly coupled and are interacting with the same resources - we should combine them into a single agent which services talk to (over HTTP, of course). This should be organized under a single "compute node" or "hypervisor" team. This aligns the team more with a layer of the stack, rather than the APIs that abstracts those layers. Bonus points if this agent becomes easier to deploy - a container or a statically linked binary makes the hypervisor much easier to manage. Just image or PXE boot an OS, drop the binary in, and go. Last, we need to fix the developer experience in OpenStack. In my experience, tooling that allows developers to iterate on changes quickly is the number one quality of life improvement a software project can do. Our services often take tens of minutes to run unit tests, and getting devstack up and running can easily take an hour. This is a huge turn-off for casual contributors, and a huge timesink for regular contributors. As mentioned above, I believe that changes like this are fundamental to the future of OpenStack. We keep improving, but without fixing the underlying architectural issues, using or running OpenStack will always be painful. I believe that the TC needs to lead these initiatives, and continue to push on them, or else they won't get done. > * What can the TC do to make sure that the community (in its many > dimensions) is informed of and engaged in the discussions and > decisions of the TC? > Honestly, I don't believe that the average OpenStack community member really cares much about the discussions and decisions of the TC. Most of these don't directly affect said average person. See the next question. One thing that the average community member does seem to care about is the goals process. I believe this is because these are technical changes which improve OpenStack as a whole. We should do more of that. > * How do you counter people who assert the TC is not relevant? > (Presumably you think it is, otherwise you would not have run. If > you don't, why did you run?) > Again, I don't think most of what the TC does is relevant to the average community member. I think that the TC needs to be more technically focused, and as such will be more relevant to the community. I hope to help steer us this way. That's probably more than enough. Thanks for your attention. > Thanks for asking! // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Wed Feb 20 19:32:33 2019 From: ayoung at redhat.com (Adam Young) Date: Wed, 20 Feb 2019 14:32:33 -0500 Subject: Edit of a Summit submission Message-ID: There is a typo on on of my summit submission abstracts. How do I go about getting it edited now? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Wed Feb 20 19:35:44 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 20 Feb 2019 19:35:44 +0000 Subject: [baremetal-sig][ironic] Bare Metal SIG First Steps In-Reply-To: <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org>, <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C2B3873@EX10MBOX03.pnnl.gov> +1 to etherpad for now a small boothable demo is interesting. The cloud native folks like to use tiny raspberry pi clusters. That would be even more interesting in some ways I think. Show just how little hardware it takes to get ironic going. Maybe even with one of these: https://clusterhat.com/ Thanks, Kevin ________________________________________ From: Chris Hoge [chris at openstack.org] Sent: Wednesday, February 20, 2019 11:13 AM To: openstack-discuss at lists.openstack.org Subject: [baremetal-sig][ironic] Bare Metal SIG First Steps Monday the patch for the creation of the Baremetal-SIG was approved by the TC and UC [1]. It's exciting to see the level of interest we've already seen in the planning etherpad [2], and it's time to start kicking off our first initiatives. I'd like to begin by addressing some of the comments in the patch. * Wiki vs Etherpad. My own personal preference is to start with the Etherpad as we get our feet underneath us. As more artifacts and begin to materialize, I think a Wiki would be an excellent location for hosting the information. My primary concern with Wikis is their tendency (from my point of view) to become out of date with the goals of a group. So, to begin with, unless there are any strong objections, we can do initial planning on the Etherpad and graduate to more permanent and resilient landing pages later. * Addressing operational aspects of Ironic. I see this as an absolutely critical aspect of the SIG. We already have organization devoted mostly to development, the Ironic team itself. SIGs are meant to be a collaborative effort between developers, operators, and users. We can send a patch up to clarify that in the governance document. If you are an operator, please use this [baremetal-sig] subject heading to start discussions and organize shared experiences and documentation. * The SIG is focused on all aspects of running bare-metal and Ironic, whether it be as a driver to Nova, a stand-alone service, or built into another project as a component. One of the amazing things about Ironic is its flexibility and versatility. We want to highlight that there's more than one way to do things with Ironic. * Chairs. I would very much like for this to be a community experience, and welcome nominations for co-chairs. I've found in the past that 2-3 co-chairs makes for a good balance, and given the number of people who have expressed interest in the SIG in general I think we should go ahead and appoint two extra people to co-lead the SIG. If this interests you, please self-nominate here and we can use lazy consensus to round out the rest of the leadership. If we have several people step up, we can consider a stronger form of voting using the systems available to us. 
First goals: I think that an important first goal is in the publication of a whitepaper outlining the motivation, deployment methods, and case studies surrounding OpenStack bare metal, similar to what we did with the containers whitepaper last year. A goal would be to publish at the Denver Open Infrastructure summit. Some initial thoughts and rough schedule can be found here [3], and also linked from the planning etherpad. One of the nice things about working on the whitepaper is we can also generate a bunch of other expanded content based on that work. In particular, I'd very much like to highlight deployment scenarios and case studies. I'm thinking of the whitepaper as a seed from which multiple authors demonstrate their experience and expertise to the benefit of the entire community. Another goal we've talked about at the Foundation is the creation of a new bare metal logo program. Distinct from something like the OpenStack Powered Trademark, which focuses on interoperability between OpenStack products with an emphasis on interoperability, this program would be devoted to highlighting products that are shipping with Ironic as a key component of their bare metal management strategy. This could be in many different configurations, and is focused on the shipping of code that solves particular problems, whether Ironic is user-facing or not. We're very much in the planning stages of a program like this, and it's important to get community feedback early on about if you would find it useful and what features you would like to see a program like this have. A few items that we're very interested in getting early feedback on are: * The Pixie Boots mascot has been an important part of the Ironic project, and we would like to apply it to highlight Ironic usage within the logo program. * If you're a public cloud, sell a distribution, provide installation services, or otherwise have some product that uses Ironic, what is your interest in participating in a logo program? * In addition to the logo, would you find collaboration to build content on how Ironic is being used in projects and products in our ecosystem useful? Finally, we have the goals of producing and highlighting content for using and operating Ironic. A list of possible use-cases is included in the SIG etherpad. We're also thinking about setting up a demo booth with a small set of server hardware to demonstrate Ironic at the Open Infrastructure summit. On all of those items, your feedback and collaboration is essential. Please respond to this mailing list if you have thoughts or want to volunteer for any of these items, and also contribute to the etherpad to help organize efforts and add any resources you might have available. Thanks to everyone, and I'll be following up soon with more information and updates. -Chris [1] https://review.openstack.org/#/c/634824/ [2] https://etherpad.openstack.org/p/bare-metal-sig [3] https://etherpad.openstack.org/p/bare-metal-whitepaper From allison at openstack.org Wed Feb 20 19:37:15 2019 From: allison at openstack.org (Allison Price) Date: Wed, 20 Feb 2019 13:37:15 -0600 Subject: Edit of a Summit submission In-Reply-To: References: Message-ID: Hi Adam, You can email our team at speakersupport at openstack.org and we can get that fixed for you. Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org > On Feb 20, 2019, at 1:32 PM, Adam Young wrote: > > There is a typo on on of my summit submission abstracts. How do I go about getting it edited now? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From sol.kuczala at gmail.com Wed Feb 20 19:51:22 2019 From: sol.kuczala at gmail.com (Soledad Kuczala) Date: Wed, 20 Feb 2019 16:51:22 -0300 Subject: Outreachy applicant Message-ID: Hi, My name is Sol and I'm participating already in Outreachy. I was working with the documentation and Sofia and I think I have succesfully set up my environment with Devstack. I join the new channel on IRC they've created (#openstack.outreachy) and I just wanted to let you know that. I'll be waiting for the next step. Meanwhile I'll be checking bugs on Cinder in the low-hanging-fruit. Also I'll be expecting to participate on next meeting. Thanks Sol -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Wed Feb 20 19:52:38 2019 From: openstack at medberry.net (David Medberry) Date: Wed, 20 Feb 2019 12:52:38 -0700 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: Just a note from the hotel used at both Denver PTGs... There are no horns at the old Stapleton Renaissance now. They have probably raised their rates as a result. Of course, this is too late for most/all of us to appreciate, but sharing nonetheless. -dave ---------- Forwarded message --------- From: Mioduchoski, Lauren Date: Wed, Feb 20, 2019 at 12:48 PM Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! To: Good afternoon, We are very excited to announce that as of March 1st all intersections along Denver’s A Line light rail train will be Quiet Zones! This means the A Line trains will no longer use their horns when passing through intersections (unless there is an unusual, emergent situation). I know during your stay here, you shared in your feedback that the train noise was disruptive so I wanted to personally reach out and share this exciting news with you! We hope to welcome you back to our hotel in the future so that you can enjoy our Quiet Zone and our newly renovated rooms! Sincerely, LAUREN MIODUCHOSKI FRONT OFFICE MANAGER Renaissance Denver Hotel 3801 Quebec St, Denver, CO 80207 T 303.399.7500 F 303.321.1966 Renaissance Hotels Renhotels.com l facebook.com/renhotels l twitter.com/renhotels Notice: This e-mail message and or fax is confidential, intended only for the named recipient(s) above and may contain information that is privileged, attorney work product or exempt from disclosure under applicable law. If you have received this message in error, or are not the named recipient(s), please immediately notify the sender at 303.399.7500 and delete/destroy this e-mail message/fax. Thank you. From mriedemos at gmail.com Wed Feb 20 19:58:40 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 20 Feb 2019 13:58:40 -0600 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: <852ac4a3-d2c8-7b21-f9d0-07f7b6cc58aa@gmail.com> On 2/20/2019 1:52 PM, David Medberry wrote: > Just a note from the hotel used at both Denver PTGs... There are no > horns at the old Stapleton Renaissance now. They have probably raised > their rates as a result. Of course, this is too late for most/all of > us to appreciate, but sharing nonetheless. > > -dave I wouldn't be caught dead staying there *unless* there were horns. 
-- Thanks, Matt From senrique at redhat.com Wed Feb 20 20:02:30 2019 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 20 Feb 2019 17:02:30 -0300 Subject: In-Reply-To: References: Message-ID: Welcome, Ramsha! You always can use a Virtual Machine (VM) on Windows. I personally use Fedora, but you can use any distribution. 1. First, I recommend you to read about Devstack [1] (It's a series of scripts used to quickly bring up a complete OpenStack environment) 2. Try to follow the guide [1] and install Devstack on the host machine. 3. Read the [2] developers guide. Maybe this guide is old but could help you [3]. Let me know if you have any questions! Sofi [1] https://docs.openstack.org/devstack/latest/ [2] https://docs.openstack.org/infra/manual/developers.html [3] https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ On Wed, Feb 20, 2019 at 7:36 AM Ramsha Azeemi wrote: > > hi! i am windows user is it necessary to be a linux ubuntu user for > contribution in openstack projects. > > -- Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina +541143297471 (8426471) senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Feb 20 20:06:20 2019 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 20 Feb 2019 17:06:20 -0300 Subject: Outreachy contribution In-Reply-To: References: Message-ID: Hi Peetpal, Welcome to OpenStack! I think you can find the steps to start on the Outreachy web. However, this could help you: 1. First, I recommend you to read about Devstack [1] (It's a series of scripts used to quickly bring up a complete OpenStack environment) 2. Try to follow the guide [1] and install Devstack on the host machine. 3. Read the [2] developers guide. Maybe this guide is old but could help you [3]. Let me know if you have any questions! Sofi [1] https://docs.openstack.org/devstack/latest/ [2] https://docs.openstack.org/infra/manual/developers.html [3] https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ On Wed, Feb 20, 2019 at 2:59 PM Preetpal Kaur wrote: > Hi! > I am Preetpal Kaur new in open source. I want to contribute to open > source with the help of outreachy. > I choose this project to contribute to.OpenStack Manila Integration > with OpenStack CLI (OSC) > So @Sofia Enriquez Can you please guide me on how to start > > -- > Preetpal Kaur > https://preetpalk.wordpress.com/ > https://github.com/Preetpalkaur3701 > > -- Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina +541143297471 (8426471) senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Feb 20 20:08:24 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Feb 2019 15:08:24 -0500 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: David Medberry writes: > Just a note from the hotel used at both Denver PTGs... There are no > horns at the old Stapleton Renaissance now. They have probably raised > their rates as a result. Of course, this is too late for most/all of > us to appreciate, but sharing nonetheless. > > -dave I'll believe it when I don't hear it. 
-- Doug From senrique at redhat.com Wed Feb 20 20:11:41 2019 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 20 Feb 2019 17:11:41 -0300 Subject: Outreachy applicant In-Reply-To: References: Message-ID: Hey Sol, It's great to hear from you here! If you've already installed Devstack, you can start contributing to Manila! You can find *low-hanging-fruits* bugs for the first contribution [1]. Let me know if you need help! Sofi [1] https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit On Wed, Feb 20, 2019 at 4:54 PM Soledad Kuczala wrote: > Hi, > My name is Sol and I'm participating already in Outreachy. > I was working with the documentation and Sofia and I think I have > succesfully set up my environment with Devstack. I join the new channel on > IRC they've created (#openstack.outreachy) and I just wanted to let you > know that. I'll be waiting for the next step. Meanwhile I'll be checking > bugs on Cinder in the low-hanging-fruit. > Also I'll be expecting to participate on next meeting. > > Thanks > > Sol > > -- Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina +541143297471 (8426471) senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Wed Feb 20 20:12:04 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Wed, 20 Feb 2019 15:12:04 -0500 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: <20190220201204.scim3tskzzvudrks@csail.mit.edu> On Wed, Feb 20, 2019 at 12:52:38PM -0700, David Medberry wrote: :Just a note from the hotel used at both Denver PTGs... There are no :horns at the old Stapleton Renaissance now. They have probably raised :their rates as a result. Of course, this is too late for most/all of :us to appreciate, but sharing nonetheless. Raised their rates! I that was one of the most expensive room I've gotten (inluding places like NYC and Paris), and train horns were hardly the only problem. I did not like that place ;) -Jon :-dave : :---------- Forwarded message --------- :From: Mioduchoski, Lauren :Date: Wed, Feb 20, 2019 at 12:48 PM :Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) :is OFFICIAL for the A line Light Rail! :To: : : :Good afternoon, : :We are very excited to announce that as of March 1st all intersections :along Denver’s A Line light rail train will be Quiet Zones! This means :the A Line trains will no longer use their horns when passing through :intersections (unless there is an unusual, emergent situation). I know :during your stay here, you shared in your feedback that the train :noise was disruptive so I wanted to personally reach out and share :this exciting news with you! : : : :We hope to welcome you back to our hotel in the future so that you can :enjoy our Quiet Zone and our newly renovated rooms! : : : :Sincerely, : : : :LAUREN MIODUCHOSKI : :FRONT OFFICE MANAGER : :Renaissance Denver Hotel : :3801 Quebec St, Denver, CO 80207 : :T 303.399.7500 F 303.321.1966 : :Renaissance Hotels : :Renhotels.com l facebook.com/renhotels l twitter.com/renhotels : : : :Notice: This e-mail message and or fax is confidential, intended only :for the named recipient(s) above and may contain information that is :privileged, attorney work product or exempt from disclosure under :applicable law. 
If you have received this message in error, or are not :the named recipient(s), please immediately notify the sender at :303.399.7500 and delete/destroy this e-mail message/fax. Thank you. : From a.settle at outlook.com Wed Feb 20 20:15:35 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Wed, 20 Feb 2019 20:15:35 +0000 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: Is it sad I'm almost disappointed by the lack of said horns? I'll have to find something else to complain about drunkenly next to a piano. They don't grow on trees ya know... On 20/02/2019 19:52, David Medberry wrote: > Just a note from the hotel used at both Denver PTGs... There are no > horns at the old Stapleton Renaissance now. They have probably raised > their rates as a result. Of course, this is too late for most/all of > us to appreciate, but sharing nonetheless. > > -dave > > ---------- Forwarded message --------- > From: Mioduchoski, Lauren > Date: Wed, Feb 20, 2019 at 12:48 PM > Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) > is OFFICIAL for the A line Light Rail! > To: > > > Good afternoon, > > We are very excited to announce that as of March 1st all intersections > along Denver’s A Line light rail train will be Quiet Zones! This means > the A Line trains will no longer use their horns when passing through > intersections (unless there is an unusual, emergent situation). I know > during your stay here, you shared in your feedback that the train > noise was disruptive so I wanted to personally reach out and share > this exciting news with you! > > > > We hope to welcome you back to our hotel in the future so that you can > enjoy our Quiet Zone and our newly renovated rooms! > > > > Sincerely, > > > > LAUREN MIODUCHOSKI > > FRONT OFFICE MANAGER > > Renaissance Denver Hotel > > 3801 Quebec St, Denver, CO 80207 > > T 303.399.7500 F 303.321.1966 > > Renaissance Hotels > > Renhotels.com l facebook.com/renhotels l twitter.com/renhotels > > > > Notice: This e-mail message and or fax is confidential, intended only > for the named recipient(s) above and may contain information that is > privileged, attorney work product or exempt from disclosure under > applicable law. If you have received this message in error, or are not > the named recipient(s), please immediately notify the sender at > 303.399.7500 and delete/destroy this e-mail message/fax. Thank you. > From sol.kuczala at gmail.com Wed Feb 20 20:15:29 2019 From: sol.kuczala at gmail.com (Soledad Kuczala) Date: Wed, 20 Feb 2019 17:15:29 -0300 Subject: Outreachy applicant In-Reply-To: References: Message-ID: Thanks, I'll check it out! El mié., 20 de feb. de 2019 a la(s) 17:11, Sofia Enriquez ( senrique at redhat.com) escribió: > Hey Sol, It's great to hear from you here! > > If you've already installed Devstack, you can start contributing to > Manila! You can find *low-hanging-fruits* bugs for the first contribution > [1]. > > Let me know if you need help! > Sofi > > [1] https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit > > On Wed, Feb 20, 2019 at 4:54 PM Soledad Kuczala > wrote: > >> Hi, >> My name is Sol and I'm participating already in Outreachy. >> I was working with the documentation and Sofia and I think I have >> succesfully set up my environment with Devstack. I join the new channel on >> IRC they've created (#openstack.outreachy) and I just wanted to let you >> know that. I'll be waiting for the next step. 
Meanwhile I'll be checking >> bugs on Cinder in the low-hanging-fruit. >> Also I'll be expecting to participate on next meeting. >> >> Thanks >> >> Sol >> >> > > -- > > Sofia Enriquez > > Associate Software Engineer > Red Hat PnT > > Ingeniero Butty 240, Piso 14 > > (C1001AFB) Buenos Aires - Argentina > +541143297471 (8426471) > > senrique at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramshaazeemi2 at gmail.com Wed Feb 20 20:36:06 2019 From: ramshaazeemi2 at gmail.com (Ramsha Azeemi) Date: Thu, 21 Feb 2019 01:36:06 +0500 Subject: In-Reply-To: References: Message-ID: Thanks, I'll check them out. On Thu, Feb 21, 2019 at 1:02 AM Sofia Enriquez wrote: > Welcome, Ramsha! > > You always can use a Virtual Machine (VM) on Windows. I personally use > Fedora, but you can use any distribution. > > 1. First, I recommend you to read about Devstack [1] (It's a series of > scripts used to quickly bring up a complete OpenStack environment) > 2. Try to follow the guide [1] and install Devstack on the host > machine. > 3. Read the [2] developers guide. > > Maybe this guide is old but could help you [3]. > > Let me know if you have any questions! > Sofi > > [1] https://docs.openstack.org/devstack/latest/ > [2] https://docs.openstack.org/infra/manual/developers.html > [3] > https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ > > On Wed, Feb 20, 2019 at 7:36 AM Ramsha Azeemi > wrote: > >> >> hi! i am windows user is it necessary to be a linux ubuntu user for >> contribution in openstack projects. >> >> > > > -- > > Sofia Enriquez > > Associate Software Engineer > Red Hat PnT > > Ingeniero Butty 240, Piso 14 > > (C1001AFB) Buenos Aires - Argentina > +541143297471 (8426471) > > senrique at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Feb 20 21:20:06 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 Feb 2019 21:20:06 +0000 Subject: In-Reply-To: References: Message-ID: On Thu, 2019-02-21 at 01:36 +0500, Ramsha Azeemi wrote: > Thanks, I'll check them out. for what its worth i personally used windows as my devepmetn clinetsystem and connect to a remove linux system to run openstack on for most of thetime i contributed to openstack. one thing i found useful was to use cygwin on windows to provide a linuxlike environment. that allow my to git clone the repos and in many case wasenought to allow me to run unitests, pep8 style check or docs envs locally on windows. the linux subsystem for windows will similarly help. openstack does use a number of c module that may or may not be available on windowspython distobutions so core service like nova often do not work in there entirity butyou will find that most of the command line client or webservices, espcally any of the webservice that can run under wsgi will actully work on windows. with all of that said if you want to deploy and run openstack with devstack or other toolsyou will be best served by spingnin up a linux vm with hyperv or virtual box and sshing into that in our upstream ci we typically use 8G vms with ~8 cpus and 50G of storage but you can actully reduce thediskspace down to about 20G and its typeicaly fine for development. the extra storage in the ci is for logs andto allow testing ot the storage services of opesntack. 
anyway the point i wanted to make is often you can make small change to openstack on windows without needing linux but your milage may vary and most development will typically be eaiser on linuxbut its not required for everything. > On Thu, Feb 21, 2019 at 1:02 AM Sofia Enriquez wrote: > > Welcome, Ramsha! > > > > You always can use a Virtual Machine (VM) on Windows. I personally use Fedora, but you can use any distribution. > > First, I recommend you to read about Devstack [1] (It's a series of scripts used to quickly bring up a complete > > OpenStack environment)Try to follow the guide [1] and install Devstack on the host machine.Read the [2] developers > > guide.Maybe this guide is old but could help you [3]. > > > > Let me know if you have any questions! > > Sofi > > > > > > [1] https://docs.openstack.org/devstack/latest/ > > [2] https://docs.openstack.org/infra/manual/developers.html > > [3] https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ > > > > On Wed, Feb 20, 2019 at 7:36 AM Ramsha Azeemi wrote: > > > hi! i am windows user is it necessary to be a linux ubuntu user for contribution in openstack projects. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Wed Feb 20 21:54:33 2019 From: sbaker at redhat.com (Steve Baker) Date: Thu, 21 Feb 2019 10:54:33 +1300 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled In-Reply-To: <20190220041555.54yc5diqviszvb6e@redhat.com> References: <20190220041555.54yc5diqviszvb6e@redhat.com> Message-ID: On 20/02/19 5:15 PM, Lars Kellogg-Stedman wrote: > I'm using the tripleo standalone install to set up an Ironic test > environment. With recent tripleo master, the deploy started failing > because the DockerOvn*Image parameters weren't defined. Here's what I > did to get everything working: > > 1. I added to my deploy: > > -e /usr/share/tripleo-heat-templates/environment/services/neutron-ovn-standalone.yaml > > With this change, `openstack tripleo container image prep` > correctly detected that ovn was enabled and generated the > appropriate image parameters. > > 2. environments/services/ironic.yaml sets: > > NeutronMechanismDrivers: ['openvswitch', 'baremetal'] > > Since I didn't want openvswitch enabled in this deployment, I > explicitly set the mechanism drivers in a subsequent environment > file: > > NeutronMechanismDrivers: ['ovn', 'baremetal'] Can you provide your full deployment command. I think it is most likely that the order of environment files is resulting in an incorrect value in NeutronMechanismDrivers. You may be able to confirm this by looking at the resulting plan file with something like: openstack object save --file - overcloud plan-environment.yaml > > 3. The neutron-ovn-standalone.yaml environment explicitly disables > the non-ovn neutron services. Ironic requires the > services of the neutron_dhcp_agent, so I had to add: > > OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-dhcp-container-puppet.yaml > > With this in place, the ironic nodes were able to receive dhcp > responses and were able to boot. > > 3. In order to provide the baremetal nodes with a route to the nova > metadata service, I added the following to my deploy: > > NeutronEnableForceMetadata: true > > This provides the baremetal nodes with a route to 169.254.169.254 > via the neutron dhcp namespace. > > 4. 
In order get the metadata service to respond correctly, I also had > to enable the neutron metadata agent: > > OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack-tripleo-heat-templates/deployment/neutron/neutron-metadata-container-puppet.yaml > > This returned my Ironic deployment to a functioning state: I can > successfully boot baremetal nodes and provide them with configuration > information via the metadata service. > > I'm curious if this was the *correct* solution, or if there was a > better method of getting things working. > From senrique at redhat.com Wed Feb 20 22:16:21 2019 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 20 Feb 2019 19:16:21 -0300 Subject: In-Reply-To: References: Message-ID: Thank you very much, Sean!! Your reply is constructive. Sofi On Wed, Feb 20, 2019 at 6:20 PM Sean Mooney wrote: > On Thu, 2019-02-21 at 01:36 +0500, Ramsha Azeemi wrote: > > Thanks, I'll check them out. > > > for what its worth i personally used windows as my devepmetn clinet > system and connect to a remove linux system to run openstack on for most > of the > time i contributed to openstack. > > one thing i found useful was to use cygwin on windows to provide a linux > like environment. that allow my to git clone the repos and in many case was > enought to allow me to run unitests, pep8 style check or docs envs locally > on windows. > > the linux subsystem for windows will similarly help. > > openstack does use a number of c module that may or may not be available > on windows > python distobutions so core service like nova often do not work in there > entirity but > you will find that most of the command line client or webservices, > espcally any of the web > service that can run under wsgi will actully work on windows. > > > with all of that said if you want to deploy and run openstack with > devstack or other tools > you will be best served by spingnin up a linux vm with hyperv or virtual > box and sshing into that > > > in our upstream ci we typically use 8G vms with ~8 cpus and 50G of storage > but you can actully reduce the > diskspace down to about 20G and its typeicaly fine for development. the > extra storage in the ci is for logs and > to allow testing ot the storage services of opesntack. > > anyway the point i wanted to make is often you can make small change to > openstack on windows > without needing linux but your milage may vary and most development will > typically be eaiser on linux > but its not required for everything. > > > On Thu, Feb 21, 2019 at 1:02 AM Sofia Enriquez > wrote: > > Welcome, Ramsha! > > You always can use a Virtual Machine (VM) on Windows. I personally use > Fedora, but you can use any distribution. > > 1. First, I recommend you to read about Devstack [1] (It's a series of > scripts used to quickly bring up a complete OpenStack environment) > 2. Try to follow the guide [1] and install Devstack on the host > machine. > 3. Read the [2] developers guide. > > Maybe this guide is old but could help you [3]. > > Let me know if you have any questions! > Sofi > > [1] https://docs.openstack.org/devstack/latest/ > [2] https://docs.openstack.org/infra/manual/developers.html > [3] > https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ > > On Wed, Feb 20, 2019 at 7:36 AM Ramsha Azeemi > wrote: > > > hi! i am windows user is it necessary to be a linux ubuntu user for > contribution in openstack projects. 
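To make that concrete, the keystoneauth-adapter style of configuration looks roughly like the snippet below on the nova side: the endpoint is discovered from the keystone service catalog via region/interface selection rather than being pasted into the file as a URL. The option names are the standard keystoneauth ones nova registers for its [placement] section, the values are placeholders, and endpoint_override stays commented out unless you genuinely need to pin a URL:

    [placement]
    # credentials used to authenticate against keystone (placeholder values)
    auth_type = password
    auth_url = http://keystone.example.com/identity/v3
    username = placement
    password = secret
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    # the endpoint itself comes from the service catalog, not from config
    region_name = RegionOne
    valid_interfaces = internal
    # endpoint_override = http://placement.example.com  # last resort only

Rolling that pattern out to the remaining hard-coded URL options across projects is essentially what such a cross-project goal would amount to.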
> > > > > -- Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina +541143297471 (8426471) senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Wed Feb 20 22:29:30 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 20 Feb 2019 17:29:30 -0500 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled In-Reply-To: References: <20190220041555.54yc5diqviszvb6e@redhat.com> Message-ID: <20190220222930.ousgu5inihio6aac@redhat.com> On Thu, Feb 21, 2019 at 10:54:33AM +1300, Steve Baker wrote: > > 2. environments/services/ironic.yaml sets: > > > > NeutronMechanismDrivers: ['openvswitch', 'baremetal'] > > > > Since I didn't want openvswitch enabled in this deployment, I > > explicitly set the mechanism drivers in a subsequent environment > > file: > > > > NeutronMechanismDrivers: ['ovn', 'baremetal'] > > Can you provide your full deployment command. I think it is most likely that > the order of environment files is resulting in an incorrect value in > NeutronMechanismDrivers. You may be able to confirm this by looking at the > resulting plan file with something like: > > openstack object save --file - overcloud plan-environment.yaml The arguments to 'tripleo deploy' look like this: deploy_args=( -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml # Thisssets NeutronMechanismDrivers: ['openvswitch', 'baremetal'] -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml # This sets NeutronMechanismDrivers: ovn and disables # OS::TripleO::Services::NeutronMetadataAgent and # OS::TripleO::Services::NeutronDhcpAgent -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-standalone.yaml # This sets NeutronMechanismDrivers: ['ovn', 'baremetal'] and re-enables # OS::TripleO::Services::NeutronMetadataAgent and # OS::TripleO::Services::NeutronDhcpAgent -e ./standalone_parameters.yaml ) The above is used in the following command line: sudo openstack tripleo deploy \ --templates $TEMPLATES \ --local-ip=192.168.23.1/24 \ --output-dir deploy \ --standalone \ "${deploy_args[@]}" \ -e ./container-images.yaml -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From mriedemos at gmail.com Wed Feb 20 22:33:30 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 20 Feb 2019 16:33:30 -0600 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <418aebee-14b0-bc87-d90b-10a88bc8f9f8@gmail.com> On 2/20/2019 9:24 AM, Mohammed Naser wrote: > - Use Keystone as an authoritative service catalog, stop having to configure > URLs for services inside configuration files. It's confusing and unreliable > and causes a lot of breakages often. Nova did this in pike and queens [1] and is probably a good "show and tell" kind of thing that could be done as a cross-project community goal, or at least for the projects that depend on other openstack services. I think the one thing nova hasn't done this with yet is cinder [2]. 
[1] https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/use-ksa-adapter-for-endpoints.html [2] https://review.openstack.org/#/c/508345/ -- Thanks, Matt From duc.openstack at gmail.com Thu Feb 21 00:21:11 2019 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 20 Feb 2019 16:21:11 -0800 Subject: [senlin] Senlin Monthly(ish) Newsletter Jan/Feb 2019 Message-ID: HTML: https://dkt26111.wordpress.com/2019/02/21/senlin-monthlyish-newsletter-january-february-2019/ This is the January/February edition of the Senlin monthly(ish) newsletter. The goal of the newsletter is to highlight happenings in the Senlin project. If you have any feedback or questions regarding the contents, please feel free to reach out to me in the #senlin IRC channel. News ---- * We are almost at Stein-3 milestone which coincides with the feature freeze during the week of March 4. Please submit your changes for review before then. Blueprint Status ---------------- * Fail fast locked resource - https://blueprints.launchpad.net/senlin/+spec/fail-fast-locked-resource - Working on documentation and release notes. * Multiple detection modes - https://blueprints.launchpad.net/senlin/+spec/multiple-detection-modes - Working on documentation and release notes. Community Goal Status --------------------- * Python 3 - All patches by Python 3 goal champions for zuul migration, documentation and unit test changes have been merged. - We have set py35 functional tests as voting in gate. * Upgrade Checkers - I have added a patch set to check for unsupported health policies: https://review.openstack.org/#/c/638284/ Reviews Needed -------------- * Improve Health Manager to avoid duplicate health checks: https://review.openstack.org/#/c/634811/ * Fixing devstack tempest jobs for master and stable/rocky: https://review.openstack.org/#/c/637664/ https://review.openstack.org/#/c/635638/ From lars at redhat.com Thu Feb 21 00:21:32 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 20 Feb 2019 19:21:32 -0500 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled In-Reply-To: References: <20190220041555.54yc5diqviszvb6e@redhat.com> Message-ID: <20190221002132.a7tzwh7qxv55k3mi@redhat.com> On Thu, Feb 21, 2019 at 10:54:33AM +1300, Steve Baker wrote: > > 1. I added to my deploy: > > > > -e /usr/share/tripleo-heat-templates/environment/services/neutron-ovn-standalone.yaml > > > > With this change, `openstack tripleo container image prep` > > correctly detected that ovn was enabled and generated the > > appropriate image parameters. > > Can you provide your full deployment command. I think it is most likely that > the order of environment files is resulting in an incorrect value in > NeutronMechanismDrivers. You may be able to confirm this by looking at the > resulting plan file with something like: Upon closer inspection, I believe you are correct. The problem is twofold: - First, by default, NeutronMechanismDrivers is unset. So if you simply run: openstack tripleo container image prepare -e container-prepare-parameters.yaml ...you get no OVN images. - Second, the ironic.yaml environment file explicitly sets: NeutronMechanismDrivers: ['openvswitch', 'baremetal'] So if ironic.yaml is included after something like neutron-ovn-standalone.yaml, it will override the value. Is this one bug or two? Arguably, ironic.yaml shouldn't be setting NeutronMechanismDrivers explicitly like that (although I don't know if there is an "append" mechanism). 
But shouldn't NeutronMechanismDrivers default to 'ovn', if that's the default mechanism now? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From ekcs.openstack at gmail.com Thu Feb 21 00:45:45 2019 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 20 Feb 2019 16:45:45 -0800 Subject: [Congress] Congress @ PTG? Message-ID: If you are interested in Congress sessions at the upcoming PTG, please indicate it in the following two-question form! https://goo.gl/forms/NtBiaDCOUcEagLmB3 Feel free to add topics/comments at this etherpad even if you are not interested in attending. https://etherpad.openstack.org/p/congress-ptg-train Thank you! From lbragstad at gmail.com Thu Feb 21 00:49:44 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 20 Feb 2019 18:49:44 -0600 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <15e37e83-0310-5258-0662-65c650b4ccfd@gmail.com> On 2/20/19 11:23 AM, Sylvain Bauza wrote: > Thanks Chris for asking us questions so we can clarify our opinions. > > On Wed, Feb 20, 2019 at 3:52 PM Chris Dent > wrote: > > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > > I agree with you on this point. It's important for OpenStack to have > time to discuss about mandates. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you >    consider this a problem? Why or why not? > > > Yes, again, I agree and to be honest, when I only saw we were only > having 4 candidates 8 hours before the deadline, I said to myself "OK, > you love OpenStack. You think the TC is important. But then, why > aren't you then throwing your hat ?" > We all have opinions, right ? But then, why people don't want to be in > the TC ? Because we don't have a lot of time for it ? Or because > people think the TC isn't important ? > > I don't want to discuss about politics here. But I somehow see a > parallel in between what the TC is and what the European Union is : > both are governances not fully decision-makers but are there for > sharing same rules and vision. > If we stop having the TC, what would become OpenStack ? Just a set of > parallel projects with no common guidance ? > > The fact that a large number of candidacies went very late (including > me) is a bit concerning to me. How can we become better ? I have no > idea but saying that probably given the time investment it requires, > most of the candidacies were probably holding some management > acceptance before people would propose their names. 
Probably worth > thinking about how the investment it requires, in particular given we > have less full-time contributors that can dedicate large time for > governance. > > > * Compare and contrast the role of the TC now to 4 years ago. If you >    weren't around 4 years ago, comment on the changes you've seen >    over the time you have been around. In either case: What do you >    think the TC role should be now? > > > 4 years ago, we were in the Kilo timeframe. That's fun you mention > this period, because at that exact time of the year, the TC voted on > one of the probably most important decisions that impacted OpenStack : > The Big Tent reform [1] > Taking a look at this time, I remember frustration and hard talks but > also people committed to change things. > This decision hasn't changed a lot the existing service projects that > were before the Big Tent, but it actually created a whole new > ecosystem for developers. It had challenges but it never required to > be abandoned, which means the program is a success. > > Now the buzz is gone and the number of projects stable, the TC > necessarly has to mutate to a role of making sure all the projects > sustain the same pace and reliability. Most of the challenges for the > TC is now about defining and applying criterias for ensuring that all > our projects have a reasonable state for production. If you see my > candidacy letter, two of my main drivers for my nomination are about > upgradability and scalability concerns. > > > * What, to you, is the single most important thing the OpenStack >    community needs to do to ensure that packagers, deployers, and >    hobbyist users of OpenStack are willing to consistently upstream >    their fixes and have a positive experience when they do? What is >    the TC's role in helping make that "important thing" happen? > > > There are two very distinct reasons when a company decides to > downstream-only : either by choice or because of technical reasons. > I don't think a lot of companies decide to manage technical debt on > their own by choice. OpenStack is nearly 9 years old and most of the > users know the price it is. > > Consequently, I assume that the reasons are technical : > 1/ they're running an old version and haven't upgraded (yet). We have > good user stories of large cloud providers that invested in upgrades > (for example OVH) and see the direct benefit of it. Maybe we can > educate more on the benefits of upgrading frequently. > 2/ they think upstreaming is difficult. I'm all open to hear the > barriers they have. For what it's worth, OpenStack invested a lot in > mentoring with the FirstContact SIG, documentation and Upstream > Institute. There will probably also be a new program about > peer-mentoring and recognition [2] if the community agrees with the > idea. Honestly, I don't know what do do more. If you really can't > upstream but care about your production, just take a service contract > I guess. > >   > > * If you had a magic wand and could inspire and make a single >    sweeping architectural or software change across the services, >    what would it be? For now, ignore legacy or upgrade concerns. >    What role should the TC have in inspiring and driving such >    changes? > > > Take me as a fool but I don't think the role of the TC is to drive > architectural decision between projects. > The TC can help two projects to discuss, the TC can (somehow) help > moderate between two teams about some architectural concern but > certainly not be the driver of such change. 
Is there a particular reason why you feel this way? I think the TC is in a great position to have a profound impact on the architecture of OpenStack, with a caveat. I believe if you ask anyone with even a brief history in OpenStack, you'll dust up some architectural opinions. For example, Jim and Mohammed have already pointed out a bunch in their responses. Another example, Melanie and I had a productive discussion today about how restructuring the architecture of policy enforcement could significantly improvement usability and security [0], which certainly isn't specific to keystone or nova. I don't think we have to look very far to find excellent areas for improvement. As others have noted, the project is at a point where development and hype isn't nearly as intense as it was 4 years ago. While contributor numbers, in a way, reflect project stabilization, I also think it puts us in a prime position to address some of the architectural pain points we've grown to live with over the years. I think we can use the opportunity to make services more consistent, giving consumers and users a more refined and polished experience, among other benefits. That said, I certainly think if the TC is to _facilitate_ in architectural decisions, it needs to be done in the open and with plenty of communication and feedback with the entire community. Similar to the approach we try and take with community goals. I understand there may be a fine line in making decisions of this nature at the TC level, but I also think it presents numerous opportunities to communicate and focus efforts in a unified direction. I see that involvement range from socializing issues to advocating for sponsorship on a particular initiative to diving into the problem and helping projects directly. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-20.log.html#t2019-02-20T18:35:06 > > That doesn't mean the TC can't be technical. We have goals, for > example. But in order to have well defined goals that are > understandable by project contributors, we also need to have the > projects be the drivers of such architectural changes. > >   > > * What can the TC do to make sure that the community (in its many >    dimensions) is informed of and engaged in the discussions and >    decisions of the TC? > > > You made a very good job in providing TC feedback. I surely think the > TC has to make sure that a regular weekly feedback is provided. > For decisions that impact projects, I don't really see how TC members > can vote without getting feedback from the project contributors, so > here I see communication (thru Gerrit at least). > > >   > > * How do you counter people who assert the TC is not relevant? >    (Presumably you think it is, otherwise you would not have run. If >    you don't, why did you run?) > > > Again, I think that is a matter of considering the TC > responsibilities. We somehow need to clarify what are those > responsibilities and I think I voiced on that above. > > > > That's probably more than enough. Thanks for your attention. > > > I totally appreciate you challenging us. That's very important that > people vote based on opinions rather than popularity. 
> -Sylvain > > [1] > https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html > [2] https://review.openstack.org/#/c/636956/ > > -- > Chris Dent                       ٩◔̯◔۶           https://anticdent.org/ > freenode: cdent                                         tw: @anticdent > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jungleboyj at gmail.com Thu Feb 21 02:31:36 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 20 Feb 2019 20:31:36 -0600 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: It won't have the same ambiance without the horns. Dont' think it will stop us from Drunkenly singing next to pianos though. Jay On 2/20/2019 2:15 PM, Alexandra Settle wrote: > Is it sad I'm almost disappointed by the lack of said horns? > > I'll have to find something else to complain about drunkenly next to a > piano. > They don't grow on trees ya know... > > On 20/02/2019 19:52, David Medberry wrote: >> Just a note from the hotel used at both Denver PTGs... There are no >> horns at the old Stapleton Renaissance now. They have probably raised >> their rates as a result. Of course, this is too late for most/all of >> us to appreciate, but sharing nonetheless. >> >> -dave >> >> ---------- Forwarded message --------- >> From: Mioduchoski, Lauren >> Date: Wed, Feb 20, 2019 at 12:48 PM >> Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) >> is OFFICIAL for the A line Light Rail! >> To: >> >> >> Good afternoon, >> >> We are very excited to announce that as of March 1st all intersections >> along Denver’s A Line light rail train will be Quiet Zones! This means >> the A Line trains will no longer use their horns when passing through >> intersections (unless there is an unusual, emergent situation). I know >> during your stay here, you shared in your feedback that the train >> noise was disruptive so I wanted to personally reach out and share >> this exciting news with you! >> >> >> >> We hope to welcome you back to our hotel in the future so that you can >> enjoy our Quiet Zone and our newly renovated rooms! >> >> >> >> Sincerely, >> >> >> >> LAUREN MIODUCHOSKI >> >> FRONT OFFICE MANAGER >> >> Renaissance Denver Hotel >> >> 3801 Quebec St, Denver, CO 80207 >> >> T 303.399.7500 F 303.321.1966 >> >> Renaissance Hotels >> >> Renhotels.com l facebook.com/renhotels l twitter.com/renhotels >> >> >> >> Notice: This e-mail message and or fax is confidential, intended only >> for the named recipient(s) above and may contain information that is >> privileged, attorney work product or exempt from disclosure under >> applicable law. If you have received this message in error, or are not >> the named recipient(s), please immediately notify the sender at >> 303.399.7500 and delete/destroy this e-mail message/fax. Thank you. >> From lars at redhat.com Thu Feb 21 03:13:26 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 20 Feb 2019 22:13:26 -0500 Subject: [neutron] openvswitch switch connection timeout? Message-ID: <20190221031326.kx3lrd226ts7a64j@redhat.com> I was trying to track down some connectivity issues with some baremetal nodes booting from iSCSI LUNs provided by Cinder. It turns out that openvswitch is going belly-up. 
We see in openvswitch-agent.log a "Switch connection timeout" error [1]. Just before that, in /var/log/openvswitch/ovs-vswitchd.log, we see: 2019-02-21T00:32:38.696Z|00795|bridge|INFO|bridge br-tun: deleted interface patch-int on port 1 2019-02-21T00:32:38.696Z|00796|bridge|INFO|bridge br-tun: deleted interface br-tun on port 65534 2019-02-21T00:32:38.823Z|00797|bridge|INFO|bridge br-int: deleted interface int-br-ctlplane on port 1 2019-02-21T00:32:38.823Z|00798|bridge|INFO|bridge br-int: deleted interface br-int on port 65534 2019-02-21T00:32:38.823Z|00799|bridge|INFO|bridge br-int: deleted interface tapb0101920-b9 on port 4 2019-02-21T00:32:38.824Z|00800|bridge|INFO|bridge br-int: deleted interface patch-tun on port 3 2019-02-21T00:32:38.954Z|00801|bridge|INFO|bridge br-ctlplane: deleted interface phy-br-ctlplane on port 4 2019-02-21T00:32:38.954Z|00802|bridge|INFO|bridge br-ctlplane: deleted interface br-ctlplane on port 65534 2019-02-21T00:32:38.954Z|00803|bridge|INFO|bridge br-ctlplane: deleted interface em2 on port 3 At this point, while the output from 'ovs-vsctl' looks fine, other tools like ovs-ofctl report: ovs-ofctl: br-int is not a bridge or a socket Has anyone seen this before? [1] https://termbin.com/ipil -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From lars at redhat.com Thu Feb 21 03:58:32 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 20 Feb 2019 22:58:32 -0500 Subject: [neutron] openvswitch switch connection timeout? In-Reply-To: <20190221031326.kx3lrd226ts7a64j@redhat.com> References: <20190221031326.kx3lrd226ts7a64j@redhat.com> Message-ID: On Wed, Feb 20, 2019 at 10:13 PM Lars Kellogg-Stedman wrote: > I was trying to track down some connectivity issues with some > baremetal nodes booting from iSCSI LUNs provided by Cinder. It turns > out that openvswitch is going belly-up. We see in > openvswitch-agent.log a "Switch connection timeout" error [1]. Just > before that, in /var/log/openvswitch/ovs-vswitchd.log, we see: > > 2019-02-21T00:32:38.696Z|00795|bridge|INFO|bridge br-tun: deleted > interface patch-int on port 1 > 2019-02-21T00:32:38.696Z|00796|bridge|INFO|bridge br-tun: deleted > interface br-tun on port 65534 > 2019-02-21T00:32:38.823Z|00797|bridge|INFO|bridge br-int: deleted > interface int-br-ctlplane on port 1 > 2019-02-21T00:32:38.823Z|00798|bridge|INFO|bridge br-int: deleted > interface br-int on port 65534 > 2019-02-21T00:32:38.823Z|00799|bridge|INFO|bridge br-int: deleted > interface tapb0101920-b9 on port 4 > 2019-02-21T00:32:38.824Z|00800|bridge|INFO|bridge br-int: deleted > interface patch-tun on port 3 > 2019-02-21T00:32:38.954Z|00801|bridge|INFO|bridge br-ctlplane: > deleted interface phy-br-ctlplane on port 4 > 2019-02-21T00:32:38.954Z|00802|bridge|INFO|bridge br-ctlplane: > deleted interface br-ctlplane on port 65534 > 2019-02-21T00:32:38.954Z|00803|bridge|INFO|bridge br-ctlplane: > deleted interface em2 on port 3 > The plot thickens: it looks as if something may be doing this explicitly? 
At the same time, we see in the system journal: Thu 2019-02-21 00:32:38.697531 UTC [s=fa1c368ed0314169b286a29ffe7d9f87;i=4aa2b;b=d64947ee218546d8a94103aa9bbee154;m=4b31e6780;t=5825c9c8dda3b;x=13507b4516ddcb27] _TRANSPORT=stdout PRIORITY=6 SYSLOG_FACILITY=3 _UID=0 _GID=0 _CAP_EFFECTIVE=1fffffffff _SELINUX_CONTEXT=system_u:system_r:init_t:s0 _BOOT_ID=d64947ee218546d8a94103aa9bbee154 _MACHINE_ID=4a470fefdd3b4033a163bb69bc8578da _HOSTNAME=localhost.localdomain _SYSTEMD_SLICE=system.slice _EXE=/usr/bin/bash _STREAM_ID=951219afe7fb4eb4bf5d71af983d1f11 SYSLOG_IDENTIFIER=ovs-ctl MESSAGE=Exiting ovs-vswitchd (9340) [ OK ] _PID=868225 _COMM=ovs-ctl _CMDLINE=/bin/sh /usr/share/openvswitch/scripts/ovs-ctl --no-ovsdb-server stop _SYSTEMD_CGROUP=/system.slice/ovs-vswitchd.service/control _SYSTEMD_UNIT=ovs-vswitchd.service ...but there's nothing else around that time that seems relevant. -- Lars Kellogg-Stedman -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Thu Feb 21 06:05:52 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Thu, 21 Feb 2019 06:05:52 +0000 Subject: how to find the interfaces to exclude in SR-IOV? Message-ID: <9D8A2486E35F0941A60430473E29F15B017E85627D@MXDB2.ad.garvan.unsw.edu.au> Hi, I would like to exclude a couple of VFs from the neutron SR-IOV configuration... According to documentation https://docs.openstack.org/neutron/latest/admin/config-sriov#enable-neutron-sriov-nic-agent-compute exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2 This is my configuration [root at zeus-59 ~]# ibdev2netdev -v 0000:88:00.0 mlx5_0 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) 0000:88:00.1 mlx5_1 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) 0000:88:01.2 mlx5_10 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f2 (Up) 0000:88:01.3 mlx5_11 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f3 (Up) 0000:88:01.4 mlx5_12 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f4 (Up) 0000:88:01.5 mlx5_13 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f5 (Up) 0000:88:01.6 mlx5_14 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f6 (Up) 0000:88:01.7 mlx5_15 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f7 (Up) 0000:88:02.0 mlx5_16 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2 (Up) 0000:88:02.1 mlx5_17 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2f1 (Up) 0000:88:00.2 mlx5_2 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f2 (Up) 0000:88:00.3 mlx5_3 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f3 (Up) 0000:88:00.4 mlx5_4 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f4 (Up) 0000:88:00.5 mlx5_5 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f5 (Up) 0000:88:00.6 mlx5_6 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f6 (Up) 0000:88:00.7 mlx5_7 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f7 (Up) 0000:88:01.0 mlx5_8 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1 (Up) 0000:88:01.1 mlx5_9 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f1 (Up) ens2f2 (0000:88:00.2) and ens2f3 (0000:88:00.3) are part of bond0 which I want to assign to OVS. I would like to do something like: exclude_devices = :0000:88:00.2,0000:88:00.3 How can I find out ethX and ethY? Are they PFs? Thank you very much NOTICE Please consider the environment before printing this email. 
This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Thu Feb 21 06:36:34 2019 From: lujinluo at gmail.com (Lujin Luo) Date: Wed, 20 Feb 2019 22:36:34 -0800 Subject: [neutron] [upgrade] No meeting on Feb. 21th Message-ID: Hi team, We do not have updates since our last meeting. Thus let's do the hacking this week and resume the meeting next week! Best regards, Lujin From marios at redhat.com Thu Feb 21 07:00:04 2019 From: marios at redhat.com (Marios Andreou) Date: Thu, 21 Feb 2019 09:00:04 +0200 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: I'll just leave this here https://www.dropbox.com/s/9xtmh7n0664rd4l/DenverChooChoo.mp4?dl=0 you can play it on a loop in your room. you're welcome! On Thu, Feb 21, 2019 at 4:32 AM Jay Bryant wrote: > It won't have the same ambiance without the horns. > > Dont' think it will stop us from Drunkenly singing next to pianos though. > > Jay > > On 2/20/2019 2:15 PM, Alexandra Settle wrote: > > Is it sad I'm almost disappointed by the lack of said horns? > > > > I'll have to find something else to complain about drunkenly next to a > > piano. > > They don't grow on trees ya know... > > > > On 20/02/2019 19:52, David Medberry wrote: > >> Just a note from the hotel used at both Denver PTGs... There are no > >> horns at the old Stapleton Renaissance now. They have probably raised > >> their rates as a result. Of course, this is too late for most/all of > >> us to appreciate, but sharing nonetheless. > >> > >> -dave > >> > >> ---------- Forwarded message --------- > >> From: Mioduchoski, Lauren > >> Date: Wed, Feb 20, 2019 at 12:48 PM > >> Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) > >> is OFFICIAL for the A line Light Rail! > >> To: > >> > >> > >> Good afternoon, > >> > >> We are very excited to announce that as of March 1st all intersections > >> along Denver’s A Line light rail train will be Quiet Zones! This means > >> the A Line trains will no longer use their horns when passing through > >> intersections (unless there is an unusual, emergent situation). I know > >> during your stay here, you shared in your feedback that the train > >> noise was disruptive so I wanted to personally reach out and share > >> this exciting news with you! > >> > >> > >> > >> We hope to welcome you back to our hotel in the future so that you can > >> enjoy our Quiet Zone and our newly renovated rooms! 
> >> > >> > >> > >> Sincerely, > >> > >> > >> > >> LAUREN MIODUCHOSKI > >> > >> FRONT OFFICE MANAGER > >> > >> Renaissance Denver Hotel > >> > >> 3801 Quebec St, Denver, CO 80207 > >> > >> T 303.399.7500 F 303.321.1966 > >> > >> Renaissance Hotels > >> > >> Renhotels.com l facebook.com/renhotels l twitter.com/renhotels > >> > >> > >> > >> Notice: This e-mail message and or fax is confidential, intended only > >> for the named recipient(s) above and may contain information that is > >> privileged, attorney work product or exempt from disclosure under > >> applicable law. If you have received this message in error, or are not > >> the named recipient(s), please immediately notify the sender at > >> 303.399.7500 and delete/destroy this e-mail message/fax. Thank you. > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chunnushrivastava at gmail.com Thu Feb 21 07:23:16 2019 From: chunnushrivastava at gmail.com (Niharika Shrivastava) Date: Thu, 21 Feb 2019 12:53:16 +0530 Subject: Invitation to post queries regarding Outreachy Message-ID: email : chunnushrivastava at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Thu Feb 21 07:27:58 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 21 Feb 2019 16:27:58 +0900 Subject: [Searchlight] TC vision reflection In-Reply-To: References: Message-ID: Hello team, We finally finished the initial version of the vision reflection document. Please check it out [1]. Note that this is a live document and will be updated frequently as we move forward. If you have any questions, please let me know. I would like to say thank Christ Dent and Julia Kreger for their initiative at the Placement and Ironic team. I learn a lot from you guys when making this document. [1] https://docs.openstack.org/searchlight/latest/contributor/vision-reflection.html [2] https://review.openstack.org/#/c/630216/ [3] https://review.openstack.org/#/c/629060/ Yours, On Tue, Feb 12, 2019 at 3:55 PM Trinh Nguyen wrote: > Hi team, > > Follow by the call of the TC [1] for each project to self-evaluate against > the OpenStack Cloud Vision [2], the Searchlight team would like to produce > a short bullet point style document comparing itself with the vision. The > purpose is to find the gaps between Searchlight and the TC vision and it is > a good practice to align our work with the rest. I created a new pad [3] > and welcome all of your opinions. Then, after about 3 weeks, I will submit > a patch set to add the vision reflection document to our doc source. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html > [2] https://governance.openstack.org/tc/reference/technical-vision.html > [3] https://etherpad.openstack.org/p/-tc-vision-self-eval > > Ping me on the channel #openstack-searchlight > > Bests, > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrianc at mellanox.com Thu Feb 21 08:00:39 2019 From: adrianc at mellanox.com (Adrian Chiris) Date: Thu, 21 Feb 2019 08:00:39 +0000 Subject: how to find the interfaces to exclude in SR-IOV? 
In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E85627D@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E85627D@MXDB2.ad.garvan.unsw.edu.au> Message-ID: Hi, To correlate between PCI device and net device in linux you can inspect sysfs: # ls -l /sys/bus/pci/devices//net To correlate between net device and PCI device: # ls -l /sys/class/net//device a Virtual PCI function (VF) will have a pointer to its Physical function (PF) /sys/bus/pci/devices/ Sent: Thursday, February 21, 2019 8:06 AM To: openstack at lists.openstack.org Subject: how to find the interfaces to exclude in SR-IOV? Hi, I would like to exclude a couple of VFs from the neutron SR-IOV configuration... According to documentation https://docs.openstack.org/neutron/latest/admin/config-sriov#enable-neutron-sriov-nic-agent-compute exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2 This is my configuration [root at zeus-59 ~]# ibdev2netdev -v 0000:88:00.0 mlx5_0 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) 0000:88:00.1 mlx5_1 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) 0000:88:01.2 mlx5_10 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f2 (Up) 0000:88:01.3 mlx5_11 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f3 (Up) 0000:88:01.4 mlx5_12 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f4 (Up) 0000:88:01.5 mlx5_13 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f5 (Up) 0000:88:01.6 mlx5_14 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f6 (Up) 0000:88:01.7 mlx5_15 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f7 (Up) 0000:88:02.0 mlx5_16 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2 (Up) 0000:88:02.1 mlx5_17 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2f1 (Up) 0000:88:00.2 mlx5_2 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f2 (Up) 0000:88:00.3 mlx5_3 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f3 (Up) 0000:88:00.4 mlx5_4 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f4 (Up) 0000:88:00.5 mlx5_5 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f5 (Up) 0000:88:00.6 mlx5_6 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f6 (Up) 0000:88:00.7 mlx5_7 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f7 (Up) 0000:88:01.0 mlx5_8 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1 (Up) 0000:88:01.1 mlx5_9 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f1 (Up) ens2f2 (0000:88:00.2) and ens2f3 (0000:88:00.3) are part of bond0 which I want to assign to OVS. I would like to do something like: exclude_devices = :0000:88:00.2,0000:88:00.3 How can I find out ethX and ethY? Are they PFs? Thank you very much NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... 
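As a hedged aside: the sysfs paths in the reply above appear to have lost their angle-bracket placeholders to the archive's HTML scrubbing. A reconstruction using a PCI address and interface name from the listing in the question (standard Linux sysfs behaviour for SR-IOV, but the exact pairings are assumptions):

    # PCI address -> netdev name
    ls -l /sys/bus/pci/devices/0000:88:00.2/net
    # netdev name -> PCI address
    ls -l /sys/class/net/ens2f2/device
    # a VF has a 'physfn' link to its parent PF; a PF has 'virtfn*' links instead
    ls -l /sys/bus/pci/devices/0000:88:00.2/physfn 2>/dev/null \
        || echo "no physfn here, so this address looks like a PF"

Checking for physfn/virtfn* links is also a quick way to answer the "are they PFs?" question above.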
URL: From skaplons at redhat.com Thu Feb 21 08:47:20 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 21 Feb 2019 09:47:20 +0100 Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: Hi, > Wiadomość napisana przez Marios Andreou w dniu 21.02.2019, o godz. 08:00: > > I'll just leave this here > > https://www.dropbox.com/s/9xtmh7n0664rd4l/DenverChooChoo.mp4?dl=0 > > you can play it on a loop in your room. you're welcome! Thx. That made my day :D > > On Thu, Feb 21, 2019 at 4:32 AM Jay Bryant wrote: > It won't have the same ambiance without the horns. > > Dont' think it will stop us from Drunkenly singing next to pianos though. > > Jay > > On 2/20/2019 2:15 PM, Alexandra Settle wrote: > > Is it sad I'm almost disappointed by the lack of said horns? > > > > I'll have to find something else to complain about drunkenly next to a > > piano. > > They don't grow on trees ya know... > > > > On 20/02/2019 19:52, David Medberry wrote: > >> Just a note from the hotel used at both Denver PTGs... There are no > >> horns at the old Stapleton Renaissance now. They have probably raised > >> their rates as a result. Of course, this is too late for most/all of > >> us to appreciate, but sharing nonetheless. > >> > >> -dave > >> > >> ---------- Forwarded message --------- > >> From: Mioduchoski, Lauren > >> Date: Wed, Feb 20, 2019 at 12:48 PM > >> Subject: Renaissance Denver Hotel: Quiet Zone (no more train horns!) > >> is OFFICIAL for the A line Light Rail! > >> To: > >> > >> > >> Good afternoon, > >> > >> We are very excited to announce that as of March 1st all intersections > >> along Denver’s A Line light rail train will be Quiet Zones! This means > >> the A Line trains will no longer use their horns when passing through > >> intersections (unless there is an unusual, emergent situation). I know > >> during your stay here, you shared in your feedback that the train > >> noise was disruptive so I wanted to personally reach out and share > >> this exciting news with you! > >> > >> > >> > >> We hope to welcome you back to our hotel in the future so that you can > >> enjoy our Quiet Zone and our newly renovated rooms! > >> > >> > >> > >> Sincerely, > >> > >> > >> > >> LAUREN MIODUCHOSKI > >> > >> FRONT OFFICE MANAGER > >> > >> Renaissance Denver Hotel > >> > >> 3801 Quebec St, Denver, CO 80207 > >> > >> T 303.399.7500 F 303.321.1966 > >> > >> Renaissance Hotels > >> > >> Renhotels.com l facebook.com/renhotels l twitter.com/renhotels > >> > >> > >> > >> Notice: This e-mail message and or fax is confidential, intended only > >> for the named recipient(s) above and may contain information that is > >> privileged, attorney work product or exempt from disclosure under > >> applicable law. If you have received this message in error, or are not > >> the named recipient(s), please immediately notify the sender at > >> 303.399.7500 and delete/destroy this e-mail message/fax. Thank you. 
> >> > — Slawek Kaplonski Senior software engineer Red Hat From dtantsur at redhat.com Thu Feb 21 09:00:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 21 Feb 2019 10:00:14 +0100 Subject: [baremetal-sig][ironic] Bare Metal SIG First Steps In-Reply-To: <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> Message-ID: <3fc278e9-4edd-38cd-267d-a225700a3f27@redhat.com> Hi, On 2/20/19 8:13 PM, Chris Hoge wrote: > Monday the patch for the creation of the Baremetal-SIG was approved by > the TC and UC [1]. It's exciting to see the level of interest we've > already seen in the planning etherpad [2], and it's time to start kicking > off our first initiatives. \o/ > > I'd like to begin by addressing some of the comments in the patch. > > * Wiki vs Etherpad. My own personal preference is to start with the > Etherpad as we get our feet underneath us. As more artifacts and begin > to materialize, I think a Wiki would be an excellent location for > hosting the information. My primary concern with Wikis is their > tendency (from my point of view) to become out of date with the goals > of a group. So, to begin with, unless there are any strong objections, > we can do initial planning on the Etherpad and graduate to more > permanent and resilient landing pages later. I think it's a good plan. Do you know if the problems with adding new users to Wiki have been addressed? > > * Addressing operational aspects of Ironic. I see this as an absolutely > critical aspect of the SIG. We already have organization devoted mostly > to development, the Ironic team itself. SIGs are meant to be a > collaborative effort between developers, operators, and users. We can > send a patch up to clarify that in the governance document. If you are > an operator, please use this [baremetal-sig] subject heading to start > discussions and organize shared experiences and documentation. > > * The SIG is focused on all aspects of running bare-metal and Ironic, > whether it be as a driver to Nova, a stand-alone service, or built into > another project as a component. One of the amazing things about Ironic > is its flexibility and versatility. We want to highlight that there's > more than one way to do things with Ironic. > > * Chairs. I would very much like for this to be a community experience, > and welcome nominations for co-chairs. I've found in the past that 2-3 > co-chairs makes for a good balance, and given the number of people who > have expressed interest in the SIG in general I think we should go > ahead and appoint two extra people to co-lead the SIG. If this > interests you, please self-nominate here and we can use lazy consensus > to round out the rest of the leadership. If we have several people step > up, we can consider a stronger form of voting using the systems > available to us. I'm happy to co-chair. I'm in CET timezone. > > First goals: > > I think that an important first goal is in the publication of a > whitepaper outlining the motivation, deployment methods, and case studies > surrounding OpenStack bare metal, similar to what we did with the > containers whitepaper last year. A goal would be to publish at the Denver > Open Infrastructure summit. Some initial thoughts and rough schedule can > be found here [3], and also linked from the planning etherpad. 
> > One of the nice things about working on the whitepaper is we can also > generate a bunch of other expanded content based on that work. In > particular, I'd very much like to highlight deployment scenarios and case > studies. I'm thinking of the whitepaper as a seed from which multiple > authors demonstrate their experience and expertise to the benefit of the > entire community. > > Another goal we've talked about at the Foundation is the creation of a > new bare metal logo program. Distinct from something like the OpenStack > Powered Trademark, which focuses on interoperability between OpenStack > products with an emphasis on interoperability, this program would be > devoted to highlighting products that are shipping with Ironic as a key > component of their bare metal management strategy. This could be in many > different configurations, and is focused on the shipping of code that > solves particular problems, whether Ironic is user-facing or not. We're > very much in the planning stages of a program like this, and it's > important to get community feedback early on about if you would find it > useful and what features you would like to see a program like this have. > A few items that we're very interested in getting early feedback on are: > > * The Pixie Boots mascot has been an important part of the Ironic > project, and we would like to apply it to highlight Ironic usage within > the logo program. ++ for Pixie :) > * If you're a public cloud, sell a distribution, provide installation > services, or otherwise have some product that uses Ironic, what is your > interest in participating in a logo program? > * In addition to the logo, would you find collaboration to build content > on how Ironic is being used in projects and products in our ecosystem > useful? As an upstream developer I'm always curious how my project is used, so +1 here. > > Finally, we have the goals of producing and highlighting content for > using and operating Ironic. A list of possible use-cases is included in > the SIG etherpad. We're also thinking about setting up a demo booth with > a small set of server hardware to demonstrate Ironic at the Open > Infrastructure summit. > > On all of those items, your feedback and collaboration is essential. > Please respond to this mailing list if you have thoughts or want to > volunteer for any of these items, and also contribute to the etherpad to > help organize efforts and add any resources you might have available. > Thanks to everyone, and I'll be following up soon with more information > and updates. > > -Chris > > [1] https://review.openstack.org/#/c/634824/ > [2] https://etherpad.openstack.org/p/bare-metal-sig > [3] https://etherpad.openstack.org/p/bare-metal-whitepaper > > From km.giuseppesannino at gmail.com Thu Feb 21 10:03:38 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Thu, 21 Feb 2019 11:03:38 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Message-ID: Ciao Mark, finally it works! Many many thanks! That was the missing piece of the puzzle. Just FYI information, from the systemctl status for the heat-container-agent I can still see this repetitive logs: : Feb 21 08:00:40 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal runc[2715]: /var/lib/os-collect-config/local-data not found. 
Skipping Feb 21 08:01:11 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping : This doesn't seem to harm the deployment but I will check further. Thanks a lot to everyone! /Giuseppe On Wed, 20 Feb 2019 at 20:16, Mark Goddard wrote: > Hi, I think we've hit this, and John Garbutt has added the following > configuration for Kolla Ansible in /etc/kolla/config/heat.conf: > > [DEFAULT] > region_name_for_services=RegionOne > We'll need a patch in kolla ansible to do that without custom config > changes. > Mark > > On Wed, 20 Feb 2019 at 11:05, Bharat Kunwar wrote: > >> Hi Giuseppe, >> >> What version of heat are you running? >> >> Can you check if you have this patch merged? >> https://review.openstack.org/579485 >> >> https://review.openstack.org/579485 >> >> Bharat >> >> Sent from my iPhone >> >> On 20 Feb 2019, at 10:38, Giuseppe Sannino >> wrote: >> >> Hi Feilong, Bharat, >> thanks for your answer. >> >> @Feilong, >> From /etc/kolla/heat-engine/heat.conf I see: >> [clients_keystone] >> auth_uri = http://10.1.7.201:5000 >> >> This should map into auth_url within the k8s master. >> Within the k8s master in /etc/os-collect-config.conf I see: >> >> [heat] >> auth_url = http://10.1.7.201:5000/v3/ >> : >> : >> resource_name = kube-master >> region_name = null >> >> >> and from /etc/sysconfig/heat-params (among the others): >> : >> REGION_NAME="RegionOne" >> : >> AUTH_URL="http://10.1.7.201:5000/v3" >> >> This URL corresponds to the "public" Heat endpoint >> openstack endpoint list | grep heat >> | 3d5f58c43f6b44f6b54990d6fd9ff55d | RegionOne | heat | >> orchestration | True | internal | >> http://10.1.7.200:8004/v1/%(tenant_id)s | >> | 8c2492cb0ddc48ca94942a4a299a88dc | RegionOne | heat-cfn | >> cloudformation | True | internal | http://10.1.7.200:8000/v1 >> | >> | b164c4618a784da9ae14da75a6c764a3 | RegionOne | heat | >> orchestration | True | public | >> http://10.1.7.201:8004/v1/%(tenant_id)s | >> | da203f7d337b4587a0f5fc774c993390 | RegionOne | heat | >> orchestration | True | admin | >> http://10.1.7.200:8004/v1/%(tenant_id)s | >> | e0d3743e7c604e5c8aa4684df2d1ce53 | RegionOne | heat-cfn | >> cloudformation | True | public | http://10.1.7.201:8000/v1 >> | >> | efe0b8418aa24dfca33c243e7eed7e90 | RegionOne | heat-cfn | >> cloudformation | True | admin | http://10.1.7.200:8000/v1 >> | >> >> Connectivity tests: >> [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ ping 10.1.7.201 >> PING 10.1.7.201 (10.1.7.201) 56(84) bytes of data. >> 64 bytes from 10.1.7.201: icmp_seq=1 ttl=63 time=0.285 ms >> >> [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ curl >> http://10.1.7.201:5000/v3/ >> {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", >> "media-types": [{"base": "application/json", "type": >> "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": >> [{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}} >> >> >> Apparently, I can reach such endpoint from within the k8s master >> >> >> @Bharat, >> that file seems to be properly conifugured to me as well. >> The problem pointed by "systemctl status heat-container-agent" is with: >> >> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: publicURL endpoint for orchestration service in null region not >> found >> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: Source [heat] Unavailable. 
>> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping >> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: publicURL endpoint for orchestration service in null region not >> found >> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: Source [heat] Unavailable. >> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal >> runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping >> >> >> Still no way forward from my side. >> >> /Giuseppe >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Tue, 19 Feb 2019 at 22:16, Bharat Kunwar wrote: >> >>> I have the same problem. Weird thing is /etc/sysconfig/heat-params has >>> region_name specified in my case! >>> >>> Sent from my iPhone >>> >>> On 19 Feb 2019, at 22:00, Feilong Wang wrote: >>> >>> Can you talk to the Heat API from your master node? >>> >>> >>> On 20/02/19 6:43 AM, Giuseppe Sannino wrote: >>> >>> Hi all...again, >>> I managed to get over the previous issue by "not disabling" the TLS in >>> the cluster template. >>> From the cloud-init-output.log I see: >>> Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 >>> +0000. Up 38.08 seconds. >>> Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. >>> Datasource DataSourceEc2. Up 607.13 seconds >>> >>> But the cluster creation keeps on failing. >>> From the journalctl -f I see a possible issue: >>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >>> runc[2723]: publicURL endpoint for orchestration service in null region not >>> found >>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >>> runc[2723]: Source [heat] Unavailable. >>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal >>> runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping >>> >>> anyone familiar with this problem ? >>> >>> Thanks as usual. >>> /Giuseppe >>> >>> >>> >>> >>> >>> >>> >>> On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino < >>> km.giuseppesannino at gmail.com> wrote: >>> >>>> Hi all, >>>> need an help. >>>> I deployed an AIO via Kolla on a baremetal node. Here some information >>>> about the deployment: >>>> --------------- >>>> kolla-ansible: 7.0.1 >>>> openstack_release: Rocky >>>> kolla_base_distro: centos >>>> kolla_install_type: source >>>> TLS: disabled >>>> --------------- >>>> >>>> >>>> VMs spawn without issue but I can't make the "Kubernetes cluster >>>> creation" successfully. It fails due to "Time out" >>>> >>>> I managed to log into Kuber Master and from the cloud-init-output.log I >>>> can see: >>>> + echo 'Waiting for Kubernetes API...' >>>> Waiting for Kubernetes API... 
>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> >>>> >>>> Checking via systemctl and journalctl I see: >>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status >>>> kube-apiserver >>>> ● kube-apiserver.service - kubernetes-apiserver >>>> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; >>>> vendor preset: disabled) >>>> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 >>>> UTC; 45min ago >>>> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run >>>> kube-apiserver (code=exited, status=1/FAILURE) >>>> Main PID: 3796 (code=exited, status=1/FAILURE) >>>> >>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Main process exited, code=exited, >>>> status=1/FAILURE >>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, >>>> scheduling restart. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter >>>> is at 6. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: Stopped kubernetes-apiserver. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Start request repeated too quickly. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: Failed to start kubernetes-apiserver. >>>> >>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u >>>> kube-apiserver >>>> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 >>>> 16:17:00 UTC. -- >>>> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: Started kubernetes-apiserver. >>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> runc[2794]: Flag --insecure-bind-address has been deprecated, This flag >>>> will be removed in a future version. >>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> runc[2794]: Flag --insecure-port has been deprecated, This flag will be >>>> removed in a future version. >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> runc[2794]: Error: error creating self-signed certificates: open >>>> /var/run/kubernetes/apiserver.crt: permission denied >>>> : >>>> : >>>> : >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> runc[2794]: error: error creating self-signed certificates: open >>>> /var/run/kubernetes/apiserver.crt: permission denied >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Main process exited, code=exited, >>>> status=1/FAILURE >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, >>>> scheduling restart. >>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal >>>> systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter >>>> is at 1. 
>>>> >>>> >>>> May I ask for an help on this ? >>>> >>>> Many thanks >>>> /Giuseppe >>>> >>>> >>>> >>>> >>>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu Feb 21 10:48:58 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 11:48:58 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: <15e37e83-0310-5258-0662-65c650b4ccfd@gmail.com> References: <15e37e83-0310-5258-0662-65c650b4ccfd@gmail.com> Message-ID: On Thu, Feb 21, 2019 at 1:54 AM Lance Bragstad wrote: > > > On 2/20/19 11:23 AM, Sylvain Bauza wrote: > > Thanks Chris for asking us questions so we can clarify our opinions. > > On Wed, Feb 20, 2019 at 3:52 PM Chris Dent wrote: > >> >> It's the Campaigning slot of the TC election process, where members >> of the community (including the candidates) are encouraged to ask >> the candidates questions and witness some debate. I have some >> questions. >> >> First off, I'd like to thank all the candidates for running and >> being willing to commit some of their time. I'd also like to that >> group as a whole for being large enough to force an election. A >> representative body that is not the result of an election would not >> be very representing nor have much of a mandate. >> >> > I agree with you on this point. It's important for OpenStack to have time > to discuss about mandates. > > The questions follow. Don't feel obliged to answer all of these. The >> point here is to inspire some conversation that flows to many >> places. I hope other people will ask in the areas I've chosen to >> skip. If you have a lot to say, it might make sense to create a >> different message for each response. Beware, you might be judged on >> your email etiquette and attention to good email technique! >> >> * How do you account for the low number of candidates? Do you >> consider this a problem? Why or why not? >> >> > Yes, again, I agree and to be honest, when I only saw we were only having > 4 candidates 8 hours before the deadline, I said to myself "OK, you love > OpenStack. You think the TC is important. But then, why aren't you then > throwing your hat ?" > We all have opinions, right ? But then, why people don't want to be in the > TC ? Because we don't have a lot of time for it ? Or because people think > the TC isn't important ? > > I don't want to discuss about politics here. But I somehow see a parallel > in between what the TC is and what the European Union is : both are > governances not fully decision-makers but are there for sharing same rules > and vision. > If we stop having the TC, what would become OpenStack ? Just a set of > parallel projects with no common guidance ? > > The fact that a large number of candidacies went very late (including me) > is a bit concerning to me. How can we become better ? I have no idea but > saying that probably given the time investment it requires, most of the > candidacies were probably holding some management acceptance before people > would propose their names. 
Probably worth thinking about how the investment > it requires, in particular given we have less full-time contributors that > can dedicate large time for governance. > > > * Compare and contrast the role of the TC now to 4 years ago. If you >> weren't around 4 years ago, comment on the changes you've seen >> over the time you have been around. In either case: What do you >> think the TC role should be now? >> >> > 4 years ago, we were in the Kilo timeframe. That's fun you mention this > period, because at that exact time of the year, the TC voted on one of the > probably most important decisions that impacted OpenStack : The Big Tent > reform [1] > Taking a look at this time, I remember frustration and hard talks but also > people committed to change things. > This decision hasn't changed a lot the existing service projects that were > before the Big Tent, but it actually created a whole new ecosystem for > developers. It had challenges but it never required to be abandoned, which > means the program is a success. > > Now the buzz is gone and the number of projects stable, the TC necessarly > has to mutate to a role of making sure all the projects sustain the same > pace and reliability. Most of the challenges for the TC is now about > defining and applying criterias for ensuring that all our projects have a > reasonable state for production. If you see my candidacy letter, two of my > main drivers for my nomination are about upgradability and scalability > concerns. > > > * What, to you, is the single most important thing the OpenStack >> community needs to do to ensure that packagers, deployers, and >> hobbyist users of OpenStack are willing to consistently upstream >> their fixes and have a positive experience when they do? What is >> the TC's role in helping make that "important thing" happen? >> >> > There are two very distinct reasons when a company decides to > downstream-only : either by choice or because of technical reasons. > I don't think a lot of companies decide to manage technical debt on their > own by choice. OpenStack is nearly 9 years old and most of the users know > the price it is. > > Consequently, I assume that the reasons are technical : > 1/ they're running an old version and haven't upgraded (yet). We have good > user stories of large cloud providers that invested in upgrades (for > example OVH) and see the direct benefit of it. Maybe we can educate more on > the benefits of upgrading frequently. > 2/ they think upstreaming is difficult. I'm all open to hear the barriers > they have. For what it's worth, OpenStack invested a lot in mentoring with > the FirstContact SIG, documentation and Upstream Institute. There will > probably also be a new program about peer-mentoring and recognition [2] if > the community agrees with the idea. Honestly, I don't know what do do more. > If you really can't upstream but care about your production, just take a > service contract I guess. > > > >> * If you had a magic wand and could inspire and make a single >> sweeping architectural or software change across the services, >> what would it be? For now, ignore legacy or upgrade concerns. >> What role should the TC have in inspiring and driving such >> changes? >> >> > Take me as a fool but I don't think the role of the TC is to drive > architectural decision between projects. > The TC can help two projects to discuss, the TC can (somehow) help > moderate between two teams about some architectural concern but certainly > not be the driver of such change. 
> > > Is there a particular reason why you feel this way? > > I think the TC is in a great position to have a profound impact on the > architecture of OpenStack, with a caveat. > > I believe if you ask anyone with even a brief history in OpenStack, you'll > dust up some architectural opinions. For example, Jim and Mohammed have > already pointed out a bunch in their responses. Another example, Melanie > and I had a productive discussion today about how restructuring the > architecture of policy enforcement could significantly improvement > usability and security [0], which certainly isn't specific to keystone or > nova. I don't think we have to look very far to find excellent areas for > improvement. As others have noted, the project is at a point where > development and hype isn't nearly as intense as it was 4 years ago. While > contributor numbers, in a way, reflect project stabilization, I also think > it puts us in a prime position to address some of the architectural pain > points we've grown to live with over the years. I think we can use the > opportunity to make services more consistent, giving consumers and users a > more refined and polished experience, among other benefits. > > That said, I certainly think if the TC is to _facilitate_ in architectural > decisions, it needs to be done in the open and with plenty of communication > and feedback with the entire community. Similar to the approach we try and > take with community goals. > > I understand there may be a fine line in making decisions of this nature > at the TC level, but I also think it presents numerous opportunities to > communicate and focus efforts in a unified direction. I see that > involvement range from socializing issues to advocating for sponsorship on > a particular initiative to diving into the problem and helping projects > directly. > > Hi Lance, Thanks for your reply and leaving me a chance to clarify my thoughts on a possible strawman. I'm not opposed to architectural modifications, and like you mention the fact that we're past the hype leaves us a good opportunity for revisiting some crucial cross-project pain points like the ones you mention (and which I'm fully onboard). For example, I remember some hard talks we had at Summits about a potential 'Architecture WG' and all the concerns that were around it. What I said earlier was that IMHO the TC should not dictate any architectural changes or be the initiator of important architectural redesigns (unless I misunderstand the word 'driving' and in this case my bad). I rather prefer to see the TC to be a facilitator of such initiatives coming from individuals or projects (in particular given we're not in a position where we have less resources that can take time to work full-time on those big changes that are multi-cycle). We sometimes failed in the past to have such initiatives, but now we have goals we're in a better position than in the past. That said, given the nature of goals to be community-wide, I'm open for discussing other ways to engage architectural redesigns that imply only a few services (eg. placement and other services that would consume it). We generally have cross-project sessions at the PTG that help driving those discussions, but I somehow feel we miss some better way to promote those. architectural redesigns. Hoping I answered your concerns. -Sylvain > [0] > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-20.log.html#t2019-02-20T18:35:06 > > > That doesn't mean the TC can't be technical. 
We have goals, for example. > But in order to have well defined goals that are understandable by project > contributors, we also need to have the projects be the drivers of such > architectural changes. > > > >> * What can the TC do to make sure that the community (in its many >> dimensions) is informed of and engaged in the discussions and >> decisions of the TC? >> >> > You made a very good job in providing TC feedback. I surely think the TC > has to make sure that a regular weekly feedback is provided. > For decisions that impact projects, I don't really see how TC members can > vote without getting feedback from the project contributors, so here I see > communication (thru Gerrit at least). > > > > >> * How do you counter people who assert the TC is not relevant? >> (Presumably you think it is, otherwise you would not have run. If >> you don't, why did you run?) >> > > Again, I think that is a matter of considering the TC responsibilities. We > somehow need to clarify what are those responsibilities and I think I > voiced on that above. > > > >> That's probably more than enough. Thanks for your attention. >> >> > I totally appreciate you challenging us. That's very important that people > vote based on opinions rather than popularity. > -Sylvain > > [1] > https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html > [2] https://review.openstack.org/#/c/636956/ > >> -- >> Chris Dent ٩◔̯◔۶ https://anticdent.org/ >> freenode: cdent tw: @anticdent > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharat at stackhpc.com Thu Feb 21 11:03:20 2019 From: bharat at stackhpc.com (Bharat Kunwar) Date: Thu, 21 Feb 2019 12:03:20 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Message-ID: Yes I’ve seen those messages too, I think it’s normal so wouldn’t worry too much. Glad this is sorted! Sent from my iPhone > On 21 Feb 2019, at 11:03, Giuseppe Sannino wrote: > > Ciao Mark, > finally it works! Many many thanks! > That was the missing piece of the puzzle. > > Just FYI information, from the systemctl status for the heat-container-agent I can still see this repetitive logs: > : > Feb 21 08:00:40 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping > Feb 21 08:01:11 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping > : > > This doesn't seem to harm the deployment but I will check further. > > Thanks a lot to everyone! > > /Giuseppe > >> On Wed, 20 Feb 2019 at 20:16, Mark Goddard wrote: >> Hi, I think we've hit this, and John Garbutt has added the following configuration for Kolla Ansible in /etc/kolla/config/heat.conf: >> >> [DEFAULT] >> region_name_for_services=RegionOne >> >> We'll need a patch in kolla ansible to do that without custom config changes. >> Mark >> >>> On Wed, 20 Feb 2019 at 11:05, Bharat Kunwar wrote: >>> Hi Giuseppe, >>> >>> What version of heat are you running? >>> >>> Can you check if you have this patch merged? https://review.openstack.org/579485 >>> >>> https://review.openstack.org/579485 >>> >>> Bharat >>> >>> Sent from my iPhone >>> >>>> On 20 Feb 2019, at 10:38, Giuseppe Sannino wrote: >>>> >>>> Hi Feilong, Bharat, >>>> thanks for your answer. 
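For anyone landing on this thread with the same "publicURL endpoint for orchestration service in null region not found" symptom: the override Mark mentions above goes through kolla-ansible's standard per-service config merge. A minimal sketch follows - the inventory path placeholder and the tag name are assumptions, so adjust them for your own deployment:

    # on the deployment host
    mkdir -p /etc/kolla/config
    cat > /etc/kolla/config/heat.conf << EOF
    [DEFAULT]
    region_name_for_services = RegionOne
    EOF
    kolla-ansible -i <inventory> reconfigure --tags heat

After recreating the Magnum cluster, the os-collect-config configuration inside the master should then show the real region name rather than "null".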
>>>> >>>> @Feilong, >>>> From /etc/kolla/heat-engine/heat.conf I see: >>>> [clients_keystone] >>>> auth_uri = http://10.1.7.201:5000 >>>> >>>> This should map into auth_url within the k8s master. >>>> Within the k8s master in /etc/os-collect-config.conf I see: >>>> >>>> [heat] >>>> auth_url = http://10.1.7.201:5000/v3/ >>>> : >>>> : >>>> resource_name = kube-master >>>> region_name = null >>>> >>>> >>>> and from /etc/sysconfig/heat-params (among the others): >>>> : >>>> REGION_NAME="RegionOne" >>>> : >>>> AUTH_URL="http://10.1.7.201:5000/v3" >>>> >>>> This URL corresponds to the "public" Heat endpoint >>>> openstack endpoint list | grep heat >>>> | 3d5f58c43f6b44f6b54990d6fd9ff55d | RegionOne | heat | orchestration | True | internal | http://10.1.7.200:8004/v1/%(tenant_id)s | >>>> | 8c2492cb0ddc48ca94942a4a299a88dc | RegionOne | heat-cfn | cloudformation | True | internal | http://10.1.7.200:8000/v1 | >>>> | b164c4618a784da9ae14da75a6c764a3 | RegionOne | heat | orchestration | True | public | http://10.1.7.201:8004/v1/%(tenant_id)s | >>>> | da203f7d337b4587a0f5fc774c993390 | RegionOne | heat | orchestration | True | admin | http://10.1.7.200:8004/v1/%(tenant_id)s | >>>> | e0d3743e7c604e5c8aa4684df2d1ce53 | RegionOne | heat-cfn | cloudformation | True | public | http://10.1.7.201:8000/v1 | >>>> | efe0b8418aa24dfca33c243e7eed7e90 | RegionOne | heat-cfn | cloudformation | True | admin | http://10.1.7.200:8000/v1 | >>>> >>>> Connectivity tests: >>>> [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ ping 10.1.7.201 >>>> PING 10.1.7.201 (10.1.7.201) 56(84) bytes of data. >>>> 64 bytes from 10.1.7.201: icmp_seq=1 ttl=63 time=0.285 ms >>>> >>>> [fedora at kube-cluster-fed27-k5di3i7stgks-master-0 ~]$ curl http://10.1.7.201:5000/v3/ >>>> {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://10.1.7.201:5000/v3/", "rel": "self"}]}} >>>> >>>> >>>> Apparently, I can reach such endpoint from within the k8s master >>>> >>>> >>>> @Bharat, >>>> that file seems to be properly conifugured to me as well. >>>> The problem pointed by "systemctl status heat-container-agent" is with: >>>> >>>> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found >>>> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. >>>> Feb 20 09:33:23 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping >>>> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: publicURL endpoint for orchestration service in null region not found >>>> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: Source [heat] Unavailable. >>>> Feb 20 09:33:53 kube-cluster-fed27-k5di3i7stgks-master-0.novalocal runc[2837]: /var/lib/os-collect-config/local-data not found. Skipping >>>> >>>> >>>> Still no way forward from my side. >>>> >>>> /Giuseppe >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>>> On Tue, 19 Feb 2019 at 22:16, Bharat Kunwar wrote: >>>>> I have the same problem. Weird thing is /etc/sysconfig/heat-params has region_name specified in my case! 
>>>>> >>>>> Sent from my iPhone >>>>> >>>>>> On 19 Feb 2019, at 22:00, Feilong Wang wrote: >>>>>> >>>>>> Can you talk to the Heat API from your master node? >>>>>> >>>>>> >>>>>> >>>>>>> On 20/02/19 6:43 AM, Giuseppe Sannino wrote: >>>>>>> Hi all...again, >>>>>>> I managed to get over the previous issue by "not disabling" the TLS in the cluster template. >>>>>>> From the cloud-init-output.log I see: >>>>>>> Cloud-init v. 17.1 running 'modules:final' at Tue, 19 Feb 2019 17:03:53 +0000. Up 38.08 seconds. >>>>>>> Cloud-init v. 17.1 finished at Tue, 19 Feb 2019 17:13:22 +0000. Datasource DataSourceEc2. Up 607.13 seconds >>>>>>> >>>>>>> But the cluster creation keeps on failing. >>>>>>> From the journalctl -f I see a possible issue: >>>>>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: publicURL endpoint for orchestration service in null region not found >>>>>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: Source [heat] Unavailable. >>>>>>> Feb 19 17:42:38 kube-cluster-tls-6hezqcq4ien3-master-0.novalocal runc[2723]: /var/lib/os-collect-config/local-data not found. Skipping >>>>>>> >>>>>>> anyone familiar with this problem ? >>>>>>> >>>>>>> Thanks as usual. >>>>>>> /Giuseppe >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On Tue, 19 Feb 2019 at 17:35, Giuseppe Sannino wrote: >>>>>>>> Hi all, >>>>>>>> need an help. >>>>>>>> I deployed an AIO via Kolla on a baremetal node. Here some information about the deployment: >>>>>>>> --------------- >>>>>>>> kolla-ansible: 7.0.1 >>>>>>>> openstack_release: Rocky >>>>>>>> kolla_base_distro: centos >>>>>>>> kolla_install_type: source >>>>>>>> TLS: disabled >>>>>>>> --------------- >>>>>>>> >>>>>>>> >>>>>>>> VMs spawn without issue but I can't make the "Kubernetes cluster creation" successfully. It fails due to "Time out" >>>>>>>> >>>>>>>> I managed to log into Kuber Master and from the cloud-init-output.log I can see: >>>>>>>> + echo 'Waiting for Kubernetes API...' >>>>>>>> Waiting for Kubernetes API... >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> >>>>>>>> Checking via systemctl and journalctl I see: >>>>>>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ systemctl status kube-apiserver >>>>>>>> ● kube-apiserver.service - kubernetes-apiserver >>>>>>>> Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled) >>>>>>>> Active: failed (Result: exit-code) since Tue 2019-02-19 15:31:41 UTC; 45min ago >>>>>>>> Process: 3796 ExecStart=/usr/bin/runc --systemd-cgroup run kube-apiserver (code=exited, status=1/FAILURE) >>>>>>>> Main PID: 3796 (code=exited, status=1/FAILURE) >>>>>>>> >>>>>>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>>>>>>> Feb 19 15:31:40 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 6. >>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver. 
>>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Start request repeated too quickly. >>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>>>>>> Feb 19 15:31:41 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Failed to start kubernetes-apiserver. >>>>>>>> >>>>>>>> [fedora at kube-clsuter-qamdealetlbi-master-0 log]$ sudo journalctl -u kube-apiserver >>>>>>>> -- Logs begin at Tue 2019-02-19 15:21:36 UTC, end at Tue 2019-02-19 16:17:00 UTC. -- >>>>>>>> Feb 19 15:31:33 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: Started kubernetes-apiserver. >>>>>>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version. >>>>>>>> Feb 19 15:31:34 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Flag --insecure-port has been deprecated, This flag will be removed in a future version. >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: Error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>>>>>>> : >>>>>>>> : >>>>>>>> : >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal runc[2794]: error: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Service RestartSec=100ms expired, scheduling restart. >>>>>>>> Feb 19 15:31:35 kube-clsuter-qamdealetlbi-master-0.novalocal systemd[1]: kube-apiserver.service: Scheduled restart job, restart counter is at 1. >>>>>>>> >>>>>>>> >>>>>>>> May I ask for an help on this ? >>>>>>>> >>>>>>>> Many thanks >>>>>>>> /Giuseppe >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>> -- >>>>>> Cheers & Best regards, >>>>>> Feilong Wang (王飞龙) >>>>>> -------------------------------------------------------------------------- >>>>>> Senior Cloud Software Engineer >>>>>> Tel: +64-48032246 >>>>>> Email: flwang at catalyst.net.nz >>>>>> Catalyst IT Limited >>>>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>>>> -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Feb 21 11:13:22 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 21 Feb 2019 11:13:22 +0000 (GMT) Subject: [tc] [election] Candidate question: growth of projects Message-ID: This is another set of questions for TC candidates, to look at a different side of things from my first one [1] and somewhat related to the one Doug has asked [2]. As Doug mentions, a continuing role of the TC is to evaluate applicants to be official projects. These questions are about that. There are 63 teams in the official list of projects. How do you feel about this size? Too big, too small, just right? Why? If you had to make a single declaration about growth in the number of projects would you prefer to see (and why, of course): * More projects as required by demand. 
* Slower or no growth to focus on what we've got. * Trim the number of projects to "get back to our roots". * Something else. How has the relatively recent emergence of the open infrastructure projects that are at the same "level" in the Foundation as OpenStack changed your thoughts on the above questions? Do you think the number of projects has any impact (positive or negative) on our overall ability to get things done? Recognizing that there are many types of contributors, not just developers, this question is about developers: Throughout history different members of the community have sometimes identified as an "OpenStack developer", sometimes as a project developer (e.g., "Nova developer"). Should we encourage contributors to think of themselves as primarily OpenStack developers? If so, how do we do that? If not, why not? Thanks. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mark at stackhpc.com Thu Feb 21 11:36:12 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 21 Feb 2019 11:36:12 +0000 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Message-ID: On Thu, 21 Feb 2019 at 10:03, Giuseppe Sannino wrote: > Ciao Mark, > finally it works! Many many thanks! > That was the missing piece of the puzzle. > > Just FYI information, from the systemctl status for the > heat-container-agent I can still see this repetitive logs: > : > Feb 21 08:00:40 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal > runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping > Feb 21 08:01:11 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal > runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping > : > > This doesn't seem to harm the deployment but I will check further. > > Thanks a lot to everyone! > > /Giuseppe > Glad to hear it worked for you. I've raised a bug [1] and proposed a fix [2] in kolla ansible. Mark [1] https://bugs.launchpad.net/kolla-ansible/+bug/1817051 [2] https://review.openstack.org/638400 -------------- next part -------------- An HTML attachment was scrubbed... URL: From georg.kunz at ericsson.com Thu Feb 21 12:10:13 2019 From: georg.kunz at ericsson.com (Georg Kunz) Date: Thu, 21 Feb 2019 12:10:13 +0000 Subject: Presentation material of previous summits Message-ID: Hi all, Can somebody help me finding the presentation slides of the Berlin summit (or previous summits in general)? As far as I know, the presentation slides were linked on the summit schedule. Now, as the Denver summit schedule is online, I cannot find the slides anymore. Maybe I just missed to look in an obvious place... Thank you Georg -------------- next part -------------- An HTML attachment was scrubbed... URL: From grant at absolutedevops.io Thu Feb 21 12:19:08 2019 From: grant at absolutedevops.io (Grant Morley) Date: Thu, 21 Feb 2019 12:19:08 +0000 Subject: Presentation material of previous summits In-Reply-To: References: Message-ID: <07f44f6f-34b0-605c-4ee6-b9de246c424d@absolutedevops.io> Hi Georg, I think if you go here: https://www.openstack.org/videos/summits/berlin-2018 that should get you what you need. 
Regards, On 21/02/2019 12:10, Georg Kunz wrote: > > Hi all, > > Can somebody help me finding the presentation slides of the Berlin > summit (or previous summits in general)? As far as I know, the > presentation slides were linked on the summit schedule. Now, as the > Denver summit schedule is online, I cannot find the slides anymore. > Maybe I just missed to look in an obvious place… > > Thank you > > Georg > -- Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: From georg.kunz at ericsson.com Thu Feb 21 12:22:47 2019 From: georg.kunz at ericsson.com (Georg Kunz) Date: Thu, 21 Feb 2019 12:22:47 +0000 Subject: Presentation material of previous summits In-Reply-To: <07f44f6f-34b0-605c-4ee6-b9de246c424d@absolutedevops.io> References: <07f44f6f-34b0-605c-4ee6-b9de246c424d@absolutedevops.io> Message-ID: Hi Grant, Ok, right. Thanks a lot. I actually somehow missed that... Thank you Georg From: Grant Morley Sent: Thursday, February 21, 2019 1:19 PM To: Georg Kunz ; openstack-discuss at lists.openstack.org Subject: Re: Presentation material of previous summits Hi Georg, I think if you go here: https://www.openstack.org/videos/summits/berlin-2018 that should get you what you need. Regards, On 21/02/2019 12:10, Georg Kunz wrote: Hi all, Can somebody help me finding the presentation slides of the Berlin summit (or previous summits in general)? As far as I know, the presentation slides were linked on the summit schedule. Now, as the Denver summit schedule is online, I cannot find the slides anymore. Maybe I just missed to look in an obvious place... Thank you Georg -- [Image removed by sender.] Grant Morley Cloud Lead Absolute DevOps Ltd Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP www.absolutedevops.io grant at absolutedevops.io 0845 874 0580 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ~WRD000.jpg Type: image/jpeg Size: 823 bytes Desc: ~WRD000.jpg URL: From smooney at redhat.com Thu Feb 21 12:36:44 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Feb 2019 12:36:44 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <31313601b6d888de63650436007f2d477d0ebec4.camel@redhat.com> On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote: > Hi Chris, > > Thanks for kicking this off. I've added my replies in-line. > > Thank you for your past term as well. > > Regards, > Mohammed > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent wrote: > > > > > > It's the Campaigning slot of the TC election process, where members > > of the community (including the candidates) are encouraged to ask > > the candidates questions and witness some debate. I have some > > questions. > > > > First off, I'd like to thank all the candidates for running and > > being willing to commit some of their time. I'd also like to that > > group as a whole for being large enough to force an election. A > > representative body that is not the result of an election would not > > be very representing nor have much of a mandate. > > > > The questions follow. Don't feel obliged to answer all of these. The > > point here is to inspire some conversation that flows to many > > places. I hope other people will ask in the areas I've chosen to > > skip. 
If you have a lot to say, it might make sense to create a > > different message for each response. Beware, you might be judged on > > your email etiquette and attention to good email technique! > > > > * How do you account for the low number of candidates? Do you > > consider this a problem? Why or why not? > > Just for context, I wanted to share the following numbers to formulate my > response: > > Ocata candidates: 21 > Pike candidates: 14 > Queens candidates: 16 > Rocky candidates: 10 > > We're indeed seeing the numbers grow cycle over cycle. However, a lot > of the candidates are people that most seem to have ran once and upon > not being elected, they didn't take a chance to go again. I think perhaps we > should encourage reaching out to those previous candidates, especially those > who are still parts of the community still to nominate themselves again. > > I do however think that with the fact that our software is becoming more stable > and having less overall contributors than before, it might be a good time to > evaluate the size of the TC, but that could be a really interesting challenge > to deal with and I'm not quite so sure yet about how we can approach that. > > I don't think it's a problem, we had a really quiet start but then a > lot of people > put their names in. I think if the first candidate had come in a bit > earlier, we > would have seen more candidates because I get this feeling no one wants to > go "first". > > > * Compare and contrast the role of the TC now to 4 years ago. If you > > weren't around 4 years ago, comment on the changes you've seen > > over the time you have been around. In either case: What do you > > think the TC role should be now? > > 4 years ago, we were probably around the Kilo release cycle at that > time and things were a lot different in the ecosystem. At the time, I think > the TC had more of a role of governing as the projects had plenty of > traction and things were moving. > > As OpenStack seems to come closer to delivering most of the value > that you need, without needing as much effort, I think it's important > for us to try and envision how we can better place OpenStack in the > overall infrastructure ecosystem and focus on marketing it. > > I speak a lot to users and deployers daily and I find out a lot of things > about current impressions of OpenStack, once I explain it to them, > they are all super impressed by it so I think we need to do a better job > at educating people. > > Also, I think the APAC region is one that is a big growing user and > community of OpenStack that we usually don't put as much thought > into. We need to make sure that we invest more time into the community > there. > > > * What, to you, is the single most important thing the OpenStack > > community needs to do to ensure that packagers, deployers, and > > hobbyist users of OpenStack are willing to consistently upstream > > their fixes and have a positive experience when they do? What is > > the TC's role in helping make that "important thing" happen? > > I think our tooling is hard to use. I really love it, but it's really not > straightforward for most new comers. > > The majority of users are familiar with the GitHub workflow, the > Gerrit one is definitely one that needs a bit of a learning curve. I think > this introduces a really weird situation where if I'm not familiar with > all of that and I want to submit a patch that's a simple change, it will > take me more work to get setup on Gerrit than it does to make the > fix. 
> > I think most people give up and just don't want to bother at that point, > perhaps a few more might be more inclined to get through it but it's > really a lot of work to allow pushing a simple patch. > > > * If you had a magic wand and could inspire and make a single > > sweeping architectural or software change across the services, > > what would it be? For now, ignore legacy or upgrade concerns. > > What role should the TC have in inspiring and driving such > > changes? > > Oh. > > - Stop using RabbitMQ as an RPC, it's the worst most painful component > to run in an entire OpenStack deployment. It's always broken. Switch > into something that uses HTTP + service registration to find endpoints. as an on looker i have mixed feeling about this statement. RabbitMQ can have issue at scale but it morstly works when its not on fire. Would you be advocating building a openstack specific RPC layer perhaps using keystone as the service registry and a custom http mechanism or adopting an existing technology like grpc? investigating an alternative RPC backend has come up in the past (zeromq and qupid) and i think it has merit but im not sure as a comunity creating a new RPC framework is a project wide effort that openstack need to solve. that said zaqar is a thing https://docs.openstack.org/zaqar/latest/ if it is good enough for our endusers to consume perhaps it would be up to the task of being openstack rpc transport layer. anyway my main question was would you advocate adoption of an exisiting technology or creating our own solution if we were to work on this goal as a community. > > - Use Keystone as an authoritative service catalog, stop having to configure > URLs for services inside configuration files. It's confusing and unreliable > and causes a lot of breakages often. > > - SSL first. For most services, the overhead is so small, I don't see why we > wouldn't ever have all services to run SSL only. > > - Single unified client, we're already moving towards this with the OpenStack > client but it's probably been one of our biggest weaknesses that have not > been completed and fully cleared out. > > Those are a few that come to mind right now, I'm sure I could come up with > so much more. > > > * What can the TC do to make sure that the community (in its many > > dimensions) is informed of and engaged in the discussions and > > decisions of the TC? > > We need to follow the mailing lists and keep up to date at what users are > trying to use OpenStack for. There's emerging use cases such as using it > for edge deployments, the increase of bare-metal deployments (ironic) and > thinking about how it can benefit end users, all of this can be seen by > following mailing list discussions, Twitter-verse, and other avenues. > > I've also found amazing value in being part of WeChat communities which > bring a lot of insight from the APAC region. > > > * How do you counter people who assert the TC is not relevant? > > (Presumably you think it is, otherwise you would not have run. If > > you don't, why did you run?) > > This is a tough one. That's something we need to work and change, I think > that historically the involvement of the TC and projects have been very > hands-off because of the velocity that projects moved at. > > Now that we're a bit slower, I think that having the TC involved in the projects > can be very interesting. 
It provides access to a group of diverse and highly > technical individuals from different backgrounds (operators, developers -- but > maybe not as much users) to chime in on certain directions of the projects. > > > That's probably more than enough. Thanks for your attention. > > Thank you for starting this. > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent tw: @anticdent > > > From smooney at redhat.com Thu Feb 21 12:42:52 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Feb 2019 12:42:52 +0000 Subject: how to find the interfaces to exclude in SR-IOV? In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E85627D@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E85627D@MXDB2.ad.garvan.unsw.edu.au> Message-ID: On Thu, 2019-02-21 at 06:05 +0000, Manuel Sopena Ballesteros wrote: > Hi, > > I would like to exclude a couple of VFs from the neutron SR-IOV configuration… > > According to documentation > https://docs.openstack.org/neutron/latest/admin/config-sriov#enable-neutron-sriov-nic-agent-compute > > exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2 > > > This is my configuration > > [root at zeus-59 ~]# ibdev2netdev -v > 0000:88:00.0 mlx5_0 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) > 0000:88:00.1 mlx5_1 (MT4117 - MT1611X10113) CX4121A - ConnectX-4 LX SFP28 fw 14.24.1000 port 1 (ACTIVE) ==> bond0 (Up) > 0000:88:01.2 mlx5_10 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f2 (Up) > 0000:88:01.3 mlx5_11 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f3 (Up) > 0000:88:01.4 mlx5_12 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f4 (Up) > 0000:88:01.5 mlx5_13 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f5 (Up) > 0000:88:01.6 mlx5_14 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f6 (Up) > 0000:88:01.7 mlx5_15 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f7 (Up) > 0000:88:02.0 mlx5_16 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2 (Up) > 0000:88:02.1 mlx5_17 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s2f1 (Up) > 0000:88:00.2 mlx5_2 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f2 (Up) > 0000:88:00.3 mlx5_3 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f3 (Up) > 0000:88:00.4 mlx5_4 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f4 (Up) > 0000:88:00.5 mlx5_5 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f5 (Up) > 0000:88:00.6 mlx5_6 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f6 (Up) > 0000:88:00.7 mlx5_7 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> ens2f7 (Up) > 0000:88:01.0 mlx5_8 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1 (Up) > 0000:88:01.1 mlx5_9 (MT4118 - NA) fw 14.24.1000 port 1 (ACTIVE) ==> enp136s1f1 (Up) > > > ens2f2 (0000:88:00.2) and ens2f3 (0000:88:00.3) are part of bond0 which I want to assign to OVS. > > I would like to do something like: > exclude_devices = :0000:88:00.2,0000:88:00.3 > > How can I find out ethX and ethY? Are they PFs? yes they are the PF netdev names if i recall correctly. > > > Thank you very much > > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the > addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended > recipient, you should not read, use, disclose, copy or distribute this communication. 
If you have received this > message in error please notify us at once by return email and then delete both messages. We accept no liability for > the distribution of viruses or similar in electronic communications. This notice should not be removed. From doug at doughellmann.com Thu Feb 21 12:52:25 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 21 Feb 2019 07:52:25 -0500 Subject: [tc][election] candidate question: strategic leadership Message-ID: With the changes at the Foundation level, adding new OIPs, a few board members have suggested that this is an opportunity for the TC to evolve from providing what some have seen as tactical management through dealing with day-to-day issues to more long-term strategic leadership for the project. This theme has also come up in the recent discussions of the role of the TC, especially when considering how to make community-wide technical decisions and how much influence the TC should have over the direction individual projects take. What do you think OpenStack, as a whole, should be doing over the next 1, 3, and 5 years? Why? -- Doug From zigo at debian.org Thu Feb 21 13:02:00 2019 From: zigo at debian.org (Thomas Goirand) Date: Thu, 21 Feb 2019 14:02:00 +0100 Subject: [uwsgi] [glance] Support for wsgi-manage-chunked-input in uwsgi: glance-api finally working over SSL as expected Message-ID: Hi, It was quite famous that we had no way to run Glance under Python 3 with SSL, because of eventlet, and the fact that Glance needed chunked-input, which made uwsgi not a good candidate. Well, this was truth until 12 days ago, when uwsgi 2.0.18 was released, adding the --wsgi-manage-chunked-input. I've just installed Glance this way, and it's finally working as expected. I'll be releasing Glance in Debian Buster this way. I believe it's now time to make this config the default in the Gate, which is why I'm writing this message. I hope this helps, Cheers, Thomas Goirand (zigo) From doug at doughellmann.com Thu Feb 21 13:47:59 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 21 Feb 2019 08:47:59 -0500 Subject: [Searchlight] TC vision reflection In-Reply-To: References: Message-ID: Trinh Nguyen writes: > Hello team, > > We finally finished the initial version of the vision reflection document. > Please check it out [1]. Note that this is a live document and will be > updated frequently as we move forward. > > If you have any questions, please let me know. > > I would like to say thank Christ Dent and Julia Kreger for their initiative > at the Placement and Ironic team. I learn a lot from you guys when making > this document. > > [1] > https://docs.openstack.org/searchlight/latest/contributor/vision-reflection.html > [2] https://review.openstack.org/#/c/630216/ > [3] https://review.openstack.org/#/c/629060/ > > Yours, > > On Tue, Feb 12, 2019 at 3:55 PM Trinh Nguyen wrote: > >> Hi team, >> >> Follow by the call of the TC [1] for each project to self-evaluate against >> the OpenStack Cloud Vision [2], the Searchlight team would like to produce >> a short bullet point style document comparing itself with the vision. The >> purpose is to find the gaps between Searchlight and the TC vision and it is >> a good practice to align our work with the rest. I created a new pad [3] >> and welcome all of your opinions. Then, after about 3 weeks, I will submit >> a patch set to add the vision reflection document to our doc source. 
>> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html >> [2] https://governance.openstack.org/tc/reference/technical-vision.html >> [3] https://etherpad.openstack.org/p/-tc-vision-self-eval >> >> Ping me on the channel #openstack-searchlight >> >> Bests, >> >> -- >> *Trinh Nguyen* >> *www.edlab.xyz * >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * Nice work, thank you for sharing it! -- Doug From sbauza at redhat.com Thu Feb 21 14:09:11 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 15:09:11 +0100 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: Hola, Thanks for this question. On Thu, Feb 21, 2019 at 12:19 PM Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? > > IMHO, the size is never a problem. The real question is rather about whether all of them are going to the same direction (and I'm trying hard to not make a parallel with geopolitics). Oh, I'm not saying we don't have problems with 63 teams, right? At least, having this number of teams is a bit difficult because it's more difficult to know about all of them but just a small number (say 12) It also means that it's somehow difficult to work on the same page of course. So, what to do with those problems ? Maybe the TC should be more governing this list, by at least making sure that all projects run at the same page. We have a maintenance tag. It's a very difficult tag to assign, right? Maybe it's time for us to be discussing about what it means for a project to be 'maintained'. If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. > * Trim the number of projects to "get back to our roots". > * Something else. > > My statement would be "focus on the existing projects, define a common set of attributes that would necessarly be more strict than today and see if and how all the current projects can fill the gaps for all of them". Somehow tied to the 2nd proposal you make, but not by principe, just pragmatism in order to help our users to have a decent experience. That said, I'm not opposed to accepting new candidates if those are able to cope with all the necessary tasks. We had an incubation process early in OpenStack, that could be an idea for those new projects to get approved. > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? > > Not really. I don't see this as a threat for OpenStack and I think it's good for the Foundation to evolve. But it will come with challenges, the first being the integration process with the approval checklist. The only problem I see is that while granting a project is easy, calling the cut is very hard. The more we are clear on the requirements, the less we could be disappointed in the future. 
Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? > > I'll restate here what I already said in another thread : I just don't think the TC role is about to drive architectural designs. Getting the shXt done is the matter of projects and individuals that are able to get some time for this. What the TC is good at is to make those projects and contributors to communicate. We're far away from a BDFL model where a couple of people decide for all the community. If we really want to have things done, just make people discussing and act as a mediator. Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? > > Good question. I don't think that people claiming to be "Project X developer" misconsider other projects, so it's not really a qualitative aspect. It's more the fact that most of the day-to-day work is made within a single project, so most of the social traction is made there. Also, I don't think forcing developers to consider themselves "OpenStack developers" will change anything. We should rather ask ourselves "How can we make the larger community of developers to share the same vision and pace ?". -Sylvain Thanks. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html > > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Thu Feb 21 14:39:13 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 21 Feb 2019 14:39:13 +0000 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: Thanks Doug :) see responses inline below. On 20/02/2019 17:58, Doug Hellmann wrote: > One of the key responsibilities of the Technical Committee is still > evaluating projects and teams that want to become official OpenStack > projects. The Foundation Open Infrastructure Project approval process > has recently produced a different set of criteria for the Board to use > for approving projects [1] than the TC uses for approving teams [2]. To be open and honest, I have mostly been a bystander during this change. > What parts, if any, of the OIP approval criteria do you think should > apply to OpenStack teams? I want to start by saying I agree with Sylvain, he noted that this set of criteria has primarily been driven by the board, but it has also been driven by community members, Foundation staff, the TC, and the UC who have all been long-term members of the open source community and OpenStack. I would also like to note that your question does not specify whether or not you are talking about pre-existing OpenStack teams or new teams. Since we are talking about the future TC, I am going to assume "new" projects for my answer. But if I am wrong, Doug, could you please clarify and I will happily review my answer if there are changes to be made. 
The requirements for new projects [2] have continuously been iterated on over the years, and they ultimately reflect the beginning of the community and what we believed was an important set of criteria. The development process and the writing of the OIP approval criteria was seasoned with experts from the key use cases. We have had experience writing this type of thing before, and therefore it is a more formal set of Confirmation Guidelines.

That being said, to answer your question truthfully (IMHO): the guidelines for introducing a new OpenStack team are not as relevant as they once were. A quick search (and it is not covering all bases, just the majority) returned a list [3] indicating that in the last 18 months we have accepted more removals of projects and more integrations of sub-projects than additions of new projects to the OpenStack ecosystem. I am making my comparison to 2015 - 2017, when we saw a surge of new projects throwing their hats into the ring. It was very important at that point in time to govern those projects, ensuring they met OpenStack community guidelines.

At this point in the OpenStack product lifecycle, I see this as a better opportunity for the new phase (OIP) to learn from us, rather than the other way around. This is not to say that we cannot continue to iterate on what it means to be a successful OpenStack project and apply great changes. But I believe we should be looking forward. The new OSF guidelines for OIPs clearly have similar criteria to the requirements for new OpenStack projects. As already noted, those are:
    - 4 opens
    - Communication

TL;DR - What we have is good. We can learn from the experiences OIPs will encounter, but we must stop looking behind us to fix what isn't broken.

> What other changes, if any, would you propose to the official team
> approval process or criteria? Are we asking the right questions and
> setting the minimum requirements high enough? Are there any criteria
> that are too hard to meet?

I actually do not have anything to add to this. I think our official team approval process and criteria - for what they are worth - are good. We have just begun the work to fully integrate OIP into our ecosystem; let's focus on the future.

> [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html
> [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html

[3] https://review.openstack.org/#/q/project:+openstack/governance+file:+projects.yaml+branch:+master+(new-project)

From camilapaleo at gmail.com Thu Feb 21 14:40:50 2019
From: camilapaleo at gmail.com (Camila Moura)
Date: Thu, 21 Feb 2019 15:40:50 +0100
Subject: Outreachy
Message-ID: 

Hi Folks

I'm Camila, and I'm participating in Outreachy. I'm from Brazil, but I live in the Czech Republic. I've been studying Python, Django, Flask and a little bit of HTML and CSS. So, I'll start to contribute to the project. Please be patient, I'm learning :)

Thank you for your attention!

Camila
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gr at ham.ie Thu Feb 21 14:47:28 2019
From: gr at ham.ie (Graham Hayes)
Date: Thu, 21 Feb 2019 14:47:28 +0000
Subject: [tc] [election] Candidate question: growth of projects
In-Reply-To: 
References: 
Message-ID: 

On 21/02/2019 11:13, Chris Dent wrote:
> 
> This is another set of questions for TC candidates, to look at a
> different side of things from my first one [1] and somewhat related
> to the one Doug has asked [2].
> 
> As Doug mentions, a continuing role of the TC is to evaluate
> applicants to be official projects.
These questions are about that.
> 
> There are 63 teams in the official list of projects. How do you feel
> about this size? Too big, too small, just right? Why?

I don't think the number of projects is an issue - I think that the key point is "does this project help OpenStack fulfil its mission". That should be judged on several criteria:

1. Usefulness for OpenStack Users / Operators / Developers?
2. Is it "OpenStack-y" - does it follow our normal models?
3. Is the team engaged with the rest of the community?
4. Is the project maintained to a suitable standard, such that we think it is something that can be used?

> If you had to make a single declaration about growth in the number
> of projects would you prefer to see (and why, of course):
> * Something else.

I have no preference on the number of projects - as long as they meet the above list.

> How has the relatively recent emergence of the open infrastructure
> projects that are at the same "level" in the Foundation as OpenStack
> changed your thoughts on the above questions?

I think it is still too early for us to see what (if any) impact this will have on the OpenStack sub-projects. I do think that Open Infrastructure Projects (OIPs) will end up a lot smaller than OpenStack, and this may encourage teams to split out like Zuul has.

> Do you think the number of projects has any impact (positive or
> negative) on our overall ability to get things done?

Yes - the more projects we have, the harder it is to make large community-wide changes, like API references, quota standardisation, or healthchecking. But this cost has to be weighed against what a project brings to OpenStack as a whole.

> Recognizing that there are many types of contributors, not just
> developers, this question is about developers: Throughout history
> different members of the community have sometimes identified as an
> "OpenStack developer", sometimes as a project developer (e.g., "Nova
> developer"). Should we encourage contributors to think of themselves
> as primarily OpenStack developers? If so, how do we do that? If not,
> why not?

We should encourage people to think outside of their traditional "team", but we should also recognise that a lot of our contributors are paid to work on a specific segment. In some of those segments, keeping context for just that project is hard enough, and trying to get people to learn about the layout and structure of another project can be a large ask.

We do need more "OpenStack Developers" who help drive larger community efforts, but we also need to understand that not all people can step into that role (for many reasons).

> Thanks.
> 
> [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html
> [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: OpenPGP digital signature
URL: 

From gr at ham.ie Thu Feb 21 14:56:22 2019
From: gr at ham.ie (Graham Hayes)
Date: Thu, 21 Feb 2019 14:56:22 +0000
Subject: [tc][election] campaign question: team approval criteria
In-Reply-To: 
References: 
Message-ID: 

On 20/02/2019 17:58, Doug Hellmann wrote:
> 
> One of the key responsibilities of the Technical Committee is still
> evaluating projects and teams that want to become official OpenStack
> projects.
The Foundation Open Infrastructure Project approval process > has recently produced a different set of criteria for the Board to use > for approving projects [1] than the TC uses for approving teams [2]. > > What parts, if any, of the OIP approval criteria do you think should > apply to OpenStack teams? There was a line in the orignal draft of the OIP guidelines that I liked > Project does not significantly harm another existing confirmed > project. This has since been removed, but I would have liked to see this as: > Project does not harm another existing confirmed project. and adopting that to our rules. > What other changes, if any, would you propose to the official team > approval process or criteria? Are we asking the right questions and > setting the minimum requirements high enough? Are there any criteria > that are too hard to meet? I think the rules we have now are good, and now that we are becoming a more stable project, most of our applications are nearly formalities. For the more complex applications, I am not sure that adding any extra rules will make it any better, as the TC will still have to take to evaluate it. > How would you apply those rule changes to existing teams? > > [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html > [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From jaosorior at redhat.com Thu Feb 21 15:02:14 2019 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Thu, 21 Feb 2019 17:02:14 +0200 Subject: =?UTF-8?Q?=5btripleo=5d_nominating_Harald_Jens=c3=a5s_as_a_core_rev?= =?UTF-8?Q?iewer?= Message-ID: Hey folks! I would like to nominate Harald as a general TripleO core reviewer. He has consistently done quality reviews throughout our code base, helping us with great feedback and technical insight. While he has done a lot of work on the networking and baremetal sides of the deployment, he's also helped out on security, CI, and even on the tripleoclient side. Overall, I think he would be a great addition to the core team, and I trust his judgment on reviews. What do you think? Best regards From smooney at redhat.com Thu Feb 21 15:02:55 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Feb 2019 15:02:55 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: <31313601b6d888de63650436007f2d477d0ebec4.camel@redhat.com> Message-ID: On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote: > On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney wrote: > > On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote: > > > Hi Chris, > > > > > > Thanks for kicking this off. I've added my replies in-line. > > > > > > Thank you for your past term as well. > > > > > > Regards, > > > Mohammed > > > > > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent wrote: > > > > > > > > * If you had a magic wand and could inspire and make a single > > > > sweeping architectural or software change across the services, > > > > what would it be? For now, ignore legacy or upgrade concerns. > > > > What role should the TC have in inspiring and driving such > > > > changes? > > > > > > Oh. > > > > > > - Stop using RabbitMQ as an RPC, it's the worst most painful component > > > to run in an entire OpenStack deployment. It's always broken. Switch > > > into something that uses HTTP + service registration to find endpoints. 
> > As an onlooker I have mixed feelings about this statement.
> > RabbitMQ can have issues at scale, but it mostly works when it's not on fire.
> > Would you be advocating building an OpenStack-specific RPC layer, perhaps
> > using keystone as the service registry and a custom HTTP mechanism, or adopting
> > an existing technology like gRPC?
> > 
> > Investigating an alternative RPC backend has come up in the past (ZeroMQ and Qpid)
> > and I think it has merit, but I'm not sure that creating a new RPC framework as a
> > community is a project-wide effort that OpenStack needs to take on. That said, Zaqar is a thing
> > https://docs.openstack.org/zaqar/latest/ and if it is good enough for our end users to consume,
> > perhaps it would be up to the task of being OpenStack's RPC transport layer.
> > 
> > Anyway, my main question was: would you advocate adopting an existing technology,
> > or creating our own solution, if we were to work on this goal as a community?
> 
> I'll also chime in here since I agree with Mohammed.
> 
> We're certainly going to have to write software to make this happen. Maybe
> that's a new oslo.messaging driver, maybe it's a new equivalent of that layer.
> 
> But we've re-invented enough software already. This isn't a place where we
> should do it. We should build on or glue together existing tools to build
> something scalable and relatively simple to operate.
> 
> Are your mixed feelings about this statement a concern about re-inventing
> wheels, or about changing an underlying architectural thing at all?

With re-inventing wheels. A new oslo.messaging driver, I think, would be perfectly fine; that said, we don't want 50 of them, as it would be impossible to test and maintain them all. My concern would be that even if we developed the perfect RPC layer for OpenStack ourselves, it would be a lot of effort that could have been spent elsewhere. Developing glue logic to use an alternative is much less invasive and more achievable.

The other thing I would say, if I'm feeling pedantic, is that the fact that we have an RPC bus is an architectural question, as may in some cases be the performance or feature set the final system provides. But personally I don't see RabbitMQ vs gRPC as an architectural question at all. The message bus is just an IO device. We should be able to swap that IO device for another without it materially affecting our architecture; if it does, we are too tightly coupled to the implementation.

You can sub out RabbitMQ today with Qpid or ActiveMQ, and in the past you could use ZeroMQ too, but Rabbit more or less won out over that set of message queues. There is also limited support for Kafka. Kafka is pretty heavyweight, as is Istio from what I have heard, so I'm not sure they would be a good fit for a small cloud, but I believe they are meant to scale well.

gRPC and NATS are probably the two RabbitMQ alternatives I would personally consider, but I know some projects hack etcd to act as a pseudo RPC bus. This type of change is something I could see as a community goal eventually, but I would like to see it done with one project first before it gets to that point.

There is value in using 1 or 2 RPC buses instead of supporting many, and this is the type of change I hope would be guided by measurement and community feedback.

Thanks for following up.
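As a concrete illustration of the "IO device" point above: the bus is already selected purely through oslo.messaging's transport_url in each service's configuration, so swapping the backend is, in principle, a configuration change rather than a code change. A minimal sketch - the hosts and credentials below are made up:

    [DEFAULT]
    # RabbitMQ, the de facto default today
    transport_url = rabbit://openstack:secret@controller:5672/
    # the same deployment pointed at an AMQP 1.0 backend (e.g. qpid-dispatch-router)
    #transport_url = amqp://openstack:secret@controller:5672/

What the sketch hides is the hard part - driver maturity, RPC call/cast semantics and the operational tooling around the broker - which is why the "guided by measurement and community feedback" caveat above is the important bit.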
> > // jim From gr at ham.ie Thu Feb 21 15:04:29 2019 From: gr at ham.ie (Graham Hayes) Date: Thu, 21 Feb 2019 15:04:29 +0000 Subject: [tc][election] candidate question: strategic leadership In-Reply-To: References: Message-ID: <9e8a385a-cb3b-93e1-98e1-3d21a304eaf8@ham.ie> On 21/02/2019 12:52, Doug Hellmann wrote: > > With the changes at the Foundation level, adding new OIPs, a few board > members have suggested that this is an opportunity for the TC to evolve > from providing what some have seen as tactical management through > dealing with day-to-day issues to more long-term strategic leadership > for the project. This theme has also come up in the recent discussions > of the role of the TC, especially when considering how to make > community-wide technical decisions and how much influence the TC should > have over the direction individual projects take. > > What do you think OpenStack, as a whole, should be doing over the next > 1, 3, and 5 years? Why? I think that as a project we should be looking at ways to be good nieghbours to other Open Source Infrastructure projects (both inside and outside of the OpenStack Foundation). From the Foundation level I think we fall into the Data Centre Stratigic Focus Area, and this is a good place for us to focus our efforts - becoming the Linux of the datacentre is a good goal. This does not mean just compute either - replacing as much of the DC boxes as we can should be our goal. Things like load balancing, networking, DNS (of course), GPUs / FPGAs, are all as much a part of automating the datacentre as compute and storage. Doing it in a way that is portable, so that other projects know that if they target OpenStack as a base they get a known base is also core to this. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From dtantsur at redhat.com Thu Feb 21 15:16:54 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 21 Feb 2019 16:16:54 +0100 Subject: =?UTF-8?Q?Re=3a_=5btripleo=5d_nominating_Harald_Jens=c3=a5s_as_a_co?= =?UTF-8?Q?re_reviewer?= In-Reply-To: References: Message-ID: <331ba423-a9b7-2a2c-4387-fc6f717af23b@redhat.com> +1 to Harald, his networking knowledge is incredible. On 2/21/19 4:02 PM, Juan Antonio Osorio Robles wrote: > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? > > > Best regards > > > From liliueecg at gmail.com Thu Feb 21 15:21:10 2019 From: liliueecg at gmail.com (Li Liu) Date: Thu, 21 Feb 2019 10:21:10 -0500 Subject: [nova][numa]Question regarding numa affinity balancer/weigher per host Message-ID: HI Nova folks, I am trying to find out how Numa balance/weighs per host is taking care of in Nova. I know how weighers work in general, but it's weighing between hosts. I am not so clear when it comes to a single host with multiple sockets, how does nova weigh them? For instance, a host has 4 sockets, A, B, C, D. 
When scheduling request comes in asking for 4 cores on 2 sockets(2 cores per socket), scheduler realized that A+B, A+C, and C+D combination can all fit the request. In this case, how does nova make the decision on which combination to choose from? -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Feb 21 15:24:22 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 21 Feb 2019 16:24:22 +0100 Subject: =?UTF-8?Q?Re=3a_=5btripleo=5d_nominating_Harald_Jens=c3=a5s_as_a_co?= =?UTF-8?Q?re_reviewer?= In-Reply-To: <331ba423-a9b7-2a2c-4387-fc6f717af23b@redhat.com> References: <331ba423-a9b7-2a2c-4387-fc6f717af23b@redhat.com> Message-ID: On 21.02.2019 16:16, Dmitry Tantsur wrote: > +1 to Harald, his networking knowledge is incredible. +1 > > On 2/21/19 4:02 PM, Juan Antonio Osorio Robles wrote: >> Hey folks! >> >> >> I would like to nominate Harald as a general TripleO core reviewer. >> >> He has consistently done quality reviews throughout our code base, >> helping us with great feedback and technical insight. >> >> While he has done a lot of work on the networking and baremetal sides of >> the deployment, he's also helped out on security, CI, and even on the >> tripleoclient side. >> >> Overall, I think he would be a great addition to the core team, and I >> trust his judgment on reviews. >> >> >> What do you think? >> >> >> Best regards >> >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jim at jimrollenhagen.com Thu Feb 21 15:25:39 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 21 Feb 2019 10:25:39 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: References: <31313601b6d888de63650436007f2d477d0ebec4.camel@redhat.com> Message-ID: On Thu, Feb 21, 2019 at 10:02 AM Sean Mooney wrote: > On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote: > > On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney wrote: > > > On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote: > > > > Hi Chris, > > > > > > > > Thanks for kicking this off. I've added my replies in-line. > > > > > > > > Thank you for your past term as well. > > > > > > > > Regards, > > > > Mohammed > > > > > > > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent > wrote: > > > > > > > > > > * If you had a magic wand and could inspire and make a single > > > > > sweeping architectural or software change across the services, > > > > > what would it be? For now, ignore legacy or upgrade concerns. > > > > > What role should the TC have in inspiring and driving such > > > > > changes? > > > > > > > > Oh. > > > > > > > > - Stop using RabbitMQ as an RPC, it's the worst most painful > component > > > > to run in an entire OpenStack deployment. It's always broken. > Switch > > > > into something that uses HTTP + service registration to find > endpoints. > > > as an on looker i have mixed feeling about this statement. > > > RabbitMQ can have issue at scale but it morstly works when its not > on fire. > > > Would you be advocating building a openstack specific RPC layer > perhaps > > > using keystone as the service registry and a custom http mechanism > or adopting > > > an existing technology like grpc? > > > > > > investigating an alternative RPC backend has come up in the past > (zeromq and qupid) > > > and i think it has merit but im not sure as a comunity creating a > new RPC framework is > > > a project wide effort that openstack need to solve. 
that said > zaqar is a thing > > > https://docs.openstack.org/zaqar/latest/ if it is good enough for > our endusers to consume > > > perhaps it would be up to the task of being openstack rpc > transport layer. > > > > > > anyway my main question was would you advocate adoption of an > exisiting technology > > > or creating our own solution if we were to work on this goal as a > community. > > > > > > > I'll also chime in here since I agree with Mohammed. > > > > We're certainly going to have to write software to make this happen. > Maybe > > that's a new oslo.messaging driver, maybe it's a new equivalent of that > layer. > > > > But we've re-invented enough software already. This isn't a place where > we > > should do it. We should use build on or glue together existing tools to > build > > something scalable and relatively simple to operate. > > > > Are your mixed feelings about this statement a concern about re-inventing > > wheels, or about changing an underlying architectural thing at all? > with re-inventing wheels. > a new oslo.messaging driver i think would be perfectly fine. that said we > dont want 50 of them as it will be impossible > to test and maintain them all. > my concern would be that even if we deveploped the perfect rpc layer for > openstack > ourselves it would be a lot of effort that could have been spent elsewhere. > developing glue logic to use an alternitiv is much less invasive and > achievable. > > the other thing i woudl say is if im felling pedantinc the fact that we > have an rpc > bus is an architecutal question as may in some cases be the perfromce or > feature set > the final system provide. but personally i dont see rabbitmq vs grps as an > architectual > question at all. the message bus is just an io device. we should be able > to change > that io device for another without it martially effecting our > architecutre. if it does > we are too tightly coupled to the implementation. > +1. There's some question about our RPC casts that need to be addressed, but agree in general. > you can sub out rabbitmq today with qpid or activmq and in the past you > could use > zeromq too but rabbit more or less one out over that set of message queue. > there is also limited support for kafka. kafka is pretty heavy weight as > is itsio from > what i have heard so im not sure they would be a good replacement for > small cloud but i > belive they are ment to scale well. > > grpc and nats are proably the two rabitmq alternative is would personally > consider but i know > some projects hack etcd to act as a psudo rpc bus. > this type of change is something i could see as a comunity goal eventually > but would like to > see done with one project first before it got to that point. > Agree with that. I don't want to get into implementation details here. :) > > there is value in useing 1 or 2 rpc buses istead of support many and this > is the type of change > i hope would be guided by measurement and community feedback. > 100%, the first steps to doing anything like this is to measure performance today, identify other options, measure performance on those. This is a high-effort change, and it'd be crazy to do it without data. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Thu Feb 21 15:26:34 2019 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 21 Feb 2019 12:26:34 -0300 Subject: Outreachy In-Reply-To: References: Message-ID: Welcome, Camila! Nice to hear from you here! Let me know if you have any questions! 
Sofi On Thu, Feb 21, 2019 at 11:44 AM Camila Moura wrote: > Hi Folks > > I'm Camila, I participating in Outreachy. I'm from Brazil, but I live em > Czech Republic. I've been studying Python, Django, Flask and a little bit > HTML and CSS. > So, I'll to start to contribute to the project. > Please, be patient I'm learning :) > > Thank you for your attention! > Camila > -- Sofia Enriquez Associate Software Engineer Red Hat PnT Ingeniero Butty 240, Piso 14 (C1001AFB) Buenos Aires - Argentina +541143297471 (8426471) senrique at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.bauza at gmail.com Thu Feb 21 15:36:04 2019 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 16:36:04 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: References: <31313601b6d888de63650436007f2d477d0ebec4.camel@redhat.com> Message-ID: Le jeu. 21 févr. 2019 à 16:29, Jim Rollenhagen a écrit : > On Thu, Feb 21, 2019 at 10:02 AM Sean Mooney wrote: > >> On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote: >> > On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney wrote: >> > > On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote: >> > > > Hi Chris, >> > > > >> > > > Thanks for kicking this off. I've added my replies in-line. >> > > > >> > > > Thank you for your past term as well. >> > > > >> > > > Regards, >> > > > Mohammed >> > > > >> > > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent >> wrote: >> > > > > >> > > > > * If you had a magic wand and could inspire and make a single >> > > > > sweeping architectural or software change across the services, >> > > > > what would it be? For now, ignore legacy or upgrade concerns. >> > > > > What role should the TC have in inspiring and driving such >> > > > > changes? >> > > > >> > > > Oh. >> > > > >> > > > - Stop using RabbitMQ as an RPC, it's the worst most painful >> component >> > > > to run in an entire OpenStack deployment. It's always broken. >> Switch >> > > > into something that uses HTTP + service registration to find >> endpoints. >> > > as an on looker i have mixed feeling about this statement. >> > > RabbitMQ can have issue at scale but it morstly works when its >> not on fire. >> > > Would you be advocating building a openstack specific RPC layer >> perhaps >> > > using keystone as the service registry and a custom http >> mechanism or adopting >> > > an existing technology like grpc? >> > > >> > > investigating an alternative RPC backend has come up in the past >> (zeromq and qupid) >> > > and i think it has merit but im not sure as a comunity creating a >> new RPC framework is >> > > a project wide effort that openstack need to solve. that said >> zaqar is a thing >> > > https://docs.openstack.org/zaqar/latest/ if it is good enough >> for our endusers to consume >> > > perhaps it would be up to the task of being openstack rpc >> transport layer. >> > > >> > > anyway my main question was would you advocate adoption of an >> exisiting technology >> > > or creating our own solution if we were to work on this goal as a >> community. >> > > >> > >> > I'll also chime in here since I agree with Mohammed. >> > >> > We're certainly going to have to write software to make this happen. >> Maybe >> > that's a new oslo.messaging driver, maybe it's a new equivalent of that >> layer. >> > >> > But we've re-invented enough software already. This isn't a place where >> we >> > should do it. 
We should use build on or glue together existing tools to >> build >> > something scalable and relatively simple to operate. >> > >> > Are your mixed feelings about this statement a concern about >> re-inventing >> > wheels, or about changing an underlying architectural thing at all? >> with re-inventing wheels. >> a new oslo.messaging driver i think would be perfectly fine. that said we >> dont want 50 of them as it will be impossible >> to test and maintain them all. >> my concern would be that even if we deveploped the perfect rpc layer for >> openstack >> ourselves it would be a lot of effort that could have been spent >> elsewhere. >> developing glue logic to use an alternitiv is much less invasive and >> achievable. >> >> the other thing i woudl say is if im felling pedantinc the fact that we >> have an rpc >> bus is an architecutal question as may in some cases be the perfromce or >> feature set >> the final system provide. but personally i dont see rabbitmq vs grps as >> an architectual >> question at all. the message bus is just an io device. we should be able >> to change >> that io device for another without it martially effecting our >> architecutre. if it does >> we are too tightly coupled to the implementation. >> > > +1. There's some question about our RPC casts that need to be addressed, > but agree in general. > > >> you can sub out rabbitmq today with qpid or activmq and in the past you >> could use >> zeromq too but rabbit more or less one out over that set of message >> queue. >> there is also limited support for kafka. kafka is pretty heavy weight as >> is itsio from >> what i have heard so im not sure they would be a good replacement for >> small cloud but i >> belive they are ment to scale well. >> >> grpc and nats are proably the two rabitmq alternative is would personally >> consider but i know >> some projects hack etcd to act as a psudo rpc bus. >> this type of change is something i could see as a comunity goal >> eventually but would like to >> see done with one project first before it got to that point. >> > > Agree with that. I don't want to get into implementation details here. :) > > >> >> there is value in useing 1 or 2 rpc buses istead of support many and this >> is the type of change >> i hope would be guided by measurement and community feedback. >> > > 100%, the first steps to doing anything like this is to measure performance > today, identify other options, measure performance on those. This is a > high-effort change, and it'd be crazy to do it without data. > > Yup, all this. If we want things to happen, we need first to identify the pain points and have people acting on those. All of that can happen with or without the TC scope, to answer the original concern. I'm just glad Chris pointed out this question because now we can start brainstorming about this at the PTG. > // jim > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Feb 21 15:41:24 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Feb 2019 15:41:24 +0000 Subject: [nova][numa]Question regarding numa affinity balancer/weigher per host In-Reply-To: References: Message-ID: <46d181e28eb66d3250ec3f473e5c672a3b976378.camel@redhat.com> On Thu, 2019-02-21 at 10:21 -0500, Li Liu wrote: > HI Nova folks, > > I am trying to find out how Numa balance/weighs per host is taking care of in Nova. > > I know how weighers work in general, but it's weighing between hosts. 
I am not so clear when it comes to a single host > with multiple sockets, how does nova weigh them?

The short answer is it doesn't. We tend to pack NUMA node 1 before we move on to NUMA node 2; the actual assignment of NUMA resources to the VM is done by the resource tracker on the compute node. Stephen (on CC) has done some work around avoiding hosts with PCI devices when they are not requested, and I'm not sure if he extended that to NUMA nodes as well.

The specific code in the libvirt driver and hardware.py is rather complicated and hard to extend, so while this has come up in the past, we have not really put a lot of effort into this topic.

It is not clear that balancing VM placement is always correct, and this is not something an end user should be able to influence, since that would mean adding a flavor extra spec. It also makes modelling NUMA in placement more complicated, which is another reason we have not really spent much time on this lately. One example where you don't want to balance blindly is when you have SR-IOV devices, GPUs or FPGAs on the host. In that case, assuming you have not already separated the hosts into a dedicated host aggregate for hosts with special devices, you would want to avoid balancing so that instances that don't request an FPGA are first placed on NUMA nodes without an FPGA before they are assigned to a NUMA node with an FPGA.

> > For instance, a host has 4 sockets, A, B, C, D. When scheduling request comes in asking for 4 cores on 2 sockets(2 > cores per socket), scheduler realized that A+B, A+C, and C+D combination can all fit the request. In this case, how > does nova make the decision on which combination to choose from?

If I remember the code correctly, it will take resources from the first two NUMA nodes that can fit the VM. > From openstack at nemebean.com Thu Feb 21 15:43:23 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 21 Feb 2019 09:43:23 -0600 Subject: =?UTF-8?Q?Re=3a_=5btripleo=5d_nominating_Harald_Jens=c3=a5s_as_a_co?= =?UTF-8?Q?re_reviewer?= In-Reply-To: References: Message-ID: +1. I made him an OVB core for a reason. :-) On 2/21/19 9:02 AM, Juan Antonio Osorio Robles wrote: > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? > > > Best regards > > > From a.settle at outlook.com Thu Feb 21 15:44:44 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 21 Feb 2019 15:44:44 +0000 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: Well hello again! As always, inline below: On 21/02/2019 11:13, Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why?
I think it's safe for me to say that isn't a single person in our entire community that could honestly say they know what all those 63 projects are and how they function. We are all specialists in our own right and that's how our community works together. I do not believe it is my place to determine via a standalone number if our project list is too big, too small, or just right. I could very easily say that we only require "the main 5 projects" for OpenStack to work, but part of the beauty of OpenStack as an open source product is we allow freedom of development (within technical guidelines, as discussed) and that is one of the things that draws developments and their projects to integrate and grow with OpenStack. That also being said, there has been duplication of efforts in certain areas. Projects that are eerily similar, yet not working together. I think these are areas that we could potentially be reviewing, in the sense of encouraging teams to collaborate more. > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. > * Trim the number of projects to "get back to our roots". > * Something else. My answer is: Something else (ha, what a surprise). I think all those options are applicable. If there is room and movement for growth, we should be encouraging of that. If there is a slow down, we should not be pushing the community to grow when it clearly is stablising. I believe we should always be looking at ways to trim - if you do not cut back, there is no room for improvement. If a stable, yet unattended project is left to expire on its own does not open up for new change and new ownership. We've seen this happen before. > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? The OIP has mostly changed the way I think about your questions in the sense that it isn't just "us" anymore. And we need to be looking more towards future development. We needn't have such a focus on OpenStack projects alone but where the revised community is going and how we're going to get there. As I said in my email to Doug: Don't fix what's broken, let's move forward and focus on that. > > Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? I'd be lying if I didn't say: Sometimes. I think ensuring you are considering the needs and wants of 63 projects is an enormous task. And often that means what could be a cut and dry is not because you need to ensure you're considering every angle. But, this isn't necessarily a bad thing. > > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? To reiterate my first answer to your question: There is not one person who understands the intimidate details of every OpenStack project. While I encourage anyone to identify as an OpenStack developer, I can see why someone would prefer to refer to themselves as a Nova developer on the OpenStack product. 
Being an OpenStack developer can often imply that you know everything about the entire ecosystem - which I believe many to have very in-depth knowledge on several projects, but not the entire product. That being said, there are those that are core contributors on several projects and identifying as an OpenStack developer is an easier course than saying they are core developers on Keystone, Glance, Cinder and the Potato project (so be it). While you address this question to developers and recognise that there are many different types of contributors, I think documentation sits in a weird loop hole here. We are often considered developers because we follow developmental workflows, and integrate with the projects directly. Some of us are more technical than others and contribute to both the code base and to the physical documentation. Risking a straw man here: How would you define the technical writers that work for OpenStack? We too are often considered "OpenStack" writers and experts, yet as I say, we are not experts on every project. > Thanks. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html Looking forward to your response to my question :) From sbauza at redhat.com Thu Feb 21 15:52:04 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 16:52:04 +0100 Subject: [tc][election] candidate question: strategic leadership In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 1:56 PM Doug Hellmann wrote: > > With the changes at the Foundation level, adding new OIPs, a few board > members have suggested that this is an opportunity for the TC to evolve > from providing what some have seen as tactical management through > dealing with day-to-day issues to more long-term strategic leadership > for the project. This theme has also come up in the recent discussions > of the role of the TC, especially when considering how to make > community-wide technical decisions and how much influence the TC should > have over the direction individual projects take. > > What do you think OpenStack, as a whole, should be doing over the next > 1, 3, and 5 years? Why? > > Good question. I know this attempt of providing a clear direction over the future from the TC has been thought since [1]. Now we also have [2] in place that requires projects feedback. FWIW, (and I think I said that in another thread and in my candicacy email), OpenStack should put short-term efforts on having all the service projects supporting upgrades and scalability (think for example of Mohammed's wishes about having a magic wand for fixing all the RabbitMQ issues he has). A more difficult brainstorming would be to think of what would become OpenStack in 2 years. Given how the IT world is fastly evolving, I think there is a will for high throughput, highly resilient infrastructure as a service. That would be one of the items I'd like to see engaged in order to make OpenStack a silver bullet for high performance computing and network. Now, for 5 years, I'd like to look at the mirror and compare with what was the IT ecosystem 5 years ago. By that time, a new startup company was just showing some progress in showcasing how they were using containers. I guess in 5 years, applications being cloud-aware will be the norm. Accordingly, OpenStack has to evolve to match with this and provide the infrastructure than can hyperscale from 1 to the infinity. 
-Sylvain [1] https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html [2] https://governance.openstack.org/tc/reference/technical-vision.html > -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Thu Feb 21 16:09:08 2019 From: liliueecg at gmail.com (Li Liu) Date: Thu, 21 Feb 2019 11:09:08 -0500 Subject: [nova][numa]Question regarding numa affinity balancer/weigher per host In-Reply-To: <46d181e28eb66d3250ec3f473e5c672a3b976378.camel@redhat.com> References: <46d181e28eb66d3250ec3f473e5c672a3b976378.camel@redhat.com> Message-ID: Thanks a lot, Sean for the clarification :P Regards Li Liu On Thu, Feb 21, 2019 at 10:41 AM Sean Mooney wrote: > On Thu, 2019-02-21 at 10:21 -0500, Li Liu wrote: > > HI Nova folks, > > > > I am trying to find out how Numa balance/weighs per host is taking care > of in Nova. > > > > I know how weighers work in general, but it's weighing between hosts. I > am not so clear when it comes to a single host > > with multiple sockets, how does nova weigh them? > the short answer is it doesn't. > > we then to pack numa node 1 before we move on to numa node 2 > the actual asignment of numa resouce to the vm is done by the resouce > tracker on the compute node. > stephen on cc has done some work around avoidign host with pci device when > they are not requested > and im not sur if he extened that to numa nodes as well. > > the specifc code in the libvirt dirver and hardware.py file is rather > complicated and hard to extend > so while this has come up in the past we have not really put alot of > errort into this topic. > > it is not clear that balancing the vm placement is always corect and this > is not something a enduser should > be able to influnce. that means adding a flavor extraspec. this makes > modeling numa in placement more complicated > which is another reason we have not really spent much time on this lately. > one example where you dont want to blance blindly is when you have sriov > devices, gpus or fpgas on the host. > in this case assuming you have not already seperated the host into a > dedicated hostaggrate for host with special deivces > you would want to avoid blancing so that instance that dont request an > fpga are first tried to be placed on numa nodes > without an fpga before they are assingined to a numa node with an fpga. > > > > For instance, a host has 4 sockets, A, B, C, D. When scheduling request > comes in asking for 4 cores on 2 sockets(2 > > cores per socket), scheduler realized that A+B, A+C, and C+D combination > can all fit the request. In this case, how > > does nova make the decision on which combination to choose from? > if i remember the code correctly it will take resoces for the first two > numa nodes that can fit the vm. > > > > -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From isanjayk5 at gmail.com Thu Feb 21 16:10:27 2019 From: isanjayk5 at gmail.com (Sanjay K) Date: Thu, 21 Feb 2019 21:40:27 +0530 Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? Message-ID: Hi All, Is there any VMware resource pools and shares kind of features available in nova compute scheduler which can do run time scheduling based on available resources on compute host for a launched instance with KVM/Qemu hypervisors based? 
Basically I want to prioritize some vm instances over others so that higher priority VM instances get better resource allocated at runtime and chance to run than lower priority VMs? Any future proposal on this kind of feature in scope? Please let me know if there is any work around or have to manage with the current default scheduling functions inside nova compute? thanks and regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Thu Feb 21 16:11:25 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 21 Feb 2019 08:11:25 -0800 Subject: [uwsgi] [glance] Support for wsgi-manage-chunked-input in uwsgi: glance-api finally working over SSL as expected In-Reply-To: References: Message-ID: Fanastic news! Thank you for sharing with the community. I'm happy to see more projects able to use uwsgi. Cheers, --Morgan On Thu, Feb 21, 2019, 05:02 Thomas Goirand Hi, > > It was quite famous that we had no way to run Glance under Python 3 with > SSL, because of eventlet, and the fact that Glance needed chunked-input, > which made uwsgi not a good candidate. Well, this was truth until 12 > days ago, when uwsgi 2.0.18 was released, adding the > --wsgi-manage-chunked-input. I've just installed Glance this way, and > it's finally working as expected. I'll be releasing Glance in Debian > Buster this way. > > I believe it's now time to make this config the default in the Gate, > which is why I'm writing this message. > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Feb 21 16:21:43 2019 From: marios at redhat.com (Marios Andreou) Date: Thu, 21 Feb 2019 18:21:43 +0200 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 5:03 PM Juan Antonio Osorio Robles < jaosorior at redhat.com> wrote: > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > +1! and big ++ to all the above, I was blown away by Harald skills with real world deployments & during customer escalations we were involved in, before he even joined the tripleo core engineering side we are very lucky to have him as core > > What do you think? > > > Best regards > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 21 16:22:03 2019 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 21 Feb 2019 11:22:03 -0500 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: +1 of course && thanks for your hard work! On Thu, Feb 21, 2019 at 10:47 AM Ben Nemec wrote: > +1. I made him an OVB core for a reason. :-) > > On 2/21/19 9:02 AM, Juan Antonio Osorio Robles wrote: > > Hey folks! > > > > > > I would like to nominate Harald as a general TripleO core reviewer. 
> > > > He has consistently done quality reviews throughout our code base, > > helping us with great feedback and technical insight. > > > > While he has done a lot of work on the networking and baremetal sides of > > the deployment, he's also helped out on security, CI, and even on the > > tripleoclient side. > > > > Overall, I think he would be a great addition to the core team, and I > > trust his judgment on reviews. > > > > > > What do you think? > > > > > > Best regards > > > > > > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From km.giuseppesannino at gmail.com Thu Feb 21 16:25:38 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Thu, 21 Feb 2019 17:25:38 +0100 Subject: [kolla][magnum] Cluster creation failed due to "Waiting for Kubernetes API..." In-Reply-To: References: <1f5506ea-add1-749d-b6c3-1040776b0ff4@catalyst.net.nz> <54760998-DCF6-4E01-85C8-BB3F5879A14C@stackhpc.com> Message-ID: Sounds great!! Thanks again ! /Giuseppe On Thu, 21 Feb 2019 at 12:36, Mark Goddard wrote: > > > On Thu, 21 Feb 2019 at 10:03, Giuseppe Sannino < > km.giuseppesannino at gmail.com> wrote: > >> Ciao Mark, >> finally it works! Many many thanks! >> That was the missing piece of the puzzle. >> >> Just FYI information, from the systemctl status for the >> heat-container-agent I can still see this repetitive logs: >> : >> Feb 21 08:00:40 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal >> runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping >> Feb 21 08:01:11 kube-cluster-goddard-lq54faeabuhu-master-0.novalocal >> runc[2715]: /var/lib/os-collect-config/local-data not found. Skipping >> : >> >> This doesn't seem to harm the deployment but I will check further. >> >> Thanks a lot to everyone! >> >> /Giuseppe >> > Glad to hear it worked for you. I've raised a bug [1] and proposed a fix > [2] in kolla ansible. > Mark > > [1] https://bugs.launchpad.net/kolla-ansible/+bug/1817051 > [2] https://review.openstack.org/638400 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beagles at redhat.com Thu Feb 21 16:28:28 2019 From: beagles at redhat.com (Brent Eagles) Date: Thu, 21 Feb 2019 12:58:28 -0330 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: +1 On Thu, Feb 21, 2019 at 11:39 AM Juan Antonio Osorio Robles < jaosorior at redhat.com> wrote: > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? > > > Best regards > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Feb 21 16:36:23 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 21 Feb 2019 16:36:23 +0000 (GMT) Subject: [placement] [translation] translating exceptions Message-ID: In placement we've been experimenting with removing oslo_versionedobjects. There's neither RPC nor rolling upgrades in placement so it is effectively overkill and the experiment has revealed that removing it also improves performance. 
The removal of OVO allows us to remove a lot of transitive dependencies. See https://review.openstack.org/#/c/636807/ where the lower-constraints.txt file is updated to reflect the changes. After those changes the biggest single package remaining in the test virtualenvs is Babel (included via oslo.i18n), weighing in at a mighty 26M. Placement is currently set to enable translation of exception messages, especially those that will end up in API responses. It has no other UI to speak of, so we've begun to wonder about getting rid of translation entirely. I asked about this in the TC office hours[1], which generated some useful but not entirely conclusive discussion. One way to summarize that discussion is that it might be okay to not translate the error messages, because it is unlikely they will be a priority for translators who are more oriented towards docs and UI, and that since no translation has yet happened, if there were ever going to be a time to turn off translation, now would be the time. It is additionally okay because placement is striving to follow the errors guideline [2] and use error.code in error responses, which exist at least in part to make googling for "what do I do when I get this error" a bit easier. That desire to google is also a good reason to keep the error responses in one language. However: We shouldn't do this if people need or want the error responses to be translated. Thus this message. Do people have opinions on this? Is it okay to not translate error responses in an API-only service like placement? Thanks. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-21.log.html#t2019-02-21T15:33:45 [2] https://specs.openstack.org/openstack/api-wg/guidelines/errors.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From camilapaleo at gmail.com Thu Feb 21 16:47:55 2019 From: camilapaleo at gmail.com (Camila Moura) Date: Thu, 21 Feb 2019 17:47:55 +0100 Subject: Outreachy In-Reply-To: References: Message-ID: Sofia, thank you! I'm reading the documentation, familiarizing myself with terms and configuring my work environment, so, I'll soon be full of questions. Best regards Camila Em qui, 21 de fev de 2019 às 16:26, Sofia Enriquez escreveu: > Welcome, Camila! Nice to hear from you here! > > Let me know if you have any questions! > > Sofi > > On Thu, Feb 21, 2019 at 11:44 AM Camila Moura > wrote: > >> Hi Folks >> >> I'm Camila, I participating in Outreachy. I'm from Brazil, but I live em >> Czech Republic. I've been studying Python, Django, Flask and a little bit >> HTML and CSS. >> So, I'll to start to contribute to the project. >> Please, be patient I'm learning :) >> >> Thank you for your attention! >> Camila >> > > > -- > > Sofia Enriquez > > Associate Software Engineer > Red Hat PnT > > Ingeniero Butty 240, Piso 14 > > (C1001AFB) Buenos Aires - Argentina > +541143297471 (8426471) > > senrique at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Feb 21 16:48:52 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 21 Feb 2019 10:48:52 -0600 Subject: [release] Release countdown for week R-6, February 25 - March 1 Message-ID: <20190221164852.GA15567@sm-workstation> Some important deadlines in this week's edition as we get closer to the end of the cycle. PTLs and release liaisons, please read through and make sure you are tracking things as we start to get close to the end of Stein. 
Development Focus ----------------- Non-client library work should be wrapping up for this weeks freeze. Any changes that will require client library changes should get some focus to make sure anything needed there is merged by the client library freeze on March 7. Feature freeze is also coming up with the Stein-3 milestone on March 7. General Information ------------------- We are now getting close to the end of the cycle. The non-client library (typically any lib other than the "python-$PROJECTclient" deliverables) deadline is February 28, followed quickly the next Thursday with the final client library release. Releases for critical fixes will be allowed after this point, but we will be much more restrictive about what is allowed if there are more lib release requests after this point. Please keep this in mind. When requesting these library releases, you should also include the stable branching request with the review (as an example, see the "branches" section here: http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) As we are getting to the point of creating stable/stein branches, this would be a good point for teams to review membership in their $project-stable-maint groups. Once the stable/stein branches are cut for a repo, the ability to approve any necessary backports into those branches for stein will be limited to the members of that stable team. If there are any questions about stable policy or stable team membership, please each out in the #openstack-stable channel. The following cycle-with-intermediary release model deliverables have not had a release for Stein yet: ansible-role-container-registry ansible-role-openstack-operations ansible-role-tripleo-modify-image automaton blazar-nova ceilometermiddleware debtcollector heat-agents instack-undercloud instack kuryr-libnetwork mistral-lib monasca-statsd monasca-thresh mox3 murano-agent networking-baremetal networking-hyperv os-client-config patrole sahara-extra tripleo-ansible tripleo-ipsec If no release is requested by the teams in the week after the deadline, the release team will need to force a release from HEAD in order to have a good point to create stable branches. Upcoming Deadlines & Dates -------------------------- Non-client library freeze: February 28 Stein-3 milestone: March 7 RC1 deadline: March 21 -- Sean McGinnis (smcginnis) From mriedemos at gmail.com Thu Feb 21 16:54:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 21 Feb 2019 10:54:41 -0600 Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? In-Reply-To: References: Message-ID: On 2/21/2019 10:10 AM, Sanjay K wrote: > Basically I want to prioritize some vm instances over others so that > higher priority VM instances get better resource allocated at runtime > and chance to run than lower priority VMs? How do you define priority? It sounds like what you're looking for is a weigher [1] and if the ones available in-tree (upstream) are not sufficient you can plugin your own [2]. 
[1] https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#weights [2] https://docs.openstack.org/nova/latest/user/filter-scheduler.html#weights -- Thanks, Matt From doug at doughellmann.com Thu Feb 21 17:07:18 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 21 Feb 2019 12:07:18 -0500 Subject: [placement] [translation][i18n] translating exceptions In-Reply-To: References: Message-ID: (Added i18n tag) Chris Dent writes: > In placement we've been experimenting with removing > oslo_versionedobjects. There's neither RPC nor rolling upgrades in > placement so it is effectively overkill and the experiment has > revealed that removing it also improves performance. > > The removal of OVO allows us to remove a lot of transitive > dependencies. See https://review.openstack.org/#/c/636807/ where > the lower-constraints.txt file is updated to reflect the changes. > > After those changes the biggest single package remaining in the > test virtualenvs is Babel (included via oslo.i18n), weighing in > at a mighty 26M. > > Placement is currently set to enable translation of exception > messages, especially those that will end up in API responses. > It has no other UI to speak of, so we've begun to wonder about > getting rid of translation entirely. > > I asked about this in the TC office hours[1], which generated some > useful but not entirely conclusive discussion. > > One way to summarize that discussion is that it might be okay to not > translate the error messages, because it is unlikely they will be a > priority for translators who are more oriented towards docs and UI, > and that since no translation has yet happened, if there were ever > going to be a time to turn off translation, now would be the time. > > It is additionally okay because placement is striving to follow the > errors guideline [2] and use error.code in error responses, which > exist at least in part to make googling for "what do I do when I get > this error" a bit easier. > > That desire to google is also a good reason to keep the error > responses in one language. > > However: We shouldn't do this if people need or want the error > responses to be translated. > > Thus this message. > > Do people have opinions on this? Is it okay to not translate error > responses in an API-only service like placement? Does an end-user interact with placement directly, or are all of the errors going to be seen and handled by other services that will report their own errors to end users? > > Thanks. > > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-02-21.log.html#t2019-02-21T15:33:45 > [2] https://specs.openstack.org/openstack/api-wg/guidelines/errors.html > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -- Doug From ltoscano at redhat.com Thu Feb 21 17:08:15 2019 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 21 Feb 2019 18:08:15 +0100 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ Message-ID: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> Hi all, During the last PTG it was decided to move forward with the migration of the api-ref documentation together with the rest of the documentation [1]. This is one of the item still open after the (not so recent anymore) massive documentation restructuring [2]. (most likely anything below applies to releasenotes/ as well.) I asked about this item few weeks ago on the documentation channels. 
So far no one seems against moving forward with this, but, you know, resources :) I think that the process itself shouldn't be too complicated on the technical side, but more on the definition of the desired outcome. I can help with the technical part (the moving), but it would be better if someone from the doc team with the required knowledge and background on the doc process would start with at least a draft of a spec, which can be used to start the discussion. If implemented, this change would also fix an inconsistency in the guidelines [2]: the content of reference/ seems to overlap with the content of api- ref("Library projects should place their automatically generated class documentation here."), but then having api-ref there would allow us to always use api-ref. That's where the entire discussion started in the QA session [3]: some client libraries document their API in different places. [1] https://etherpad.openstack.org/p/docs-i18n-ptg-stein line 144 [2] https://docs.openstack.org/doc-contrib-guide/project-guides.html [3] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation Ciao -- Luigi From gr at ham.ie Thu Feb 21 17:10:03 2019 From: gr at ham.ie (Graham Hayes) Date: Thu, 21 Feb 2019 17:10:03 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On 20/02/2019 14:46, Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you >   consider this a problem? Why or why not? I think we are reaching a more stable space, and the people who are developing the software are comfortable in the roles they are in. As the demographic of our developers shifts east, our leadership is still very US / EU based, which may be why we are not getting the same amount of people growing into TC candidates. > * Compare and contrast the role of the TC now to 4 years ago. If you >   weren't around 4 years ago, comment on the changes you've seen >   over the time you have been around. In either case: What do you >   think the TC role should be now? 4 years ago, was just before the big tent I think? Ironically, there was a lot of the same discussion - python3, new project requirements (at that point the incubation requirements), asyncio / eventlet. The TC was also in the process of dealing with a By-Laws change, in this case getting the trademark program off the ground. We were still struggling with the "what is OpenStack?" question. Looking back on the mailing list archives is actually quite interesting and while the topics are the same, a lot of the answers have changed. 
> * What, to you, is the single most important thing the OpenStack >   community needs to do to ensure that packagers, deployers, and >   hobbyist users of OpenStack are willing to consistently upstream >   their fixes and have a positive experience when they do? What is >   the TC's role in helping make that "important thing" happen? I think things like the review culture change have been good for this. The only other thing we can do is have more people reviewing, to make that first contact nice and quick, but E_NO_TIME or E_NO_HUMANS becomes the issue. > * If you had a magic wand and could inspire and make a single >   sweeping architectural or software change across the services, >   what would it be? For now, ignore legacy or upgrade concerns. >   What role should the TC have in inspiring and driving such >   changes? 1: Single agent on each compute node that allows for plugins to do all the work required. (Nova / Neutron / Vitrage / watcher / etc) 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute using something like HTTP(S) would make a lot of sense. 3: Unified Error codes, with a central registry, but at the very least each time we raise an error, and it gets returned a user can see where in the code base it failed. e.g. a header that has OS-ERROR-COMPUTE-3142, which means that someone can google for something more informative than the VM failed scheduling 4: OpenTracing support in all projects. 5: Possibly something with pub / sub where each project can listen for events and not create something like designate did using notifications. > * What can the TC do to make sure that the community (in its many >   dimensions) is informed of and engaged in the discussions and >   decisions of the TC? This is a difficult question, especially in a community where a lot of contributors are sponsored. The most effective way would be for the TC to start directly telling projects what to do - but I feel like that would mean that everyone would be unhappy with us. > * How do you counter people who assert the TC is not relevant? >   (Presumably you think it is, otherwise you would not have run. If >   you don't, why did you run?) Highlight the work done by the TC communicating with the board, guiding teams on what our vision is, and helping to pick goals. I think the goals are a great way, and we are starting to see the benifits as we continue with the practice. For some people, we will always be surplus to requirements, and they just want to dig into bugs and features, and not worry about politics. Thats fine - we just have to work with enough of the people on the teams to make sure that the project is heading in the correct direction, and as long as people can pick up what the priotities are from that process, I think we win. > That's probably more than enough. Thanks for your attention. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Thu Feb 21 17:24:18 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Feb 2019 17:24:18 +0000 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: Message-ID: <20190221172418.2oibavbt5fmkndio@yuggoth.org> On 2019-02-20 20:15:35 +0000 (+0000), Alexandra Settle wrote: > Is it sad I'm almost disappointed by the lack of said horns? [...] 
I bet we can convince folks to bring wooden train whistles with them, or find some in a local shop. Problem solved. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bodenvmw at gmail.com Thu Feb 21 17:26:49 2019 From: bodenvmw at gmail.com (Boden Russell) Date: Thu, 21 Feb 2019 10:26:49 -0700 Subject: [infra][qa] installing required projects from source in functional/devstack jobs Message-ID: Question: What's the proper way to install "siblings" [1] in devstack based zuul v3 jobs for projects that also require the siblings via requirements.txt? Background: Following the zuul v3 migration guide for "sibling requirements" [1] works fine for non-devstack based jobs. However, jobs that use devstack must take other measures to install those siblings in their playbooks. Based on what I see projects like oslo.messaging doing for cross testing, they are using the PROJECTS env var to specify the siblings in their playbook (example [2]). This approach may work if those siblings are not in requirements.txt, but for projects that also require the siblings at runtime (in requirements.txt) it appears the version from the requirements.txt is used rather than the sibling's source. For example the changes in [3][4]. Thanks [1] https://docs.openstack.org/infra/manual/zuulv3.html#installation-of-sibling-requirements [2] https://github.com/openstack/oslo.messaging/blob/master/playbooks/oslo.messaging-telemetry-dsvm-integration-amqp1/run.yaml#L37 [3] https://review.openstack.org/#/c/638099 [4] http://logs.openstack.org/99/638099/6/check/tricircle-functional/0b34687/logs/devstacklog.txt.gz#_2019-02-21_14_57_44_553 From gr at ham.ie Thu Feb 21 17:28:35 2019 From: gr at ham.ie (Graham Hayes) Date: Thu, 21 Feb 2019 17:28:35 +0000 Subject: [tc][election] TC Candidacy Message-ID: <1d7b6664-97eb-1352-7d1c-1f190cd566e8@ham.ie> Hi All, I realised I missed this important part of the election process - sending it to the list. > Hello Everyone, > > I have been a member of the TC for a a year now, and I would like to > renominate myself for another term on the TC. I have been working on OpenStack > (mainly in Desigate - the DNS as a Service project) since Havana, which is when > I started to get involved in the work of Technical Committee, for the Designate > Incubation application. > > I have been PTL for Designate for Mitaka, Newton, Ocata, Queens, Rocky, Stein > and Train cycles, and a core for a longer period. I believe my experience > working in a younger, smaller project within OpenStack is still a benefit, > especially as we add more projects to the OpenStack Foundation (OSF) along > side the original OpenStack project. > > I have spent time recently working on a very large OpenStack cloud in a day to > operations role, and I think that this experience is important to have on the > Technical Committee. The experience that a lot of our users have is very > different to what we may assume, and knowing how end users deal with bugs, > deployment life cycles and vendors should guide us. > > I have been involved in the discussion around the future goals this cycle, and > I would like to try and keep driving this forward. I see goals as a real, > tangible change that the TC can drive that helps the community as a whole. 
> > I think the experience I have had (from both an operator perspective, and as > the PTL of a small project that is struggling with contributor levels) is > valuable. > > I do think cycling out members of the TC is important, but I think that with > the turn over we have is at a good level, and that I can still provide a fresh > out look on the TC. > > Thanks, > > Graham Additionally, I did want to address something that a few people have been asking about - my new employment :) Yes - I am no longer working for a company that has ties (direct or otherwise) to OpenStack. However they do have a strong open source culture, and are completely supportive of me spending some of my time on OpenStack and the TC. I really enjoy working with this community, and don't see myself going anywhere in the near or medium term. My new role has components in the same area (infrastructure as a service) , and I think the insight I will be gaining from this work will definitely benifit OpenStack, combined with my history of involvement and experience running production OpenStack clouds. I am happy to talk to anyone who has any concerns about this, ping me on IRC (mugsie), gr at ham.ie or reply to this thread. Thanks for your attention, and please remember to vote! - Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From sbauza at redhat.com Thu Feb 21 17:28:39 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 18:28:39 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 6:14 PM Graham Hayes wrote: > On 20/02/2019 14:46, Chris Dent wrote: > > > > It's the Campaigning slot of the TC election process, where members > > of the community (including the candidates) are encouraged to ask > > the candidates questions and witness some debate. I have some > > questions. > > > > First off, I'd like to thank all the candidates for running and > > being willing to commit some of their time. I'd also like to that > > group as a whole for being large enough to force an election. A > > representative body that is not the result of an election would not > > be very representing nor have much of a mandate. > > > > The questions follow. Don't feel obliged to answer all of these. The > > point here is to inspire some conversation that flows to many > > places. I hope other people will ask in the areas I've chosen to > > skip. If you have a lot to say, it might make sense to create a > > different message for each response. Beware, you might be judged on > > your email etiquette and attention to good email technique! > > > > * How do you account for the low number of candidates? Do you > > consider this a problem? Why or why not? > > I think we are reaching a more stable space, and the people who > are developing the software are comfortable in the roles they are in. > > As the demographic of our developers shifts east, our leadership is > still very US / EU based, which may be why we are not getting the > same amount of people growing into TC candidates. > > > * Compare and contrast the role of the TC now to 4 years ago. If you > > weren't around 4 years ago, comment on the changes you've seen > > over the time you have been around. In either case: What do you > > think the TC role should be now? > > 4 years ago, was just before the big tent I think? 
Ironically, there > was a lot of the same discussion - python3, new project requirements > (at that point the incubation requirements), asyncio / eventlet. > > The TC was also in the process of dealing with a By-Laws change, in > this case getting the trademark program off the ground. > > We were still struggling with the "what is OpenStack?" question. > > Looking back on the mailing list archives is actually quite interesting > and while the topics are the same, a lot of the answers have changed. > > > > * What, to you, is the single most important thing the OpenStack > > community needs to do to ensure that packagers, deployers, and > > hobbyist users of OpenStack are willing to consistently upstream > > their fixes and have a positive experience when they do? What is > > the TC's role in helping make that "important thing" happen? > > I think things like the review culture change have been good for this. > The only other thing we can do is have more people reviewing, to make > that first contact nice and quick, but E_NO_TIME or E_NO_HUMANS > becomes the issue. > > > * If you had a magic wand and could inspire and make a single > > sweeping architectural or software change across the services, > > what would it be? For now, ignore legacy or upgrade concerns. > > What role should the TC have in inspiring and driving such > > changes? > > 1: Single agent on each compute node that allows for plugins to do > all the work required. (Nova / Neutron / Vitrage / watcher / etc) > > 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute > using something like HTTP(S) would make a lot of sense. > > 3: Unified Error codes, with a central registry, but at the very least > each time we raise an error, and it gets returned a user can see > where in the code base it failed. e.g. a header that has > OS-ERROR-COMPUTE-3142, which means that someone can google for > something more informative than the VM failed scheduling > > 4: OpenTracing support in all projects. > > 5: Possibly something with pub / sub where each project can listen for > events and not create something like designate did using > notifications. > > That's the exact reason why I tried to avoid to answer about architectural changes I'd like to see it done. Because when I read the above lines, I'm far off any consensus on those. To answer 1. and 2. from my Nova developer's hat, I'd just say that we invented Cells v2 and Placement. To be clear, the redesign wasn't coming from any other sources but our users, complaining about scale. IMHO If we really want to see some comittee driving us about feature requests, this should be the UC and not the TC. Whatever it is, at the end of the day, we're all paid by our sponsors. Meaning that any architectural redesign always hits the reality wall where you need to convince your respective Product Managers of the great benefit of the redesign. I'm maybe too pragmatic, but I remember so many discussions we had about redesigns that I now feel we just need hands, not ideas. -Sylvain > * What can the TC do to make sure that the community (in its many > > dimensions) is informed of and engaged in the discussions and > > decisions of the TC? > > This is a difficult question, especially in a community where a lot of > contributors are sponsored. > > The most effective way would be for the TC to start directly telling > projects what to do - but I feel like that would mean that everyone > would be unhappy with us. > > > * How do you counter people who assert the TC is not relevant? 
> > (Presumably you think it is, otherwise you would not have run. If > > you don't, why did you run?) > > Highlight the work done by the TC communicating with the board, guiding > teams on what our vision is, and helping to pick goals. I think the > goals are a great way, and we are starting to see the benifits as we > continue with the practice. > > For some people, we will always be surplus to requirements, and they > just want to dig into bugs and features, and not worry about politics. > > Thats fine - we just have to work with enough of the people on the teams > to make sure that the project is heading in the correct direction, and > as long as people can pick up what the priotities are from that process, > I think we win. > > > That's probably more than enough. Thanks for your attention. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Feb 21 17:28:45 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 21 Feb 2019 12:28:45 -0500 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: <20190221172418.2oibavbt5fmkndio@yuggoth.org> References: <20190221172418.2oibavbt5fmkndio@yuggoth.org> Message-ID: On Thu, Feb 21, 2019 at 12:26 PM Jeremy Stanley wrote: > > On 2019-02-20 20:15:35 +0000 (+0000), Alexandra Settle wrote: > > Is it sad I'm almost disappointed by the lack of said horns? > [...] > > I bet we can convince folks to bring wooden train whistles with > them, or find some in a local shop. Problem solved. > -- > Jeremy Stanley Denver swag = trainhorns? -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From sean.mcginnis at gmx.com Thu Feb 21 17:34:03 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 21 Feb 2019 11:34:03 -0600 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> Message-ID: <20190221173402.GA20285@sm-workstation> On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: > Hi all, > > During the last PTG it was decided to move forward with the migration of the > api-ref documentation together with the rest of the documentation [1]. > This is one of the item still open after the (not so recent anymore) massive > documentation restructuring [2]. > How is this going to work with the publishing of these separate content types to different locations? From gr at ham.ie Thu Feb 21 17:43:12 2019 From: gr at ham.ie (Graham Hayes) Date: Thu, 21 Feb 2019 17:43:12 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On 21/02/2019 17:28, Sylvain Bauza wrote: > > > On Thu, Feb 21, 2019 at 6:14 PM Graham Hayes > wrote: > > > > * If you had a magic wand and could inspire and make a single > >   sweeping architectural or software change across the services, > >   what would it be? For now, ignore legacy or upgrade concerns. > >   What role should the TC have in inspiring and driving such > >   changes? > > 1: Single agent on each compute node that allows for plugins to do >    all the work required. (Nova / Neutron / Vitrage / watcher / etc) > > 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute >    using something like HTTP(S) would make a lot of sense. 
> > 3: Unified Error codes, with a central registry, but at the very least >    each time we raise an error, and it gets returned a user can see >    where in the code base it failed. e.g. a header that has >    OS-ERROR-COMPUTE-3142, which means that someone can google for >    something more informative than the VM failed scheduling > > 4: OpenTracing support in all projects. > > 5: Possibly something with pub / sub where each project can listen for >    events and not create something like designate did using >    notifications. > > > > That's the exact reason why I tried to avoid to answer about > architectural changes I'd like to see it done. Because when I read the > above lines, I'm far off any consensus on those. > To answer 1. and 2. from my Nova developer's hat, I'd just say that we > invented Cells v2 and Placement. Sure - this was if *I* had a magic wand - I have a completely different viewpoint to others. No community really ever has a full agreement. From a TC perspective we have to look at these things from an overall view. My suggestions above were for *all* projects, specifically for #2 - I used a well known pattern as an example, but it can apply to Trove talking to DB instances, Octavia to LBaaS nodes (they already do this, and it is a good pattern), Zun, possibly Magnum (this is not an exhaustive list, and may not suit all listed projects, I am taking them from the top of my head). From what I understand there was even talk of doing it for Nova so that a central control plane could manage remote edge compute nodes without having to keep a RMQ connection alive across the WAN, but I am not sure where that got to. > To be clear, the redesign wasn't coming from any other sources but our > users, complaining about scale. IMHO If we really want to see some > comittee driving us about feature requests, this should be the UC and > not the TC. It should be a combination - UC and TC should be communicating about these requests - UC for the feedback, and the TC to see how they fit with the TC's vision for the direction of OpenStack. > Whatever it is, at the end of the day, we're all paid by our sponsors. > Meaning that any architectural redesign always hits the reality wall > where you need to convince your respective Product Managers of the great > benefit of the redesign. I'm maybe too pragmatic, but I remember so many > discussions we had about redesigns that I now feel we just need hands, > not ideas. I fully agree, and it has been an issue in the community for as long as I can remember. It doesn't mean that we should stop pushing the project forward. We have already moved the needle with the cycle goals, so we can influence what features are added to projects. Let's continue to do so. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From ltoscano at redhat.com Thu Feb 21 17:44:20 2019 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 21 Feb 2019 18:44:20 +0100 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <20190221173402.GA20285@sm-workstation> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <20190221173402.GA20285@sm-workstation> Message-ID: <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: > On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: > > Hi all, > > > > During the last PTG it was decided to move forward with the migration of > > the api-ref documentation together with the rest of the documentation > > [1]. This is one of the item still open after the (not so recent anymore) > > massive documentation restructuring [2]. > > How is this going to work with the publishing of these separate content > types to different locations? I can just guess, as this is a work in progress and I don't know about most of the previous discussions. The publishing job is just code and can be adapted to publish two (three) subtrees to different places, or exclude some directories. The global index files from doc/source do not necessarily need to include all the index files of the subdirectories, so that shouldn't be a problem. Do you have a specific concern that it may difficult to address? Ciao -- Luigi From melwittt at gmail.com Thu Feb 21 17:50:57 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 21 Feb 2019 09:50:57 -0800 Subject: [nova][dev] forum brainstorming etherpad available Message-ID: <2075bbaf-a79d-696d-0600-61e4fa1b6b99@gmail.com> Hey all, We have an etherpad for brainstorming forum topics for the Denver Summit which is open for adding topics: https://etherpad.openstack.org/p/DEN-train-nova-brainstorming There will be an online tool for submitting abstracts opening tomorrow Feb 22, that we'll use for proposing the most popular topics. Cheers, -melanie From mriedemos at gmail.com Thu Feb 21 17:51:32 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 21 Feb 2019 11:51:32 -0600 Subject: [placement] [translation][i18n] translating exceptions In-Reply-To: References: Message-ID: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> On 2/21/2019 11:07 AM, Doug Hellmann wrote: > Does an end-user interact with placement directly, or are all of the > errors going to be seen and handled by other services that will report > their own errors to end users? The Placement APIs are admin-only by default. That can be configured via policy but assume it's like Ironic and admin-only for the most part. Speaking of, what does Ironic do about translations in its API? As for the question at hand, I'm OK with *not* translating errors in placement for both the admin-only aspect and the push to use standard codes in error responses which can be googled. FWIW I also shared this thread on WeChat to see if anyone has an opinion there. 
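To make the "standard codes" point concrete (a rough sketch based on the common API-SIG errors format rather than an exact transcript of a placement response, so treat the field layout as illustrative): a failed request already comes back with a JSON body along the lines of {"errors": [{"status": 409, "title": "Conflict", "detail": "...", "code": "placement.concurrent_update"}]}, and it is that "code" value which stays stable and googleable, while the human-readable "detail" text is the only part translation would ever touch. 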
-- Thanks, Matt From cboylan at sapwetik.org Thu Feb 21 17:55:35 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 21 Feb 2019 12:55:35 -0500 Subject: =?UTF-8?Q?Re:_[infra][qa]_installing_required_projects_from_source_in_fu?= =?UTF-8?Q?nctional/devstack_jobs?= In-Reply-To: References: Message-ID: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> On Thu, Feb 21, 2019, at 9:26 AM, Boden Russell wrote: > Question: > What's the proper way to install "siblings" [1] in devstack based zuul > v3 jobs for projects that also require the siblings via requirements.txt? > > > Background: > Following the zuul v3 migration guide for "sibling requirements" [1] > works fine for non-devstack based jobs. However, jobs that use devstack > must take other measures to install those siblings in their playbooks. > > Based on what I see projects like oslo.messaging doing for cross > testing, they are using the PROJECTS env var to specify the siblings in > their playbook (example [2]). This approach may work if those siblings > are not in requirements.txt, but for projects that also require the > siblings at runtime (in requirements.txt) it appears the version from > the requirements.txt is used rather than the sibling's source. By default devstack installs "libraries" (mostly things listed in requirements files) from pypi to ensure that our software works with released libraries. However, it is often important to also test that the next version of our own libraries will work with existing software. For this devstack has the LIBS_FROM_GIT [5] variable which overrides the install via pypi behavior. Note that I believe you must handle this flag in your devstack plugins. > > For example the changes in [3][4]. > > > > Thanks > > > [1] > https://docs.openstack.org/infra/manual/zuulv3.html#installation-of-sibling-requirements > [2] > https://github.com/openstack/oslo.messaging/blob/master/playbooks/oslo.messaging-telemetry-dsvm-integration-amqp1/run.yaml#L37 > [3] https://review.openstack.org/#/c/638099 > [4] > http://logs.openstack.org/99/638099/6/check/tricircle-functional/0b34687/logs/devstacklog.txt.gz#_2019-02-21_14_57_44_553 [5] https://docs.openstack.org/devstack/latest/development.html#testing-changes-to-libraries From sbauza at redhat.com Thu Feb 21 18:04:37 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 21 Feb 2019 19:04:37 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 6:47 PM Graham Hayes wrote: > > > On 21/02/2019 17:28, Sylvain Bauza wrote: > > > > > > On Thu, Feb 21, 2019 at 6:14 PM Graham Hayes > > wrote: > > > > > > > > > > * If you had a magic wand and could inspire and make a single > > > sweeping architectural or software change across the services, > > > what would it be? For now, ignore legacy or upgrade concerns. > > > What role should the TC have in inspiring and driving such > > > changes? > > > > 1: Single agent on each compute node that allows for plugins to do > > all the work required. (Nova / Neutron / Vitrage / watcher / etc) > > > > 2: Remove RMQ where it makes sense - e.g. for nova-api -> > nova-compute > > using something like HTTP(S) would make a lot of sense. > > > > 3: Unified Error codes, with a central registry, but at the very > least > > each time we raise an error, and it gets returned a user can see > > where in the code base it failed. e.g. 
a header that has > > OS-ERROR-COMPUTE-3142, which means that someone can google for > > something more informative than the VM failed scheduling > > > > 4: OpenTracing support in all projects. > > > > 5: Possibly something with pub / sub where each project can listen > for > > events and not create something like designate did using > > notifications. > > > > > > That's the exact reason why I tried to avoid to answer about > > architectural changes I'd like to see it done. Because when I read the > > above lines, I'm far off any consensus on those. > > To answer 1. and 2. from my Nova developer's hat, I'd just say that we > > invented Cells v2 and Placement. > > Sure - this was if *I* had a magic wand - I have a completely different > viewpoint to others. No community really ever has a full agreement. > > Fair point, we work with consensus, not full agreements. It's always good to keep that distinction in mind. >From a TC perspective we have to look at these things from an > overall view. My suggestions above were for *all* projects, specifically > for #2 - I used a well known pattern as an example, but it can apply to > Trove talking to DB instances, Octavia to LBaaS nodes (they already do > this, and it is a good pattern), Zun, possibly Magnum (this is not an > exaustive list, and may not suit all listed projects, I am taking them > from the top of my head). > > I'd be interested in discussing the use cases requiring such important architectural splits. The main reason why Cells v2 was implemented was to address the MQ/DB scalability issue of 1000+ compute nodes. The Edge thingy came after this, so it wasn't the main driver for change. If the projects you mention have the same footprints at scale, then yeah I'm supportive of any redesign discussion that would come up. That said, before stepping in into major redesigns, I'd wonder : could the inter-services communication be improved in terms of reducing payload ? > From what I understand there was even talk of doing it for Nova so that > a central control plane could manage remote edge compute nodes without > having to keep a RMQ connection alive across the WAN, but I am not sure > where that got to. > > That's a separate usecase (Edge) which wasn't the initial reason why we started implementing Cells V2. I haven't heard any request from the Edge WG during the PTGs about changing our messaging interface because $WAN but I'm open to ideas. -Sylvain > To be clear, the redesign wasn't coming from any other sources but our > > users, complaining about scale. IMHO If we really want to see some > > comittee driving us about feature requests, this should be the UC and > > not the TC. > > It should be a combination - UC and TC should be communicating about > these requests - UC for the feedback, and the TC to see hwo they fit > with the TCs vision for the direction of OpenStack. > > > Whatever it is, at the end of the day, we're all paid by our sponsors. > > Meaning that any architectural redesign always hits the reality wall > > where you need to convince your respective Product Managers of the great > > benefit of the redesign. I'm maybe too pragmatic, but I remember so many > > discussions we had about redesigns that I now feel we just need hands, > > not ideas. > > I fully agree, and it has been an issue in the community for as long as > I can remember. It doesn't mean that we should stop pushing the project > forward. We have already moved the needle with the cycle goals, so we > can influence what features are added to projects. 
Let's continue to do > so. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Feb 21 18:15:45 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 21 Feb 2019 12:15:45 -0600 Subject: [keystone][dev] Forum topic brainstorming Message-ID: Hi all, This is going out a little later than I'd like, so I apologize for letting it slip. Submissions for forum topics open tomorrow [0]. Per usual, I've created an etherpad [1] for us to come up with topics we'd like to discuss at the forum. It looks like we only have a couple weeks to submit sessions [2], so I'll be putting this on the agenda for the keystone meeting next week. Please have a look and add suggestions or feedback before then.   [0] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002900.html [1] https://etherpad.openstack.org/p/DEN-keystone-forum-sessions [2] https://wiki.openstack.org/wiki/Forum -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jean-philippe at evrard.me Thu Feb 21 18:28:16 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 21 Feb 2019 13:28:16 -0500 Subject: [openstack-helm] would like to discuss review turnaround time In-Reply-To: References: Message-ID: Hello, These were just examples. We can always find counter-examples of reviews lagging behind by checking in Gerrit. As far as I understand it, the problem is not that people don't get reviews in a timely fashion, it's how they are prioritized. In OpenStack, reviews stay behind for a certain time. It's sad but normal. But when people are actively pointing to a review, don't get a review, and other patches seem to go through the system... Then a tension appears. That tension is caused by different understandings of priorities. I raised that in the past. 
Regards, Jean-Philippe Evrard (evrardjp) From Kevin.Fox at pnnl.gov Thu Feb 21 18:28:20 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 21 Feb 2019 18:28:20 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: , Message-ID: <1A3C52DFCD06494D8528644858247BF01C2B52EE@EX10MBOX03.pnnl.gov> I think its good for the TC to discuss architectural shortcomings... Someone really needs to do it. If the way to do that is to have folks be elected based on recommending architecture changes and we vote on electing them, at least thats some way for folks to provide feedback on whats important there. Gives us operators more of a way to provide feedback. For example, I think CellsV2 mostly came about due to the current architecture not scaling well due to mysql/and torturing rabbit. The nova team felt I think it best to stick with the blessed architecture and try and scale it (not unreasonable from a single project perspective). But, rather then add a bunch of complexity which operators now suffer for, it could have been handled by fixing the underlying architectural issue. Stop torturing rabbit. If k8s can do 5000 nodes with one, non sharded control plane, nova should be able to too. Scheduling/starting a container and scheduling/starting a vm are not fundamentally different in their system requirements. Before, operators didn't have a frame of reference so just went along with it. Now they have more options and can more easily see the pain points in OpenStack and can decide to shift workload elsewhere. A single project can't make these sorts of overarching architectural decisions. The TC should do one of decide/help decide/facilitate deciding/delegate deciding. But someone needs to drive it, otherwise it gets dropped. That should be the TC IMO. The TC candidates are talking more and more about OpenStack being stable. One development quote I like, "the code is done, not when there is nothing more to add, but nothing more to remove" speaks to me here... Do TC candidates think that should that be an architectural goal coming up soon? Figure out how to continue to do what OpenStack does, but do it simpler and/or with less code/services? That may require braking down some project walls. Is that a good thing to do? Thanks, Kevin ________________________________ From: Sylvain Bauza [sbauza at redhat.com] Sent: Thursday, February 21, 2019 9:28 AM To: Graham Hayes Cc: openstack-discuss at lists.openstack.org Subject: Re: [tc] Questions for TC Candidates On Thu, Feb 21, 2019 at 6:14 PM Graham Hayes > wrote: On 20/02/2019 14:46, Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! 
> > * How do you account for the low number of candidates? Do you > consider this a problem? Why or why not? I think we are reaching a more stable space, and the people who are developing the software are comfortable in the roles they are in. As the demographic of our developers shifts east, our leadership is still very US / EU based, which may be why we are not getting the same amount of people growing into TC candidates. > * Compare and contrast the role of the TC now to 4 years ago. If you > weren't around 4 years ago, comment on the changes you've seen > over the time you have been around. In either case: What do you > think the TC role should be now? 4 years ago, was just before the big tent I think? Ironically, there was a lot of the same discussion - python3, new project requirements (at that point the incubation requirements), asyncio / eventlet. The TC was also in the process of dealing with a By-Laws change, in this case getting the trademark program off the ground. We were still struggling with the "what is OpenStack?" question. Looking back on the mailing list archives is actually quite interesting and while the topics are the same, a lot of the answers have changed. > * What, to you, is the single most important thing the OpenStack > community needs to do to ensure that packagers, deployers, and > hobbyist users of OpenStack are willing to consistently upstream > their fixes and have a positive experience when they do? What is > the TC's role in helping make that "important thing" happen? I think things like the review culture change have been good for this. The only other thing we can do is have more people reviewing, to make that first contact nice and quick, but E_NO_TIME or E_NO_HUMANS becomes the issue. > * If you had a magic wand and could inspire and make a single > sweeping architectural or software change across the services, > what would it be? For now, ignore legacy or upgrade concerns. > What role should the TC have in inspiring and driving such > changes? 1: Single agent on each compute node that allows for plugins to do all the work required. (Nova / Neutron / Vitrage / watcher / etc) 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute using something like HTTP(S) would make a lot of sense. 3: Unified Error codes, with a central registry, but at the very least each time we raise an error, and it gets returned a user can see where in the code base it failed. e.g. a header that has OS-ERROR-COMPUTE-3142, which means that someone can google for something more informative than the VM failed scheduling 4: OpenTracing support in all projects. 5: Possibly something with pub / sub where each project can listen for events and not create something like designate did using notifications. That's the exact reason why I tried to avoid to answer about architectural changes I'd like to see it done. Because when I read the above lines, I'm far off any consensus on those. To answer 1. and 2. from my Nova developer's hat, I'd just say that we invented Cells v2 and Placement. To be clear, the redesign wasn't coming from any other sources but our users, complaining about scale. IMHO If we really want to see some comittee driving us about feature requests, this should be the UC and not the TC. Whatever it is, at the end of the day, we're all paid by our sponsors. Meaning that any architectural redesign always hits the reality wall where you need to convince your respective Product Managers of the great benefit of the redesign. 
I'm maybe too pragmatic, but I remember so many discussions we had about redesigns that I now feel we just need hands, not ideas. -Sylvain > * What can the TC do to make sure that the community (in its many > dimensions) is informed of and engaged in the discussions and > decisions of the TC? This is a difficult question, especially in a community where a lot of contributors are sponsored. The most effective way would be for the TC to start directly telling projects what to do - but I feel like that would mean that everyone would be unhappy with us. > * How do you counter people who assert the TC is not relevant? > (Presumably you think it is, otherwise you would not have run. If > you don't, why did you run?) Highlight the work done by the TC communicating with the board, guiding teams on what our vision is, and helping to pick goals. I think the goals are a great way, and we are starting to see the benifits as we continue with the practice. For some people, we will always be surplus to requirements, and they just want to dig into bugs and features, and not worry about politics. Thats fine - we just have to work with enough of the people on the teams to make sure that the project is heading in the correct direction, and as long as people can pick up what the priotities are from that process, I think we win. > That's probably more than enough. Thanks for your attention. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 21 18:45:53 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 21 Feb 2019 13:45:53 -0500 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <20190221173402.GA20285@sm-workstation> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> Message-ID: Luigi Toscano writes: > On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: >> On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: >> > Hi all, >> > >> > During the last PTG it was decided to move forward with the migration of >> > the api-ref documentation together with the rest of the documentation >> > [1]. This is one of the item still open after the (not so recent anymore) >> > massive documentation restructuring [2]. >> >> How is this going to work with the publishing of these separate content >> types to different locations? > > I can just guess, as this is a work in progress and I don't know about most of > the previous discussions. > > The publishing job is just code and can be adapted to publish two (three) > subtrees to different places, or exclude some directories. > The global index files from doc/source do not necessarily need to include all > the index files of the subdirectories, so that shouldn't be a problem. > > Do you have a specific concern that it may difficult to address? Sphinx is really expecting to build a complete output set that is used together. Several things may break. It connects the output files together with "next" and "previous" navigation links, for one. It uses relative links to resources like CSS and JS files that will be in a different place if /some/deep/path/to/index.html becomes /index.html or vice versa. What is the motivation for changing how the API documentation is built and published? 
-- Doug From doug at doughellmann.com Thu Feb 21 18:56:28 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 21 Feb 2019 13:56:28 -0500 Subject: [placement] [translation][i18n] translating exceptions In-Reply-To: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> References: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> Message-ID: Matt Riedemann writes: > On 2/21/2019 11:07 AM, Doug Hellmann wrote: >> Does an end-user interact with placement directly, or are all of the >> errors going to be seen and handled by other services that will report >> their own errors to end users? > > The Placement APIs are admin-only by default. That can be configured via > policy but assume it's like Ironic and admin-only for the most part. > > Speaking of, what does Ironic do about translations in its API? > > As for the question at hand, I'm OK with *not* translating errors in > placement for both the admin-only aspect and the push to use standard > codes in error responses which can be googled. > > FWIW I also shared this thread on WeChat to see if anyone has an opinion > there. > > -- > > Thanks, > > Matt > I agree. If placement isn't meant for cloud end-users to interact with, I don't see a lot of benefit to translating the error messages coming through the API. -- Doug From openstack at medberry.net Thu Feb 21 18:56:12 2019 From: openstack at medberry.net (David Medberry) Date: Thu, 21 Feb 2019 11:56:12 -0700 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: <20190221172418.2oibavbt5fmkndio@yuggoth.org> References: <20190221172418.2oibavbt5fmkndio@yuggoth.org> Message-ID: I can definitely round some up for my talk and maybe some other sessions I attend.... DONE! On Thu, Feb 21, 2019 at 10:24 AM Jeremy Stanley wrote: > > On 2019-02-20 20:15:35 +0000 (+0000), Alexandra Settle wrote: > > Is it sad I'm almost disappointed by the lack of said horns? > [...] > > I bet we can convince folks to bring wooden train whistles with > them, or find some in a local shop. Problem solved. > -- > Jeremy Stanley From feilong at catalyst.net.nz Thu Feb 21 19:47:11 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Fri, 22 Feb 2019 08:47:11 +1300 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <63003af8-7e67-41ac-a961-082b23211a97@catalyst.net.nz> Not sure my answer if fit one or some questions. But from my personal PoV, there is BIG gap between the expectation from contributors and the power of TC. As a cloud platform, though there are many components/services, their API should be highly consistent. The contracts between each other should be very stable. However, I don't think we did a great job around this. And also, we do have some cross project features are still weak, e.g. quota management, role management, etc. And for the single most important thing I would like to do, is getting a most restrict API design cross project to make OpenStack like a whole building, but not building blocks, from user perspective. On 21/02/19 3:46 AM, Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. 
A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. > > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you >   consider this a problem? Why or why not? > > * Compare and contrast the role of the TC now to 4 years ago. If you >   weren't around 4 years ago, comment on the changes you've seen >   over the time you have been around. In either case: What do you >   think the TC role should be now? > > * What, to you, is the single most important thing the OpenStack >   community needs to do to ensure that packagers, deployers, and >   hobbyist users of OpenStack are willing to consistently upstream >   their fixes and have a positive experience when they do? What is >   the TC's role in helping make that "important thing" happen? > > * If you had a magic wand and could inspire and make a single >   sweeping architectural or software change across the services, >   what would it be? For now, ignore legacy or upgrade concerns. >   What role should the TC have in inspiring and driving such >   changes? > > * What can the TC do to make sure that the community (in its many >   dimensions) is informed of and engaged in the discussions and >   decisions of the TC? > > * How do you counter people who assert the TC is not relevant? >   (Presumably you think it is, otherwise you would not have run. If >   you don't, why did you run?) > > That's probably more than enough. Thanks for your attention. > -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From feilong at catalyst.net.nz Thu Feb 21 19:54:33 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Fri, 22 Feb 2019 08:54:33 +1300 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: Obviously, the safe answer for these questions is number doesn't matter, we should focus on the core/mission of OpenStack. But as Alex said, not much people can know all of those 63 projects. And given some companies are reducing their investiment on OpenStack, we don't have much resources like before(say 2014-2015). So I do think we need more focus. On 22/02/19 12:13 AM, Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? > > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. 
> * Trim the number of projects to "get back to our roots". > * Something else. > > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? > > Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? > > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? > > Thanks. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html > > -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From openstack at nemebean.com Thu Feb 21 20:56:24 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 21 Feb 2019 14:56:24 -0600 Subject: [oslo][operators] oslo.messaging and RabbitMQ SSL Message-ID: <873efd8d-0173-e5bd-83bc-8a283cf0184a@nemebean.com> For the past few months, we've been investigating a significant bug when enabling SSL for oslo.messaging connections to RabbitMQ.[0] Thanks to some patient and excellent investigation, it was tracked down to an issue in the amqp library that we use in oslo.messaging. The fix has now been released as 2.4.1, and we've updated the requirements on master to reflect that, but we can't backport requirements changes to the stable branches. Since this affects releases going back to Pike, that's potentially a lot of affected users. We're planning to release note[1] all of the stable branches to communicate the need to use a newer version of the library, but I also wanted to send an email to the list in order to help get the word out. Basically, since we can't fix this in the library itself I'm running a publicity campaign to let everyone know what the fix is. :-) If you have any questions, feel free to reach out here or on IRC in #openstack-oslo. Thanks. -Ben 0: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 1: https://review.openstack.org/#/c/638461 From mthode at mthode.org Thu Feb 21 21:07:25 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 21 Feb 2019 15:07:25 -0600 Subject: [oslo][operators] oslo.messaging and RabbitMQ SSL In-Reply-To: <873efd8d-0173-e5bd-83bc-8a283cf0184a@nemebean.com> References: <873efd8d-0173-e5bd-83bc-8a283cf0184a@nemebean.com> Message-ID: <20190221210725.tzede3kgwa6rieya@mthode.org> On 19-02-21 14:56:24, Ben Nemec wrote: > For the past few months, we've been investigating a significant bug when > enabling SSL for oslo.messaging connections to RabbitMQ.[0] Thanks to some > patient and excellent investigation, it was tracked down to an issue in the > amqp library that we use in oslo.messaging. 
The fix has now been released as > 2.4.1, and we've updated the requirements on master to reflect that, but we > can't backport requirements changes to the stable branches. Since this > affects releases going back to Pike, that's potentially a lot of affected > users. We're planning to release note[1] all of the stable branches to > communicate the need to use a newer version of the library, but I also > wanted to send an email to the list in order to help get the word out. > Basically, since we can't fix this in the library itself I'm running a > publicity campaign to let everyone know what the fix is. :-) > > If you have any questions, feel free to reach out here or on IRC in > #openstack-oslo. Thanks. > > -Ben > > 0: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 > 1: https://review.openstack.org/#/c/638461 > Stable policy may allow for the backport, depending on the details of the issue. https://docs.openstack.org/project-team-guide/stable-branches.html -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From colleen at gazlene.net Thu Feb 21 21:11:07 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 21 Feb 2019 16:11:07 -0500 Subject: [dev][keystone] App Cred Capabilities Update Message-ID: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> I have an initial draft of application credential capabilities available for review[1]. The spec[2] was not straightforward to implement. There were a few parts that I found unintuitive, user-unfriendly, and overcomplicated. The current proposed implementation differs from the spec in some ways that I want to discuss with the team. Given that non-client library freeze is next week, and the changes we need in keystonemiddleware also require changes in keystoneclient, I'm not sure there is enough time to properly flesh this out and allow for thorough code review, but if we miss the deadline we can be ready to get this in in the beginning of next cycle (with apologies to everyone waiting on this feature - I just want to make sure we get it right with minimal regrets). * Naming As always, naming is hard. In the spec, we've called the property that is attached to the app cred "capabilities" (for the sake of this email I'm going to call it user-created-rules), and we've called the operator-configured list of available endpoints "permissible path templates" (for the sake of this email I'm going to call it operator-created-rules). I find both confusing and awkward. "Permissible path templates" is not a great name because the rule is actually about more than just the path, it's about the request as a whole, including the method. I'd like to avoid saying "template" too because that evokes a picture of something like a Jinja or ERB template, which is not what this is, and because I'd like to avoid the whole string substitution thing - more on that below. In the implementation, I've renamed the operator-created-rules to "access rules". I stole this from Istio after Adam pointed out they have a really similar concept[3]. I really like this name because I think it well-describes the thing we're building without already being overloaded. So far, I've kept the user-created-rules as "capabilities" but I'm not a fan of it because it's an overloaded word and not very descriptive, although in my opinion it is still more descriptive than "whitelist" which is what we were originally going to call it. 
It might make sense to relate this property somehow to the operator-created-rules - perhaps by calling it access_rules_list, or granted_access_rules. Or we could call *this* thing the access rules, and call the other thing allowed_access_rules or permitted_access_rules. * Substitutions The way the spec lays out variable components of the URL paths for both user-created-rules and operator-created-rules is unnecessarily complex and in some cases faulty. The only way I can explain how complicated it is is to try to give an example: Let's say we want to allow a user to create an application credential that allows the holder to issue a GET request on the identity API that looks like /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says that the string '/v3/projects/{project_id}/tags/{tag}' is what should be provided verbatim in the "path" attribute of a "capability", then there should be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id should be taken from the token scope at app cred usage time. When the capability is validated against the operator-created-rules at app cred creation time, it needs to check that the path string matches exactly, that the keys of the "substitutions" dict matches the "user template keys" list, and that keys required by the "context template keys" are provided by the token context. Taking the project ID, domain ID, or user ID from the token scope is not going to work because some of these APIs may actually be system-scoped APIs - it's just not a hard and fast rule that a project/domain/user ID in the URL maps to the same user and scope of the token used to create it. Once we do away with that, it stops making sense to have a separate attribute for the user-provided substitutions when they could just include that in the URL path to begin with. So the proposed implementation simply allows the wildcards * and ** in both the operator-created-rules and user-created-rules, no python-formatting variable substitutions. * UUIDs The spec says each operator-created-rule should have its own UUID, and the end user needs to look up and provide this UUID when they create the app cred rule. This has the benefit of having a fast lookup because we've put the onus on the user to look up the rule themselves, but I think it is very user-unfriendly. In the proposed implementation, I've done away with UUIDs on the operator-created-rules, and instead a match is looked up based on the service type, request path and method, and the "allow_chained" attribute (more on that next). Depending on how big we think this list of APIs will get, this will have some impact on performance on creation time (not at token validation time). UUIDs make sense for singleton resources that are created in a database. I imagine this starting as an operator-managed configuration file, and maybe some day in the future a catalog of this sort could be published by the services so that the operator doesn't have to maintain it themselves. To that end, I've implemented the operator-created-rules driver with a JSON file backend. But with this style of implementation, including UUIDs for every rule is awkward - it only makes sense if we're generating the resources within keystone and storing them in a database. * allow_chained The allow_chained attribute looks and feels awkward and adds complexity to the code Is there really a case when either the user or operator would not want to allow a service to make a request on behalf of a user who is making a more general request? 
Also, can we find a better name for it? Those are all the major question marks for me. There are some other minor differentiations from the spec and I will propose an update that makes it consistent with reality after we have these other questions sorted out. Colleen [1] https://review.openstack.org/#/q/topic:bp/whitelist-extension-for-app-creds [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html [3] https://istio.io/docs/reference/config/authorization/istio.rbac.v1alpha1/#AccessRule From lars at redhat.com Thu Feb 21 21:20:42 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 21 Feb 2019 16:20:42 -0500 Subject: [neutron] openvswitch switch connection timeout? In-Reply-To: References: <20190221031326.kx3lrd226ts7a64j@redhat.com> Message-ID: I'm seeing identical behavior on a second system. Has anyone else experienced this? On Wed, Feb 20, 2019 at 10:58 PM Lars Kellogg-Stedman wrote: > On Wed, Feb 20, 2019 at 10:13 PM Lars Kellogg-Stedman > wrote: > >> I was trying to track down some connectivity issues with some >> baremetal nodes booting from iSCSI LUNs provided by Cinder. It turns >> out that openvswitch is going belly-up. We see in >> openvswitch-agent.log a "Switch connection timeout" error [1]. Just >> before that, in /var/log/openvswitch/ovs-vswitchd.log, we see: >> >> 2019-02-21T00:32:38.696Z|00795|bridge|INFO|bridge br-tun: deleted >> interface patch-int on port 1 >> 2019-02-21T00:32:38.696Z|00796|bridge|INFO|bridge br-tun: deleted >> interface br-tun on port 65534 >> 2019-02-21T00:32:38.823Z|00797|bridge|INFO|bridge br-int: deleted >> interface int-br-ctlplane on port 1 >> 2019-02-21T00:32:38.823Z|00798|bridge|INFO|bridge br-int: deleted >> interface br-int on port 65534 >> 2019-02-21T00:32:38.823Z|00799|bridge|INFO|bridge br-int: deleted >> interface tapb0101920-b9 on port 4 >> 2019-02-21T00:32:38.824Z|00800|bridge|INFO|bridge br-int: deleted >> interface patch-tun on port 3 >> 2019-02-21T00:32:38.954Z|00801|bridge|INFO|bridge br-ctlplane: >> deleted interface phy-br-ctlplane on port 4 >> 2019-02-21T00:32:38.954Z|00802|bridge|INFO|bridge br-ctlplane: >> deleted interface br-ctlplane on port 65534 >> 2019-02-21T00:32:38.954Z|00803|bridge|INFO|bridge br-ctlplane: >> deleted interface em2 on port 3 >> > > The plot thickens: it looks as if something may be doing this explicitly? > At the same time, we see in the system journal: > > Thu 2019-02-21 00:32:38.697531 UTC > [s=fa1c368ed0314169b286a29ffe7d9f87;i=4aa2b;b=d64947ee218546d8a94103aa9bbee154;m=4b31e6780;t=5825c9c8dda3b;x=13507b4516ddcb27] > _TRANSPORT=stdout > PRIORITY=6 > SYSLOG_FACILITY=3 > _UID=0 > _GID=0 > _CAP_EFFECTIVE=1fffffffff > _SELINUX_CONTEXT=system_u:system_r:init_t:s0 > _BOOT_ID=d64947ee218546d8a94103aa9bbee154 > _MACHINE_ID=4a470fefdd3b4033a163bb69bc8578da > _HOSTNAME=localhost.localdomain > _SYSTEMD_SLICE=system.slice > _EXE=/usr/bin/bash > _STREAM_ID=951219afe7fb4eb4bf5d71af983d1f11 > SYSLOG_IDENTIFIER=ovs-ctl > MESSAGE=Exiting ovs-vswitchd (9340) [ OK ] > _PID=868225 > _COMM=ovs-ctl > _CMDLINE=/bin/sh /usr/share/openvswitch/scripts/ovs-ctl > --no-ovsdb-server stop > _SYSTEMD_CGROUP=/system.slice/ovs-vswitchd.service/control > _SYSTEMD_UNIT=ovs-vswitchd.service > > ...but there's nothing else around that time that seems relevant. > > -- > Lars Kellogg-Stedman > > -- Lars Kellogg-Stedman -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From antonio.ojea.garcia at gmail.com Thu Feb 21 21:26:19 2019 From: antonio.ojea.garcia at gmail.com (Antonio Ojea) Date: Thu, 21 Feb 2019 22:26:19 +0100 Subject: [oslo][operators] oslo.messaging and RabbitMQ SSL In-Reply-To: <20190221210725.tzede3kgwa6rieya@mthode.org> References: <873efd8d-0173-e5bd-83bc-8a283cf0184a@nemebean.com> <20190221210725.tzede3kgwa6rieya@mthode.org> Message-ID: On Thu, 21 Feb 2019 at 22:10, Matthew Thode wrote: > > On 19-02-21 14:56:24, Ben Nemec wrote: > > For the past few months, we've been investigating a significant bug when > > enabling SSL for oslo.messaging connections to RabbitMQ.[0] Thanks to some > > patient and excellent investigation, it was tracked down to an issue in the > > amqp library that we use in oslo.messaging. The fix has now been released as > > 2.4.1, and we've updated the requirements on master to reflect that, but we > > can't backport requirements changes to the stable branches. Since this > > affects releases going back to Pike, that's potentially a lot of affected > > users. We're planning to release note[1] all of the stable branches to > > communicate the need to use a newer version of the library, but I also > > wanted to send an email to the list in order to help get the word out. > > Basically, since we can't fix this in the library itself I'm running a > > publicity campaign to let everyone know what the fix is. :-) > > > > If you have any questions, feel free to reach out here or on IRC in > > #openstack-oslo. Thanks. > > > > -Ben > > > > 0: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 > > 1: https://review.openstack.org/#/c/638461 > > > > Stable policy may allow for the backport, depending on the details of > the issue. > > https://docs.openstack.org/project-team-guide/stable-branches.html > quoting Ken Giusti > Unfortunately we can't backport this fix to previous stable branches since it is a change to requirements which is technically a feature release. https://bugs.launchpad.net/oslo.messaging/+bug/1800957/comments/52 From openstack at nemebean.com Thu Feb 21 21:27:43 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 21 Feb 2019 15:27:43 -0600 Subject: [oslo][operators] oslo.messaging and RabbitMQ SSL In-Reply-To: <20190221210725.tzede3kgwa6rieya@mthode.org> References: <873efd8d-0173-e5bd-83bc-8a283cf0184a@nemebean.com> <20190221210725.tzede3kgwa6rieya@mthode.org> Message-ID: <37aee696-a20f-633a-bc69-9e17b47774a9@nemebean.com> On 2/21/19 3:07 PM, Matthew Thode wrote: > On 19-02-21 14:56:24, Ben Nemec wrote: >> For the past few months, we've been investigating a significant bug when >> enabling SSL for oslo.messaging connections to RabbitMQ.[0] Thanks to some >> patient and excellent investigation, it was tracked down to an issue in the >> amqp library that we use in oslo.messaging. The fix has now been released as >> 2.4.1, and we've updated the requirements on master to reflect that, but we >> can't backport requirements changes to the stable branches. Since this >> affects releases going back to Pike, that's potentially a lot of affected >> users. We're planning to release note[1] all of the stable branches to >> communicate the need to use a newer version of the library, but I also >> wanted to send an email to the list in order to help get the word out. >> Basically, since we can't fix this in the library itself I'm running a >> publicity campaign to let everyone know what the fix is. :-) >> >> If you have any questions, feel free to reach out here or on IRC in >> #openstack-oslo. Thanks. 
>> >> -Ben >> >> 0: https://bugs.launchpad.net/oslo.messaging/+bug/1800957 >> 1: https://review.openstack.org/#/c/638461 >> > > Stable policy may allow for the backport, depending on the details of > the issue. > > https://docs.openstack.org/project-team-guide/stable-branches.html > The stable policy may allow it, but as I understand it we couldn't release the resulting library. Requirements changes mandate a feature release, which we can't do from stable branches. From wilkers.steve at gmail.com Thu Feb 21 21:38:25 2019 From: wilkers.steve at gmail.com (Steve Wilkerson) Date: Thu, 21 Feb 2019 15:38:25 -0600 Subject: [openstack-helm] would like to discuss review turnaround time In-Reply-To: References: Message-ID: Sorry I'm late to the party. I won't disagree that we have room to grow here; in fact, I'd love to see our review throughput increase, both in terms of the number of reviews we're able to perform and the number of people performing them across the board. I also won't disagree that there have been times where review turnaround has been slower than others; however, I'd also like to point out that the response to reviews (whether it's further discussion or addressing review feedback) has also been slower at times than others. I'm not arguing that either one of those is more frustrating than the other, but in my mind it's an exercise in building trust. I'm always happy to review changes that come in that I feel qualified in reviewing, and I'm not above saying "today's a bit busier than usual, so it may be later before I can take a gander" -- we all have day jobs that demand our attention, even if those work periods don't line up across time zones. I don't think it's unreasonable to ask individuals desiring a quicker turnaround time on reviews for a quicker turnaround time on the feedback provided, else we don't really move forward here. I don't think solving the perceived priority and review latency issues is an insurmountable problem. The first step to tackling this is continued involvement in the channels mentioned above. Our weekly meeting attendance has fluctuated in the past, and I'm sure we can attribute some of that to the time zone difference; however, it's been pretty sparse recently. While our mailing list involvement may not be the best (it's easy to lose track of emails, personally), we've always had a standing portion of our weekly meeting devoted to posting changes we'd like reviewed. One of the items brought up recently was ensuring the following meeting's etherpad is posted at the end of the current weekly meeting so we've got ample time to get visibility on those. Furthermore, our weekly meetings are where we tend to discuss what's happening with OpenStack-Helm at any given time. Whether it's expanding what our jobs do, talking about what's required for getting new features in, or determining what someone can work on to add value, our weekly meeting would be the place to attend (or at least check out the meeting log if you can't make it). Our IRC channel is another -- I see plenty of chatter happening most days, so others needn't be shy about specifically calling out for reviews in the channel. I'm certain there are individuals who can't or don't want to be in IRC for reasons of their choosing, and that's fine - I personally plan to be more involved with the mailing list, but I also can't and don't want to be answering emails all day either. Once again, it's give and take. I'd challenge everyone in this thread to be part of the solution. 
We want to bring others in and accommodate their needs and uses, we want to grow a more diverse core team, and I personally enjoy talking to strangers on the internet. However, that requires active, continuous involvement from all parties involved, not just the core reviewer team - we're not the only ones who are capable of reviewing changes. I'm sure there's a path forward here, so let's find it together. Steve On Thu, Feb 21, 2019 at 12:28 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello, > > These were just examples. We can always find give counter examples of > reviews lagging behind by checking in gerrit. > > As far as I understand it, the problem is not that people don't get > reviews in a timely fashion, is how they are prioritized. > In Openstack, reviews stays behind for a certain time. It's sad but normal. > > But when people are actively pointing to a review, don't get a review, and > others patches seem to go through the system... Then a tension appears. > That tension is caused by different understanding of priorities. I raised > that in the past. > > I think this is a problem we should address -- We don't want to alienate > community members. We want to bring them in, by listening to their use case > and work all together. For this project to become a community project, we > truly need to scale that common sense of priorities all together, far > beyond the walls of a single company. We also need to help others achieve > their goals, as long as they are truly beneficial for the project in the > long term. I like this guidance of the TC: > https://governance.openstack.org/tc/reference/principles.html#openstack-first-project-team-second-company-third > . Let's get there together. Long story short, I would love to see more > attention to non AT&T patches. I guess IRC, ML, or meetings are good > channels to raise those. Would that be a correct assumption for the future? > > Anyway, I am glad you've answered on this email. > > Please keep in mind it's not possible for everyone to join IRC, some > people are not fluent in English, and some people simply don't want to > join instant messaging. For those, the official communication channel is > also the emails, so we should, IMO, treat it as an important channel too. > > Regards, > Jean-Philippe Evrard (evrardjp) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Feb 21 21:41:26 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 21 Feb 2019 15:41:26 -0600 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <20190221173402.GA20285@sm-workstation> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> Message-ID: <20190221214125.GA3443@sm-workstation> > The publishing job is just code and can be adapted to publish two (three) > subtrees to different places, or exclude some directories. > The global index files from doc/source do not necessarily need to include all > the index files of the subdirectories, so that shouldn't be a problem. > > Do you have a specific concern that it may difficult to address? > As Doug pointed out, it is very difficult to split out a generated set of documentation from Sphinx. There are too many links between the resulting output. Assuming that is possible, the existing standard set of jobs for publishing these to their respective locations cannot handle this. 
It would be a non-standard thing (unless all repos switched over to this new structure) so it would not be able to run the jobs that are expected of all projects. We would also need to update the published documentation for how this should be done here: https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation If the desire is to better organize documentation files within the repo, I could see potentially having being able to more easily do something where you have doc/source, doc/api-ref, and doc/releasenotes. But the few things mentioned above would need to be updated to allow that. From lbragstad at gmail.com Thu Feb 21 22:31:23 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 21 Feb 2019 16:31:23 -0600 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> Message-ID: On 2/21/19 3:11 PM, Colleen Murphy wrote: > I have an initial draft of application credential capabilities available for > review[1]. The spec[2] was not straightforward to implement. There were a few > parts that I found unintuitive, user-unfriendly, and overcomplicated. The > current proposed implementation differs from the spec in some ways that I want > to discuss with the team. Given that non-client library freeze is next week, > and the changes we need in keystonemiddleware also require changes in > keystoneclient, I'm not sure there is enough time to properly flesh this out > and allow for thorough code review, but if we miss the deadline we can be ready > to get this in in the beginning of next cycle (with apologies to everyone waiting > on this feature - I just want to make sure we get it right with minimal regrets). I'm all for spending extra time to work through this if needed. Thanks for raising it to the list. > > * Naming > > As always, naming is hard. In the spec, we've called the property that is > attached to the app cred "capabilities" (for the sake of this email I'm going > to call it user-created-rules), and we've called the operator-configured list > of available endpoints "permissible path templates" (for the sake of this email > I'm going to call it operator-created-rules). I find both confusing and > awkward. > > "Permissible path templates" is not a great name because the rule is actually > about more than just the path, it's about the request as a whole, including the > method. I'd like to avoid saying "template" too because that evokes a picture > of something like a Jinja or ERB template, which is not what this is, and > because I'd like to avoid the whole string substitution thing - more on that > below. In the implementation, I've renamed the operator-created-rules to > "access rules". I stole this from Istio after Adam pointed out they have a > really similar concept[3]. I really like this name because I think it > well-describes the thing we're building without already being overloaded. I struggled with this name, but I didn't have a worth-while replacement to suggest. I like using "access rules" but... > > So far, I've kept the user-created-rules as "capabilities" but I'm not a fan of > it because it's an overloaded word and not very descriptive, although in my > opinion it is still more descriptive than "whitelist" which is what we were > originally going to call it. 
It might make sense to relate this property > somehow to the operator-created-rules - perhaps by calling it > access_rules_list, or granted_access_rules. Or we could call *this* thing the > access rules, and call the other thing allowed_access_rules or > permitted_access_rules. ... I think the final proposal here is a great idea, using "access rules" for both with one having a slightly more specific derivative, tailored either for operators or users. The fact we have an association between the two via naming is better than what we had with "template" and "capability". > * Substitutions > > The way the spec lays out variable components of the URL paths for both > user-created-rules and operator-created-rules is unnecessarily complex and in > some cases faulty. The only way I can explain how complicated it is is to try > to give an example: > > Let's say we want to allow a user to create an application credential that > allows the holder to issue a GET request on the identity API that looks like > /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says > that the string '/v3/projects/{project_id}/tags/{tag}' is what should be > provided verbatim in the "path" attribute of a "capability", then there should > be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id > should be taken from the token scope at app cred usage time. When the > capability is validated against the operator-created-rules at app cred creation > time, it needs to check that the path string matches exactly, that the keys of > the "substitutions" dict matches the "user template keys" list, and that keys > required by the "context template keys" are provided by the token context. > > Taking the project ID, domain ID, or user ID from the token scope is not going > to work because some of these APIs may actually be system-scoped APIs - it's > just not a hard and fast rule that a project/domain/user ID in the URL maps to > the same user and scope of the token used to create it. Once we do away with > that, it stops making sense to have a separate attribute for the user-provided > substitutions when they could just include that in the URL path to begin with. > So the proposed implementation simply allows the wildcards * and ** in both the > operator-created-rules and user-created-rules, no python-formatting variable > substitutions. I agree about the awkwardness and complexity, but I do want to clarify. Using the example above, going with * and ** would mean that tokens generated from that application credential would be actionable on any project tag for the project the application credential was created for and not just 'foobar'. For an initial implementation, I think that's fine. Sure, creating an application credential specific to a single server is ideal, but at least we're heading in the right direction by limiting its usage to a single API. If we get that right - we should be able to iteratively add filtering later*. I wouldn't mind re-raising this particular point after we have more feedback from the user community. * iff we need to > > * UUIDs > > The spec says each operator-created-rule should have its own UUID, and the end > user needs to look up and provide this UUID when they create the app cred rule. > This has the benefit of having a fast lookup because we've put the onus on the > user to look up the rule themselves, but I think it is very user-unfriendly. 
In > the proposed implementation, I've done away with UUIDs on the > operator-created-rules, and instead a match is looked up based on the service > type, request path and method, and the "allow_chained" attribute (more on that > next). Depending on how big we think this list of APIs will get, this will have > some impact on performance on creation time (not at token validation time). > > UUIDs make sense for singleton resources that are created in a database. I > imagine this starting as an operator-managed configuration file, and maybe some > day in the future a catalog of this sort could be published by the services so > that the operator doesn't have to maintain it themselves. To that end, I've > implemented the operator-created-rules driver with a JSON file backend. But > with this style of implementation, including UUIDs for every rule is awkward - > it only makes sense if we're generating the resources within keystone and > storing them in a database. > > * allow_chained > > The allow_chained attribute looks and feels awkward and adds complexity to the code > Is there really a case when either the user or operator would not want to allow a > service to make a request on behalf of a user who is making a more general request? > Also, can we find a better name for it? Not that I can think of, but I'd like to hear an operator or user chime in. The example brought up in IRC advocating for the attribute was: - I need to create an application credential that "something" can use to create a server, but that "something" can't create ephemeral storage Otherwise, maybe we can meet in the middle by deciding a sane default behavior and implementing the toggle later? Regarding the name, it's not really clear what we're chaining. But, after re-reading the details in the specification, it's clear this was specific to "service chaining". I'll have to dig my naming hat out of the closet to come up with something more useful... > > Those are all the major question marks for me. There are some other minor > differentiations from the spec and I will propose an update that makes it > consistent with reality after we have these other questions sorted out. It would have taken me way longer to distill this from Gerrit. Thanks again for taking the time to write it up. > > Colleen > > [1] https://review.openstack.org/#/q/topic:bp/whitelist-extension-for-app-creds > [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html > [3] https://istio.io/docs/reference/config/authorization/istio.rbac.v1alpha1/#AccessRule > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Thu Feb 21 23:34:44 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 22 Feb 2019 10:34:44 +1100 Subject: [infra][releases][requirements] Publishing per branch constraints files In-Reply-To: References: <20190214024541.GE12795@thor.bakeyournoodle.com> <20190214212901.GI12795@thor.bakeyournoodle.com> <20190215003231.GJ12795@thor.bakeyournoodle.com> <20190215033217.GK12795@thor.bakeyournoodle.com> <20190215053728.GN12795@thor.bakeyournoodle.com> Message-ID: <20190221233444.GB13081@thor.bakeyournoodle.com> On Fri, Feb 15, 2019 at 09:37:09AM -0500, Doug Hellmann wrote: > That should also be possible to integrate with sphinx. Cool. I'll work on that today. > > > I'll try coding that up next week. 
Expect sphinx questions ;P > > Yep, I'll try to help. > > >> Yeah, we should make sure redirects are enabled. I think we made that a > >> blanket change when we did the docs redirect work, but possibly not. > > > > So I used Rewrite rather then Redirect but I think for this I can switch > > to the latter. > > I don't know the difference, so I don't know if it matters. We're using > redirects elsewhere for docs, but we should just do whatever works for > this case. Either works just fine and in this case the difference isn't important. I've proposed https://review.openstack.org/638527 to allow us to use redirects for this. Once it (or something similar) merges we should be able to test this Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Feb 21 23:54:35 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Feb 2019 08:54:35 +0900 Subject: [infra][qa] installing required projects from source in functional/devstack jobs In-Reply-To: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> References: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> Message-ID: <169127b779a.c2a7cc0895597.8824954749040304365@ghanshyammann.com> ---- On Fri, 22 Feb 2019 02:55:35 +0900 Clark Boylan wrote ---- > On Thu, Feb 21, 2019, at 9:26 AM, Boden Russell wrote: > > Question: > > What's the proper way to install "siblings" [1] in devstack based zuul > > v3 jobs for projects that also require the siblings via requirements.txt? > > > > > > Background: > > Following the zuul v3 migration guide for "sibling requirements" [1] > > works fine for non-devstack based jobs. However, jobs that use devstack > > must take other measures to install those siblings in their playbooks. > > > > Based on what I see projects like oslo.messaging doing for cross > > testing, they are using the PROJECTS env var to specify the siblings in > > their playbook (example [2]). This approach may work if those siblings > > are not in requirements.txt, but for projects that also require the > > siblings at runtime (in requirements.txt) it appears the version from > > the requirements.txt is used rather than the sibling's source. > > By default devstack installs "libraries" (mostly things listed in requirements files) from pypi to ensure that our software works with released libraries. However, it is often important to also test that the next version of our own libraries will work with existing software. For this devstack has the LIBS_FROM_GIT [5] variable which overrides the install via pypi behavior. > > Note that I believe you must handle this flag in your devstack plugins. In addition to what Clark mentioned, all repo defined in "required-projects" variable in zuul v3 job gets appended to devstack's LIBS_FROM_GIT variable by default. -gmann > > > > > For example the changes in [3][4]. 
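To make that concrete (a rough sketch only - the project names below are placeholders, not taken
from the actual change): adding the sibling repos to the job's required-projects list, e.g.
openstack/tricircle plus whichever of its libraries you want from source, is what causes the
devstack-based job to end up with the equivalent of

    [[local|localrc]]
    LIBS_FROM_GIT=tricircle,networking-sfc

in its generated local.conf; outside the gate you can set the same variable by hand. Either way,
as Clark noted, the devstack plugin's install step still has to honour LIBS_FROM_GIT for the
checked-out source to be installed instead of the PyPI release pulled in via requirements.txt.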
> > > > > > > > Thanks > > > > > > [1] > > https://docs.openstack.org/infra/manual/zuulv3.html#installation-of-sibling-requirements > > [2] > > https://github.com/openstack/oslo.messaging/blob/master/playbooks/oslo.messaging-telemetry-dsvm-integration-amqp1/run.yaml#L37 > > [3] https://review.openstack.org/#/c/638099 > > [4] > > http://logs.openstack.org/99/638099/6/check/tricircle-functional/0b34687/logs/devstacklog.txt.gz#_2019-02-21_14_57_44_553 > [5] https://docs.openstack.org/devstack/latest/development.html#testing-changes-to-libraries > > From gmann at ghanshyammann.com Fri Feb 22 00:35:12 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Feb 2019 09:35:12 +0900 Subject: [qa][dev] forum sessions brainstorming Message-ID: <16912a0a9aa.112f3d8ef95727.8279047583257071584@ghanshyammann.com> Hi All, I have created the below etherpad to collect the forum ideas related to QA for Denver Summit. Please write up your ideas with your irc name on etherpad. https://etherpad.openstack.org/p/DEN-train-forum-qa-brainstorming -gmann From openstack at fried.cc Fri Feb 22 00:42:23 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 21 Feb 2019 18:42:23 -0600 Subject: [placement] [translation][i18n] translating exceptions In-Reply-To: References: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> Message-ID: <63470B88-80C7-4F86-B6DD-52B8D44E5DA0@fried.cc> We could play it safe: rip out the dependencies, but keep the macros (and continue to use them for exceptions) and make placement.i18n._() a no-op. That's a hedge against future operators who don't have the background or experience to make this call yet, making it much easier to (re)instate. Eric Fried Concept Brazilian Jiu Jitsu http://taylorbjj.com > On Feb 21, 2019, at 12:56, Doug Hellmann wrote: > > Matt Riedemann writes: > >>> On 2/21/2019 11:07 AM, Doug Hellmann wrote: >>> Does an end-user interact with placement directly, or are all of the >>> errors going to be seen and handled by other services that will report >>> their own errors to end users? >> >> The Placement APIs are admin-only by default. That can be configured via >> policy but assume it's like Ironic and admin-only for the most part. >> >> Speaking of, what does Ironic do about translations in its API? >> >> As for the question at hand, I'm OK with *not* translating errors in >> placement for both the admin-only aspect and the push to use standard >> codes in error responses which can be googled. >> >> FWIW I also shared this thread on WeChat to see if anyone has an opinion >> there. >> >> -- >> >> Thanks, >> >> Matt >> > > I agree. If placement isn't meant for cloud end-users to interact with, > I don't see a lot of benefit to translating the error messages coming > through the API. > > -- > Doug > From matt at oliver.net.au Fri Feb 22 00:50:29 2019 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 22 Feb 2019 11:50:29 +1100 Subject: Outreachy In-Reply-To: References: Message-ID: Welcome Camila, It's great to have you onboard :) Was there anywhere in OpenStack you had in mind? I'm a member of the First Contact SIG[0] are we are available to help in anyway we can. Be it helping you get setup with your environment, in the community and/or getting connected to any particular project inside of the OpenStack project. That is if that hasn't already been worked out through outreachy :) Welcome again to the OpenStack community and if there is anything you need just ask! 
Regards, Matt [0] - https://wiki.openstack.org/wiki/First_Contact_SIG On Fri, Feb 22, 2019 at 3:50 AM Camila Moura wrote: > Sofia, thank you! > I'm reading the documentation, familiarizing myself with terms and > configuring my work environment, so, I'll soon be full of questions. > Best regards > Camila > > > Em qui, 21 de fev de 2019 às 16:26, Sofia Enriquez > escreveu: > >> Welcome, Camila! Nice to hear from you here! >> >> Let me know if you have any questions! >> >> Sofi >> >> On Thu, Feb 21, 2019 at 11:44 AM Camila Moura >> wrote: >> >>> Hi Folks >>> >>> I'm Camila, I participating in Outreachy. I'm from Brazil, but I live >>> em Czech Republic. I've been studying Python, Django, Flask and a little >>> bit HTML and CSS. >>> So, I'll to start to contribute to the project. >>> Please, be patient I'm learning :) >>> >>> Thank you for your attention! >>> Camila >>> >> >> >> -- >> >> Sofia Enriquez >> >> Associate Software Engineer >> Red Hat PnT >> >> Ingeniero Butty 240, Piso 14 >> >> (C1001AFB) Buenos Aires - Argentina >> +541143297471 (8426471) >> >> senrique at redhat.com >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Feb 22 00:56:18 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Feb 2019 09:56:18 +0900 Subject: [Congress] Congress @ PTG? In-Reply-To: References: Message-ID: <16912b3fb12.cc0433ec95770.2481798772931262339@ghanshyammann.com> Thanks Eric for planning the same. I would like to attend few discussion about alarm management in Congress. I will be in Denver PTG and can attend physical PTG as long as there is no conflict. -gmann ---- On Thu, 21 Feb 2019 09:45:45 +0900 Eric K wrote ---- > If you are interested in Congress sessions at the upcoming PTG, please > indicate it in the following two-question form! > https://goo.gl/forms/NtBiaDCOUcEagLmB3 > > Feel free to add topics/comments at this etherpad even if you are not > interested in attending. > https://etherpad.openstack.org/p/congress-ptg-train > > Thank you! > > From matt at oliver.net.au Fri Feb 22 00:58:22 2019 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 22 Feb 2019 11:58:22 +1100 Subject: outreachy candidate In-Reply-To: References: Message-ID: Also feel free to contact anyone in the First Contact SIG[0] as we're here to help get you onboard and in the community :) Regards, Matt [0] - https://wiki.openstack.org/wiki/First_Contact_SIG On Wed, Feb 20, 2019 at 2:20 AM Jay Bryant wrote: > Ramsha, > > Welcome to the community! A good place to start is with the contributor > guide: https://docs.openstack.org/manila/latest/contributor/index.html > > We also have a lot of information about getting started in the OpenStack > Upstream Institute: https://docs.openstack.org/upstream-training/ There > will be an Upstream Institute before the Denver Summit if you are able to > attend in person. > https://www.openstack.org/summit/denver-2019?gclid=CjwKCAiA767jBRBqEiwAGdAOr8USf8TDJ3Gq45BPthBDikdyaA41J0XCIOpI2Im0jsrF8h825c11DBoCMwcQAvD_BwE > > Hope this information helps! > > Thanks! > > Jay > > IRC: jungleboyj > > > On 2/19/2019 9:09 AM, Ramsha Azeemi wrote: > > hi ! I am an applicant , and i want to contribute in "OpenStack Manila > Integration with OpenStack CLI (OSC)" project but i couldnt find a way > tosetup environment , contribute , find newcomer friendly issues , or codes > to fix etc . Kindly guide me . > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Feb 22 01:26:13 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Feb 2019 10:26:13 +0900 Subject: [first-contact-sig] Contributing from Windows In-Reply-To: <20190220162125.4mqwvgjy2v4ti67m@csail.mit.edu> References: <20190220155020.k7nhpqgu5mjkfvw3@yuggoth.org> <20190220162125.4mqwvgjy2v4ti67m@csail.mit.edu> Message-ID: <16912cf5dd4.b019535895840.382794665154702341@ghanshyammann.com> ---- On Thu, 21 Feb 2019 01:21:25 +0900 Jonathan Proulx wrote ---- > On Wed, Feb 20, 2019 at 03:50:21PM +0000, Jeremy Stanley wrote: > :On 2019-02-20 15:31:49 +0500 (+0500), Ramsha Azeemi wrote: > :> hi! i am windows user is it necessary to be a linux ubuntu user > :> for contribution in openstack projects. > > Welcome, > > It's a big community with many different things that neeed doing so > whatever skills and resources you bring there is liekly a use for > them! > > Jeremy's response was pretty extensive, just to undeline one of his > points, it depends how you want to contribute. > > Most of OpenStack runs on Linux so would require some interaction with > Linux either as a VM or remote resource for testing. > > Some parts however like Documentation, Translation, and the > Commandline Client are not tied to a particular operating system and > may be easier to work on directly in a non-Linux environment. +1, that can be a great start to contribute. As Jeremy and Jonathan explained it well, we do have the different type of contribution you can participate as per your interest and requirement. First Contact SIG [1] can definitely help you to onboard in contribution. FC SIG has each project liaison for new contributors with their TZ information. Depends on your interested are, we can help you and connect to a particular mentor. We also conduct the biweekly meeting [2], so feel free to attend that for any queries. [1] https://wiki.openstack.org/wiki/First_Contact_SIG [2] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda -gmann > > -Jon > > :[I've added a subject to your message and tagged it for our "First > :Contact" special interest group, for better visibility.] > : > :I think it really depends on what sort of contributions you want to > :make, as far as how easy that would be without learning to make use > :of common Unix/Linux tools and commands. There are a number of ways > :to contribute to the community, many of which can be found outlined > :here: https://www.openstack.org/community/ > : > :That said, it's hard to know what you mean by "windows user" or > :"linux ubuntu user" in your question. Are you worried about your > :ability to use command-line tools, or is there some deeper problem > :you're concerned with there? For example, if you are interested in > :contributing by improving the software which makes up OpenStack, > :then using a Linux environment will make you far more effective at > :that in the long run. To be frank, OpenStack is complicated > :software, and learning to use a Linux command-line environment is > :unlikely to be one of the greater challenges you'll face as a > :contributor. > : > :I gather we have quite a few contributors whose desktop environment > :is MS Windows but who do development work in a local virtual machine > :or even over the Internet in remote VM instances in public service > :providers. 
Also, I'm led to believe Windows now provides a > :Linux-like command shell with emulated support for Ubuntu packages > :(I was talking to a new contributor just last week who was using > :that to propose source code changes for review). > : > :So to summarize, I recommend first contemplating what manner of > :contribution most excites you. Expect to have to learn lots of new > :things (not just new tools and workflows, those are only the > :beginning of the journey), and most of all have patience with the > :process. We're a friendly bunch and eager to help newcomers turn > :into productive members of our community. > :-- > :Jeremy Stanley > > > From gmann at ghanshyammann.com Fri Feb 22 01:28:53 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 22 Feb 2019 10:28:53 +0900 Subject: Outreachy contribution In-Reply-To: References: Message-ID: <16912d1cf1a.123148be995844.7797751455725636369@ghanshyammann.com> ---- On Thu, 21 Feb 2019 05:06:20 +0900 Sofia Enriquez wrote ---- > Hi Peetpal, Welcome to OpenStack! > > I think you can find the steps to start on the Outreachy web. However, this could help you:First, I recommend you to read about Devstack [1] (It's a series of scripts used to quickly bring up a complete OpenStack environment) > Try to follow the guide [1] and install Devstack on the host machine. > Read the [2] developers guide. > Maybe this guide is old but could help you [3]. > Let me know if you have any questions!Sofi > > [1] https://docs.openstack.org/devstack/latest/[2] https://docs.openstack.org/infra/manual/developers.html[3] https://enriquetaso.wordpress.com/2016/05/07/installing-devstack-on-a-vagrant-virtual-machine/ In addition to what Sofia mentioned, OpenStack Upstream Institute[1] conduct upstream training in every Summit which helps on initial onboarding and learns the complete process of merge your first patch. There are many OpenStack mentor/expert (PTL, Core) in training for guiding you about project-specific contribution. Denver Summit upstream training schedule is out [2], If you plan to attend the summit. First Contact SIG [3] can also help you to onboard you and connect you in specific project team you are interested in. FC SIG has each project liaison for new contributors with their TZ information. We also conduct the biweekly meeting [4], so feel free to attend that for any queries. Feel free to reach any of those groups and thanks for showing interest in OpenStack Contribution. [1] https://docs.openstack.org/upstream-training/ [2] https://www.openstack.org/summit/denver-2019/summit-schedule/global-search?t=OpenStack+Upstream+Institute [3] https://wiki.openstack.org/wiki/First_Contact_SIG [4] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda -gmann > On Wed, Feb 20, 2019 at 2:59 PM Preetpal Kaur wrote: > > > -- > Sofia Enriquez > Associate Software Engineer > Red Hat PnT Ingeniero Butty 240, Piso 14 > (C1001AFB) Buenos Aires - Argentina > +541143297471 (8426471) > > senrique at redhat.com > > > Hi! > I am Preetpal Kaur new in open source. I want to contribute to open > source with the help of outreachy. 
> I choose this project to contribute to.OpenStack Manila Integration > with OpenStack CLI (OSC) > So @Sofia Enriquez Can you please guide me on how to start > > -- > Preetpal Kaur > https://preetpalk.wordpress.com/ > https://github.com/Preetpalkaur3701 > > From skramaja at redhat.com Fri Feb 22 05:55:44 2019 From: skramaja at redhat.com (Saravanan KR) Date: Fri, 22 Feb 2019 11:25:44 +0530 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: +1 Regards, Saravanan KR On Thu, Feb 21, 2019 at 8:33 PM Juan Antonio Osorio Robles wrote: > > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? > > > Best regards > > > From aj at suse.com Fri Feb 22 07:33:34 2019 From: aj at suse.com (Andreas Jaeger) Date: Fri, 22 Feb 2019 08:33:34 +0100 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <20190221214125.GA3443@sm-workstation> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <20190221173402.GA20285@sm-workstation> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> <20190221214125.GA3443@sm-workstation> Message-ID: <60fce8ff-468c-cd46-419b-91e8ce36ea65@suse.com> If we move the api-ref into doc/source, we should follow the example done by install-guides move: Publish as a single tree to docs.openstack.org and stop publishing to developer.openstack.org. If we want to keep publishing to developer.o.o, let's not move - it gets too complicated, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jistr at redhat.com Fri Feb 22 08:32:05 2019 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 22 Feb 2019 09:32:05 +0100 Subject: =?UTF-8?Q?Re=3a_=5btripleo=5d_nominating_Harald_Jens=c3=a5s_as_a_co?= =?UTF-8?Q?re_reviewer?= In-Reply-To: References: Message-ID: <27a4583f-07fa-5455-1e7d-5d6f31ddb6cc@redhat.com> +1! On 21. 02. 19 16:02, Juan Antonio Osorio Robles wrote: > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? > > > Best regards > > > From isanjayk5 at gmail.com Fri Feb 22 08:46:05 2019 From: isanjayk5 at gmail.com (Sanjay K) Date: Fri, 22 Feb 2019 14:16:05 +0530 Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? Message-ID: Hi Matt, I will define/derive priority based on the which sub network the VM belongs to - mostly Production or Development. 
From this, the Prod VMs will have higher resource allocation criteria than other normal VMs and these can be calculated at runtime when a VM is also rebooted like how VMware resource pools and shares features work. I appreciate any other suggestions. thanks and regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Feb 22 09:16:12 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 22 Feb 2019 10:16:12 +0100 Subject: [placement] [translation][i18n] translating exceptions In-Reply-To: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> References: <7ed50c75-64e3-7bea-a07f-9a70db42583f@gmail.com> Message-ID: On 2/21/19 6:51 PM, Matt Riedemann wrote: > On 2/21/2019 11:07 AM, Doug Hellmann wrote: >> Does an end-user interact with placement directly, or are all of the >> errors going to be seen and handled by other services that will report >> their own errors to end users? > > The Placement APIs are admin-only by default. That can be configured via policy > but assume it's like Ironic and admin-only for the most part. > > Speaking of, what does Ironic do about translations in its API? We're trying to keep all user-visible strings translatable. > > As for the question at hand, I'm OK with *not* translating errors in placement > for both the admin-only aspect and the push to use standard codes in error > responses which can be googled. > > FWIW I also shared this thread on WeChat to see if anyone has an opinion there. > From a.settle at outlook.com Fri Feb 22 09:24:33 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Fri, 22 Feb 2019 09:24:33 +0000 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: References: <20190221172418.2oibavbt5fmkndio@yuggoth.org> Message-ID: Not that I don't think you're all great but I think I can 100% deal without it. There's a website I can play on loop and snooze gently too. I'll wake up really cranky for you all <3 it'll be just like old times. On 21/02/2019 17:28, Mohammed Naser wrote: > On Thu, Feb 21, 2019 at 12:26 PM Jeremy Stanley wrote: >> On 2019-02-20 20:15:35 +0000 (+0000), Alexandra Settle wrote: >>> Is it sad I'm almost disappointed by the lack of said horns? >> [...] >> >> I bet we can convince folks to bring wooden train whistles with >> them, or find some in a local shop. Problem solved. >> -- >> Jeremy Stanley > Denver swag = trainhorns? > From camilapaleo at gmail.com Fri Feb 22 10:06:01 2019 From: camilapaleo at gmail.com (Camila Moura) Date: Fri, 22 Feb 2019 11:06:01 +0100 Subject: Outreachy In-Reply-To: References: Message-ID: Thank you so much, Mattew I have in mind OpenStack Manila Integration with OpenStack CLI(OSC), this project is available on Otreachy I'm happy to know about all this support! Best Regards, Camila Em sex, 22 de fev de 2019 às 01:50, Matthew Oliver escreveu: > Welcome Camila, > > It's great to have you onboard :) > > Was there anywhere in OpenStack you had in mind? > I'm a member of the First Contact SIG[0] are we are available to help in > anyway we can. Be it helping you get setup with your environment, in the > community and/or getting connected to any particular project inside of the > OpenStack project. That is if that hasn't already been worked out through > outreachy :) > > Welcome again to the OpenStack community and if there is anything you need > just ask! 
> > Regards, > Matt > > [0] - https://wiki.openstack.org/wiki/First_Contact_SIG > > On Fri, Feb 22, 2019 at 3:50 AM Camila Moura > wrote: > >> Sofia, thank you! >> I'm reading the documentation, familiarizing myself with terms and >> configuring my work environment, so, I'll soon be full of questions. >> Best regards >> Camila >> >> >> Em qui, 21 de fev de 2019 às 16:26, Sofia Enriquez >> escreveu: >> >>> Welcome, Camila! Nice to hear from you here! >>> >>> Let me know if you have any questions! >>> >>> Sofi >>> >>> On Thu, Feb 21, 2019 at 11:44 AM Camila Moura >>> wrote: >>> >>>> Hi Folks >>>> >>>> I'm Camila, I participating in Outreachy. I'm from Brazil, but I live >>>> em Czech Republic. I've been studying Python, Django, Flask and a little >>>> bit HTML and CSS. >>>> So, I'll to start to contribute to the project. >>>> Please, be patient I'm learning :) >>>> >>>> Thank you for your attention! >>>> Camila >>>> >>> >>> >>> -- >>> >>> Sofia Enriquez >>> >>> Associate Software Engineer >>> Red Hat PnT >>> >>> Ingeniero Butty 240, Piso 14 >>> >>> (C1001AFB) Buenos Aires - Argentina >>> +541143297471 (8426471) >>> >>> senrique at redhat.com >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Fri Feb 22 10:47:11 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 22 Feb 2019 10:47:11 +0000 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> Message-ID: <60a6599cd9e7a9bebb49744fe416e3711848e529.camel@redhat.com> On Thu, 2019-02-21 at 18:08 +0100, Luigi Toscano wrote: > Hi all, > > During the last PTG it was decided to move forward with the migration of the > api-ref documentation together with the rest of the documentation [1]. > This is one of the item still open after the (not so recent anymore) massive > documentation restructuring [2]. > > (most likely anything below applies to releasenotes/ as well.) > > I asked about this item few weeks ago on the documentation channels. So far no > one seems against moving forward with this, but, you know, resources > :) > > I think that the process itself shouldn't be too complicated on the technical > side, but more on the definition of the desired outcome. I can help with the > technical part (the moving), but it would be better if someone from the doc > team with the required knowledge and background on the doc process would start > with at least a draft of a spec, which can be used to start the discussion. > > > If implemented, this change would also fix an inconsistency in the guidelines > [2]: the content of reference/ seems to overlap with the content of api- > ref("Library projects should place their automatically generated class > documentation here."), but then having api-ref there would allow us to always > use api-ref. That's where the entire discussion started in the QA session [3]: > some client libraries document their API in different places. I thought the reason we hadn't done this was because the API reference was intentionally unversioned, while the of the documentation was not? What's changed that we could start moving this in-tree? 
Stephen > > [1] https://etherpad.openstack.org/p/docs-i18n-ptg-stein line 144 > [2] https://docs.openstack.org/doc-contrib-guide/project-guides.html > [3] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation > > > Ciao From sbauza at redhat.com Fri Feb 22 10:49:24 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 22 Feb 2019 11:49:24 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C2B52EE@EX10MBOX03.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01C2B52EE@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Feb 21, 2019 at 7:28 PM Fox, Kevin M wrote: > I think its good for the TC to discuss architectural shortcomings... > Someone really needs to do it. > > If the way to do that is to have folks be elected based on recommending > architecture changes and we vote on electing them, at least thats some way > for folks to provide feedback on whats important there. Gives us operators > more of a way to provide feedback. > > For example, I think CellsV2 mostly came about due to the current > architecture not scaling well due to mysql/and torturing rabbit. The nova > team felt I think it best to stick with the blessed architecture and try > and scale it (not unreasonable from a single project perspective). But, > rather then add a bunch of complexity which operators now suffer for, it > could have been handled by fixing the underlying architectural issue. > That's a bit unfortunate if you feel having more operator complexity with Cells V2 (rather than, per say, Cells V1) because Cells V2 is the default Nova now. Operators don't have to configure anything in order to start with a single cell, right ? Doing Cells V2 was actually what you said "fixing the underlying architectural issue". See https://docs.openstack.org/nova/latest/user/cells.html#manifesto for details. Stop torturing rabbit. If k8s can do 5000 nodes with one, non sharded > control plane, nova should be able to too. Scheduling/starting a container > and scheduling/starting a vm are not fundamentally different in their > system requirements. > >From a 30K-feet high level, yes. Also, I can point some recommended architecture that allows you to have a single MQ for 5000 compute nodes with Nova, of course. But let's not jump into technical details here. We'll have the Forum late April which is the perfect place for operator/developer communication and address this particular concern. Feel free to propose a Forum session about this, I'd be happy to participate to it. Before, operators didn't have a frame of reference so just went along with > it. Now they have more options and can more easily see the pain points in > OpenStack and can decide to shift workload elsewhere. A single project > can't make these sorts of overarching architectural decisions. The TC > should do one of decide/help decide/facilitate deciding/delegate deciding. > But someone needs to drive it, otherwise it gets dropped. That should be > the TC IMO. > > The TC can make people discussing, certainly. The TC can help arbitraring priorities, for sure. The TC can insufflate some guidance on archtectural decisions eventually. But the TC can't really identify the pain points for a specific project and allocate resources to work on those. TC folks aren't PMs or EMs. The TC candidates are talking more and more about OpenStack being stable. > One development quote I like, "the code is done, not when there is nothing > more to add, but nothing more to remove" speaks to me here... 
Do TC > candidates think that should that be an architectural goal coming up soon? > Figure out how to continue to do what OpenStack does, but do it simpler > and/or with less code/services? That may require braking down some project > walls. Is that a good thing to do? > > I personnally expressed the idea to have the TC focusing on maturity gaps between projects. Having TC goals be aligned with efforts for filling those gaps is somehow something I look for. What you mention is slighly different, you'd propose a goal about reducing tech debt. I don't really see actionable items on this from a first sight, but it's worth to consider discussing it at the PTG. -Sylvain Thanks, > Kevin > > ------------------------------ > *From:* Sylvain Bauza [sbauza at redhat.com] > *Sent:* Thursday, February 21, 2019 9:28 AM > *To:* Graham Hayes > *Cc:* openstack-discuss at lists.openstack.org > *Subject:* Re: [tc] Questions for TC Candidates > > > > On Thu, Feb 21, 2019 at 6:14 PM Graham Hayes wrote: > >> On 20/02/2019 14:46, Chris Dent wrote: >> > >> > It's the Campaigning slot of the TC election process, where members >> > of the community (including the candidates) are encouraged to ask >> > the candidates questions and witness some debate. I have some >> > questions. >> > >> > First off, I'd like to thank all the candidates for running and >> > being willing to commit some of their time. I'd also like to that >> > group as a whole for being large enough to force an election. A >> > representative body that is not the result of an election would not >> > be very representing nor have much of a mandate. >> > >> > The questions follow. Don't feel obliged to answer all of these. The >> > point here is to inspire some conversation that flows to many >> > places. I hope other people will ask in the areas I've chosen to >> > skip. If you have a lot to say, it might make sense to create a >> > different message for each response. Beware, you might be judged on >> > your email etiquette and attention to good email technique! >> > >> > * How do you account for the low number of candidates? Do you >> > consider this a problem? Why or why not? >> >> I think we are reaching a more stable space, and the people who >> are developing the software are comfortable in the roles they are in. >> >> As the demographic of our developers shifts east, our leadership is >> still very US / EU based, which may be why we are not getting the >> same amount of people growing into TC candidates. >> >> > * Compare and contrast the role of the TC now to 4 years ago. If you >> > weren't around 4 years ago, comment on the changes you've seen >> > over the time you have been around. In either case: What do you >> > think the TC role should be now? >> >> 4 years ago, was just before the big tent I think? Ironically, there >> was a lot of the same discussion - python3, new project requirements >> (at that point the incubation requirements), asyncio / eventlet. >> >> The TC was also in the process of dealing with a By-Laws change, in >> this case getting the trademark program off the ground. >> >> We were still struggling with the "what is OpenStack?" question. >> >> Looking back on the mailing list archives is actually quite interesting >> and while the topics are the same, a lot of the answers have changed. 
>> >> >> > * What, to you, is the single most important thing the OpenStack >> > community needs to do to ensure that packagers, deployers, and >> > hobbyist users of OpenStack are willing to consistently upstream >> > their fixes and have a positive experience when they do? What is >> > the TC's role in helping make that "important thing" happen? >> >> I think things like the review culture change have been good for this. >> The only other thing we can do is have more people reviewing, to make >> that first contact nice and quick, but E_NO_TIME or E_NO_HUMANS >> becomes the issue. >> >> > * If you had a magic wand and could inspire and make a single >> > sweeping architectural or software change across the services, >> > what would it be? For now, ignore legacy or upgrade concerns. >> > What role should the TC have in inspiring and driving such >> > changes? >> >> 1: Single agent on each compute node that allows for plugins to do >> all the work required. (Nova / Neutron / Vitrage / watcher / etc) >> >> 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute >> using something like HTTP(S) would make a lot of sense. >> >> 3: Unified Error codes, with a central registry, but at the very least >> each time we raise an error, and it gets returned a user can see >> where in the code base it failed. e.g. a header that has >> OS-ERROR-COMPUTE-3142, which means that someone can google for >> something more informative than the VM failed scheduling >> >> 4: OpenTracing support in all projects. >> >> 5: Possibly something with pub / sub where each project can listen for >> events and not create something like designate did using >> notifications. >> >> > That's the exact reason why I tried to avoid to answer about architectural > changes I'd like to see it done. Because when I read the above lines, I'm > far off any consensus on those. > To answer 1. and 2. from my Nova developer's hat, I'd just say that we > invented Cells v2 and Placement. > To be clear, the redesign wasn't coming from any other sources but our > users, complaining about scale. IMHO If we really want to see some comittee > driving us about feature requests, this should be the UC and not the TC. > > Whatever it is, at the end of the day, we're all paid by our sponsors. > Meaning that any architectural redesign always hits the reality wall where > you need to convince your respective Product Managers of the great benefit > of the redesign. I'm maybe too pragmatic, but I remember so many > discussions we had about redesigns that I now feel we just need hands, not > ideas. > > -Sylvain > > > > * What can the TC do to make sure that the community (in its many >> > dimensions) is informed of and engaged in the discussions and >> > decisions of the TC? >> >> This is a difficult question, especially in a community where a lot of >> contributors are sponsored. >> >> The most effective way would be for the TC to start directly telling >> projects what to do - but I feel like that would mean that everyone >> would be unhappy with us. >> >> > * How do you counter people who assert the TC is not relevant? >> > (Presumably you think it is, otherwise you would not have run. If >> > you don't, why did you run?) >> >> Highlight the work done by the TC communicating with the board, guiding >> teams on what our vision is, and helping to pick goals. I think the >> goals are a great way, and we are starting to see the benifits as we >> continue with the practice. 
>> >> For some people, we will always be surplus to requirements, and they >> just want to dig into bugs and features, and not worry about politics. >> >> Thats fine - we just have to work with enough of the people on the teams >> to make sure that the project is heading in the correct direction, and >> as long as people can pick up what the priotities are from that process, >> I think we win. >> >> > That's probably more than enough. Thanks for your attention. >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Fri Feb 22 10:54:48 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 22 Feb 2019 05:54:48 -0500 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> Message-ID: <52922cb0-655d-41d6-aea7-bb86240b2703@www.fastmail.com> On Thu, Feb 21, 2019, at 11:32 PM, Lance Bragstad wrote: > > > On 2/21/19 3:11 PM, Colleen Murphy wrote: [snipped] > > > * Substitutions > > > > The way the spec lays out variable components of the URL paths for both > > user-created-rules and operator-created-rules is unnecessarily complex and in > > some cases faulty. The only way I can explain how complicated it is is to try > > to give an example: > > > > Let's say we want to allow a user to create an application credential that > > allows the holder to issue a GET request on the identity API that looks like > > /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says > > that the string '/v3/projects/{project_id}/tags/{tag}' is what should be > > provided verbatim in the "path" attribute of a "capability", then there should > > be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id > > should be taken from the token scope at app cred usage time. When the > > capability is validated against the operator-created-rules at app cred creation > > time, it needs to check that the path string matches exactly, that the keys of > > the "substitutions" dict matches the "user template keys" list, and that keys > > required by the "context template keys" are provided by the token context. > > > > Taking the project ID, domain ID, or user ID from the token scope is not going > > to work because some of these APIs may actually be system-scoped APIs - it's > > just not a hard and fast rule that a project/domain/user ID in the URL maps to > > the same user and scope of the token used to create it. Once we do away with > > that, it stops making sense to have a separate attribute for the user-provided > > substitutions when they could just include that in the URL path to begin with. > > So the proposed implementation simply allows the wildcards * and ** in both the > > operator-created-rules and user-created-rules, no python-formatting variable > > substitutions. > > I agree about the awkwardness and complexity, but I do want to clarify. > Using the example above, going with * and ** would mean that tokens > generated from that application credential would be actionable on any > project tag for the project the application credential was created for > and not just 'foobar'. Not exactly. 
Say the operator has configured a rule like: GET /v3/projects/*/tags/* The user then has the ability to configure one or more of several rules: GET /v3/projects/*/tags/* # this application credential can be used on any project on any tag GET /v3/projects/UUID/tags/* # this application credential can be used on a specific project on any tag GET /v3/projects/*/tags/foobar # this application credential can be used on any project but only for tag "foobar" GET /v3/projects/UUID/tags/foobar # this application credential can only be used on one specific project and one specific tag The matching rule for capability creation would be flexible enough to allow any of these. > > For an initial implementation, I think that's fine. Sure, creating an > application credential specific to a single server is ideal, but at > least we're heading in the right direction by limiting its usage to a > single API. If we get that right - we should be able to iteratively add > filtering later*. I wouldn't mind re-raising this particular point after > we have more feedback from the user community. > > * iff we need to > [snipped] Colleen From colleen at gazlene.net Fri Feb 22 11:01:00 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 22 Feb 2019 06:01:00 -0500 Subject: [keystone][dev] Forum topic brainstorming In-Reply-To: References: Message-ID: <52e35dc9-74c1-46a8-b495-f359ee4dacd0@www.fastmail.com> On Thu, Feb 21, 2019, at 7:16 PM, Lance Bragstad wrote: > Hi all, > > This is going out a little later than I'd like, so I apologize for > letting it slip. > > Submissions for forum topics opens tomorrow [0]. Per usual, I've > created an etherpad [1] for us to come up with topics we'd like to > discuss at the forum. It looks like we only have a couple weeks to > submit sessions [2], so I'll be putting this on the agenda for the > keystone meeting next week. > Please have a look and add suggestions or feedback before then. > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002900.html > [1] https://etherpad.openstack.org/p/DEN-keystone-forum-sessions > [2] https://wiki.openstack.org/wiki/Forum > > Attachments: > * signature.asc We should also presumably be brainstorming PTG topics too? I suppose we don't have to submit those ahead of time but it would be good to start considering them as well. Colleen From gr at ham.ie Fri Feb 22 11:26:50 2019 From: gr at ham.ie (Graham Hayes) Date: Fri, 22 Feb 2019 11:26:50 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On 21/02/2019 18:04, Sylvain Bauza wrote: > > > I'd be interested in discussing the use cases requiring such important > architectural splits. > The main reason why Cells v2 was implemented was to address the MQ/DB > scalability issue of 1000+ compute nodes.  The Edge thingy came after > this, so it wasn't the main driver for change. > If the projects you mention have the same footprints at scale, then yeah > I'm supportive of any redesign discussion that would come up. > > That said, before stepping in into major redesigns, I'd wonder : could > the inter-services communication be improved in terms of reducing payload ? This is actually orthogonal to cells v2. There is other good reasons to remove RMQ in some places: nova control plane <-> compute traffic can be point to point, so a HTTP request is perfectly workable for things that use calls() (cast() is a different story). This removes a lot of intermediate components, (oslo.messaging, RMQ, persistent connections, etc). 
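To make the shape of that concrete, here is the sort of thing I mean - a rough sketch only, the agent endpoint, port and payload below are entirely made up and this is not a proposal for an actual API:

    # Illustrative only: a synchronous "call" from the control plane to a
    # compute agent as a plain authenticated HTTPS request, with no broker
    # in the middle. The /v1/rpc/... endpoint and payload shape are invented.
    import requests

    def call_compute(host, method, args, token, timeout=30):
        resp = requests.post(
            "https://%s:8999/v1/rpc/%s" % (host, method),  # hypothetical agent URL
            json={"args": args},
            headers={"X-Auth-Token": token},
            timeout=timeout,  # an unreachable host shows up as an ordinary timeout
            verify="/etc/openstack/ca.pem",  # TLS in place of broker credentials
        )
        resp.raise_for_status()
        return resp.json()

Service discovery, retries and mutual TLS all still need answers, of course.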
It is not without its own complexity, and potential pitfalls, but I am not going to design a spec on this thread :) for other services using RMQ: 1. Having service VMs connect to RMQ means that if one VM gets compromised, the attacker could cause havoc on the cloud by deleting VMs, networks, or other resources. You can help this by running multiple RMQ services, combinations of vhosts and permissions, but the service resources are still under threat in all cases. 2. Possibly having to open ports from in-cloud workloads to the undercloud so that RMQ is accessible for the in-cloud services. This ties into the single agent for all openstack services - if we had a standard agent on machines that do things for openstack, we could have cross-project TLS mutual auth / app credentials / other auth tooling and do it once, and then just make sure that each image build script for in-cloud services includes it. > > From what I understand there was even talk of doing it for Nova so that > a central control plane could manage remote edge compute nodes without > having to keep a RMQ connection alive across the WAN, but I am not sure > where that got to. > > > That's a separate usecase (Edge) which wasn't the initial reason why we > started implementing Cells V2. I haven't heard any request from the Edge > WG during the PTGs about changing our messaging interface because $WAN > but I'm open to ideas. It was discussed with a few people from the Nova team in the Denver PTG Edge room from what I remember. > -Sylvain > > > To be clear, the redesign wasn't coming from any other sources but our > > users, complaining about scale. IMHO If we really want to see some > > comittee driving us about feature requests, this should be the UC and > > not the TC. > > It should be a combination - UC and TC should be communicating about > these requests - UC for the feedback, and the TC to see how they fit > with the TC's vision for the direction of OpenStack. > > > Whatever it is, at the end of the day, we're all paid by our sponsors. > > Meaning that any architectural redesign always hits the reality wall > > where you need to convince your respective Product Managers of the > great > > benefit of the redesign. I'm maybe too pragmatic, but I remember > so many > > discussions we had about redesigns that I now feel we just need hands, > > not ideas. > > I fully agree, and it has been an issue in the community for as long as > I can remember. It doesn't mean that we should stop pushing the project > forward. We have already moved the needle with the cycle goals, so we > can influence what features are added to projects. Let's continue to do > so. > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From tpb at dyncloud.net Fri Feb 22 11:38:53 2019 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 22 Feb 2019 06:38:53 -0500 Subject: [manila] Forum session brainstorming Message-ID: <20190222113853.apvegdyzglmgp2di@barron.net> It's time to submit proposals for Forum sessions for the Denver Open Infrastructure Summit so I set up an etherpad [1] where we can brainstorm ideas. Please put your irc nick or email next to your idea.
-- Tom Barron https://etherpad.openstack.org/p/DEN-train-forum-manila-brainstorming From bodenvmw at gmail.com Fri Feb 22 13:11:33 2019 From: bodenvmw at gmail.com (Boden Russell) Date: Fri, 22 Feb 2019 06:11:33 -0700 Subject: [infra][qa] installing required projects from source in functional/devstack jobs In-Reply-To: <169127b779a.c2a7cc0895597.8824954749040304365@ghanshyammann.com> References: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> <169127b779a.c2a7cc0895597.8824954749040304365@ghanshyammann.com> Message-ID: On 2/21/19 4:54 PM, Ghanshyam Mann wrote: > In addition to what Clark mentioned, all repo defined in "required-projects" variable in zuul v3 job gets appended to devstack's LIBS_FROM_GIT variable by default. Thanks for the info. However, based on trial and error, using LIBS_FROM_GIT only works if those projects are not in requirements.txt. If the projects used in LIBS_FROM_GIT are also in requirements.txt, the versions from requirements.txt are used, not the source from git. For example the tricircle-functional job passes when neutron and networking-sfc are removed from requirements.txt [1], but fails if they are in requirements.txt [2]. I've also tried moving those required projects into their own requirements file [3], but that does not work either. That said, the only solution I see at the moment is to remove those required projects from requirements.txt until we are ready to release the given project and then specify the versions for these source projects. Am I missing something here? It seems there must be a better solution. [1] https://review.openstack.org/#/c/638099/8 [2] https://review.openstack.org/#/c/638099/7 [3] https://review.openstack.org/#/c/638099/9 From a.settle at outlook.com Fri Feb 22 13:45:09 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Fri, 22 Feb 2019 13:45:09 +0000 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <60a6599cd9e7a9bebb49744fe416e3711848e529.camel@redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <60a6599cd9e7a9bebb49744fe416e3711848e529.camel@redhat.com> Message-ID: On 22/02/2019 10:47, Stephen Finucane wrote: > On Thu, 2019-02-21 at 18:08 +0100, Luigi Toscano wrote: >> Hi all, >> >> During the last PTG it was decided to move forward with the migration of the >> api-ref documentation together with the rest of the documentation [1]. >> This is one of the item still open after the (not so recent anymore) massive >> documentation restructuring [2]. >> >> (most likely anything below applies to releasenotes/ as well.) >> >> I asked about this item few weeks ago on the documentation channels. So far no >> one seems against moving forward with this, but, you know, resources >> :) >> >> I think that the process itself shouldn't be too complicated on the technical >> side, but more on the definition of the desired outcome. I can help with the >> technical part (the moving), but it would be better if someone from the doc >> team with the required knowledge and background on the doc process would start >> with at least a draft of a spec, which can be used to start the discussion. >> >> >> If implemented, this change would also fix an inconsistency in the guidelines >> [2]: the content of reference/ seems to overlap with the content of api- >> ref("Library projects should place their automatically generated class >> documentation here."), but then having api-ref there would allow us to always >> use api-ref.
That's where the entire discussion started in the QA session [3]: >> some client libraries document their API in different places. > I thought the reason we hadn't done this was because the API reference > was intentionally unversioned, while the of the documentation was not? > What's changed that we could start moving this in-tree? Now you say it out loud, I'm fairly certain this is exactly why we didn't do it. As per the spec we wrote for the OS manuals migration, we did "say" we were going to do this work. See [4]. Although the caveat was "... is deferred until we are farther (further, jeez) along with the initial migration work." I'm not sure if we have any other historical notes on this. [4] https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html > > Stephen > >> [1] https://etherpad.openstack.org/p/docs-i18n-ptg-stein line 144 >> [2] https://docs.openstack.org/doc-contrib-guide/project-guides.html >> [3] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation >> >> >> Ciao From ltoscano at redhat.com Fri Feb 22 15:04:22 2019 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 22 Feb 2019 16:04:22 +0100 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> Message-ID: <1643155.mk73lP1mh2@whitebase.usersys.redhat.com> On Thursday, 21 February 2019 19:45:53 CET Doug Hellmann wrote: > Luigi Toscano writes: > > On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: > >> On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: > >> > Hi all, > >> > > >> > During the last PTG it was decided to move forward with the migration > >> > of > >> > the api-ref documentation together with the rest of the documentation > >> > [1]. This is one of the item still open after the (not so recent > >> > anymore) > >> > massive documentation restructuring [2]. > >> > >> How is this going to work with the publishing of these separate content > >> types to different locations? > > > > I can just guess, as this is a work in progress and I don't know about > > most of the previous discussions. > > > > The publishing job is just code and can be adapted to publish two (three) > > subtrees to different places, or exclude some directories. > > The global index files from doc/source do not necessarily need to include > > all the index files of the subdirectories, so that shouldn't be a > > problem. > > > > Do you have a specific concern that it may difficult to address? > > Sphinx is really expecting to build a complete output set that is used > together. Several things may break. It connects the output files > together with "next" and "previous" navigation links, for one. It uses > relative links to resources like CSS and JS files that will be in a > different place if /some/deep/path/to/index.html becomes /index.html or > vice versa. > > What is the motivation for changing how the API documentation is built > and published? I guess that a bit of context is required (Ghanshyam, Petr, please correct me if I forgot anything). During the last PTG we had a QA session focused on documentation and its restructuring (see the already mentioned [1]) One of the point discussed was the location of the generated API. Right now tempest uses doc/source/library, which is not a place documented by the [2]. 
I was arguing about the usage of reference/ instead (which is used by various python-client) and we couldn't come to an agreement. So we asked the Doc representative about this and Petr Kovar kindly chimed in. I don't remember exactly how we ended up on api-ref, but I think that the idea was that api-ref, moved to doc/source/, could have solved the problem, giving the proper place for this kind of content. Then we forgot about this, but fixing the Tempest documentation was still on the list, so I asked again on the doc channels and as suggested I have sent the email that started this thread. This is to say that I'm not particularly attached to this change, but I only tried to push it forward because it was the suggested solution of another problem. I'm more than happy to not have to do more work - but then I'd really appreciate a solution to the original problem. [1] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation [2] https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html Ciao -- Luigi From mriedemos at gmail.com Fri Feb 22 15:06:35 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 22 Feb 2019 09:06:35 -0600 Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? In-Reply-To: References: Message-ID: <492012d2-18b7-436c-9990-d11136f62b9a@gmail.com> On 2/22/2019 2:46 AM, Sanjay K wrote: > I will define/derive priority based on the which sub network the VM > belongs to - mostly Production or Development. From this, the Prod VMs > will have higher resource allocation criteria than other normal VMs and > these can be calculated at runtime when a VM is also rebooted like how > VMware resource pools and shares features work. It sounds like a weigher in scheduling isn't appropriate for your use case then, because weighers in scheduling are meant to weigh compute hosts once they have been filtered. It sounds like you're trying to prioritize which VMs will get built, which sounds more like a pre-emptible/spot instances use case [1][2]. As for VMware resource pools and shares features, I don't know anything about those since I'm not a vCenter user. Maybe someone more worldly, like Jay Pipes, can chime in here. [1] https://www.openstack.org/videos/summits/berlin-2018/science-demonstrations-preemptible-instances-at-cern-and-bare-metal-containers-for-hpc-at-ska [2] https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/enable-rebuild-for-instances-in-cell0.html -- Thanks, Matt From doug at doughellmann.com Fri Feb 22 15:15:36 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 10:15:36 -0500 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: <1643155.mk73lP1mh2@whitebase.usersys.redhat.com> References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> <1643155.mk73lP1mh2@whitebase.usersys.redhat.com> Message-ID: Luigi Toscano writes: > On Thursday, 21 February 2019 19:45:53 CET Doug Hellmann wrote: >> Luigi Toscano writes: >> > On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: >> >> On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: >> >> > Hi all, >> >> > >> >> > During the last PTG it was decided to move forward with the migration >> >> > of >> >> > the api-ref documentation together with the rest of the documentation >> >> > [1]. 
This is one of the item still open after the (not so recent >> >> > anymore) >> >> > massive documentation restructuring [2]. >> >> >> >> How is this going to work with the publishing of these separate content >> >> types to different locations? >> > >> > I can just guess, as this is a work in progress and I don't know about >> > most of the previous discussions. >> > >> > The publishing job is just code and can be adapted to publish two (three) >> > subtrees to different places, or exclude some directories. >> > The global index files from doc/source do not necessarily need to include >> > all the index files of the subdirectories, so that shouldn't be a >> > problem. >> > >> > Do you have a specific concern that it may difficult to address? >> >> Sphinx is really expecting to build a complete output set that is used >> together. Several things may break. It connects the output files >> together with "next" and "previous" navigation links, for one. It uses >> relative links to resources like CSS and JS files that will be in a >> different place if /some/deep/path/to/index.html becomes /index.html or >> vice versa. >> >> What is the motivation for changing how the API documentation is built >> and published? > > > I guess that a bit of context is required (Ghanshyam, Petr, please correct me > if I forgot anything). > > During the last PTG we had a QA session focused on documentation and its > restructuring (see the already mentioned [1]) One of the point discussed was > the location of the generated API. Right now tempest uses doc/source/library, > which is not a place documented by the [2]. > I was arguing about the usage of reference/ instead (which is used by various > python-client) and we couldn't come to an agreement. So we asked the Doc > representative about this and Petr Kovar kindly chimed in. > > I don't remember exactly how we ended up on api-ref, but I think that the idea > was that api-ref, moved to doc/source/, could have solved the problem, giving > the proper place for this kind of content. > Then we forgot about this, but fixing the Tempest documentation was still on > the list, so I asked again on the doc channels and as suggested I have sent > the email that started this thread. > > This is to say that I'm not particularly attached to this change, but I only > tried to push it forward because it was the suggested solution of another > problem. I'm more than happy to not have to do more work - but then I'd really > appreciate a solution to the original problem. If this is internal API documentation, like for using Python libraries, use the reference directory as the spec says. The api-ref stuff is for the REST API. Some projects have both, so we do not want to mix them. > [1] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation > [2] https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html > > Ciao > -- > Luigi > > > -- Doug From doug at doughellmann.com Fri Feb 22 15:25:21 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 10:25:21 -0500 Subject: [tc][election] candidate question: managing change Message-ID: We are consistently presented with the challenges of trying to convince our large community to change direction, collaborate across team boundaries, and work on features that require integration of several services. Other threads with candidate questions include discussions of some significant technical changes people would like to see in OpenStack's implementation. 
Taking one of those ideas, or one of your own idea, as inspiration, consider how you would make the change happen if it was your responsibility to do so. Which change management approaches that we have used unsuccessfully in the past did you expect to see work? Why do you think they failed? Which would you like to try again? How would you do things differently? What new suggestions do you have for addressing this recurring challenge? -- Doug From doug at doughellmann.com Fri Feb 22 15:27:58 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 10:27:58 -0500 Subject: [tc] forum planning Message-ID: It's time to start planning the forum. We need a volunteer (or 2) to set up the etherpad and start collecting ideas for sessions the TC should be running. Who wants to handle that? -- Doug From dave at medberry.net Thu Feb 21 18:53:23 2019 From: dave at medberry.net (David Medberry) Date: Thu, 21 Feb 2019 11:53:23 -0700 Subject: Fwd: Renaissance Denver Hotel: Quiet Zone (no more train horns!) is OFFICIAL for the A line Light Rail! In-Reply-To: <20190221172418.2oibavbt5fmkndio@yuggoth.org> References: <20190221172418.2oibavbt5fmkndio@yuggoth.org> Message-ID: I can definitely round some up for my talk and maybe some other sessions I attend.... On Thu, Feb 21, 2019 at 10:24 AM Jeremy Stanley wrote: > > On 2019-02-20 20:15:35 +0000 (+0000), Alexandra Settle wrote: > > Is it sad I'm almost disappointed by the lack of said horns? > [...] > > I bet we can convince folks to bring wooden train whistles with > them, or find some in a local shop. Problem solved. > -- > Jeremy Stanley From lbragstad at gmail.com Fri Feb 22 16:24:16 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 22 Feb 2019 10:24:16 -0600 Subject: [keystone][dev] Forum topic brainstorming In-Reply-To: <52e35dc9-74c1-46a8-b495-f359ee4dacd0@www.fastmail.com> References: <52e35dc9-74c1-46a8-b495-f359ee4dacd0@www.fastmail.com> Message-ID: <58ae55b6-fcd0-81b0-e95d-d9810eb50da0@gmail.com> On 2/22/19 5:01 AM, Colleen Murphy wrote: > On Thu, Feb 21, 2019, at 7:16 PM, Lance Bragstad wrote: >> Hi all, >> >> This is going out a little later than I'd like, so I apologize for >> letting it slip. >> >> Submissions for forum topics opens tomorrow [0]. Per usual, I've >> created an etherpad [1] for us to come up with topics we'd like to >> discuss at the forum. It looks like we only have a couple weeks to >> submit sessions [2], so I'll be putting this on the agenda for the >> keystone meeting next week. >> Please have a look and add suggestions or feedback before then. >> >> [0] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002900.html >> [1] https://etherpad.openstack.org/p/DEN-keystone-forum-sessions >> [2] https://wiki.openstack.org/wiki/Forum >> >> Attachments: >> * signature.asc > We should also presumably be brainstorming PTG topics too? I suppose we don't have to submit those ahead of time but it would be good to start considering them as well. ++ Since this is the first time we've co-located the events, I'm open to using whatever approach is easiest for collecting ideas. In the past we've always used an etherpad to collect topics and then we formalize it into a schedule as we got closer to the event. I don't know if people are adverse to another etherpad for topics that are closely related, just because that's how we've done it in the past. 
Conversely, we could generalize the forum etherpad into a general bucket of "topics", and denote which are going to be submitted for the forum and which are going to be sessions at the PTG. I'm fine with either. I can restructure things or consolidate if folks think it will be easier to get ideas on paper. Thoughts? > > Colleen > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lbragstad at gmail.com Fri Feb 22 16:26:51 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 22 Feb 2019 10:26:51 -0600 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: <52922cb0-655d-41d6-aea7-bb86240b2703@www.fastmail.com> References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> <52922cb0-655d-41d6-aea7-bb86240b2703@www.fastmail.com> Message-ID: <8245210f-1cd9-bd74-4f06-28c02a43439a@gmail.com> On 2/22/19 4:54 AM, Colleen Murphy wrote: > On Thu, Feb 21, 2019, at 11:32 PM, Lance Bragstad wrote: >> >> On 2/21/19 3:11 PM, Colleen Murphy wrote: > [snipped] > >>> * Substitutions >>> >>> The way the spec lays out variable components of the URL paths for both >>> user-created-rules and operator-created-rules is unnecessarily complex and in >>> some cases faulty. The only way I can explain how complicated it is is to try >>> to give an example: >>> >>> Let's say we want to allow a user to create an application credential that >>> allows the holder to issue a GET request on the identity API that looks like >>> /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says >>> that the string '/v3/projects/{project_id}/tags/{tag}' is what should be >>> provided verbatim in the "path" attribute of a "capability", then there should >>> be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id >>> should be taken from the token scope at app cred usage time. When the >>> capability is validated against the operator-created-rules at app cred creation >>> time, it needs to check that the path string matches exactly, that the keys of >>> the "substitutions" dict matches the "user template keys" list, and that keys >>> required by the "context template keys" are provided by the token context. >>> >>> Taking the project ID, domain ID, or user ID from the token scope is not going >>> to work because some of these APIs may actually be system-scoped APIs - it's >>> just not a hard and fast rule that a project/domain/user ID in the URL maps to >>> the same user and scope of the token used to create it. Once we do away with >>> that, it stops making sense to have a separate attribute for the user-provided >>> substitutions when they could just include that in the URL path to begin with. >>> So the proposed implementation simply allows the wildcards * and ** in both the >>> operator-created-rules and user-created-rules, no python-formatting variable >>> substitutions. >> I agree about the awkwardness and complexity, but I do want to clarify. >> Using the example above, going with * and ** would mean that tokens >> generated from that application credential would be actionable on any >> project tag for the project the application credential was created for >> and not just 'foobar'. > Not exactly. 
Say the operator has configured a rule like: > > GET /v3/projects/*/tags/* > > The user then has the ability to configure one or more of several rules: > > GET /v3/projects/*/tags/* # this application credential can be used on any project on any tag > GET /v3/projects/UUID/tags/* # this application credential can be used on a specific project on any tag > GET /v3/projects/*/tags/foobar # this application credential can be used on any project but only for tag "foobar" > GET /v3/projects/UUID/tags/foobar # this application credential can only be used on one specific project and one specific tag > > The matching rule for capability creation would be flexible enough to allow any of these. Oh - so we're just not allowing for the substitution to be a formal attribute of the application credential? > >> For an initial implementation, I think that's fine. Sure, creating an >> application credential specific to a single server is ideal, but at >> least we're heading in the right direction by limiting its usage to a >> single API. If we get that right - we should be able to iteratively add >> filtering later*. I wouldn't mind re-raising this particular point after >> we have more feedback from the user community. >> >> * iff we need to >> > [snipped] > > Colleen > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From florian.engelmann at everyware.ch Fri Feb 22 16:38:53 2019 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Fri, 22 Feb 2019 17:38:53 +0100 Subject: [ceilometer] radosgw pollster Message-ID: Hi, I failed to poll any usage data from our radosgw. I get 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.containers.objects in the context of radosgw_300s_pollsters 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent pollster radosgw.containers.objects from polling [ From sbauza at redhat.com Fri Feb 22 16:41:54 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 22 Feb 2019 17:41:54 +0100 Subject: [tc][election] candidate question: managing change In-Reply-To: References: Message-ID: On Fri, Feb 22, 2019 at 4:29 PM Doug Hellmann wrote: > > We are consistently presented with the challenges of trying to convince > our large community to change direction, collaborate across team > boundaries, and work on features that require integration of several > services. Other threads with candidate questions include discussions of > some significant technical changes people would like to see in > OpenStack's implementation. Taking one of those ideas, or one of your > own idea, as inspiration, consider how you would make the change happen > if it was your responsibility to do so. > > Which change management approaches that we have used unsuccessfully in > the past did you expect to see work? Why do you think they failed? > > One of the ideas that we tested in the past which I was expecting to succeed was the Architecture WG [1]. It was a very interesting approach to see some experts discussing about common pitfalls and see what we could change, but I personnally feel we felt short in terms of deliverables because those discussions weren't really engaged with the corresponding project teams they were impacting. 
On the other hand, another WG, the API WG (now a SIG) is a great example of success because inputs were directly coming from contributors coming from different projects and seeing common patterns. I can also recall a few discussions we had at Summits (and later Forum) that were promising but did lack of resources for acting on changes. To summarize my thoughts I already said earlier, nothing can happen if you can't have contributors that are familiar with the respective projects that are impacted and that can dedicate time for it (meaning you also need to convince your respective managements of the great benefits it can be). Which would you like to try again? How would you do things differently? > > There a couple of things I'd get things done. Say at least, OSC supporting projects microversions at their latest. Also, I'd like to see most of the projects supporting upgrades and follow the old Design Tenets we agreed on a couple of years before. How to make this work ? Well, no magic bullet : 1/ make sure that we can get projects sign-off on any initiative (for example a TC goal) and make sure you have a champion on each project that is reasonably expert on this project to address the need. 2/ have the SIGs/WGs providing us feedback (like Public WG tries to achieve) and make sure we can have resources matching those feature/bugfix requests. 3/ accept the fact that an architectural redesign can span multiple cycles and ensure that the change is itself iterative with no upgrade impact. What new suggestions do you have for addressing this recurring > challenge? > We currently address at the PTGs cross-project talks in a 1:1 fashion (for example Nova-Cinder). We also have Forum sessions that span multiple projects impact. Now that the PTG directly follows the Forum, it would be a good idea to make sure that ideas that pop up at the Forum are actually translated in real PTG discussions for each service. Yeah, we'll work 6 days and it's going to be stressful, but let's take the opportunity for focusing on real actionable items by having 6 days for it. -Sylvain [1] https://github.com/openstack/arch-wg > -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Fri Feb 22 17:15:41 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 22 Feb 2019 17:15:41 +0000 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> <1643155.mk73lP1mh2@whitebase.usersys.redhat.com> Message-ID: <5dd2d21778ce60d25a5f600f7240c6660d0556f6.camel@redhat.com> On Fri, 2019-02-22 at 10:15 -0500, Doug Hellmann wrote: > Luigi Toscano writes: > > > On Thursday, 21 February 2019 19:45:53 CET Doug Hellmann wrote: > > > Luigi Toscano writes: > > > > On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: > > > > > On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: > > > > > > Hi all, > > > > > > > > > > > > During the last PTG it was decided to move forward with the migration > > > > > > of > > > > > > the api-ref documentation together with the rest of the documentation > > > > > > [1]. This is one of the item still open after the (not so recent > > > > > > anymore) > > > > > > massive documentation restructuring [2]. > > > > > > > > > > How is this going to work with the publishing of these separate content > > > > > types to different locations? 
> > > > > > > > I can just guess, as this is a work in progress and I don't know about > > > > most of the previous discussions. > > > > > > > > The publishing job is just code and can be adapted to publish two (three) > > > > subtrees to different places, or exclude some directories. > > > > The global index files from doc/source do not necessarily need to include > > > > all the index files of the subdirectories, so that shouldn't be a > > > > problem. > > > > > > > > Do you have a specific concern that it may difficult to address? > > > > > > Sphinx is really expecting to build a complete output set that is used > > > together. Several things may break. It connects the output files > > > together with "next" and "previous" navigation links, for one. It uses > > > relative links to resources like CSS and JS files that will be in a > > > different place if /some/deep/path/to/index.html becomes /index.html or > > > vice versa. > > > > > > What is the motivation for changing how the API documentation is built > > > and published? > > > > I guess that a bit of context is required (Ghanshyam, Petr, please correct me > > if I forgot anything). > > > > During the last PTG we had a QA session focused on documentation and its > > restructuring (see the already mentioned [1]) One of the point discussed was > > the location of the generated API. Right now tempest uses doc/source/library, > > which is not a place documented by the [2]. > > I was arguing about the usage of reference/ instead (which is used by various > > python-client) and we couldn't come to an agreement. So we asked the Doc > > representative about this and Petr Kovar kindly chimed in. > > > > I don't remember exactly how we ended up on api-ref, but I think that the idea > > was that api-ref, moved to doc/source/, could have solved the problem, giving > > the proper place for this kind of content. > > Then we forgot about this, but fixing the Tempest documentation was still on > > the list, so I asked again on the doc channels and as suggested I have sent > > the email that started this thread. > > > > This is to say that I'm not particularly attached to this change, but I only > > tried to push it forward because it was the suggested solution of another > > problem. I'm more than happy to not have to do more work - but then I'd really > > appreciate a solution to the original problem. > > If this is internal API documentation, like for using Python libraries, > use the reference directory as the spec says. The api-ref stuff is for > the REST API. Some projects have both, so we do not want to mix them. Yeah, you want to put _Python_ APIs in 'doc/source/reference/api' (you could drop the '/api' bit too, I guess). REST API should stay in 'api- ref' for the above reasons. Perhaps projects other than Tempest aren't doing this consistently, in which case we should probably fix that, but REST APIs should stay where they are, I think. Stephen > > [1] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation > > [2] https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html > > > > Ciao > > -- > > Luigi > > > > > > From kennelson11 at gmail.com Fri Feb 22 17:24:27 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Feb 2019 09:24:27 -0800 Subject: [all] Forum Submissions are Open! Message-ID: Hello Everyone! We are now accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas through the Summit CFP tool [3] through March 8th. 
Don't forget to put your brainstorming etherpad up on the Denver Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. The timeline for submissions is as follows: Feb 22nd | Formal topic submission tool opens: https://www.openstack.org/summit/denver-2019/call-for-presentations. March 8th | Deadline for proposing Forum topics. Scheduling committee meeting to make draft agenda. March 22nd | Draft Forum schedule published. Crowd sourced session conflict detection. Forum promotion begins. March 29th | Scheduling committee final meeting April 5th | Forum schedule final April 29th-May 1st | Forum Time! If you have questions or concerns, please reach out to speakersupport at openstack.org. Cheers, Kendall Nelson (diablo_rojo) [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/denver-2019/ [3] https://www.openstack.org/summit/denver-2019/call-for-presentations [4] https://wiki.openstack.org/wiki/Forum/Denver2019 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Feb 22 17:42:19 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 22 Feb 2019 12:42:19 -0500 Subject: [uwsgi] [glance] Support for wsgi-manage-chunked-input in uwsgi: glance-api finally working over SSL as expected In-Reply-To: References: Message-ID: On 2/21/19 8:02 AM, Thomas Goirand wrote: > Hi, > > It was quite famous that we had no way to run Glance under Python 3 with > SSL, because of eventlet, and the fact that Glance needed chunked-input, > which made uwsgi not a good candidate. Well, this was truth until 12 > days ago, when uwsgi 2.0.18 was released, adding the > --wsgi-manage-chunked-input. I've just installed Glance this way, and > it's finally working as expected. I'll be releasing Glance in Debian > Buster this way. Have you tested image import when running Glance under uwsgi? In previous uwsgi versions, a problem has been that the tasks that perform the import operations (either glance_direct or web_download) get stuck in 'pending' status upon creation and never execute. > > I believe it's now time to make this config the default in the Gate, > which is why I'm writing this message. > > I hope this helps, > Cheers, > > Thomas Goirand (zigo) > From petebirley+openstack-dev at gmail.com Fri Feb 22 17:52:42 2019 From: petebirley+openstack-dev at gmail.com (Pete Birley) Date: Fri, 22 Feb 2019 11:52:42 -0600 Subject: [openstack-helm] Team Meeting (26th Feb 2019) Message-ID: Hey! The next OpenStack-Helm meeting will be held on the 26th February, at 3pm UTC in #openstack-meeting-4 in freenode IRC. It would be great if people interested in OSH could attend, though we appreciate that's not possible, or desirable, for many. The agenda for the meeting is here: https://etherpad.openstack.org/p/openstack-helm-meeting-2019-02-26, please feel free to add to it, even if you cannot attend. Following the meeting, we'll put up next weeks agenda here on the Mailing list, along with the minutes from the last meeting - so that those who can't attend have an opportunity to get involved. Look forward to seeing you all either in IRC or here. 
Pete From james.slagle at gmail.com Fri Feb 22 17:55:53 2019 From: james.slagle at gmail.com (James Slagle) Date: Fri, 22 Feb 2019 12:55:53 -0500 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 10:05 AM Juan Antonio Osorio Robles wrote: > > Hey folks! > > > I would like to nominate Harald as a general TripleO core reviewer. > > He has consistently done quality reviews throughout our code base, > helping us with great feedback and technical insight. > > While he has done a lot of work on the networking and baremetal sides of > the deployment, he's also helped out on security, CI, and even on the > tripleoclient side. > > Overall, I think he would be a great addition to the core team, and I > trust his judgment on reviews. > > > What do you think? +1 -- -- James Slagle -- From akekane at redhat.com Fri Feb 22 17:56:27 2019 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 22 Feb 2019 23:26:27 +0530 Subject: [uwsgi] [glance] Support for wsgi-manage-chunked-input in uwsgi: glance-api finally working over SSL as expected In-Reply-To: References: Message-ID: I have encountered some issues with uwsgi [1]. I guess its worth checking all are resolved or only eventlet related issue is resolved. I will going to test same on Monday but meanwhile someone has got time kindly do the needful. [1] https://etherpad.openstack.org/p/uwsgi-issues Thank you, Abhishek On Fri, 22 Feb 2019 at 11:19 PM, Brian Rosmaita wrote: > On 2/21/19 8:02 AM, Thomas Goirand wrote: > > Hi, > > > > It was quite famous that we had no way to run Glance under Python 3 with > > SSL, because of eventlet, and the fact that Glance needed chunked-input, > > which made uwsgi not a good candidate. Well, this was truth until 12 > > days ago, when uwsgi 2.0.18 was released, adding the > > --wsgi-manage-chunked-input. I've just installed Glance this way, and > > it's finally working as expected. I'll be releasing Glance in Debian > > Buster this way. > > Have you tested image import when running Glance under uwsgi? In > previous uwsgi versions, a problem has been that the tasks that perform > the import operations (either glance_direct or web_download) get stuck > in 'pending' status upon creation and never execute. > > > > > I believe it's now time to make this config the default in the Gate, > > which is why I'm writing this message. > > > > I hope this helps, > > Cheers, > > > > Thomas Goirand (zigo) > > > > > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Fri Feb 22 17:56:43 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 22 Feb 2019 09:56:43 -0800 Subject: [nova][dev] 2 weeks until feature freeze Message-ID: Howdy all, We've about 2 weeks left until feature freeze milestone s-3 on March 7: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule Non-client library freeze is in 1 week February 28, so if you need changes released in Stein for os-vif, os-traits, or os-resource-classes, they need to be merged by Feb 28 and the releases will be proposed on Feb 28. Ping us if you need review. 
The blueprint status tracking etherpad is up-to-date: https://etherpad.openstack.org/p/nova-stein-blueprint-status For our Cycle Themes: Multi-cell operational enhancements: The patch series for the API microversion for handling of down cells (nova side and novaclient side) is complete with one admin docs patch remaining. It is actively being reviewed. Counting quota usage from placement has its implementation done and WIP on test coverage. Cross-cell resize is still making good progress with active code review. Compute nodes able to upgrade and exist with nested resource providers for multiple vGPU types: The libvirt driver reshaper patch is up-to-date and passing CI. There's a comment on the patch pointing out a bit of missing test coverage which needs to be added. Volume-backed user experience and API improvement: The detach boot volume and volume-backed server rebuild patches are active WIP. If you are the owner of an approved blueprint, please: * Add the blueprint if I've missed it * Update the status if it is not accurate * If your blueprint is in the "Wayward changes" section, please upload and update patches as soon as you can, to allow maximum time for review * If your patches are noted as Merge Conflict or WIP or needing an update, please update them and update the status on the etherpad * Add a note under your blueprint if you're no longer able to work on it this cycle Let us know if you have any questions or need assistance with your blueprint. Cheers, -melanie From doug at doughellmann.com Fri Feb 22 18:05:42 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 13:05:42 -0500 Subject: [Release-job-failures][ansible] Release of openstack/openstack-ansible failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. > > - announce-release http://logs.openstack.org/14/1426df14f2bec09521dbf85486537250b8865fbc/release/announce-release/3e13c28/ : FAILURE in 3m 33s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures This looks like a transient failure installing pbr (another job for the same repo at a different version worked fine). I ran tools/announce.sh in the releases repo by hand to send the announcement instead of requeuing the job [1]. That script did report a warning: No bug url found in 'openstack/openstack-ansible/README.rst' It just means that the script couldn't find the pattern it was looking for to be able to link to the bug tracker in the generated email. When that happens it just leaves the section out of the message body. I don't know how important that is to the OSA team. [1] http://lists.openstack.org/pipermail/release-announce/2019-February/006474.html -- Doug From jim at jimrollenhagen.com Fri Feb 22 18:10:29 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 22 Feb 2019 13:10:29 -0500 Subject: [tc][election] candidate question: managing change In-Reply-To: References: Message-ID: On Fri, Feb 22, 2019 at 10:29 AM Doug Hellmann wrote: > > We are consistently presented with the challenges of trying to convince > our large community to change direction, collaborate across team > boundaries, and work on features that require integration of several > services. Other threads with candidate questions include discussions of > some significant technical changes people would like to see in > OpenStack's implementation. 
Taking one of those ideas, or one of your > own idea, as inspiration, consider how you would make the change happen > if it was your responsibility to do so. > > Which change management approaches that we have used unsuccessfully in > the past did you expect to see work? Why do you think they failed? I think there's two classes of approaches we've taken in the past: groups of OpenStack developers with time dedicated to making the change, and everything else. Guess which one worked? Whether we like it or not, this is a community of doers. We've seen lots of working groups with good ideas talk and talk, and while they may be able to convince people that their ideas are good, nothing gets done about them. One other thing that I've noticed with larger changes, or changes that are necessary to scale OpenStack, is that we often lack data or proper resources to prove that a change helps things. This is improving with companies like CERN or Vexxhost running closer to master and being able to deploy/test changes easily at a reasonable scale, but still could be better. > > Which would you like to try again? How would you do things differently? > > What new suggestions do you have for addressing this recurring > challenge? > For these large changes, what we really need is a group of people with the time and resources to accomplish it. Finding that can be difficult. I would like to try something similar to the "help wanted" list for large cross-project changes that we want to see happen. Things like "remove rabbit" or "proper pub-sub for everything" might go on that list (these are examples, please don't look too hard into it). Maybe we need some number of +1s from operators to get a thing on this list. From there, we can get a small group to gather data and propose solutions. The next step is to propose and measure POCs of those solutions, but we'll probably need more people/resources for that. The TC/group can work together on messaging calls for help to companies that have these problems, and hopefully get a group of developers that can push forward and make things happen. AIUI, the "help wanted" list hasn't been very successful in getting contributors for those tasks. I think this proposal has a better chance of succeeding as it would be solving needs present in all/most production clouds. The things on the help needed list aren't pain points for many deployed clouds, but fixing large cross-project problems would be. I do realize it's hard to convince businesses to throw more money at OpenStack these days, so this might not work at all. But it's the best idea I have at the moment. :) // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri Feb 22 18:12:10 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 22 Feb 2019 13:12:10 -0500 Subject: [Release-job-failures][ansible] Release of openstack/openstack-ansible failed In-Reply-To: References: Message-ID: On Fri, Feb 22, 2019 at 1:07 PM Doug Hellmann wrote: > > zuul at openstack.org writes: > > > Build failed. 
> > > > - announce-release http://logs.openstack.org/14/1426df14f2bec09521dbf85486537250b8865fbc/release/announce-release/3e13c28/ : FAILURE in 3m 33s > > > > _______________________________________________ > > Release-job-failures mailing list > > Release-job-failures at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > This looks like a transient failure installing pbr (another job for the > same repo at a different version worked fine). I ran tools/announce.sh > in the releases repo by hand to send the announcement instead of > requeuing the job [1]. > > That script did report a warning: > > No bug url found in 'openstack/openstack-ansible/README.rst' > > It just means that the script couldn't find the pattern it was looking > for to be able to link to the bug tracker in the generated email. When > that happens it just leaves the section out of the message body. I don't > know how important that is to the OSA team. Is there any specific format we should be using/maintaining? > > [1] http://lists.openstack.org/pipermail/release-announce/2019-February/006474.html > > -- > Doug > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From colleen at gazlene.net Fri Feb 22 18:22:25 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 22 Feb 2019 13:22:25 -0500 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: <8245210f-1cd9-bd74-4f06-28c02a43439a@gmail.com> References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> <52922cb0-655d-41d6-aea7-bb86240b2703@www.fastmail.com> <8245210f-1cd9-bd74-4f06-28c02a43439a@gmail.com> Message-ID: <9ffdbf49-8ba0-4ba8-aab7-242a6fcf122e@www.fastmail.com> On Fri, Feb 22, 2019, at 5:27 PM, Lance Bragstad wrote: > > > On 2/22/19 4:54 AM, Colleen Murphy wrote: > > On Thu, Feb 21, 2019, at 11:32 PM, Lance Bragstad wrote: > >> > >> On 2/21/19 3:11 PM, Colleen Murphy wrote: > > [snipped] > > > >>> * Substitutions > >>> > >>> The way the spec lays out variable components of the URL paths for both > >>> user-created-rules and operator-created-rules is unnecessarily complex and in > >>> some cases faulty. The only way I can explain how complicated it is is to try > >>> to give an example: > >>> > >>> Let's say we want to allow a user to create an application credential that > >>> allows the holder to issue a GET request on the identity API that looks like > >>> /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says > >>> that the string '/v3/projects/{project_id}/tags/{tag}' is what should be > >>> provided verbatim in the "path" attribute of a "capability", then there should > >>> be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id > >>> should be taken from the token scope at app cred usage time. When the > >>> capability is validated against the operator-created-rules at app cred creation > >>> time, it needs to check that the path string matches exactly, that the keys of > >>> the "substitutions" dict matches the "user template keys" list, and that keys > >>> required by the "context template keys" are provided by the token context. 
> >>> > >>> Taking the project ID, domain ID, or user ID from the token scope is not going > >>> to work because some of these APIs may actually be system-scoped APIs - it's > >>> just not a hard and fast rule that a project/domain/user ID in the URL maps to > >>> the same user and scope of the token used to create it. Once we do away with > >>> that, it stops making sense to have a separate attribute for the user-provided > >>> substitutions when they could just include that in the URL path to begin with. > >>> So the proposed implementation simply allows the wildcards * and ** in both the > >>> operator-created-rules and user-created-rules, no python-formatting variable > >>> substitutions. > >> I agree about the awkwardness and complexity, but I do want to clarify. > >> Using the example above, going with * and ** would mean that tokens > >> generated from that application credential would be actionable on any > >> project tag for the project the application credential was created for > >> and not just 'foobar'. > > Not exactly. Say the operator has configured a rule like: > > > > GET /v3/projects/*/tags/* > > > > The user then has the ability to configure one or more of several rules: > > > > GET /v3/projects/*/tags/* # this application credential can be used on any project on any tag > > GET /v3/projects/UUID/tags/* # this application credential can be used on a specific project on any tag > > GET /v3/projects/*/tags/foobar # this application credential can be used on any project but only for tag "foobar" > > GET /v3/projects/UUID/tags/foobar # this application credential can only be used on one specific project and one specific tag > > > > The matching rule for capability creation would be flexible enough to allow any of these. > > Oh - so we're just not allowing for the substitution to be a formal > attribute of the application credential? Right, and similarly the list of "user template keys" and "context template keys" in the operator-created-rules won't be needed any more. > > > > >> For an initial implementation, I think that's fine. Sure, creating an > >> application credential specific to a single server is ideal, but at > >> least we're heading in the right direction by limiting its usage to a > >> single API. If we get that right - we should be able to iteratively add > >> filtering later*. I wouldn't mind re-raising this particular point after > >> we have more feedback from the user community. > >> > >> * iff we need to > >> > > [snipped] > > > > Colleen > > > > > > Attachments: > * signature.asc From jim at jimrollenhagen.com Fri Feb 22 18:26:37 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 22 Feb 2019 13:26:37 -0500 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 6:14 AM Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? > > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. 
> * Trim the number of projects to "get back to our roots". > * Something else. > > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? > > Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? > I haven't formed a strong opinion on the above, but I'll answer this. I don't think the number of projects has made a significant impact on our ability to get things done overall. If something is important to a certain amount of users or operators, I believe it will somehow get done eventually (with the caveat that the number of people it is important to scales with the effort required to get it done). But I do think it makes a negative impact on the ability to keep things consistent. The projects with fewer contributor-hours, so to speak, will have a hard time finding sufficient time to keep up with the large number of things we attempt to make consistent between projects. Between python versions, the PTI, docs structure, rolling upgrades, stable policy, API versioning, API "feel", client consistency, etc.etc.etc., there's a lot to keep up with that seems to change fairly frequently. > > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? > Yes. Or maybe just anything other than "$project developers". Ideally I like to think that we would organize ourselves more around classes of features or layers of the stack. I'd like to see more "compute node developers", "networking developers", "REST API developers", "quota developers", etc. I think this would allow us to get important things done consistently across a wider number of projects, eventually making OpenStack a more coherent thing. That said, our culture is so ingrained as-is (both here and inside our supporting employers), that I'm not sure how to make this change. I'd love to talk with others and figure that out. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Feb 22 18:28:15 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 13:28:15 -0500 Subject: [Release-job-failures][ansible] Release of openstack/openstack-ansible failed In-Reply-To: References: Message-ID: Mohammed Naser writes: > On Fri, Feb 22, 2019 at 1:07 PM Doug Hellmann wrote: >> >> zuul at openstack.org writes: >> >> > Build failed. >> > >> > - announce-release http://logs.openstack.org/14/1426df14f2bec09521dbf85486537250b8865fbc/release/announce-release/3e13c28/ : FAILURE in 3m 33s >> > >> > _______________________________________________ >> > Release-job-failures mailing list >> > Release-job-failures at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >> >> This looks like a transient failure installing pbr (another job for the >> same repo at a different version worked fine). I ran tools/announce.sh >> in the releases repo by hand to send the announcement instead of >> requeuing the job [1]. 
>> >> That script did report a warning: >> >> No bug url found in 'openstack/openstack-ansible/README.rst' >> >> It just means that the script couldn't find the pattern it was looking >> for to be able to link to the bug tracker in the generated email. When >> that happens it just leaves the section out of the message body. I don't >> know how important that is to the OSA team. > > Is there any specific format we should be using/maintaining? > >> >> [1] http://lists.openstack.org/pipermail/release-announce/2019-February/006474.html >> >> -- >> Doug >> > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > https://review.openstack.org/638738 will add the link to future releases off of master. I think you'll need to backport that if you want it in the stable branches. -- Doug From colleen at gazlene.net Fri Feb 22 18:37:22 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 22 Feb 2019 13:37:22 -0500 Subject: [dev][keystone] Keystone Team Update - Week of 18 February 2019 Message-ID: <2b1b47dd-7256-413e-ab84-f9414e2c8f0e@www.fastmail.com> # Keystone Team Update - Week of 18 February 2019 ## News ### Scope 101 Melanie started a nova thread [0] that highlighted an API in nova that would benefit from leveraging different scopes in keystone and scope_types in oslo.policy. This thread ultimately kicked up a long discussion in IRC [1] about the concept of authorization scope and how it's actually useful to other OpenStack developers. While we document various token scopes in our admin guide [2], contributor guide [3], and explain how to get them in our API reference [4], we don't do a great job of breaking it down for other developers. Specifically, we don't help connect the dots for developers working on other parts of OpenStack that would benefit from the work we've done in keystone, keystonemiddleware, oslo.policy, and oslo.context to protect APIs they write. This is apparent in discussions we have with experienced OpenStack developers. What we need is a concise and digestable document that clearly explains how other developers in OpenStack can use these tools to provide more of the work they do to end users in a secure way. Lance has a WIP patch [5] up to our contributor guide that attempts to outline the questions people have about authorization scopes and how to consume them. If you have unanswered questions about authorization scopes or just want to learn more about it, please add your perspective to the review and we'll work on smoothing out the wrinkles. [0] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002740.html [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-20.log.html#t2019-02-20T18:35:06 [2] https://docs.openstack.org/keystone/latest/admin/tokens-overview.html#authorization-scopes [3] https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes [4] https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=password-authentication-with-scoped-authorization-detail#system-scoped-example [5] https://review.openstack.org/#/c/638563/ ### Forum, PTG and Summit Sessions Lance posted a call for forum topics for the Denver summit[6]. As the PTG will be in the same place immediately following it, we also need to start thinking about PTG topics too. 
The presentation schedule has been finalized and posted[7], so make sure to check out all the keystone breakout sessions! [6] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003021.html [7] https://www.openstack.org/summit/denver-2019/summit-schedule ### App Creds Update I posted an update on our progress on the fine-grained-access-control feature for application credentials[8] and we had a brief discussion about it on IRC[9]. Please respond on that thread if you have opinions about. I am expecting we will not meet the feature freeze deadline, which means it's perfectly okay to have a naming bikeshed. [8] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003031.html [9] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-21.log.html#t2019-02-21T21:13:50 ### Outreachy Applications Open You may have noticed some activity from Outreachy applicants on the mailing list. The next round is open for both project and intern applications until March 26[10]. As you can tell, interns are already searching for and applying for projects, so best to submit project ideas ASAP. If you have an idea for an Outreachy project and would like to be a mentor, feel free to ask me about it: I can give you an idea of what the process is like, what the time commitment is, and other things you should know. [10] https://www.outreachy.org/communities/cfp/openstack/ ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 37 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 44 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs Just after I sent my report last week, we converted several old blueprints to RFE bug reports, so I altered my filter this week to include those: Bugs opened (23) Bug #1816833 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816833 Bug #1817313 (keystone:High) opened by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1817313 Bug #1816927 (keystone:Low) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816927 Bug #1817047 (keystone:Low) opened by André Luis Penteado https://bugs.launchpad.net/keystone/+bug/1817047 Bug #1816054 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816054 Bug #1816059 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816059 Bug #1816066 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816066 Bug #1816076 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816076 Bug #1816097 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816097 Bug #1816099 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816099 Bug #1816105 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816105 Bug #1816107 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816107 Bug #1816109 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816109 Bug #1816112 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816112 Bug #1816115 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816115 Bug #1816120 (keystone:Wishlist) opened by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1816120 Bug #1816158 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816158 Bug #1816160 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816160 Bug #1816163 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816163 Bug #1816164 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816164 Bug #1816165 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816165 Bug #1816166 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816166 Bug #1816167 (keystone:Wishlist) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1816167 Bugs fixed (8) Bug #1811605 (keystone:High) fixed by Guang Yee https://bugs.launchpad.net/keystone/+bug/1811605 Bug #1814589 (keystone:High) fixed by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814589 Bug #1815539 (keystone:High) fixed by Guang Yee https://bugs.launchpad.net/keystone/+bug/1815539 Bug #1757000 (keystone:Medium) fixed by erus https://bugs.launchpad.net/keystone/+bug/1757000 Bug #1804292 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804292 Bug #1804516 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804516 Bug #1804519 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804519 Bug #1804521 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804521 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html The final release of non-client libraries is next week. As bnemec pointed out, this doesn't include the oslo libraries, for which the freeze is this week. Luckily it doesn't look like we have anything major in flight for oslo.policy and oslo.limit currently. Feature freeze for keystone and final release of client libraries is in two weeks. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From jim at jimrollenhagen.com Fri Feb 22 18:45:17 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 22 Feb 2019 13:45:17 -0500 Subject: [tc][election] candidate question: strategic leadership In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 7:56 AM Doug Hellmann wrote: > > With the changes at the Foundation level, adding new OIPs, a few board > members have suggested that this is an opportunity for the TC to evolve > from providing what some have seen as tactical management through > dealing with day-to-day issues to more long-term strategic leadership > for the project. This theme has also come up in the recent discussions > of the role of the TC, especially when considering how to make > community-wide technical decisions and how much influence the TC should > have over the direction individual projects take. > > What do you think OpenStack, as a whole, should be doing over the next > 1, 3, and 5 years? Why? > I didn't really think about it much until Chris' first set of TC questions, but like I said there, I would like the TC to take a more hands-on role in solving large technical problems in OpenStack. We should be looking to the user and operator communities to see what their true pain points are - the things that wake support up at 2am. 
It probably isn't mutable config, tempest plugin structure, or identity configuration, as awesome as those are. We should be looking at the problems that seem insurmountable; if we don't, who else will? I'm not sure if the TC should prescribe solutions or simply work with projects to explore them, but I suspect the former may be the most technically beneficial to OpenStack in the long run. I'm not sure what that would do to the community, though - is it worth it if it causes rifts there? As far as what OpenStack should be doing - I believe we should put more focus on these operational problems. Since feature work seems to have slowed down some since the peak of the hype cycle, we should be able to manage making large changes underneath ongoing work. From what I've seen in the wider tech community, there's a sizable cohort of people who have tried OpenStack and now recommend against it when folks ask. Let's create less of these people, and maybe even gain some back. :) I don't have specific items that I think we should address in the given time frames, as I haven't done enough research to be able to say. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Feb 22 18:47:06 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 22 Feb 2019 12:47:06 -0600 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: <9ffdbf49-8ba0-4ba8-aab7-242a6fcf122e@www.fastmail.com> References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> <52922cb0-655d-41d6-aea7-bb86240b2703@www.fastmail.com> <8245210f-1cd9-bd74-4f06-28c02a43439a@gmail.com> <9ffdbf49-8ba0-4ba8-aab7-242a6fcf122e@www.fastmail.com> Message-ID: <0838ff04-04af-b77c-6749-65b6a8ba303c@gmail.com> On 2/22/19 12:22 PM, Colleen Murphy wrote: > On Fri, Feb 22, 2019, at 5:27 PM, Lance Bragstad wrote: >> >> On 2/22/19 4:54 AM, Colleen Murphy wrote: >>> On Thu, Feb 21, 2019, at 11:32 PM, Lance Bragstad wrote: >>>> On 2/21/19 3:11 PM, Colleen Murphy wrote: >>> [snipped] >>> >>>>> * Substitutions >>>>> >>>>> The way the spec lays out variable components of the URL paths for both >>>>> user-created-rules and operator-created-rules is unnecessarily complex and in >>>>> some cases faulty. The only way I can explain how complicated it is is to try >>>>> to give an example: >>>>> >>>>> Let's say we want to allow a user to create an application credential that >>>>> allows the holder to issue a GET request on the identity API that looks like >>>>> /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says >>>>> that the string '/v3/projects/{project_id}/tags/{tag}' is what should be >>>>> provided verbatim in the "path" attribute of a "capability", then there should >>>>> be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id >>>>> should be taken from the token scope at app cred usage time. When the >>>>> capability is validated against the operator-created-rules at app cred creation >>>>> time, it needs to check that the path string matches exactly, that the keys of >>>>> the "substitutions" dict matches the "user template keys" list, and that keys >>>>> required by the "context template keys" are provided by the token context. 
>>>>> >>>>> Taking the project ID, domain ID, or user ID from the token scope is not going >>>>> to work because some of these APIs may actually be system-scoped APIs - it's >>>>> just not a hard and fast rule that a project/domain/user ID in the URL maps to >>>>> the same user and scope of the token used to create it. Once we do away with >>>>> that, it stops making sense to have a separate attribute for the user-provided >>>>> substitutions when they could just include that in the URL path to begin with. >>>>> So the proposed implementation simply allows the wildcards * and ** in both the >>>>> operator-created-rules and user-created-rules, no python-formatting variable >>>>> substitutions. >>>> I agree about the awkwardness and complexity, but I do want to clarify. >>>> Using the example above, going with * and ** would mean that tokens >>>> generated from that application credential would be actionable on any >>>> project tag for the project the application credential was created for >>>> and not just 'foobar'. >>> Not exactly. Say the operator has configured a rule like: >>> >>> GET /v3/projects/*/tags/* >>> >>> The user then has the ability to configure one or more of several rules: >>> >>> GET /v3/projects/*/tags/* # this application credential can be used on any project on any tag >>> GET /v3/projects/UUID/tags/* # this application credential can be used on a specific project on any tag >>> GET /v3/projects/*/tags/foobar # this application credential can be used on any project but only for tag "foobar" >>> GET /v3/projects/UUID/tags/foobar # this application credential can only be used on one specific project and one specific tag >>> >>> The matching rule for capability creation would be flexible enough to allow any of these. >> Oh - so we're just not allowing for the substitution to be a formal >> attribute of the application credential? > Right, and similarly the list of "user template keys" and "context template keys" in the operator-created-rules won't be needed any more. Awesome, I'm on board. This is more usable than how I originally interpreted the email. We can always revisit this later, too. > >>>> For an initial implementation, I think that's fine. Sure, creating an >>>> application credential specific to a single server is ideal, but at >>>> least we're heading in the right direction by limiting its usage to a >>>> single API. If we get that right - we should be able to iteratively add >>>> filtering later*. I wouldn't mind re-raising this particular point after >>>> we have more feedback from the user community. >>>> >>>> * iff we need to >>>> >>> [snipped] >>> >>> Colleen >>> >> >> >> Attachments: >> * signature.asc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jaypipes at gmail.com Fri Feb 22 18:50:54 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 22 Feb 2019 13:50:54 -0500 Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? In-Reply-To: <492012d2-18b7-436c-9990-d11136f62b9a@gmail.com> References: <492012d2-18b7-436c-9990-d11136f62b9a@gmail.com> Message-ID: <0da87145-8c10-7d4e-ec3c-70cbf6e31729@gmail.com> On 02/22/2019 10:06 AM, Matt Riedemann wrote: > On 2/22/2019 2:46 AM, Sanjay K wrote: >> I will define/derive priority based on the which sub network the VM >> belongs to - mostly Production or Development. 
From this, the Prod VMs >> will have higher resource allocation criteria than other normal VMs >> and these can be calculated at runtime when a VM is also rebooted like >> how VMware resource pools and shares features work. > > It sounds like a weigher in scheduling isn't appropriate for your use > case then, because weighers in scheduling are meant to weigh compute > hosts once they have been filtered. It sounds like you're trying to > prioritize which VMs will get built, which sounds more like a > pre-emptible/spot instances use case [1][2]. > > As for VMware resource pools and shares features, I don't know anything > about those since I'm not a vCenter user. Maybe someone more worldly, > like Jay Pipes, can chime in here. I am neither worldly nor a vCenter user. Perhaps someone from VMWare, like Chris Dent, can chime in here ;) Best, -jay > [1] > https://www.openstack.org/videos/summits/berlin-2018/science-demonstrations-preemptible-instances-at-cern-and-bare-metal-containers-for-hpc-at-ska > > [2] > https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/enable-rebuild-for-instances-in-cell0.html > > From chris at openstack.org Fri Feb 22 20:30:04 2019 From: chris at openstack.org (Chris Hoge) Date: Fri, 22 Feb 2019 12:30:04 -0800 Subject: [baremetal-sig] Bare metal white paper (volunteers needed) Message-ID: <23DB4999-E978-4FEC-B52B-D11F31871E28@openstack.org> One of the first goals of the Bare Metal SIG will be to publish a white paper for the Denver Open Infrastructure Summit, similar to the document we produced last year for container integrations. We're looking for volunteers to help organize and write the paper. I've started a planning etherpad with a basic time-line and proposed outline: https://etherpad.openstack.org/p/bare-metal-whitepaper Time line: * February 22: White paper Kick Off Feedback from SIG on proposed outline, content, and schedule * March 4: High-level outline and content assignments Writing begins with weekly check-ins on progress and assistance * March 25: Initial content completed Design requests for figures and diagrams Editing and revisions on content for length, style, and correctness * April 8: Final copy to design and web teams for layout and publication * April 29: Publication at Open Infrastructure Summit, Denver Proposed Outline: * Introduction * High Level Overview of Bare Metal provisioning * Integrations into management tools * Case Studies * Conclusion * Glossary * Authors If we're going to have a successful launch for the summit, we'll need to organize writing efforts around this. Right now we need volunteers to help write the different sections of the document, as well as massage the outline into its final form. In particular we need deployers and users to write a variety of case studies to demonstrate the flexibility and power of Ironic in production. I'd like to get some feedback on the outline in the mailing list, and also have volunteers sign up for different parts of the paper in the planning etherpad. I've also started writing some introduction text to help everyone get started on thinking about how to frame this work in a compelling way. Thanks, and I'm excited to get started on this work with everyone. -Chris From rfolco at redhat.com Fri Feb 22 20:30:21 2019 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 22 Feb 2019 17:30:21 -0300 Subject: [openstack-dev][tripleo] TripleO CI Summary: Sprint 26 Message-ID: Greetings, The TripleO CI team has just completed Sprint 26 / Unified Sprint 5 (Jan 31 thru Feb 20). 
The following is a summary of completed work during this sprint cycle: - Added support to Fedora on build containers job, including tripleo-repos. Updated the promotion pipeline jobs to use the same workflow for building containers on CentOS 7 and Fedora 28. Container builds on Fedora-28 are still a work in progress. - Converted scenario 12 to standalone. - Investigated the use of Standalone and OpenShift deployments. We determined to continue using a multinode topology for OpenShift. - Improved usability of Zuul container reproducer with launcher and user documentation. - Completed support of additional OVB node in TripleO jobs and implemented a FreeIPA deployment via CI tooling for TLS deployments. The planned work for the next sprint [1] are: - Get the Fedora 28 containers build job running in RDO and reporting failures. - Enable real bare metal testing in upstream promotion jobs - Test and provide feedback on Zuul reproducer with bugs and fixes. - Use ovb to enable an environment for CI testing of TLS. - Continue work on TLS deployments with nova migration tests. - Close out remaining OpenStack-Ansible tempest integration for a first cut or mvp. The Ruck and Rover for this sprint are Rafael Folco (rfolco) and Gabriele Cerami (panda). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Notes are recorded on etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-6 [2] https://review.rdoproject.org/etherpad/p/ruckrover-unisprint6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Feb 22 22:34:04 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 23 Feb 2019 07:34:04 +0900 Subject: [tc] forum planning In-Reply-To: References: Message-ID: <16917581f13.eefc186b26671.1022981435736037359@ghanshyammann.com> ---- On Sat, 23 Feb 2019 00:27:58 +0900 Doug Hellmann wrote ---- > > It's time to start planning the forum. We need a volunteer (or 2) to set > up the etherpad and start collecting ideas for sessions the TC should be > running. > > Who wants to handle that? I volunteer for that. I have filled the initial information in etherpad [1] and can post it to ML for collecting the ideas. [1] https://etherpad.openstack.org/p/DEN-Train-TC-brainstorming -gmann > > -- > Doug > > From gmann at ghanshyammann.com Fri Feb 22 22:46:00 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 23 Feb 2019 07:46:00 +0900 Subject: [docs] Implementation of the api-ref consolidation under doc/source/ In-Reply-To: References: <15224217.xv0GsKRgh2@whitebase.usersys.redhat.com> <2034589.JRkO46kZ9W@whitebase.usersys.redhat.com> <1643155.mk73lP1mh2@whitebase.usersys.redhat.com> Message-ID: <169176308b6.adefc42526768.3310651201305630203@ghanshyammann.com> ---- On Sat, 23 Feb 2019 00:14:41 +0900 Doug Hellmann wrote ---- > Luigi Toscano writes: > > > On Thursday, 21 February 2019 19:45:53 CET Doug Hellmann wrote: > >> Luigi Toscano writes: > >> > On Thursday, 21 February 2019 18:34:03 CET Sean McGinnis wrote: > >> >> On Thu, Feb 21, 2019 at 06:08:15PM +0100, Luigi Toscano wrote: > >> >> > Hi all, > >> >> > > >> >> > During the last PTG it was decided to move forward with the migration > >> >> > of > >> >> > the api-ref documentation together with the rest of the documentation > >> >> > [1]. 
This is one of the item still open after the (not so recent > >> >> > anymore) > >> >> > massive documentation restructuring [2]. > >> >> > >> >> How is this going to work with the publishing of these separate content > >> >> types to different locations? > >> > > >> > I can just guess, as this is a work in progress and I don't know about > >> > most of the previous discussions. > >> > > >> > The publishing job is just code and can be adapted to publish two (three) > >> > subtrees to different places, or exclude some directories. > >> > The global index files from doc/source do not necessarily need to include > >> > all the index files of the subdirectories, so that shouldn't be a > >> > problem. > >> > > >> > Do you have a specific concern that it may difficult to address? > >> > >> Sphinx is really expecting to build a complete output set that is used > >> together. Several things may break. It connects the output files > >> together with "next" and "previous" navigation links, for one. It uses > >> relative links to resources like CSS and JS files that will be in a > >> different place if /some/deep/path/to/index.html becomes /index.html or > >> vice versa. > >> > >> What is the motivation for changing how the API documentation is built > >> and published? > > > > > > I guess that a bit of context is required (Ghanshyam, Petr, please correct me > > if I forgot anything). > > > > During the last PTG we had a QA session focused on documentation and its > > restructuring (see the already mentioned [1]) One of the point discussed was > > the location of the generated API. Right now tempest uses doc/source/library, > > which is not a place documented by the [2]. > > I was arguing about the usage of reference/ instead (which is used by various > > python-client) and we couldn't come to an agreement. So we asked the Doc > > representative about this and Petr Kovar kindly chimed in. > > > > I don't remember exactly how we ended up on api-ref, but I think that the idea > > was that api-ref, moved to doc/source/, could have solved the problem, giving > > the proper place for this kind of content. > > Then we forgot about this, but fixing the Tempest documentation was still on > > the list, so I asked again on the doc channels and as suggested I have sent > > the email that started this thread. > > > > This is to say that I'm not particularly attached to this change, but I only > > tried to push it forward because it was the suggested solution of another > > problem. I'm more than happy to not have to do more work - but then I'd really > > appreciate a solution to the original problem. > > If this is internal API documentation, like for using Python libraries, > use the reference directory as the spec says. The api-ref stuff is for > the REST API. Some projects have both, so we do not want to mix them. Yeah, it was REST API vs internal API (or stable external interface for cross projects usage) Tempest publish its external stable interface under doc/source/library[1] as we call it more library interfaces than API but we can move it under consistent path if we do have any. 
To be honest, reference/ does not sound good to me for such external (or we call them internal to openstack) stable interface because reference/ is being used as internal contribution documentation or some history reference by the projects, so it was confusing for me that anything goes under reference/ is just internal reference document for new contributor or for history ref or is it for the stable interface (anything other than REST API) exposed by that project. [1] https://docs.openstack.org/tempest/latest/library.html -gmann > > > [1] https://etherpad.openstack.org/p/clean-up-the-tempest-documentation > > [2] https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html > > > > Ciao > > -- > > Luigi > > > > > > > > -- > Doug > From doug at doughellmann.com Fri Feb 22 22:46:10 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 22 Feb 2019 17:46:10 -0500 Subject: [tc] forum planning In-Reply-To: <16917581f13.eefc186b26671.1022981435736037359@ghanshyammann.com> References: <16917581f13.eefc186b26671.1022981435736037359@ghanshyammann.com> Message-ID: <7010B786-18BF-411D-A94C-1B8EFE3AFDDF@doughellmann.com> > On Feb 22, 2019, at 5:34 PM, Ghanshyam Mann wrote: > > ---- On Sat, 23 Feb 2019 00:27:58 +0900 Doug Hellmann wrote ---- >> >> It's time to start planning the forum. We need a volunteer (or 2) to set >> up the etherpad and start collecting ideas for sessions the TC should be >> running. >> >> Who wants to handle that? > > I volunteer for that. I have filled the initial information in etherpad [1] and can post it to ML for collecting the ideas. > > [1] https://etherpad.openstack.org/p/DEN-Train-TC-brainstorming > > -gmann Thanks! Doug From gmann at ghanshyammann.com Fri Feb 22 23:12:13 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 23 Feb 2019 08:12:13 +0900 Subject: [all][tc] Denver Forum session brainstorming Message-ID: <169177b096b.f538068226848.4509759148545946309@ghanshyammann.com> Hi, TC, Stackers, TC has started the etherpad to brainstorm the topis ideas for Denver Summit Forum. You can add TC related sessions or cross community or something which target wider community discussion as whole and does not fit under any specific project/SIG group. Please add your ideas at: https://etherpad.openstack.org/p/DEN-Train-TC-brainstorming March 8th is the dealine to submit the forum sessions. More details on dates, refer - http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003073.html -gmann From mriedemos at gmail.com Sat Feb 23 00:02:25 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 22 Feb 2019 18:02:25 -0600 Subject: [nova][dev] 2 weeks until feature freeze In-Reply-To: References: Message-ID: <1cc4d733-aee4-f0a9-67ad-55b2d5e8cd7b@gmail.com> On 2/22/2019 11:56 AM, melanie witt wrote: > Cross-cell resize is still making good progress with active code review. Yes and no. I have built the series from the bottom up such that the things which could be merged which are nice to have despite cross-cell resize are at the beginning and some of those are merged, some are +2d from Eric (thanks). Where it starts to get tricky is when I'm making DB schema changes, added the CrossCellWeigher (probably need to move that later in the series as noted on the review), and then the conductor stuff. 
Again, the conductor stuff is all built from the bottom up, so it's written like: - add compute service methods that are needed for a conductor task - add the conductor task that executes those compute service methods Once I get to the point of being able to get through to VERIFY_RESIZE status, I added a stub in the API so functional tests can run that code (like gibi's bw provider series). Then I iterate on resize confirm using the same pattern - add compute methods and then the conductor task that calls them. Then resize revert after that, and finally at the very end the new policy rule is added to the API which turns it all on. I've got lots of TODOs and FIXMEs later in the series once it gets into the compute/conductor code, and at least one known issue with tracking volume attachments during a revert (exposed in functional testing). But I've also got the "happy path" scenarios passing in functional tests for confirm/revert. To summarize, there is a ton of code out there [1] (38 patches I think?) and I'd love review on it to start getting feedback and have people punch some holes in it, but I know most of it is not going to land in Stein. My hope is to start whittling some of that series down though since as I said, none of it is "on" until the end, but clearly the latter half of the series still needs work. [1] https://review.openstack.org/#/q/status:open+topic:bp/cross-cell-resize -- Thanks, Matt From thierry at openstack.org Sat Feb 23 02:20:54 2019 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 23 Feb 2019 03:20:54 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <29ed1052-2dcb-1f4e-cc98-f62d1ec34125@openstack.org> Chris Dent wrote: > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. Thanks Chris ! I'll try to make short answers, as the campaigning period unfortunately ended up overlapping with long-planned vacation :) > * How do you account for the low number of candidates? Do you >   consider this a problem? Why or why not? The TC activity traditionally took some significant time, so it was better suited to people who had the chance to spend 100% of their work time on OpenStack. As we mature, we have less and less people who can spend 100% of their time on OpenStack. Sometimes they have to share their time with other projects, or with their organization other priorities. It is therefore more difficult to find candidates with the available time. I don't think it is a problem in itself -- it is more a reflection of how our community changed and how much our systems remained the same. > * Compare and contrast the role of the TC now to 4 years ago. If you >   weren't around 4 years ago, comment on the changes you've seen >   over the time you have been around. In either case: What do you >   think the TC role should be now? Having recently worked on the "role of the TC" document, for me it captures the role of the TC as it is. 4 years ago, we just finished evolving our systems to cope with massive growth. I think today we need to start evolving them again, with an eye toward long-term sustainability. > [...] > * What can the TC do to make sure that the community (in its many >   dimensions) is informed of and engaged in the discussions and >   decisions of the TC? 
Maybe we need to differentiate "communicating what we are doing as the TC" and communicating the global direction". OpenStack developers can ignore the former but should not ignore the latter. In the past we did converge both and hope everyone would read everything, and that was not very successful. > * How do you counter people who assert the TC is not relevant? >   (Presumably you think it is, otherwise you would not have run. If >   you don't, why did you run?) The TC is responsible for the whole of OpenStack, rather than the pieces (which are separately handled by project teams). We are in a good position to take a step back and make sure "OpenStack" looks good, beyond individual pieces. I personally think it is important, and it is why we are relevant. -- Thierry Carrez (ttx) From thierry at openstack.org Sat Feb 23 12:16:15 2019 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 23 Feb 2019 13:16:15 +0100 Subject: [tc][election] candidate question: managing change In-Reply-To: References: Message-ID: <970c6998-1cf1-f0f2-01bd-ade8fe246260@openstack.org> Doug Hellmann wrote: > > We are consistently presented with the challenges of trying to convince > our large community to change direction, collaborate across team > boundaries, and work on features that require integration of several > services. Other threads with candidate questions include discussions of > some significant technical changes people would like to see in > OpenStack's implementation. Taking one of those ideas, or one of your > own idea, as inspiration, consider how you would make the change happen > if it was your responsibility to do so. > > Which change management approaches that we have used unsuccessfully in > the past did you expect to see work? Why do you think they failed? > > Which would you like to try again? How would you do things differently? > > What new suggestions do you have for addressing this recurring > challenge? I think it takes three ingredients: some individual(s) leading the change, over-communication, and leadership. As other mentioned we won't go anywhere if there is nobody signed up to drive the work. Cross-project work is going orthogonal to our organizational structure, so it requires extra work (something we should continue to fix, but that's another topic). Without someone committed to drive that against all odds, it just won't happen by fiat. You also need to over-communicate: in an open source community, people are often more annoyed at feeling excluded from the decision, than at the decision itself. Finally you need a leadership group to say that this large goal is desirable for the group -- that is where the TC comes in, and I would say we need to do more of it. -- Thierry Carrez (ttx) From thierry at openstack.org Sat Feb 23 12:34:35 2019 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 23 Feb 2019 13:34:35 +0100 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: <142499ed-67ea-a864-212f-b4790e2742df@openstack.org> Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? I would say it's slightly too big. 
It is easy to add a project, it is more difficult to remove one. It is not that much of a problem, because it's not a zero-sum game (removing projects won't magically add resources to the remaining ones). However, sometimes we get people to step up to "save" a project -- they end up working on it mostly by themselves, because "someone has to". In some cases maybe the right call would have been to let that project disappear and apply those resources to ensure long-term sustainability of a more strategic project (think one that everyone else depends on). > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. > * Trim the number of projects to "get back to our roots". > * Something else. I'd say a combination of the first 3 :) We should be able to add new projects as required by demand -- capture the energy where it appears. At the same time, I'd like us to think about cutting dead branches rather than maintaining them forever just because that is what we always did. At one point, if the very few users of that service do not really step up to work on it, maybe we should reconsider heroic maintenance by one-person teams. > [...] > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? I'm a strong believer in the "OpenStack developer" -- I think we are stronger as a coordinated framework than as separately-developed compatible pieces of technology. Conway's law plays against us as we are organized in project teams. I see SIGs and Popup teams as ways to encourage that "OpenStack" thinking. -- Thierry Carrez (ttx) From mrhillsman at gmail.com Sun Feb 24 15:25:57 2019 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Sun, 24 Feb 2019 09:25:57 -0600 Subject: [User-committee] UC Feb 2019 Election results In-Reply-To: References: Message-ID: Congratulations! On Sun, Feb 24, 2019, 6:12 AM Mohamed Elsakhawy wrote: > Good Afternoon all > > > On behalf of the User Committee Elections officers, I am pleased to > announce the results of the UC elections for Feb 2019. Please join me in > congratulating the winners of the 3 seats : > > - Amy Marrich > > - Belmiro Moreira > > - John Studarus > > Thank you to all of the candidates and all of you who voted > > * > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Sun Feb 24 16:24:18 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sun, 24 Feb 2019 11:24:18 -0500 Subject: [all][ops] Ops Meetup Agenda Planning - Berlin Edition In-Reply-To: References: Message-ID: Hello all, This is a friendly reminder to get your session ideas in for the Berlin Ops Meetup. Time grows short and the pickings are pretty slim so far. See below for further details. 
-Erik On Fri, Feb 15, 2019, 11:05 AM Erik McCormick wrote: > Hello All, > > The time is rapidly approaching for the Ops Meetup in Berlin. In > preparation, we need your help developing the agenda. i put an [all] > tag on this because I'm hoping that anyone, not just ops, looking for > discussion and feedback on particular items might join in and suggest > sessions. > > It is not required that you attend the meetup to post session ideas. > If there is sufficient interest, we will hold the session and provide > feedback and etherpad links following the meetup. > > Please insert your session ideas into this etherpad, add subtopics to > already proposed sessions, and +1 those that you are interested in. > Also please put your name, and maybe some contact info, at the bottom. > If you'd be willing to moderate a session, please add yourself to the > moderators list. > > https://etherpad.openstack.org/p/BER-ops-meetup > > I'd like to give a big shout out to Deutsche Telekom for hosting us > and providing the catering. I look forward to seeing many of you in > Berlin! > > Cheers, > Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Sun Feb 24 23:12:20 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Mon, 25 Feb 2019 00:12:20 +0100 Subject: [tripleo][ironic] What I had to do to get standalone ironic working with ovn enabled In-Reply-To: <20190221002132.a7tzwh7qxv55k3mi@redhat.com> References: <20190220041555.54yc5diqviszvb6e@redhat.com> <20190221002132.a7tzwh7qxv55k3mi@redhat.com> Message-ID: On Wed, 2019-02-20 at 19:21 -0500, Lars Kellogg-Stedman wrote: > On Thu, Feb 21, 2019 at 10:54:33AM +1300, Steve Baker wrote: > > > 1. I added to my deploy: > > > > > > -e /usr/share/tripleo-heat- > > > templates/environment/services/neutron-ovn-standalone.yaml > > > > > > With this change, `openstack tripleo container image prep` > > > correctly detected that ovn was enabled and generated the > > > appropriate image parameters. > > > > Can you provide your full deployment command. I think it is most > > likely that > > the order of environment files is resulting in an incorrect value > > in > > NeutronMechanismDrivers. You may be able to confirm this by looking > > at the > > resulting plan file with something like: > > Upon closer inspection, I believe you are correct. The problem is > twofold: > > - First, by default, NeutronMechanismDrivers is unset. So if you > simply run: > > openstack tripleo container image prepare -e container-prepare- > parameters.yaml > > ...you get no OVN images. > > - Second, the ironic.yaml environment file explicitly sets: > > NeutronMechanismDrivers: ['openvswitch', 'baremetal'] > > So if ironic.yaml is included after something like > neutron-ovn-standalone.yaml, it will override the value. > > Is this one bug or two? Arguably, ironic.yaml shouldn't be setting > NeutronMechanismDrivers explicitly like that (although I don't know > if > there is an "append" mechanism). But shouldn't > NeutronMechanismDrivers default to 'ovn', if that's the default > mechanism now? > The 'ovn' driver does not support VNIC_BAREMETAL type, so we should load a mechanism driver that does support VNIC_BAREMETAL when using ironic. 
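As a rough illustration only (hypothetical, simplified code -- not the actual
networking-ovn driver, see the link at the end of this mail for the real check),
a mechanism driver that only supports the 'normal' VNIC type effectively declines
to bind baremetal ports, so a second driver has to be loaded to pick them up:

    SUPPORTED_VNIC_TYPES = {'normal'}    # no 'baremetal' here

    def try_bind_port(port):
        vnic_type = port.get('binding:vnic_type', 'normal')
        if vnic_type not in SUPPORTED_VNIC_TYPES:
            # Decline the binding and let the next entry in
            # NeutronMechanismDrivers (e.g. 'baremetal') handle the port.
            return False
        # ... normal OVN port binding would happen here ...
        return True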
With the switch to ovn by default in TripleO maybe ironic.yaml environment file should be updated to set: NeutronMechanismDrivers: ['ovn', 'baremetal'] We could also add the DCHP agent and Metadata agent, as these also seem to be required for ironic use, according previous messages in this thread? OS::TripleO::Services::NeutronDhcpAgent: /usr/share/openstack- tripleo-heat-templates/deployment/neutron/neutron-dhcp-container- puppet.yaml NeutronEnableForceMetadata: true OS::TripleO::Services::NeutronMetadataAgent: /usr/share/openstack- tripleo-heat-templates/deployment/neutron/neutron-metadata-container- puppet.yaml https://github.com/openstack/networking-ovn/blob/master/networking_ovn/ml2/mech_driver.py#L133-L134 From mriedemos at gmail.com Sun Feb 24 23:17:07 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 24 Feb 2019 17:17:07 -0600 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? Message-ID: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> I was updating the status on some nova blueprints in launchpad today and we have 4 placement API blueprints marked as blocked, with one actually in progress now [1]. The issue is none of the changes are being reflected back in the whiteboard for the blueprint in launchpad because the changes are made in the extracted placement repo which doesn't have a launchpad project, nor is there anything in storyboard for placement. Right now it's a small thing, but I'm wondering what, if any, plans are in place for tracking blueprints for the extracted placement repo. I'm assuming the answer is storyboard but maybe it hasn't been discussed much yet. [1] https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree -- Thanks, Matt From jaypipes at gmail.com Mon Feb 25 00:02:24 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Sun, 24 Feb 2019 19:02:24 -0500 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> Message-ID: <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> On 02/24/2019 06:17 PM, Matt Riedemann wrote: > I was updating the status on some nova blueprints in launchpad today and > we have 4 placement API blueprints marked as blocked, with one actually > in progress now [1]. > > The issue is none of the changes are being reflected back in the > whiteboard for the blueprint in launchpad because the changes are made > in the extracted placement repo which doesn't have a launchpad project, > nor is there anything in storyboard for placement. > > Right now it's a small thing, but I'm wondering what, if any, plans are > in place for tracking blueprints for the extracted placement repo. I'm > assuming the answer is storyboard but maybe it hasn't been discussed > much yet. > > [1] https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree There was some light discussion about it on IRC a couple weeks ago. I mentioned my preference was to not have a separate specs repo nor use the Launchpad blueprints feature. I'd rather have a "ideas" folder or similar inside the placement repo itself that tracks longer-form proposals in Markdown documents. Just my preference, though. Not sure where others stand on this. 
Best, -jay From openstack at fried.cc Mon Feb 25 01:25:39 2019 From: openstack at fried.cc (Eric Fried) Date: Sun, 24 Feb 2019 19:25:39 -0600 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> Message-ID: <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> +1 to combining specs into the code repo. Re launchpad vs storyboard: This may be going too far, but is "neither" an option? The information in a blueprint has always seemed largely redundant to me. Approved blueprints can be review.o.o?proj=placement&status=merged&path=specs/$release/approved/*. Whiteboards can be etherpads (which is what I thought you were talking about originally, which confused me, because I marked the placement blueprints unblocked on the etherpad last week; didn't think to duplicate that to the blueprints, my bad.) Are etherpads any more... "ethereal" than lp whiteboards or storyboards? (Does anyone look at those after the blueprint is closed?) Eric Fried Concept Brazilian Jiu Jitsu http://taylorbjj.com > On Feb 24, 2019, at 18:02, Jay Pipes wrote: > >> On 02/24/2019 06:17 PM, Matt Riedemann wrote: >> I was updating the status on some nova blueprints in launchpad today and we have 4 placement API blueprints marked as blocked, with one actually in progress now [1]. >> The issue is none of the changes are being reflected back in the whiteboard for the blueprint in launchpad because the changes are made in the extracted placement repo which doesn't have a launchpad project, nor is there anything in storyboard for placement. >> Right now it's a small thing, but I'm wondering what, if any, plans are in place for tracking blueprints for the extracted placement repo. I'm assuming the answer is storyboard but maybe it hasn't been discussed much yet. >> [1] https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree > > There was some light discussion about it on IRC a couple weeks ago. I mentioned my preference was to not have a separate specs repo nor use the Launchpad blueprints feature. I'd rather have a "ideas" folder or similar inside the placement repo itself that tracks longer-form proposals in Markdown documents. > > Just my preference, though. Not sure where others stand on this. > > Best, > -jay > From akhil.jain at india.nec.com Mon Feb 25 03:09:36 2019 From: akhil.jain at india.nec.com (AKHIL Jain) Date: Mon, 25 Feb 2019 03:09:36 +0000 Subject: Fw: [congress] Handling alarms that can be erroneous In-Reply-To: References: , , Message-ID: Hi all, This discussion is about keeping, managing and executing actions based on old alarms. In Congress, when the policy is created the corresponding actions are executed based on data already existing in datasource tables and on the data that is received later in Congress datasource tables. So the alarms raised by projects like aodh, monasca are polled by congress and even the webhook notifications for alarm are received and stored in congress. In Congress, there are two scenarios of policy execution. One, execution based on data already existing before the policy is created and second, policy is created and action is executed at any time after the data is received Which can be harmful by keeping in mind that old alarms that are INVALID at present are still stored in Congress tables. 
So the user can trigger FALSE action based on that invalid alarm which can be very harmful to the environment. In order to tackle this, there can be multiple ways from the perspective of every OpenStack project handling alarms. One of the solutions can be: As action needs to be taken immediately after the alarm is raised, so storing only those alarms that have corresponding actions or policies(that will use the alarm) and after the policy is executed on them just discard those alarms or mark those alarm with some field like old, executed, etc. Or there are use cases that require old alarms? Also, we need to provide Operator the ability to delete the rows in congress datasource table. This will not completely help in solving this issue but still, it's better functionality to have IMO. Above solution or any discussed better solution can lead to change in mechanism i.e currently followed that involves policy execution on both new alarm and existing alarm to only new alarm. I have added the previous discussion below and discussion in Congress weekly IRC meeting can be found here http://eavesdrop.openstack.org/meetings/congressteammeeting/2019/congressteammeeting.2019-02-22-04.01.log.html Thanks and regards, Akhil ________________________________________ From: Eric K Sent: Tuesday, February 19, 2019 11:04 AM To: AKHIL Jain Subject: Re: Congress Demo and Output Thanks for the update! Yes of course if created_at field is needed by important use case then please feel free to add it! Sample policy in the commit message would be very helpful. Regarding old alarms, I need a couple clarifications: First, which categories of actions executions are we concerned about? 1. Actions executed automatically by congress policy. 2. Actions executed automatically by another service getting data from Congress. 3. Actions executed manually by operator based on data from Congress. Second, let's clarify exactly what we mean by "old". There are several categories I can think of: 1. Alarms which had been activated and then deactivated. 2. Alarms which had been activated and remains active, but it has been some time since it first became active. 3. Alarms which had been activated and triggered some action, but the alarm remains active because the action do not resolve the alarm. 4. Alarms which had been activated and triggered some action, and the action is in the process of resolving the alarm, but in the mean time the alarm remains active. (1) should generally not show up in Congress as active in push update case, but there are failure scenarios in which an update to deactivate can fail to reach Congress. (2) seems to be the thing option 1.1 would get rid of. But I am not clear what problems (2) causes. Why is a bad idea to execute actions based on an alarm that has been active for some time and remains active? An example would help me =) I can see (4) causing problems. But I'd like to work through an example to understand more concretely. In simple cases, Congress policy action execution behavior actually works well. If we have simple case like: execute[action(1)] :- alarm(1) Then action(1) is not going to be executed twice by congress because the behavior is that Congress executes only the NEWLY COMPUTED actions. If we have a more complex case like: execute[action(1)] :- alarm(1) execute[action(2)] :- alarm(1), alarm(2) If alarm (1) activates first, triggering action(1), then alarm (2) activates before alarm(1) deactivates, action(2) would be triggered because it is newly computed. 
Whether we WANT it executed may depend on the use case. And I'd also like to add option 1.3: Add a new table in (say monasca) called latest_alarm, which is the same as the current alarms table, except that it contains only the most recently received active alarm. That way, the policies which must avoid using older alarms can refer to the latest_alarm table. Whereas policies which would consider all currently active alarms can refer to the alarms table. Looking forward to more discussion! On 2/17/19, 10:44 PM, "AKHIL Jain" wrote: >Hi Eric, > >There are some questions raised while working on FaultManagement usecase, >mainly below ones: >1. Keeping old alarms can be very harmful, the operator can execute >actions based on alarms that are not even existing or valid. >2. Adding a created_at field in Nova servers table can be useful. > >So for the first question, there can be multiple options: >1.1 Do not store those alarms that do not have any policy created in >Congress to execute on that alarm >1.2 Add field in alarm that can tell if the policy is executed using that >row or not. And giving the operator a command to delete them or >automatically delete them. > >For 2nd question please tell me that its good to go and I will add it. > >Regards >Akhil From tomi.juvonen at nokia.com Mon Feb 25 08:43:45 2019 From: tomi.juvonen at nokia.com (Juvonen, Tomi (Nokia - FI/Espoo)) Date: Mon, 25 Feb 2019 08:43:45 +0000 Subject: [fenix] starting bi-weekly meetings Message-ID: Hi, Now reaching a point with the Fenix project where we should start to have meetings in #openstack-fenix. I propose to have a bi-weekly meeting at 6 AM UTC, starting from 11th March. Please give your +1 or indicate if the time does not suit to you. For those who have not heard about the project, you can find more from [1]. Currently working to have the rolling OpenStack upgrade implemented before ONS summit and then make it a job to be run on each patch later on. What is awesome is that I use multinode DevStack environment and I am modifying the scripts to do most of the node upgrade for me just by passing "UPGRADE=True" in local.conf. Let's see how it goes, might be a quite generic solution for anybody to test upgrade with own changes to projects. [1] https://wiki.openstack.org/wiki/Fenix Regards, Tomi Fenix PTL -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Mon Feb 25 08:45:10 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 25 Feb 2019 17:45:10 +0900 Subject: [searchlight] Team meeting cancelled today Message-ID: Hi team, I'll be out of the office for the rest of the day so I cannot hold the team meeting at 13:30 UTC today. Meeting will happen again on 11th March. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Mon Feb 25 08:47:57 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 25 Feb 2019 17:47:57 +0900 Subject: [fenix] starting bi-weekly meetings In-Reply-To: References: Message-ID: +1 That works for me. Thanks. On Mon, Feb 25, 2019 at 5:46 PM Juvonen, Tomi (Nokia - FI/Espoo) < tomi.juvonen at nokia.com> wrote: > Hi, > > Now reaching a point with the Fenix project where we should start to have > meetings in #openstack-fenix. > I propose to have a bi-weekly meeting at 6 AM UTC, starting from 11th > March. Please give your +1 or indicate if the time does not suit to you. 
> > For those who have not heard about the project, you can find more from [1]. > > Currently working to have the rolling OpenStack upgrade implemented before > ONS summit and then make it a job to be run on each patch later on. What is > awesome is that I use multinode DevStack environment and I am modifying the > scripts to do most of the node upgrade for me just by passing “ > UPGRADE=True” in local.conf. Let’s see how it goes, might be a quite > generic solution for anybody to test upgrade with own changes to projects. > > [1] *https://wiki.openstack.org/wiki/Fenix* > > > Regards, > Tomi > Fenix PTL > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Feb 25 09:28:05 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 25 Feb 2019 04:28:05 -0500 Subject: [tc][election] New series of campaign questions Message-ID: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Hello, Here are my questions for the candidates. Keep in mind some might overlap with existing questions, so I would expect a little different answer there than what was said. Most questions are intentionally controversial and non-strategic, so please play this spiritual game openly as much as you can (no hard feelings!). The objective for me with those questions is not to corner you/force you implement x if you were elected (that would be using my TC hat for asking you questions, which I believe would be wrong), but instead have a glimpse on your mindset (which is important for me as an individual member in OpenStack). It's more like the "magic wand" questions. After this long introduction, here is my volley of questions. A) In a world where "general" OpenStack issues/features are solved through community goals, do you think the TC should focus on "less interesting" technical issues across projects, like tech debt reduction? Or at the opposite, do you think the TC should tackle the hardest OpenStack wide problems? B) Do you think the TC must check and actively follow all the official projects' health and activities? Why? C) Do you think the TC's role is to "empower" project and PTLs? If yes, how do you think the TC can help those? If no, do you think it would be the other way around, with PTLs empowering the TC to achieve more? How and why? D) Do you think the community goals should be converted to a "backlog"of time constrained OpenStack "projects", instead of being constrained per cycle? (with the ability to align some goals with releasing when necessary) E) Do you think we should abandon projects' ML tags/IRC channels, to replace them by focus areas? For example, having [storage] to group people from [cinder] or [manila]. Do you think that would help new contributors, or communication in the community? F) There can be multiple years between a "user desired feature across OpenStack projects", and its actual implementation through the community goals. How do you think we can improve? G) What do you think of the elections process for the TC? Do you think it is good enough to gather a team to work on hard problems? Or do you think electing person per person have an opposite effect, highlighting individuals versus a common program/shared objectives? Corollary: Do you think we should now elect TC members by groups (of 2 or 3 persons for example), so that we would highlight their program vs highlight individual ideas/qualities? Thanks for your patience, and thanks for your application! 
Regards, Jean-Philippe Evrard (evrardjp) From cdent+os at anticdent.org Mon Feb 25 11:24:12 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 25 Feb 2019 11:24:12 +0000 (GMT) Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On Thu, 21 Feb 2019, Alexandra Settle wrote: > Well hello again! Hello! > While you address this question to developers and recognise that > there are many different types of contributors, I think > documentation sits in a weird loop hole here. We are often > considered developers because we follow developmental workflows, > and integrate with the projects directly. Some of us are more > technical than others and contribute to both the code base and to > the physical documentation. Risking a straw man here: How would > you define the technical writers that work for OpenStack? We too > are often considered "OpenStack" writers and experts, yet as I > say, we are not experts on every project. I'd hesitate to define anyone. Technical writers, developers, users, deployers and all the other terms we can come up with for people who are involved in the OpenStack community are all individuals and do things that overlap in many roles. I was reluctant to use the term developer in my original question because it's not a term I like because it is so frequently used to designate a priesthood which has special powers (and rewards and obligations) different from a (lesser) laity. Which is crap. Not as crap as "software engineer" but still crap. But I used it to try to forestall any "who do you mean" and "who does the TC represent" questions, which, upon reflection, might have been good questions to debate. Technical writers, and developers, and everyone else who is involved in the OpenStack community are co-authors of this thing which we call OpenStack. From my standpoint the thing we are authoring, and hope to keep alive, is the community and the style of collaboration we use in it. The thing that people run clouds with and companies sell is sort of secondary, but is the source of value that will keep people wanting the community to exist. The thing people who are active in the community and want be "leaders" should be doing is focusing on ensuring that we create and maintain the systems that allow people to contribute in a way that sustains the style of collaboration, respects their persons and their labor, and (critically, an area where I think we are doing far too little) makes sure that the people who profit off that labor attend to their responsibilities. We have to, however, make sure that the source of value is good. Different people are interested in or have aptitudes for different things (e.g., writing code or writing about what code does); enabling those people to contribute to the best of their abilities and in an equitable fashion makes the community and the product better. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sbauza at redhat.com Mon Feb 25 11:46:33 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 Feb 2019 12:46:33 +0100 Subject: [tc][election] New series of campaign questions In-Reply-To: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Message-ID: On Mon, Feb 25, 2019 at 10:32 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello, > > Here are my questions for the candidates. 
Keep in mind some might overlap > with existing questions, so I would expect a little different answer there > than what was said. Most questions are intentionally controversial and > non-strategic, so please play this spiritual game openly as much as you can > (no hard feelings!). > > Hola Jean-Philippe. No hard feelings, I actually think it's very important to ask us some difficult questions for knowing our opinions. The objective for me with those questions is not to corner you/force you > implement x if you were elected (that would be using my TC hat for asking > you questions, which I believe would be wrong), but instead have a glimpse > on your mindset (which is important for me as an individual member in > OpenStack). It's more like the "magic wand" questions. After this long > introduction, here is my volley of questions. > > NP. As I said in some other thread, I think I don't have a magic wand, but I'll try ;) A) In a world where "general" OpenStack issues/features are solved through > community goals, do you think the TC should focus on "less interesting" > technical issues across projects, like tech debt reduction? Or at the > opposite, do you think the TC should tackle the hardest OpenStack wide > problems? > > Heh, can I say "both" ? ;-) No, to be clear, I think it's probably one of the main priorities for the TC to discuss with the community about goals that should be helping to fix some hard and wide problems (like for example Py3). But it's also important for the TC to leave projects be discussing about technical issues they have and see how the TC can help those projects for this. Tech debt reduction is actually a good example. It's difficult for a project to find resources working on fixing tech debt reduction (and I know about it from my Nova scheduler expertise...). Here, the role of the TC could be to find ways to 'magically' find resources (heh, I finally found a magic wand \o/ ) for those projects. For example, the Padawan proposal [1] could be one way for the projects to bring to light their technical concerns. B) Do you think the TC must check and actively follow all the official > projects' health and activities? Why? > > Oh yeah, 100% this. If the TC should be only doing one thing, that should be this. The TC is the guardian of OpenStack health, making sure that all projects go into the same direction by the same page as much as possible. We now have a situation were there are ways different healthes between projects, and one of my main priorities if I was accepted for TC would be to see how all the projects can eventually be having the same technical health. > C) Do you think the TC's role is to "empower" project and PTLs? If yes, > how do you think the TC can help those? If no, do you think it would be the > other way around, with PTLs empowering the TC to achieve more? How and why? > > Oh, excellent question. Thanks for having said it. Well, it depends on the project, right ? Say, large projects don't really need the TC to "empower" their respective contributors, or even the PTL. In general, even if we now have less resources with a glass pane, most of the respective projects have their own sub-community where the PTL doesn't need some 'blessing'. On that case, the PTL can actually help the TC by instructing it about all the issues the project faces, and possibly ask it other projects have the same issues. On the other way, the TC could help this PTL by either helping to say it's a priority for the project, or just thinking it could be a nice goal. 
For small projects, it's sometimes different. Most of the times, the project doesn't have a lot of contributors and the PTL is just one of them. In that case, the TC could empower this PTL by giving his/her a way to ask some resources, or just having a way to discuss with other projects. For example, Cyborg and Nova are two different projects with not the same contributors, but thanks to the TC, we have ways to discuss between us. D) Do you think the community goals should be converted to a "backlog"of > time constrained OpenStack "projects", instead of being constrained per > cycle? (with the ability to align some goals with releasing when necessary) > > Hum, good question. I don't have an opinion about it if it's about the fact whether a goal can only be for a cycle or not. Maybe we could enlarge the goals to be for more than one cycle, but I'd rather prefer to discuss it by the Denver Forum first and see what people think about this. > E) Do you think we should abandon projects' ML tags/IRC channels, to > replace them by focus areas? For example, having [storage] to group people > from [cinder] or [manila]. Do you think that would help new contributors, > or communication in the community? > > I don't think we should change the IRC channels names, if I understand this strawman :-) We already have #openstack-dev where people can ping folks unrelated to a specific project. That said, if project contributors from both cinder and manila think they really want to have a #openstack-storage channel because it helps them, I don't think the TC should disagree. For ML threads, same goes. We have tags for projects because it's important for project contributors to just look at the tag before the title for knowing whether it's related to what they work, but if projects really want a larger tag for *good reasons*, I'm not opposed to. F) There can be multiple years between a "user desired feature across > OpenStack projects", and its actual implementation through the community > goals. How do you think we can improve? > > First, I'm not sold on the idea that a user-desired feature needs to go thru a community goal to be implemented. I guess you probably wanted to ask "how the TC can help having ideas transformed into code quickier ?". Well, surely a controversial question, right? The best way to have a feature is to have a contributor implementing this feature. No magic wand here. As I said elsewhere, you can have the best idea, it won't be automatically turned into the killing feature you want just because you explain us how crucial and important this idea is. What the TC can do tho is to help users and developers to discuss and see if common goals (in terms of achievement) can be shared. I already gave the example of the Public WG. Most of the public cloud operators share the same pain so providing a good way to merge all concerns into one deliverable document and sharing it to the corresponding project teams can help seeing whether some contributors can dedicate a bit of time. It won't magically happen because those contributors are nice, just because they probably have the same problems internally or know about this and want to fix them. G) What do you think of the elections process for the TC? Do you think it > is good enough to gather a team to work on hard problems? Or do you think > electing person per person have an opposite effect, highlighting > individuals versus a common program/shared objectives? 
Corollary: Do you > think we should now elect TC members by groups (of 2 or 3 persons for > example), so that we would highlight their program vs highlight individual > ideas/qualities? > > How do you see the groups to be elected then ? :-) It's like politicals right? Either you make an election with individuals (for example for the French president), or you have an election with two or more lists (like European elections). But those lists are actually created by another internal election ;-) See, I'm not against discussing this strawman, but I'm not very happy with it at the moment. At least, we need more than just one conversation here for that I guess. Thanks for all your questions that were important :-) -Sylvain Thanks for your patience, and thanks for your application! > > Regards, > Jean-Philippe Evrard (evrardjp) > > [1] https://review.openstack.org/#/c/636956/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Mon Feb 25 11:53:54 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 25 Feb 2019 11:53:54 +0000 (GMT) Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> Message-ID: On Sun, 24 Feb 2019, Matt Riedemann wrote: > The issue is none of the changes are being reflected back in the whiteboard > for the blueprint in launchpad because the changes are made in the extracted > placement repo which doesn't have a launchpad project, nor is there anything > in storyboard for placement. Yeah, there's a similar issue with os-resource-classes releases not updating launchpad bugs as released. Probably others. > Right now it's a small thing, but I'm wondering what, if any, plans are in > place for tracking blueprints for the extracted placement repo. I'm assuming > the answer is storyboard but maybe it hasn't been discussed much yet. As Jay and Eric point out we've talked about what to do about specs and months ago there was some talk of "well, eventually we'll go to storyboard" but that was sort of pending governance decisions and stability. I guess we're nearly there. (more below...) On Sun, 24 Feb 2019, Jay Pipes wrote: > There was some light discussion about it on IRC a couple weeks ago. I > mentioned my preference was to not have a separate specs repo nor use the > Launchpad blueprints feature. I'd rather have a "ideas" folder or similar > inside the placement repo itself that tracks longer-form proposals in > Markdown documents. I like no-separate-specs-repo ideas as well and got the sense that most people felt the same way. I think still having some kinds of specs for at least some changes is a good idea, as in the past it has helped us talk through an API to get it closer to right before writing it. We don't need spec-cores and I reckon the rules for what needs a spec can be loosened, depending on how people feel. (Markdown! I'd love to do markdown but using existing tooling, and including all these artifacts in the existing docs (even if they are just "ideas") seems a good thing.) On Sun, 24 Feb 2019, Eric Fried wrote: > +1 to combining specs into the code repo. > Re launchpad vs storyboard: This may be going too far, but is > "neither" an option? The information in a blueprint has always > seemed largely redundant to me. 
Approved blueprints can be > review.o.o?proj=placement&status=merged&path=specs/$release/approved/*. > Whiteboards can be etherpads (which is what I thought you were > talking about originally, which confused me, because I marked the > placement blueprints unblocked on the etherpad last week; didn't > think to duplicate that to the blueprints, my bad.) Are etherpads > any more... "ethereal" than lp whiteboards or storyboards? (Does > anyone look at those after the blueprint is closed?) I agree that blueprint+spec management has felt largely redundant, probably because one came later than the other. However, I'd prefer for us to kind of unify under storyboard rather than mix and match several different pieces. I think etherpads are pretty good for the note taking thing at PTGs and the like, but execrable for pretty much everything else. (What follows are just some ideas, for comment, we'll have to figure out together what will work best.) I haven't had a chance to really learn up on storyboard as much as I would like, but I think we should probably try to use it as a central tracking place. Features that warrant a spec should be a story where one of the tasks in the story is the spec itself. The rest of the tasks are the implementation. tags, boards and lists ought to allow us to create useful views. Storyboard isn't fully soup yet, but we can help push it in that direction. We should probably create ourselves there and update infra config so gerrit updates happen well. For things that are currently in progress, we should simply carry on with them as they are, and manage them manually. That means paying some close attention to the 4 extant blueprints and the various bugs. At the end of Stein we should kill or migrate what's left. So, to take it back to Matt's question, my opinion is: Let's let existing stuff run its course to the end of the cycle, decide together how and when we want to implement storyboard. (I've been sort of putting these kinds of questions off until there's a new PTL.) -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From marcin.juszkiewicz at linaro.org Mon Feb 25 11:58:08 2019 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 25 Feb 2019 12:58:08 +0100 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: <91661ace-7d81-f36e-4150-1c325f986a3b@linaro.org> W dniu 15.02.2019 o 11:13, Eduardo Gonzalez pisze: > Hi, is my pleasure to propose Michal Nasiadka for the core team in > kolla-ansible. +1 From m.andre at redhat.com Mon Feb 25 12:20:41 2019 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Mon, 25 Feb 2019 13:20:41 +0100 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: On Fri, Feb 15, 2019 at 11:21 AM Eduardo Gonzalez wrote: > > Hi, is my pleasure to propose Michal Nasiadka for the core team in kolla-ansible. +1 I'd also be happy to welcome Michal to the kolla-core group (not just kolla-ansible) as he's done a great job reviewing the kolla patches too. Martin > Michal has been active reviewer in the last relases (https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), has been keeping an eye on the bugs and being active help on IRC. > He has also made efforts in community interactions in Rocky and Stein releases, including PTG attendance. > > His main interest is NFV and Edge clouds and brings valuable couple of years experience as OpenStack/Kolla operator with good knowledge of Kolla code base. 
> > Planning to work on extending Kolla CI scenarios, Edge use cases and improving NFV-related functions ease of deployment. > > Consider this email as my +1 vote. Vote ends in 7 days (22 feb 2019) > > Regards From smooney at redhat.com Mon Feb 25 12:22:19 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 Feb 2019 12:22:19 +0000 Subject: [nova][neutron][os-vif] upcoming release of os-vif Message-ID: hi everyone. As many people know the non-client lib freeze is thursday the 28th so it is time to do the final os-vif release for stein. there are a number of pending patches to os-vif https://review.openstack.org/#/q/project:openstack/os-vif+status:open+branch:master while i think we can merge several of them i have sorted them into groups below. required: remove use of brctl from vif_plug_linux_bridge https://review.openstack.org/636822 remove use of brctl from vif_plug_linux_bridge https://review.openstack.org/636821 prefer to merge: Add native implementation OVSDB API https://review.openstack.org/482226 make functional tests run on python 3 https://review.openstack.org/638053 nice to have: modify functional base.py to allow using vscode https://review.openstack.org/638058 docs: Add API docs for VIF types https://review.openstack.org/637009 doc: Use sphinx.ext.todo for profile, datapath offload types https://review.openstack.org/638405 docs: Start using sphinx.ext.autodoc for VIF types https://review.openstack.org/638404 docs: Add API docs for profile, datapath offload types https://review.openstack.org/638395 defer: Add 'SUPPORT_BW_CONFIG' option to VIFs https://review.openstack.org/636933 the required patches are makeing there way though the gate. i would hope we can merge most of the patches in the prefer and nice to have buckets but my intent is to propose a patch to the release repo tonight with the head of master proably at or after 20:00 UTC which will be around noon PST. that will give eu and us folks that want to review these changes a resonably amount of time. ideally we can try and get the release out early tommorow. once that is done os-vif will go into a a feature freeze until RC1 is released of nova and neutron at which point it will unfreeze. once i submit the patch for the release until thursday i would like to do a full code freeze on os-vif so that if there are any bugs that show up in the gate we can fix those without accepting any other changes. once we pass the non-client lib freeze on thrusday non feature patches such as docs changes or testing changes are fine but features should wait to RC1. regards sean From amotoki at gmail.com Mon Feb 25 13:36:39 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 25 Feb 2019 22:36:39 +0900 Subject: [api][neutron] tagging support when creating a resource Message-ID: Hi API-SIG, neutron and tagging-related folks, This email ask opinions on tagging when creating a resource. We received a feature request to support tagging when creating a resource [1]. Neutron supports bulk creation of resources but the current tagging API recommended by the API-SIG does not define tagging when creating a resource. As a result, if we want to create 100 ports with specific tags, we need 1 (bulk creation) +100 (taggins per resource) API calls. It sounds nice to me to support tagging when creating a resource to address the above problem. 
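To make the overhead concrete, here is a rough sketch of the current flow with plain HTTP calls (the endpoint URL, token and network UUID below are placeholders, not from a real deployment; the paths follow the Neutron v2.0 API and its tags extension):

    import requests

    NEUTRON = "http://controller:9696/v2.0"          # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<token>",            # placeholder token
               "Content-Type": "application/json"}

    # 1 call: bulk-create 100 ports on one network.
    body = {"ports": [{"name": "port-%d" % i, "network_id": "<net-uuid>"}
                      for i in range(100)]}
    ports = requests.post(NEUTRON + "/ports", json=body,
                          headers=HEADERS).json()["ports"]

    # +100 calls: tags are only a subresource today, so every created port
    # needs its own PUT to set the tag list after creation.
    for port in ports:
        requests.put("%s/ports/%s/tags" % (NEUTRON, port["id"]),
                     json={"tags": ["red", "blue"]}, headers=HEADERS)

If 'tags' were accepted in the create request body, the loop above would disappear and the whole operation would stay a single bulk call.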
What in my mind is to specify 'tags' attribute in a body of POST request like: {'port': {'name': 'foo', 'network_id': , 'tags': ['red', 'blue'] } } I don't know the reason why the current API-SIG recommended tagging API does not support this model. The current tagging API defines "tags" as a subresource. Is there any reason this model was adopted? Best Regards, Akihiro Motoki (irc: amotoki) [1] https://bugs.launchpad.net/neutron/+bug/1815933 [RFE] Allow bulk-tagging of resources [2] https://specs.openstack.org/openstack/api-wg/guidelines/tags.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Mon Feb 25 13:41:18 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 25 Feb 2019 13:41:18 +0000 Subject: [tc][election] New series of campaign questions In-Reply-To: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Message-ID: Well hello good friend! esponses inline :) Apologies to all but I've stopped wrapping my answers because they were coming through to the list formatted in the most odd ways. Working on it. On 25/02/2019 09:28, Jean-Philippe Evrard wrote: > Hello, > > Here are my questions for the candidates. Keep in mind some might overlap with existing questions, so I would expect a little different answer there than what was said. Most questions are intentionally controversial and non-strategic, so please play this spiritual game openly as much as you can (no hard feelings!). Never! Always good vibes :) > > The objective for me with those questions is not to corner you/force you implement x if you were elected (that would be using my TC hat for asking you questions, which I believe would be wrong), but instead have a glimpse on your mindset (which is important for me as an individual member in OpenStack). It's more like the "magic wand" questions. After this long introduction, here is my volley of questions. > > A) In a world where "general" OpenStack issues/features are solved through community goals, do you think the TC should focus on "less interesting" technical issues across projects, like tech debt reduction? Or at the opposite, do you think the TC should tackle the hardest OpenStack wide problems? Very interesting question. To answer directly: I believe the TC should focus on both - but it's not that simple. I believe, first and foremost, that technical debt is a huge issue that we sweep under the carpet, much like the original issues themselves that have been rolled up into technical debt, and it's a weird self-perpetuating cycle. Everyone actively works from cycle-to-cycle to better the project and the product, clearing out the backlog and focusing on new improvements - but it has been very easy in the past to avoid smaller, as you say, "less interesting" issues because it's not new and exciting. But this is a lot of what OpenStack is now - we're a stable product. While the TC can and should focus on helping guide discussions and decisions on the hard OpenStack-wide issues (the TC is elected to have the experience to do so), I do believe a lot of the focus should be around smaller issues to avoid the little things being swept under the carpet and forgotten about. > B) Do you think the TC must check and actively follow all the official projects' health and activities? Why? Health is key. 
I do believe we are all in a weird transition phase; evolving from being the newest, hottest, open source product on the market to a stable, reliable product that people flock towards to actively work on. Project activity is important, but we have set up a system of PTLs and liaisons that monitor that activity and work with the TC to inform of any major changes. As I said above, the TC should be looking at the smaller aspects of a project, and health is often considered one of those. > C) Do you think the TC's role is to "empower" project and PTLs? If yes, how do you think the TC can help those? If no, do you think it would be the other way around, with PTLs empowering the TC to achieve more? How and why? Empowerment is an interesting word - and I'm unsure if you used it deliberately or not - because the term refers to an increase in autonomy. Defining a word is very 1990's wedding speech stereotype (sorry sorry), but it's important to note in this case. There has in the past, and now, been a lot of discussion surrounding the TC's role and how much they empower teams vs. the TC making key decisions on behalf of everyone else. Anyway back on track - as Sylvain rightly pointed out, this question is entirely pertinent to the size of a team. I find this particularly interesting, having been the documentation PTL that was empowered by the TC. This relationship can 100% go two ways and I believe it does now. You asked above if the TC must check and actively follow all the official project's health and activities. In my experience, it was up to me as PTL to reach out to the TC and inform them of the documentation team's situation, like many have done before me. By informing them of the team's current situation, I was able to engage in a consistent dialog with the TC to get proper help to ensure the project did not die in a tire fire (like it very nearly did - thank you everyone!). > D) Do you think the community goals should be converted to a "backlog"of time constrained OpenStack "projects", instead of being constrained per cycle? (with the ability to align some goals with releasing when necessary) I'll be honest - no. I think this links back up to your point earlier; backlogs can lead to technical debt. Placing time restrictions seem just that, restrictive, but it helps encourage teams to ensure they have met minimum OpenStack-wide requirements per-cycle. Perhaps these time restrictions are increased, but I do not believe they should be erased and placed into a backlog-style bucket of "To do's". However, it has been evident in the past (and my experience) that project teams cannot complete all the community goals per-cycle because of resource constraints, or an unexplained bug that takes up the whole cycle. Perhaps this means that evaluation of resources per team, per the amount of work a community goal would take for a larger/smaller team should occur each cycle. > E) Do you think we should abandon projects' ML tags/IRC channels, to replace them by focus areas? For example, having [storage] to group people from [cinder] or [manila]. Do you think that would help new contributors, or communication in the community? Again, I'll be honest: No. Do I think that potentially generalising our tags could be more helpful to new comers? Absolutely. But I see nothing wrong with adding [storage] to our preexisting tags rather than replacing. We have generic channels such as #openstack-dev and #openstack-doc for newbies to get involved. 
And we now have the First Contact SIG which I have noticed do an amazing job at picking up new comers out of the lists and point them in the right direction. > G) What do you think of the elections process for the TC? Do you think it is good enough to gather a team to work on hard problems? Or do you think electing person per person have an opposite effect, highlighting individuals versus a common program/shared objectives? Corollary: Do you think we should now elect TC members by groups (of 2 or 3 persons for example), so that we would highlight their program vs highlight individual ideas/qualities? I was thinking about this the other day. It's an election, like all elections, and I think if we're all a bit honest with ourselves the results have more to do with the respect garnered in the community (maybe by something you implemented) and how well you're known than it is about your technical prowess. I don't believe changing the election or group format will change the outcome. I am standing for election because I believe there need to be new voices in the room that stand outside the "development" mindset and provide a different perspective to OpenStack-wide issues. Not every challenge that OpenStack faces is about RabbitMQ and how much we like it, there are a lot of social, cultural, and communication issues that impact on technical decisions that need to be dealt with and I hope to be that person bringing that voice. > > Thanks for your patience, and thanks for your application! > > Regards, > Jean-Philippe Evrard (evrardjp) Happy Monday! From a.settle at outlook.com Mon Feb 25 13:54:26 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 25 Feb 2019 13:54:26 +0000 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On 25/02/2019 11:24, Chris Dent wrote: > On Thu, 21 Feb 2019, Alexandra Settle wrote: > >> Well hello again! > > Hello! Hello! I'm going for hello-inception. Unsure if it will catch on or not. > >> While you address this question to developers and recognise that >> there are many different types of contributors, I think >> documentation sits in a weird loop hole here. We are often >> considered developers because we follow developmental workflows, >> and integrate with the projects directly. Some of us are more >> technical than others and contribute to both the code base and to >> the physical documentation. Risking a straw man here: How would >> you define the technical writers that work for OpenStack? We too >> are often considered "OpenStack" writers and experts, yet as I >> say, we are not experts on every project. > > I'd hesitate to define anyone. Technical writers, developers, users, > deployers and all the other terms we can come up with for people who > are involved in the OpenStack community are all individuals and do > things that overlap in many roles. +1 > > I was reluctant to use the term developer in my original question > because it's not a term I like because it is so frequently used to > designate a priesthood which has special powers (and rewards and > obligations) different from a (lesser) laity. Which is crap. Not as > crap as "software engineer" but still crap. I actually laughed at this, you're quite right. Developers are often still seen as magical beings with strange powers that make the fun screen go bright. There's an air of *mystery* surrounding developers. 
> But I used it to try to forestall any "who do you mean" and "who does > the TC represent" questions, which, upon reflection, might have > been good questions to debate. > Technical writers, and developers, and everyone else who is involved > in the OpenStack community are co-authors of this thing which we > call OpenStack. From my standpoint the thing we are authoring, and > hope to keep alive, is the community and the style of collaboration > we use in it. The thing that people run clouds with and companies > sell is sort of secondary, but is the source of value that will keep > people wanting the community to exist. Agreed and thank you. Without sounding like the forgotten bird who sang all summer, it's nice to see it recognised that OpenStack is not just built on code, but it's supporting foundations such as documentation. But most importantly, that what we're trying to keep alive is actually the community and style of collaboration. Being a boomerang Stacker, I can't even tell you how much I appreciate coming back to a community who is so welcoming, and interested in working on cool new projects. > > The thing people who are active in the community and want be > "leaders" should be doing is focusing on ensuring that we create and > maintain the systems that allow people to contribute in a way that > sustains the style of collaboration, respects their persons and > their labor, and (critically, an area where I think we are doing far > too little) makes sure that the people who profit off that labor > attend to their responsibilities. I don't think I can add anything more to this than a +2. I can't agree enough and I hope to continue to foster and encourage this behaviour on or off the TC. > > We have to, however, make sure that the source of value is good. > Different people are interested in or have aptitudes for different > things (e.g., writing code or writing about what code does); > enabling those people to contribute to the best of their abilities > and in an equitable fashion makes the community and the product > better. > Thanks for responding Chris, I appreciate you taking the time to break this down. From doug at doughellmann.com Mon Feb 25 14:09:59 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Feb 2019 09:09:59 -0500 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: Doug Hellmann writes: > One of the key responsibilities of the Technical Committee is still > evaluating projects and teams that want to become official OpenStack > projects. The Foundation Open Infrastructure Project approval process > has recently produced a different set of criteria for the Board to use > for approving projects [1] than the TC uses for approving teams [2]. > > What parts, if any, of the OIP approval criteria do you think should > apply to OpenStack teams? > > What other changes, if any, would you propose to the official team > approval process or criteria? Are we asking the right questions and > setting the minimum requirements high enough? Are there any criteria > that are too hard to meet? > > How would you apply those rule changes to existing teams? > > [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html > [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html > -- > Doug One of the criteria that caught my eye as especially interesting was that a project must complete at least one release before being accepted. 
We've debated that rule in the past, and always come down on the side encouraging new projects by accepting them early. I wonder if it's time to reconsider that, and perhaps to start thinking hard about projects that don't release after they are approved. Thoughts? -- Doug From rocky700 at protonmail.com Sat Feb 23 09:49:24 2019 From: rocky700 at protonmail.com (rocky700) Date: Sat, 23 Feb 2019 02:49:24 -0700 (MST) Subject: [release] Release countdown for week R-15, May 14-18 In-Reply-To: <20180510154257.GA31753@sm-xps> References: <20180510154257.GA31753@sm-xps> Message-ID: <1550915364526-0.post@n7.nabble.com> The details which you have given will be very useful for the stack developers as they can know the basic process of it and developing a full stack-based programme will be easy for the developers. ----- apple ipad support -- Sent from: http://openstack.10931.n7.nabble.com/Developer-f2.html From m2elsakha at gmail.com Sun Feb 24 12:11:32 2019 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Sun, 24 Feb 2019 07:11:32 -0500 Subject: UC Feb 2019 Election results Message-ID: Good Afternoon all On behalf of the User Committee Elections officers, I am pleased to announce the results of the UC elections for Feb 2019. Please join me in congratulating the winners of the 3 seats : - Amy Marrich - Belmiro Moreira - John Studarus Thank you to all of the candidates and all of you who voted * https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 -------------- next part -------------- An HTML attachment was scrubbed... URL: From foundjem at ieee.org Sun Feb 24 15:00:11 2019 From: foundjem at ieee.org (Armstrong) Date: Sun, 24 Feb 2019 10:00:11 -0500 Subject: [User-committee] UC Feb 2019 Election results In-Reply-To: References: Message-ID: <39E08158-D3F9-41E1-9C93-1CF696737412@ieee.org> A big congrats to Amy Marrich et al. Regards, Armstrong > On Feb 24, 2019, at 07:11, Mohamed Elsakhawy wrote: > > Good Afternoon all > > On behalf of the User Committee Elections officers, I am pleased to announce the results of the UC elections for Feb 2019. Please join me in congratulating the winners of the 3 seats : > > - Amy Marrich > > - Belmiro Moreira > > - John Studarus > > Thank you to all of the candidates and all of you who voted > > * https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmohankumar1011 at gmail.com Mon Feb 25 13:57:09 2019 From: nmohankumar1011 at gmail.com (Mohan Kumar) Date: Mon, 25 Feb 2019 19:27:09 +0530 Subject: =?UTF-8?Q?=5Bopenstack=2Ddev=5D_=5BMonasca=5D_How_to_get_=E2=80=9Caggregated_v?= =?UTF-8?Q?alue_of_one_metric_statistics=E2=80=9D_=3F?= In-Reply-To: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> References: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> Message-ID: Hi Team, How to get “aggregated value of one metric statistics” from starting of month to till now . If I try to group metrics using * --period* based on timestamp it including data from previous month metrics as well [1] In below example , trying to get last ~24.5 days of metrics from particular tenant , But I can see 2019-01-26 data . 
With UTC_START_TIME “2019-02-01T00:00:00Z” [2] does *--merge_metrics * not Merge multiple metrics into a single result ? Please suggest how to customise my API call to get “AVG (aggregated) value of one metric statistics” from starting of month to till now . *Regards.,* Mohankumar N "*Confidentiality Warning*: This message and any attachments are intended only for the use of the intended recipient(s), are confidential and may be privileged. If you are not the intended recipient, you are hereby notified that any review, re-transmission, conversion to hard copy, copying, circulation or other use of this message and any attachments is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return email and delete this message and any attachments from your system. *Virus Warning:* Although the company has taken reasonable precautions to ensure no viruses are present in this email. The company cannot accept responsibility for any loss or damage arising from the use of this email or attachment." -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 60195 bytes Desc: not available URL: From nmohankumar1011 at gmail.com Mon Feb 25 14:11:04 2019 From: nmohankumar1011 at gmail.com (Mohan Kumar) Date: Mon, 25 Feb 2019 19:41:04 +0530 Subject: =?UTF-8?Q?=5Bopenstack=2Ddev=5D_=5BMonasca=5D_How_to_get_=E2=80=9Caggregated_v?= =?UTF-8?Q?alue_of_one_metric_statistics=E2=80=9D_=3F?= In-Reply-To: References: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> Message-ID: Hi Team, How to get “aggregated value of one metric statistics” from starting of month to till now . If I try to group metrics using * --period* based on timestamp it including data from previous month metrics as well [1] In below example , trying to get last ~24.5 days of metrics from particular tenant , But I can see 2019-01-26 data . With UTC_START_TIME “2019-02-01T00:00:00Z” [2] does *--merge_metrics * not Merge multiple metrics into a single result ? Please suggest how to customise my API call to get “AVG (aggregated) value of one metric statistics” from starting of month to till now . *Regards.,* Mohankumar N "*Confidentiality Warning*: This message and any attachments are intended only for the use of the intended recipient(s), are confidential and may be privileged. If you are not the intended recipient, you are hereby notified that any review, re-transmission, conversion to hard copy, copying, circulation or other use of this message and any attachments is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return email and delete this message and any attachments from your system. *Virus Warning:* Although the company has taken reasonable precautions to ensure no viruses are present in this email. The company cannot accept responsibility for any loss or damage arising from the use of this email or attachment." -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 60195 bytes Desc: not available URL: From sbauza at redhat.com Mon Feb 25 14:19:55 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 Feb 2019 15:19:55 +0100 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: On Mon, Feb 25, 2019 at 3:13 PM Doug Hellmann wrote: > Doug Hellmann writes: > > > One of the key responsibilities of the Technical Committee is still > > evaluating projects and teams that want to become official OpenStack > > projects. The Foundation Open Infrastructure Project approval process > > has recently produced a different set of criteria for the Board to use > > for approving projects [1] than the TC uses for approving teams [2]. > > > > What parts, if any, of the OIP approval criteria do you think should > > apply to OpenStack teams? > > > > What other changes, if any, would you propose to the official team > > approval process or criteria? Are we asking the right questions and > > setting the minimum requirements high enough? Are there any criteria > > that are too hard to meet? > > > > How would you apply those rule changes to existing teams? > > > > [1] > http://lists.openstack.org/pipermail/foundation/2019-February/002708.html > > [2] > https://governance.openstack.org/tc/reference/new-projects-requirements.html > > -- > > Doug > > One of the criteria that caught my eye as especially interesting was > that a project must complete at least one release before being > accepted. We've debated that rule in the past, and always come down on > the side encouraging new projects by accepting them early. I wonder if > it's time to reconsider that, and perhaps to start thinking hard about > projects that don't release after they are approved. > > Thoughts? > > My personal opinion on that is that releasing a 1.0 version is just procedural. Or, said differently, political. It's just a signal saying "we think we are ready for production". The problem with that is that it's subjective. No real key metrics to attribute the meaning of "production-ready" (I'm even not talking of production-grade), just a feeling that the contributors team ideally, or just the "PTL" (given the project isn't official yet), considers it ready. As a consequence, there can be a big gap in between the contributors's expectations and the reality. That's what I called it "the reality wall". When you hit it, it's a pain. I'm more in favor of objective metrics to define the health of a project in order to be production ready : does this project allow upgrades ? is this project distributed enough to grow atomically ? can we easily provide bugfixes thanks to a stable policy ? Hope that clarifies your concern, -Sylvain -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Feb 25 14:34:07 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 Feb 2019 15:34:07 +0100 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: On Mon, Feb 25, 2019 at 3:19 PM Sylvain Bauza wrote: > > > On Mon, Feb 25, 2019 at 3:13 PM Doug Hellmann > wrote: > >> Doug Hellmann writes: >> >> > One of the key responsibilities of the Technical Committee is still >> > evaluating projects and teams that want to become official OpenStack >> > projects. 
The Foundation Open Infrastructure Project approval process >> > has recently produced a different set of criteria for the Board to use >> > for approving projects [1] than the TC uses for approving teams [2]. >> > >> > What parts, if any, of the OIP approval criteria do you think should >> > apply to OpenStack teams? >> > >> > What other changes, if any, would you propose to the official team >> > approval process or criteria? Are we asking the right questions and >> > setting the minimum requirements high enough? Are there any criteria >> > that are too hard to meet? >> > >> > How would you apply those rule changes to existing teams? >> > >> > [1] >> http://lists.openstack.org/pipermail/foundation/2019-February/002708.html >> > [2] >> https://governance.openstack.org/tc/reference/new-projects-requirements.html >> > -- >> > Doug >> >> One of the criteria that caught my eye as especially interesting was >> that a project must complete at least one release before being >> accepted. We've debated that rule in the past, and always come down on >> the side encouraging new projects by accepting them early. I wonder if >> it's time to reconsider that, and perhaps to start thinking hard about >> projects that don't release after they are approved. >> >> Thoughts? >> >> > My personal opinion on that is that releasing a 1.0 version is just > procedural. Or, said differently, political. It's just a signal saying "we > think we are ready for production". > The problem with that is that it's subjective. No real key metrics to > attribute the meaning of "production-ready" (I'm even not talking of > production-grade), just a feeling that the contributors team ideally, or > just the "PTL" (given the project isn't official yet), considers it ready. > As a consequence, there can be a big gap in between the contributors's > expectations and the reality. That's what I called it "the reality wall". > When you hit it, it's a pain. > > I'm more in favor of objective metrics to define the health of a project > in order to be production ready : does this project allow upgrades ? is > this project distributed enough to grow atomically ? can we easily provide > bugfixes thanks to a stable policy ? > > Hope that clarifies your concern, > -Sylvain > > Looking again at your question, I think I haven't answered the main question. Should we accept a project even if it's not yet ready (and doesn't match the points I said above) ? Well, I surely understand the big interest in getting approved as an official project. Keep in mind I was a Climate/Blazar contributor in 2013 ;-) By that time, the project wasn't official (it was pre-Tent so the approval process was a bit different) so we faced the same visibility issue than most of the small projects face I guess. That said, even knowing how much is a pain to attract new contributors, getting approved doesn't get you those resources magically. In order to get some contributors, you first need to find some companies that are willing to dedicare some of their own resources into your project. That's not a matter of being ready or not, it's more a matter of seeing a business case behind the project. For that precise reason, I don't really think it helps our projects to be approved very early as official projects. 
We should rather promote some incubation approach (and we did that with Stackforge and it was great for a small project like we were in 2013) and maybe have the TC members to shepherd those candidates in order to give them some light and visibility so that corporate sponsors could know of their existence, but the main step would still remain. HTH, -Sylvain -- >> Doug >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfournie at redhat.com Mon Feb 25 14:45:12 2019 From: bfournie at redhat.com (Bob Fournier) Date: Mon, 25 Feb 2019 09:45:12 -0500 Subject: =?UTF-8?Q?Re=3A_=5Btripleo=5D_nominating_Harald_Jens=C3=A5s_as_a_core_re?= =?UTF-8?Q?viewer?= In-Reply-To: References: Message-ID: On Fri, Feb 22, 2019 at 12:58 PM James Slagle wrote: > On Thu, Feb 21, 2019 at 10:05 AM Juan Antonio Osorio Robles > wrote: > > > > Hey folks! > > > > > > I would like to nominate Harald as a general TripleO core reviewer. > > > > He has consistently done quality reviews throughout our code base, > > helping us with great feedback and technical insight. > > > > While he has done a lot of work on the networking and baremetal sides of > > the deployment, he's also helped out on security, CI, and even on the > > tripleoclient side. > > > > Overall, I think he would be a great addition to the core team, and I > > trust his judgment on reviews. > > > > > > What do you think? > > +1 > > > -- > -- James Slagle > -- > > +1!! Harald's breadth and depth of knowledge is tremendous, he's made many important contributions to TripleO. -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Mon Feb 25 14:47:35 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Mon, 25 Feb 2019 14:47:35 +0000 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: On 25/02/2019 14:09, Doug Hellmann wrote: > Doug Hellmann writes: > >> One of the key responsibilities of the Technical Committee is still >> evaluating projects and teams that want to become official OpenStack >> projects. The Foundation Open Infrastructure Project approval process >> has recently produced a different set of criteria for the Board to use >> for approving projects [1] than the TC uses for approving teams [2]. >> >> What parts, if any, of the OIP approval criteria do you think should >> apply to OpenStack teams? >> >> What other changes, if any, would you propose to the official team >> approval process or criteria? Are we asking the right questions and >> setting the minimum requirements high enough? Are there any criteria >> that are too hard to meet? >> >> How would you apply those rule changes to existing teams? >> >> [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html >> [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html >> -- >> Doug > One of the criteria that caught my eye as especially interesting was > that a project must complete at least one release before being > accepted. We've debated that rule in the past, and always come down on > the side encouraging new projects by accepting them early. I wonder if > it's time to reconsider that, and perhaps to start thinking hard about > projects that don't release after they are approved. > > Thoughts? My response to your initial email was that we should focus more on going forward. I stand by that. 
This is an interesting addition to the OIP acceptance criteria, especially considering that we also considered it originally and reconsidered it for multiple reasons. I don't think it would be a bad idea to implement this, maybe it would genuinely be a good idea - it should at least be discussed. It would reflect the maturation of the preexisting projects and their integration with each other. From gmann at ghanshyammann.com Mon Feb 25 15:05:47 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Feb 2019 00:05:47 +0900 Subject: [dev] [all] [ptl] Migrating legacy jobs to Bionic (Ubuntu LTS 18.04) Message-ID: <1692530c549.db9aeb4717414.7978261848712100858@ghanshyammann.com> Hi Everyone, During Dec/Jan, we migrated the devstack jobs (zuulv3 native jobs) from Xenial to Bionic [1]. But that did not move all gate jobs to Bionic, as a large number of jobs are still legacy jobs. All the legacy jobs still use Xenial as their nodeset. As per the decided runtime for Stein, we need to test everything on OpenStack CI/CD on Bionic - https://governance.openstack.org/tc/reference/runtimes/stein.html This is something we discussed in the TC meeting as one of the Stein items to finish in the OpenStack community [2]. I am starting this ML thread to coordinate the work to move all the legacy jobs to Bionic. I have created an etherpad with more details and a status check for each project gate - https://etherpad.openstack.org/p/legacy-job-bionic The approach is the same as we used for the previous devstack jobs migration. 1. Push the patch which migrates the legacy base jobs to Bionic - https://review.openstack.org/#/c/639096/ 2. Each project team adds a testing patch with Depends-On on the base patch (example: 639096) and confirms the status of their gate on the etherpad - Example: https://review.openstack.org/#/c/639017 3. Project jobs not using the base job as a parent need to migrate their legacy jobs to Bionic on their own. Please add yourself as your project's volunteer in the etherpad and update the status accordingly. I am tagging [ptl] in this thread so that they can assign someone from their team if there is no volunteer. Deadline: 1st April to merge the base legacy job to Bionic. That gives around 1 month to test the jobs, which I feel is enough for each project. Let me know if more time is needed and we can adjust. The goal is to finish this activity before the Stein release. [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000837.html https://etherpad.openstack.org/p/devstack-bionic [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html -gmann From zhang.lei.fly+os-discuss at gmail.com Mon Feb 25 15:12:27 2019 From: zhang.lei.fly+os-discuss at gmail.com (Jeffrey Zhang) Date: Mon, 25 Feb 2019 23:12:27 +0800 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: +1 On Mon, Feb 25, 2019 at 8:30 PM Martin André wrote: > On Fri, Feb 15, 2019 at 11:21 AM Eduardo Gonzalez > wrote: > > > > Hi, it is my pleasure to propose Michal Nasiadka for the core team in > kolla-ansible. > > +1 > I'd also be happy to welcome Michal to the kolla-core group (not just > kolla-ansible) as he's done a great job reviewing the kolla patches > too. > > Martin > > > Michal has been an active reviewer in the last releases ( > https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), has > been keeping an eye on the bugs and has been an active help on IRC.
> > He has also made efforts in community interactions in Rocky and Stein > releases, including PTG attendance. > > > > His main interest is NFV and Edge clouds and brings valuable couple of > years experience as OpenStack/Kolla operator with good knowledge of Kolla > code base. > > > > Planning to work on extending Kolla CI scenarios, Edge use cases and > improving NFV-related functions ease of deployment. > > > > Consider this email as my +1 vote. Vote ends in 7 days (22 feb 2019) > > > > Regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Feb 25 15:25:19 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 25 Feb 2019 09:25:19 -0600 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> Message-ID: <953ec1f3-b110-bd01-cc42-a08a4f1f1e0f@gmail.com> On 2/25/2019 5:53 AM, Chris Dent wrote: > So, to take it back to Matt's question, my opinion is: Let's let > existing stuff run its course to the end of the cycle, decide > together how and when we want to implement storyboard. FWIW I agree with this. I'm fine with no separate specs repo or core team. I do, however, think that having some tracking tool is important for project management to get an idea of what is approved for a release and what is left (and what was completed). Etherpads are not kanban boards nor are they indexed by search engines so they are fine for notes and such but not great for long-term documentation or tracking. -- Thanks, Matt From doug at doughellmann.com Mon Feb 25 15:27:42 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Feb 2019 10:27:42 -0500 Subject: [dev] [all] [ptl] Migrating legacy jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <1692530c549.db9aeb4717414.7978261848712100858@ghanshyammann.com> References: <1692530c549.db9aeb4717414.7978261848712100858@ghanshyammann.com> Message-ID: Ghanshyam Mann writes: > Hi Everyone, > > During Dec/Jan month, we have migrated the devstack jobs (zuulv3 native jobs) from Xenial to Bionic. [1]. > > But that did not move all gate job to Bionic as there are a large number of jobs are still the legacy job. All the legacy job still use Xenial as nodeset. > As per the decided runtime for Stein, we need to test everything on OpenStack CI/CD on Bionic - https://governance.openstack.org/tc/reference/runtimes/stein.html > This is something we discussed TC meeting as one of the Stein items to finish in openstack community[2]. > > I am starting this ML thread to coordinate the work to move all the legacy jobs to Bionic. I have created the > Etherpad to include more details and status check for each project gate- https://etherpad.openstack.org/p/legacy-job-bionic > > The approach is the same as we did for previous migration for devstack jobs. > 1. Push patch which migrates the legacy base jobs to bionic - 1. https://review.openstack.org/#/c/639096/ > 2. Each Project team, add the testing patch with Depends-on on base patch (example 639096) and confirm the status > of their gate on etherpad - Example: https://review.openstack.org/#/c/639017 > 3. Project jobs not using base job as parent, need to migrate their legacy job to bionic by own. > > Please add yourself as your project volunteer in etherpad and update the status accordingly. 
I am tagging [ptl] in this thread so that they can assign > someone from their team if no volunteer. > > Deadline: 1st April to merge the base legacy job to bionic. That gives around 1 month to test the jobs which I feel enough for each project. > let me know if more time is needed, we can adjust the same. The goal is to finish this activity before Stein release. > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000837.html > https://etherpad.openstack.org/p/devstack-bionic > > [2] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html > > > -gmann Thanks for driving this, Ghanshyam, and helping to ensure we're all testing on a consistent and current platform. -- Doug From jaypipes at gmail.com Mon Feb 25 15:41:27 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 25 Feb 2019 10:41:27 -0500 Subject: [api][neutron] tagging support when creating a resource In-Reply-To: References: Message-ID: <9b60de67-7cde-1ffa-b004-01f82042da17@gmail.com> On 02/25/2019 08:36 AM, Akihiro Motoki wrote: > Hi API-SIG, neutron and tagging-related folks, > > This email ask opinions on tagging when creating a resource. > > We received a feature request to support tagging when creating a > resource [1]. > Neutron supports bulk creation of resources but the current tagging API > recommended > by the API-SIG does not define tagging when creating a resource. > As a result, if we want to create 100 ports with specific tags, > we need 1 (bulk creation) +100 (taggins per resource) API calls. > > It sounds nice to me to support tagging when creating a resource to > address the above problem. > What in my mind is to specify 'tags' attribute in a body of POST request > like: > >  {'port': >     {'name': 'foo', >      'network_id': , >      'tags': ['red', 'blue'] >      } >   } > > I don't know the reason why the current API-SIG recommended tagging API does > not support this model. The current tagging API defines "tags" as a > subresource. I think you're asking why the API working group's guidelines on tags don't have an example of setting one or more tags for some resource at the same time as the resource's creation? Probably just a simple oversight, frankly. The Compute API @ version 2.52 allows setting tags on a server instance (the main HTTP resource) on creation using exactly the same format that you describe above: https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server Best, -jay From smooney at redhat.com Mon Feb 25 15:55:41 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 Feb 2019 15:55:41 +0000 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: <953ec1f3-b110-bd01-cc42-a08a4f1f1e0f@gmail.com> References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> <953ec1f3-b110-bd01-cc42-a08a4f1f1e0f@gmail.com> Message-ID: <93732a4a8fd34c754cc253895dac2a9de70e6783.camel@redhat.com> On Mon, 2019-02-25 at 09:25 -0600, Matt Riedemann wrote: > On 2/25/2019 5:53 AM, Chris Dent wrote: > > So, to take it back to Matt's question, my opinion is: Let's let > > existing stuff run its course to the end of the cycle, decide > > together how and when we want to implement storyboard. > > FWIW I agree with this. I'm fine with no separate specs repo or core > team. 
I do, however, think that having some tracking tool is important > for project management to get an idea of what is approved for a release > and what is left (and what was completed). Etherpads are not kanban > boards nor are they indexed by search engines so they are fine for notes > and such but not great for long-term documentation or tracking. For what it's worth, I would probably just create a specs folder in the placement repo and then choose to use it as much or as little as required. kolla-ansible, for example, has an in-repo specs folder but only uses it very sparingly, so the presence of a specs folder does not mean you have to use it for every feature. They use RFE bugs/blueprints to track most features, and you could make the decision on that later depending on what tool suits placement best, be that Launchpad, StoryBoard or just etherpad. Etherpad, I think, makes more sense for tracking review priorities rather than completion. For what it's worth, os-vif does have its own Launchpad and we just use RFE bugs for any non-cross-project features. For cross-project stuff, e.g. nova use cases, we use a nova bug/blueprint/spec. The downside of this is that ironic approved a spec after m2 for a feature this release that required an os-vif change and we didn't know about it, so we had to review it quickly not to block them. In that case I would have preferred at least an RFE bug against os-vif to track the change. Anyway, that is just my 2 cents: out-of-tree specs repos with a separate specs core team have their uses, but I think they are overkill in most cases, especially for small repos like placement or os-vif, though having a placement specs dir might be helpful in some cases. > From amy at demarco.com Mon Feb 25 16:05:36 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 25 Feb 2019 10:05:36 -0600 Subject: [User-committee] UC Feb 2019 Election results In-Reply-To: <39E08158-D3F9-41E1-9C93-1CF696737412@ieee.org> References: <39E08158-D3F9-41E1-9C93-1CF696737412@ieee.org> Message-ID: Thanks everyone who voted and I look forward to serving another term on the UC. I also wanted to say thank you to Leong for all the hard work he’s done on the UC in the past. Thanks, Amy > On Feb 24, 2019, at 9:00 AM, Armstrong wrote: > > A big congrats to Amy Marrich et al. > > Regards, > Armstrong > >> On Feb 24, 2019, at 07:11, Mohamed Elsakhawy wrote: >> >> Good Afternoon all >> >> On behalf of the User Committee Elections officers, I am pleased to announce the results of the UC elections for Feb 2019. Please join me in congratulating the winners of the 3 seats : >> >> - Amy Marrich >> >> - Belmiro Moreira >> >> - John Studarus >> >> Thank you to all of the candidates and all of you who voted >> >> * https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 >> >> _______________________________________________ >> User-committee mailing list >> User-committee at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jimmy at openstack.org Mon Feb 25 16:19:00 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 25 Feb 2019 10:19:00 -0600 Subject: [User-committee] UC Feb 2019 Election results In-Reply-To: References: <39E08158-D3F9-41E1-9C93-1CF696737412@ieee.org> Message-ID: <5C741574.4080706@openstack.org> Congratulations to all of our candidates! And yes, thank you Leong for doing such great work with the Financial WG, among many other things. Cheers, Jimmy > Amy Marrich > February 25, 2019 at 10:05 AM > Thanks everyone who voted and I look forward to serving another term > on the UC. > > I also wanted to say thank you to Leong for all the hard work he’s > done on the UC in the past. > > Thanks, > > Amy > > > On Feb 24, 2019, at 9:00 AM, Armstrong > wrote: > >> A big congrats to Amy Marrich et al. >> >> Regards, >> Armstrong >> >> On Feb 24, 2019, at 07:11, Mohamed Elsakhawy > > wrote: >> >>> Good Afternoon all >>> >>> >>> On behalf of the User Committee Elections officers, I am pleased to >>> announce the results of the UC elections for Feb 2019. Please join >>> me in congratulating the winners of the 3 seats : >>> >>> - Amy Marrich >>> >>> - Belmiro Moreira >>> >>> - John Studarus >>> >>> Thank you to all of the candidates and all of you who voted >>> >>> * >>> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 >>> >>> >>> _______________________________________________ >>> User-committee mailing list >>> User-committee at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee >> _______________________________________________ >> User-committee mailing list >> User-committee at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > Mohamed Elsakhawy > February 24, 2019 at 6:11 AM > Good Afternoon all > On behalf of the User Committee Elections officers, I am pleased to > announce the results of the UC elections for Feb 2019. Please join me > in congratulating the winners of the 3 seats : > > - Amy Marrich > > - Belmiro Moreira > > - John Studarus > > Thank you to all of the candidates and all of you who voted > > * > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Mon Feb 25 16:23:37 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 25 Feb 2019 16:23:37 +0000 Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> Message-ID: <7394bdba30860832e8dc86539ee5f9ee7597a940.camel@redhat.com> On Mon, 2019-02-25 at 11:53 +0000, Chris Dent wrote: > On Sun, 24 Feb 2019, Jay Pipes wrote: > > > There was some light discussion about it on IRC a couple weeks ago. I > > mentioned my preference was to not have a separate specs repo nor use the > > Launchpad blueprints feature. 
I'd rather have a "ideas" folder or similar > > inside the placement repo itself that tracks longer-form proposals in > > Markdown documents. > > I like no-separate-specs-repo ideas as well and got the sense that > most people felt the same way. > > I think still having some kinds of specs for at least some changes > is a good idea, as in the past it has helped us talk through an API > to get it closer to right before writing it. We don't need > spec-cores and I reckon the rules for what needs a spec can be > loosened, depending on how people feel. > > (Markdown! I'd love to do markdown but using existing tooling, and > including all these artifacts in the existing docs (even if they are > just "ideas") seems a good thing.) This is the only bit of this I actually have a strong opinion on :) Please, please stick to rST+Sphinx. It's certainly flawed but there is value in going to any OpenStack project and knowing how the documentation will work. /me goes back to lurking. Stephen From openstack at nemebean.com Mon Feb 25 17:03:02 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 25 Feb 2019 11:03:02 -0600 Subject: [oslo] Feature freeze this week In-Reply-To: <80ee1143-fd93-b0b1-1e15-dcce10661e01@nemebean.com> References: <80ee1143-fd93-b0b1-1e15-dcce10661e01@nemebean.com> Message-ID: And feature freeze is on. I just proposed all of the changes that merged last week for release. Hopefully those will be the last feature releases of the cycle, although I'm aware of at least one bug that may require a feature release due to requirements changes. TBD on that though. Oslo cores, as discussed in the meeting this week, please don't merge anything that would require a feature release without discussing it with me first. Thanks. -Ben On 2/18/19 10:07 AM, Ben Nemec wrote: > Just a reminder that Oslo feature freeze happens this week. Yes, it's > earlier than everyone else, and if you're wondering why, we have a > policy[1] that discusses it. > > The main thing is that if you have features you want to get into Oslo > libraries this cycle, please make sure they merge by Friday. After that > we'll need to go through the FFE process and there's no guarantee we can > land them. Feel free to ping us on IRC if you need reviews. > > Thanks. > > -Ben > > 1: > http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html > From vedarthambharath at gmail.com Mon Feb 25 17:17:49 2019 From: vedarthambharath at gmail.com (Vedartham Bharath) Date: Mon, 25 Feb 2019 22:47:49 +0530 Subject: [dev][swift] Regarding fstab entries in a Swift controller Message-ID: Hi all, This is with regards to an issue in the documentation of Openstack Swift and mostly an issue to system administrators operating Swift.(Sorry if the subject tags are wrong!!) In the Swift Multiserver docs, When setting up an object storage server, the docs tell to use the disk labels in the disk's /etc/fstab entries. eg: /dev/sda /srv/node/sda xfs noatime............. I feel that we should encourage people to use UUIDs rather than disk labels. I have had a lot of issues with my Swift storage servers crashing whenever I reboot them. I have found out that the issue is with the /etc/fstab as the disk labels change whenever a disk is removed or added(depends on the OS's "mood" i.e boot order). I don't want to change my ring configuration every time I reboot my storage servers. 
Thank you Bharath From ed at leafe.com Mon Feb 25 17:25:14 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 25 Feb 2019 11:25:14 -0600 Subject: [api][neutron] tagging support when creating a resource In-Reply-To: References: Message-ID: <0EB60B0D-8081-4012-B847-3319C3FD19F5@leafe.com> On Feb 25, 2019, at 7:36 AM, Akihiro Motoki wrote: > > It sounds nice to me to support tagging when creating a resource to address the above problem. > What in my mind is to specify 'tags' attribute in a body of POST request like: > > {'port': > {'name': 'foo', > 'network_id': , > 'tags': ['red', 'blue'] > } > } > > I don't know the reason why the current API-SIG recommended tagging API does > not support this model. The current tagging API defines "tags" as a subresource. > Is there any reason this model was adopted? Tags are strings attached to resources, so there really should be no difference when creating a resource with tags than when creating a resource with a name. The part of the guideline for modifying tags assumes that the resource already exists. We could certainly add additional wording that clarifies that resources can be created with tags; there certainly isn’t anything in the current guideline that says they can’t. -- Ed Leafe From mihalis68 at gmail.com Mon Feb 25 17:42:08 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Mon, 25 Feb 2019 12:42:08 -0500 Subject: [all][ops] Ops Meetup Agenda Planning - Berlin Edition In-Reply-To: References: Message-ID: I'd like to echo Erik's plea, we have a rather short list of topics so far and an even shorter list of volunteer moderators. We'll try to put together an agenda this week but so far it looks like Erik and I will be moderating a lot of sessions. That can work but I'd rather see more diversity. Moderating a session is not scary, you do not need to be a domain expert on the topic, just alert to keeping the discussion going and letting everyone be heard etc etc. Chris On Sun, Feb 24, 2019 at 11:31 AM Erik McCormick wrote: > Hello all, > > This is a friendly reminder to get your session ideas in for the Berlin > Ops Meetup. Time grows short and the pickings are pretty slim so far. See > below for further details. > > -Erik > > On Fri, Feb 15, 2019, 11:05 AM Erik McCormick > wrote: > >> Hello All, >> >> The time is rapidly approaching for the Ops Meetup in Berlin. In >> preparation, we need your help developing the agenda. i put an [all] >> tag on this because I'm hoping that anyone, not just ops, looking for >> discussion and feedback on particular items might join in and suggest >> sessions. >> >> It is not required that you attend the meetup to post session ideas. >> If there is sufficient interest, we will hold the session and provide >> feedback and etherpad links following the meetup. >> >> Please insert your session ideas into this etherpad, add subtopics to >> already proposed sessions, and +1 those that you are interested in. >> Also please put your name, and maybe some contact info, at the bottom. >> If you'd be willing to moderate a session, please add yourself to the >> moderators list. >> >> https://etherpad.openstack.org/p/BER-ops-meetup >> >> I'd like to give a big shout out to Deutsche Telekom for hosting us >> and providing the catering. I look forward to seeing many of you in >> Berlin! >> >> Cheers, >> Erik >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdent+os at anticdent.org Mon Feb 25 17:47:35 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 25 Feb 2019 17:47:35 +0000 (GMT) Subject: [placement] zuul job dependencies for greater good? Message-ID: Zuul has a feature that makes it possible to only run some jobs after others have passed: https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies Except for tempest and grenade (which take about an hour to 1.5 hours to run, sometimes a lot more) the usual time for any of the placement tests is less than 6 minutes each, sometimes less than 4. I've been wondering if we might want to consider only running tempest and grenade if the other tests have passed first? So here's this message seeking opinions. On the one hand this ought to be redundant. The expectation is that a submitter has already done at least one python version worth of unit and functional tests. Fast8 too. On one of my machines 'tox -efunctional-py37,py37,pep8' on warmed up virtualenvs is a bit under 53 seconds. So it's not like it's a huge burden or cpu melting. But on the other hand, if someone has failed to do that, and they have failing tests, they shouldn't get the pleasure of wasting a tempest or grenade node. Another argument I've heard for not doing this is if there are failures of different types in different tests, having all that info for the round of fixing that will be required is good. That is, getting a unit failure, fixing that, then subumitting again, only to get an integration failure which then needs another round of fixing (and testing) might be rather annoying. I'd argue that that's important information about unit or functional tests being insufficient. I'm not at all sold on the idea, but thought it worth "socializing" for input. Thanks. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From miguel at mlavalle.com Mon Feb 25 17:52:03 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 25 Feb 2019 11:52:03 -0600 Subject: [api][neutron] tagging support when creating a resource In-Reply-To: <0EB60B0D-8081-4012-B847-3319C3FD19F5@leafe.com> References: <0EB60B0D-8081-4012-B847-3319C3FD19F5@leafe.com> Message-ID: Ed, Jay, Thanks for the responses. We wanted to make sure we weren't breaking any community wide API guidelines Regards Miguel On Mon, Feb 25, 2019 at 11:26 AM Ed Leafe wrote: > On Feb 25, 2019, at 7:36 AM, Akihiro Motoki wrote: > > > > It sounds nice to me to support tagging when creating a resource to > address the above problem. > > What in my mind is to specify 'tags' attribute in a body of POST request > like: > > > > {'port': > > {'name': 'foo', > > 'network_id': , > > 'tags': ['red', 'blue'] > > } > > } > > > > I don't know the reason why the current API-SIG recommended tagging API > does > > not support this model. The current tagging API defines "tags" as a > subresource. > > Is there any reason this model was adopted? > > Tags are strings attached to resources, so there really should be no > difference when creating a resource with tags than when creating a resource > with a name. The part of the guideline for modifying tags assumes that the > resource already exists. > > We could certainly add additional wording that clarifies that resources > can be created with tags; there certainly isn’t anything in the current > guideline that says they can’t. > > > -- Ed Leafe > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duc.openstack at gmail.com Mon Feb 25 18:17:06 2019 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 25 Feb 2019 10:17:06 -0800 Subject: [senlin] Forum session brainstorming Message-ID: I have created an etherpad to capture ideas for Senlin forum sessions [1]. Please add your ideas along with your irc name. We especially would like to hear from Senlin users and operators on what you would like to see discussed. [1] https://etherpad.openstack.org/p/DEN-train-senlin-forum-brainstorming From ssbarnea at redhat.com Mon Feb 25 18:20:13 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Mon, 25 Feb 2019 18:20:13 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: References: Message-ID: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> I asked the save some time ago but we didn't had time to implement it, as is much harder to do this projects where list of jobs does change a lot quickly. Maybe if we would have some placeholder job like phase1/2/3 it would be easier to migrate to such setup. stage1 - cheap jobs like linters, docs,... - <10min stage2 - medium jobs like functional <30min stage3 - fat/expensive jobs like tempest, update/upgrade. >30min The idea to placeholders is to avoid having to refactor lots of dependencies Cheers Sorin > On 25 Feb 2019, at 17:47, Chris Dent wrote: > > > Zuul has a feature that makes it possible to only run some jobs > after others have passed: > > https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies > > Except for tempest and grenade (which take about an hour to 1.5 > hours to run, sometimes a lot more) the usual time for any of the > placement tests is less than 6 minutes each, sometimes less than 4. > > I've been wondering if we might want to consider only running > tempest and grenade if the other tests have passed first? So here's > this message seeking opinions. > > On the one hand this ought to be redundant. The expectation is that > a submitter has already done at least one python version worth of > unit and functional tests. Fast8 too. On one of my machines 'tox > -efunctional-py37,py37,pep8' on warmed up virtualenvs is a bit under > 53 seconds. So it's not like it's a huge burden or cpu melting. > > But on the other hand, if someone has failed to do that, and they > have failing tests, they shouldn't get the pleasure of wasting a > tempest or grenade node. > > Another argument I've heard for not doing this is if there are > failures of different types in different tests, having all that info > for the round of fixing that will be required is good. That is, > getting a unit failure, fixing that, then subumitting again, only to > get an integration failure which then needs another round of fixing > (and testing) might be rather annoying. > > I'd argue that that's important information about unit or functional > tests being insufficient. > > I'm not at all sold on the idea, but thought it worth "socializing" > for input. > > Thanks. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent From smooney at redhat.com Mon Feb 25 18:40:45 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 Feb 2019 18:40:45 +0000 Subject: [placement] zuul job dependencies for greater good? 
In-Reply-To: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> Message-ID: <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> On Mon, 2019-02-25 at 18:20 +0000, Sorin Sbarnea wrote: > I asked the save some time ago but we didn't had time to implement it, as is much harder to do this projects where > list of jobs does change a lot quickly. > > Maybe if we would have some placeholder job like phase1/2/3 it would be easier to migrate to such setup. > stage1 - cheap jobs like linters, docs,... - <10min > stage2 - medium jobs like functional <30min > stage3 - fat/expensive jobs like tempest, update/upgrade. >30min yep i also suggesting somting similar where we woudls run all the non dvsm jobs first then everything else whtere the second set was conditonal or always run was a seperate conversation but i think there is value in reporting the result of the quick jobs first then everything else. i peronally would do just two levels os-vif for exampl complete all jobs except the one temest job in under 6 minutes. grated i run all the non integration job locally for my own patches but it would be nice to get the feedback quicker for other people patches as i ofter find my self checking zuul.openstack.org > > The idea to placeholders is to avoid having to refactor lots of dependencies > > Cheers > Sorin > > On 25 Feb 2019, at 17:47, Chris Dent wrote: > > > > > > Zuul has a feature that makes it possible to only run some jobs > > after others have passed: > > > > https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies > > > > Except for tempest and grenade (which take about an hour to 1.5 > > hours to run, sometimes a lot more) the usual time for any of the > > placement tests is less than 6 minutes each, sometimes less than 4. > > > > I've been wondering if we might want to consider only running > > tempest and grenade if the other tests have passed first? So here's > > this message seeking opinions. > > > > On the one hand this ought to be redundant. The expectation is that > > a submitter has already done at least one python version worth of > > unit and functional tests. Fast8 too. On one of my machines 'tox > > -efunctional-py37,py37,pep8' on warmed up virtualenvs is a bit under > > 53 seconds. So it's not like it's a huge burden or cpu melting. > > > > But on the other hand, if someone has failed to do that, and they > > have failing tests, they shouldn't get the pleasure of wasting a > > tempest or grenade node. > > > > Another argument I've heard for not doing this is if there are > > failures of different types in different tests, having all that info > > for the round of fixing that will be required is good. That is, > > getting a unit failure, fixing that, then subumitting again, only to > > get an integration failure which then needs another round of fixing > > (and testing) might be rather annoying. > > > > I'd argue that that's important information about unit or functional > > tests being insufficient. > > > > I'm not at all sold on the idea, but thought it worth "socializing" > > for input. > > > > Thanks. 
> > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent tw: @anticdent > > From fungi at yuggoth.org Mon Feb 25 19:11:28 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 25 Feb 2019 19:11:28 +0000 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <20190225191128.fnjrryhf5dhyortw@yuggoth.org> On 2019-02-21 17:10:03 +0000 (+0000), Graham Hayes wrote: [...] > The most effective way would be for the TC to start directly telling > projects what to do - but I feel like that would mean that everyone > would be unhappy with us. [...] To be clear, I can deal with people being mad at the TC if there's sufficient benefit to the community and users. But as the development of OpenStack is based on collaboration, dictating terms (rather than showing teams why some alternative path is a benefit for them) is likely to result in further shrinking the contributor base and/or forming forks under a less authoritarian regime. This is the real cost which must be weighed any time the TC considers using its power to force the hand of a team it governs. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 25 19:53:14 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 25 Feb 2019 19:53:14 +0000 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: <20190225195314.xy4achfbshiipufd@yuggoth.org> On 2019-02-20 19:36:28 +0100 (+0100), Sylvain Bauza wrote: [...] > The last item is interesting, because the OIP draft at the moment > shows more technical requirements than the Foundation ones. For > example, VMT is - at the moment I'm writing those lines - quoted > as a common best practice, which is something we don't ask for our > projects. That's actually a good food for thoughts : security is > crucial and shouldn't be just a tag [3]. OpenStack is mature and > it's our responsibility to care about CVEs. [...] Leaving aside the assertion that "caring about CVEs" is the same thing as caring about security, it's worth mentioning that the centralized OpenStack VMT doesn't (and can't) easily scale. It publishes a set of guidelines, process documents and templates which any team can follow to achieve similar results, but the governance tag we have right now serves mostly to set the scope of the centralized VMT (and in turn expresses some fairly strict criteria for expanding that scope to indicate direct oversight of more deliverables). I'm open to suggestions for how the OpenStack TC can better promote good security practices within teams. I have some thoughts as well, though it probably warrants a separate thread at a later date when I have more time to assemble words on the subject. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 25 20:03:28 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 25 Feb 2019 20:03:28 +0000 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: <20190225200327.wt6u37bhpb73r4be@yuggoth.org> On 2019-02-25 09:09:59 -0500 (-0500), Doug Hellmann wrote: [...] 
> One of the criteria that caught my eye as especially interesting was > that a project must complete at least one release before being > accepted. We've debated that rule in the past, and always come down on > the side encouraging new projects by accepting them early. I wonder if > it's time to reconsider that, and perhaps to start thinking hard about > projects that don't release after they are approved. > > Thoughts? For me, the key difference is that OpenStack already has clear release processes outlined which teams are expected to follow for their deliverables. For confirming a new OIP it's seen as important that they've worked out what their release process *is* and proven that they can follow it (this is, perhaps, similar to why the OIP confirmation criteria mentions other things we don't for new OpenStack project team acceptance, like vulnerability management and governance). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Mon Feb 25 20:04:46 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 25 Feb 2019 15:04:46 -0500 Subject: [dev][keystone] App Cred Capabilities Update In-Reply-To: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> References: <4434fb3c-cdc3-48fc-b9ef-5a7dd0a8e70c@www.fastmail.com> Message-ID: <97014f31-c4a2-4a53-96eb-cb66d5be1a1e@www.fastmail.com> On Thu, Feb 21, 2019, at 10:11 PM, Colleen Murphy wrote: > I have an initial draft of application credential capabilities > available for > review[1]. The spec[2] was not straightforward to implement. There were > a few > parts that I found unintuitive, user-unfriendly, and overcomplicated. > The > current proposed implementation differs from the spec in some ways that > I want > to discuss with the team. Given that non-client library freeze is next > week, > and the changes we need in keystonemiddleware also require changes in > keystoneclient, I'm not sure there is enough time to properly flesh > this out > and allow for thorough code review, but if we miss the deadline we can > be ready > to get this in in the beginning of next cycle (with apologies to > everyone waiting > on this feature - I just want to make sure we get it right with minimal > regrets). > > * Naming > > As always, naming is hard. In the spec, we've called the property that is > attached to the app cred "capabilities" (for the sake of this email I'm going > to call it user-created-rules), and we've called the operator-configured list > of available endpoints "permissible path templates" (for the sake of this email > I'm going to call it operator-created-rules). I find both confusing and > awkward. > > "Permissible path templates" is not a great name because the rule is actually > about more than just the path, it's about the request as a whole, including the > method. I'd like to avoid saying "template" too because that evokes a picture > of something like a Jinja or ERB template, which is not what this is, and > because I'd like to avoid the whole string substitution thing - more on that > below. In the implementation, I've renamed the operator-created-rules to > "access rules". I stole this from Istio after Adam pointed out they have a > really similar concept[3]. I really like this name because I think it > well-describes the thing we're building without already being overloaded. 
> > So far, I've kept the user-created-rules as "capabilities" but I'm not a fan of > it because it's an overloaded word and not very descriptive, although in my > opinion it is still more descriptive than "whitelist" which is what we were > originally going to call it. It might make sense to relate this property > somehow to the operator-created-rules - perhaps by calling it > access_rules_list, or granted_access_rules. Or we could call *this* thing the > access rules, and call the other thing allowed_access_rules or > permitted_access_rules. > > * Substitutions > > The way the spec lays out variable components of the URL paths for both > user-created-rules and operator-created-rules is unnecessarily complex and in > some cases faulty. The only way I can explain how complicated it is is to try > to give an example: > > Let's say we want to allow a user to create an application credential that > allows the holder to issue a GET request on the identity API that looks like > /v3/projects/ef7284b4-3a75-4570-8ea8-b30214f18538/tags/foobar. The spec says > that the string '/v3/projects/{project_id}/tags/{tag}' is what should be > provided verbatim in the "path" attribute of a "capability", then there should > be a "substitutions" attribute that sets {"tag": "foobar"}, then the project_id > should be taken from the token scope at app cred usage time. When the > capability is validated against the operator-created-rules at app cred creation > time, it needs to check that the path string matches exactly, that the keys of > the "substitutions" dict matches the "user template keys" list, and that keys > required by the "context template keys" are provided by the token context. > > Taking the project ID, domain ID, or user ID from the token scope is not going > to work because some of these APIs may actually be system-scoped APIs - it's > just not a hard and fast rule that a project/domain/user ID in the URL maps to > the same user and scope of the token used to create it. Once we do away with > that, it stops making sense to have a separate attribute for the user-provided > substitutions when they could just include that in the URL path to begin with. > So the proposed implementation simply allows the wildcards * and ** in both the > operator-created-rules and user-created-rules, no python-formatting variable > substitutions. > > * UUIDs > > The spec says each operator-created-rule should have its own UUID, and the end > user needs to look up and provide this UUID when they create the app cred rule. > This has the benefit of having a fast lookup because we've put the onus on the > user to look up the rule themselves, but I think it is very user-unfriendly. In > the proposed implementation, I've done away with UUIDs on the > operator-created-rules, and instead a match is looked up based on the service > type, request path and method, and the "allow_chained" attribute (more on that > next). Depending on how big we think this list of APIs will get, this will have > some impact on performance on creation time (not at token validation time). > > UUIDs make sense for singleton resources that are created in a database. I > imagine this starting as an operator-managed configuration file, and maybe some > day in the future a catalog of this sort could be published by the services so > that the operator doesn't have to maintain it themselves. To that end, I've > implemented the operator-created-rules driver with a JSON file backend. 
But > with this style of implementation, including UUIDs for every rule is awkward - > it only makes sense if we're generating the resources within keystone and > storing them in a database. > > * allow_chained > > The allow_chained attribute looks and feels awkward and adds complexity > to the code > Is there really a case when either the user or operator would not want > to allow a > service to make a request on behalf of a user who is making a more > general request? > Also, can we find a better name for it? > > Those are all the major question marks for me. There are some other minor > differentiations from the spec and I will propose an update that makes it > consistent with reality after we have these other questions sorted out. > > Colleen > > [1] > https://review.openstack.org/#/q/topic:bp/whitelist-extension-for-app-creds > [2] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html > [3] > https://istio.io/docs/reference/config/authorization/istio.rbac.v1alpha1/#AccessRule We discussed this a bit on IRC[4] and I've proposed a revision to the spec[5] to capture the proposed alterations. Colleen [4] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-21.log.html#t2019-02-21T21:13:50 [5] https://review.openstack.org/639182 From lbragstad at gmail.com Mon Feb 25 20:24:12 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 25 Feb 2019 14:24:12 -0600 Subject: [nova][dev][ops] can we get rid of 'project_only' in the DB layer? In-Reply-To: <707a1e0d-a270-b030-da5e-9e93e8920c24@gmail.com> References: <3fb287ae-753f-7e56-aa2a-7e3a1d7d6d89@gmail.com> <47bf561e-439b-1642-1aa7-7bf48adca64a@gmail.com> <707a1e0d-a270-b030-da5e-9e93e8920c24@gmail.com> Message-ID: On 2/20/19 2:13 AM, melanie witt wrote: > On Tue, 19 Feb 2019 10:42:32 -0600, Matt Riedemann > wrote: >> On 2/18/2019 8:22 PM, melanie witt wrote: >>> Right, that is the proposal in this email. That we should remove >>> project_only=True and let the API policy check handle whether or not >>> the >>> user from a different project is allowed to get the instance. >>> Otherwise, >>> users are not able to use policy to control the behavior because it is >>> hard-coded in the database layer. >> >> I think this has always been the long-term goal and I remember a spec >> from John about it [1] but having said that, the spec was fairly >> complicated (to me at least) and sounds like there would be a fair bit >> of auditing of the API code we'd need to do before we can remove the DB >> API check, which means it's likely not something we can complete at this >> point in Stein. >> >> For example, I think we have a lot of APIs that run the policy check on >> the context (project_id and user_id) as the target before even pulling >> the resource from the database, and the resource itself should be the >> target, right? >> >> [1] https://review.openstack.org/#/c/433037/ > > Thanks for the link -- I hadn't seen this spec yet. > > Yes, Alex just pinged me in #openstack-nova and now I finally > understand his point that I kept missing before. He tried a test with > my WIP patch and a user from project A was able to 'nova show' an > instance from project B, even though the policy was set to > 'rule:admin_or_owner'. The reason is because when the instance > project/user isn't passed as a target to the policy check, the policy > check for the request context project_id won't do anything. There's > nothing for it to compare project_id with. 
This is interesting because > it makes me wonder, what does a policy check like that [2] do then? It > will take more learning on my part about the policy system to > understand it. Sending a follow up here since Melanie and I ended up going through scope types extensively last week in IRC [0]. Long story short, keystone doesn't do a great job of explaining how developers can use scope types to implement a consistent policy enforcement layer like what Melanie described earlier. We have a section of the contributor guide dedicated to other OpenStack developers and how they can use various identity concepts effectively. I added a section [1] that tries to condense everything we went through in IRC, but it's really two parts. The first is why it's important and the second describes the process of migrating from legacy enforcement patterns to something using the new tools and default roles provided through keystone. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-20.log.html#t2019-02-20T18:35:06 [1] https://review.openstack.org/#/c/638563/ > > -melanie > > [2] > https://github.com/openstack/nova/blob/3548cf59217f62966a21ea65a8cb744606431bd6/nova/api/openstack/compute/servers.py#L425 > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From sbauza at redhat.com Mon Feb 25 20:28:58 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 Feb 2019 21:28:58 +0100 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: <20190225195314.xy4achfbshiipufd@yuggoth.org> References: <20190225195314.xy4achfbshiipufd@yuggoth.org> Message-ID: Le lun. 25 févr. 2019 à 20:58, Jeremy Stanley a écrit : > On 2019-02-20 19:36:28 +0100 (+0100), Sylvain Bauza wrote: > [...] > > The last item is interesting, because the OIP draft at the moment > > shows more technical requirements than the Foundation ones. For > > example, VMT is - at the moment I'm writing those lines - quoted > > as a common best practice, which is something we don't ask for our > > projects. That's actually a good food for thoughts : security is > > crucial and shouldn't be just a tag [3]. OpenStack is mature and > > it's our responsibility to care about CVEs. > [...] > > Leaving aside the assertion that "caring about CVEs" is the same > thing as caring about security, it's worth mentioning that the > centralized OpenStack VMT doesn't (and can't) easily scale. It > publishes a set of guidelines, process documents and templates which > any team can follow to achieve similar results, but the governance > tag we have right now serves mostly to set the scope of the > centralized VMT (and in turn expresses some fairly strict criteria > for expanding that scope to indicate direct oversight of more > deliverables). > Yup and I know that :-( When I said the above, I was about saying that all the projects should have at least one liaison (at least the PTL) and a way to have some security discussions if needed. > > I'm open to suggestions for how the OpenStack TC can better promote > good security practices within teams. I have some thoughts as well, > though it probably warrants a separate thread at a later date when I > have more time to assemble words on the subject. > Yeah agreed. Maybe in the next Forum because we need to have a discussion with the operators for this I think. 
Sylvain -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Feb 25 20:36:44 2019 From: openstack at fried.cc (Eric Fried) Date: Mon, 25 Feb 2019 14:36:44 -0600 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> Message-ID: <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> -1 to serializing jobs with stop-on-first-failure. Human time (having to iterate fixes one failed job at a time) is more valuable than computer time. That's why we make computers. If you want quick feedback on fast-running jobs (that are running in parallel with slower-running jobs), zuul.o.o is available and easy to use. If we wanted to get more efficient about our CI resources, there are other possibilities I would prefer to see tried first. For example, do we need a whole separate node to run each unit & functional job, or could we run them in parallel (or even serially, since all together they would probably still take less time than e.g. a tempest) on a single node? I would also support a commit message tag (or something) that tells zuul not to bother running CI right now. Or a way to go to zuul.o.o and yank a patch out. Realizing of course that these suggestions come from someone who uses zuul in the most superficial way possible (like, I wouldn't know how to write a... job? playbook? with a gun to my head) so they're probably exponentially harder than using the thing Chris mentioned. -efried From arne.wiebalck at cern.ch Mon Feb 25 20:41:03 2019 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 25 Feb 2019 21:41:03 +0100 Subject: [baremetal-sig][ironic] Bare Metal SIG First Steps In-Reply-To: <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> <751D0BC0-B349-4038-A53E-F6D43BA04227@openstack.org> Message-ID: <0F6EDA09-C2F5-44AC-A691-76AB9A85D71C@cern.ch> Hi, > On 20 Feb 2019, at 20:13, Chris Hoge wrote: > > Monday the patch for the creation of the Baremetal-SIG was approved by > the TC and UC [1]. It's exciting to see the level of interest we've > already seen in the planning etherpad [2], and it's time to start kicking > off our first initiatives. > > I'd like to begin by addressing some of the comments in the patch. > > * Wiki vs Etherpad. My own personal preference is to start with the > Etherpad as we get our feet underneath us. As more artifacts and begin > to materialize, I think a Wiki would be an excellent location for > hosting the information. My primary concern with Wikis is their > tendency (from my point of view) to become out of date with the goals > of a group. So, to begin with, unless there are any strong objections, > we can do initial planning on the Etherpad and graduate to more > permanent and resilient landing pages later. > > * Addressing operational aspects of Ironic. I see this as an absolutely > critical aspect of the SIG. I think it should be. When talking to other operators we quickly realised we faced very similar operational issues and needs. If the SIG could help with sharing these experiences and with joining forces to identify and drive the development of required features, that’d be really great. 
> We already have organization devoted mostly > to development, the Ironic team itself. SIGs are meant to be a > collaborative effort between developers, operators, and users. We can > send a patch up to clarify that in the governance document. If you are > an operator, please use this [baremetal-sig] subject heading to start > discussions and organize shared experiences and documentation. > > * The SIG is focused on all aspects of running bare-metal and Ironic, > whether it be as a driver to Nova, a stand-alone service, or built into > another project as a component. One of the amazing things about Ironic > is its flexibility and versatility. We want to highlight that there's > more than one way to do things with Ironic. > > * Chairs. I would very much like for this to be a community experience, > and welcome nominations for co-chairs. I've found in the past that 2-3 > co-chairs makes for a good balance, and given the number of people who > have expressed interest in the SIG in general I think we should go > ahead and appoint two extra people to co-lead the SIG. If this > interests you, please self-nominate here and we can use lazy consensus > to round out the rest of the leadership. If we have several people step > up, we can consider a stronger form of voting using the systems > available to us. I’m happy to help out with co-chairing. As Dmitry, I’m in CET time zone. > > First goals: > > I think that an important first goal is in the publication of a > whitepaper outlining the motivation, deployment methods, and case studies > surrounding OpenStack bare metal, similar to what we did with the > containers whitepaper last year. A goal would be to publish at the Denver > Open Infrastructure summit. Some initial thoughts and rough schedule can > be found here [3], and also linked from the planning etherpad. > > One of the nice things about working on the whitepaper is we can also > generate a bunch of other expanded content based on that work. In > particular, I'd very much like to highlight deployment scenarios and case > studies. I'm thinking of the whitepaper as a seed from which multiple > authors demonstrate their experience and expertise to the benefit of the > entire community. > > Another goal we've talked about at the Foundation is the creation of a > new bare metal logo program. Distinct from something like the OpenStack > Powered Trademark, which focuses on interoperability between OpenStack > products with an emphasis on interoperability, this program would be > devoted to highlighting products that are shipping with Ironic as a key > component of their bare metal management strategy. This could be in many > different configurations, and is focused on the shipping of code that > solves particular problems, whether Ironic is user-facing or not. We're > very much in the planning stages of a program like this, and it's > important to get community feedback early on about if you would find it > useful and what features you would like to see a program like this have. > A few items that we're very interested in getting early feedback on are: > > * The Pixie Boots mascot has been an important part of the Ironic > project, and we would like to apply it to highlight Ironic usage within > the logo program. > * If you're a public cloud, sell a distribution, provide installation > services, or otherwise have some product that uses Ironic, what is your > interest in participating in a logo program? 
> * In addition to the logo, would you find collaboration to build content > on how Ironic is being used in projects and products in our ecosystem > useful? > > Finally, we have the goals of producing and highlighting content for > using and operating Ironic. A list of possible use-cases is included in > the SIG etherpad. We're also thinking about setting up a demo booth with > a small set of server hardware to demonstrate Ironic at the Open > Infrastructure summit. > > On all of those items, your feedback and collaboration is essential. > Please respond to this mailing list if you have thoughts or want to > volunteer for any of these items, and also contribute to the etherpad to > help organize efforts and add any resources you might have available. > Thanks to everyone, and I'll be following up soon with more information > and updates. > > -Chris > > [1] https://review.openstack.org/#/c/634824/ > [2] https://etherpad.openstack.org/p/bare-metal-sig > [3] https://etherpad.openstack.org/p/bare-metal-whitepaper > Cheers, Arne — Arne Wiebalck CERN IT From openstack at nemebean.com Mon Feb 25 20:50:53 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 25 Feb 2019 14:50:53 -0600 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: On 2/25/19 2:36 PM, Eric Fried wrote: > -1 to serializing jobs with stop-on-first-failure. Human time (having to > iterate fixes one failed job at a time) is more valuable than computer > time. That's why we make computers. If you want quick feedback on > fast-running jobs (that are running in parallel with slower-running > jobs), zuul.o.o is available and easy to use. In general I agree with this sentiment. However, I do think there comes a point where we'd be penny-wise and pound-foolish. If we're talking about 5 minute unit test jobs I'm not sure how much human time you're actually losing by serializing behind them, but you may be saving significant amounts of computer time. If we're talking about sufficient gains in gate throughput it might be worth it to lose 5 minutes here or there and in other cases save a couple of hours by not waiting in a long queue behind jobs on patches that are unmergeable anyway. That said, I wouldn't push too hard in either direction until someone crunched the numbers and figured out how much time it would have saved to not run long tests on patch sets with failing unit tests. I feel like it's probably possible to figure that out, and if so then we should do it before making any big decisions on this. > > If we wanted to get more efficient about our CI resources, there are > other possibilities I would prefer to see tried first. For example, do > we need a whole separate node to run each unit & functional job, or > could we run them in parallel (or even serially, since all together they > would probably still take less time than e.g. a tempest) on a single node? > > I would also support a commit message tag (or something) that tells zuul > not to bother running CI right now. Or a way to go to zuul.o.o and yank > a patch out. > > Realizing of course that these suggestions come from someone who uses > zuul in the most superficial way possible (like, I wouldn't know how to > write a... job? playbook? 
with a gun to my head) so they're probably > exponentially harder than using the thing Chris mentioned. > > -efried > From smooney at redhat.com Mon Feb 25 20:59:55 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 Feb 2019 20:59:55 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: On Mon, 2019-02-25 at 14:36 -0600, Eric Fried wrote: > -1 to serializing jobs with stop-on-first-failure. Human time (having to > iterate fixes one failed job at a time) is more valuable than computer > time. That's why we make computers. If you want quick feedback on > fast-running jobs (that are running in parallel with slower-running > jobs), zuul.o.o is available and easy to use. I'm aware of the concern with first failure. Originally I had wanted to split check into "precheck" and "check", where check would only run if precheck passed. After talking to people about that a few weeks ago I changed my perspective to: we should have fastcheck and check as two pipelines that run in parallel and report two comments back from Zuul. So when the fastcheck jobs finish you get a comment back with that set of results, and when the check jobs finish you get the second set. Gate would then require that fastcheck and check both have a +1 from Zuul to run. > > If we wanted to get more efficient about our CI resources, there are > other possibilities I would prefer to see tried first. For example, do > we need a whole separate node to run each unit & functional job, or > could we run them in parallel (or even serially, since all together they > would probably still take less time than e.g. a tempest) on a single node? Currently I'm not sure if Zuul has a way to express "run job 1 on a node, then run job 2, and then job 3..."; if it does, this could certainly help. Nova probably has the slowest unit tests of any project because it probably has the most, taking 7-8 minutes to run on a fast laptop, but compared to a roughly 1 hour 40 minute tempest run, yes, we could probably queue up all tox envs on a single VM and have them easily complete before tempest. Those jobs would also benefit from sharing a pip cache on that VM, as 95% of the dependencies are probably common between the tox envs, modulo the Python version. > > I would also support a commit message tag (or something) that tells zuul > not to bother running CI right now. Or a way to go to zuul.o.o and yank > a patch out. > > Realizing of course that these suggestions come from someone who uses > zuul in the most superficial way possible (like, I wouldn't know how to > write a... job? playbook? with a gun to my head) so they're probably > exponentially harder than using the thing Chris mentioned. > > -efried > From sorrison at gmail.com Mon Feb 25 22:18:45 2019 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 26 Feb 2019 09:18:45 +1100 Subject: [cinder] extra_capabilities for scheduler filters Message-ID: Hi, Just wondering if extra_capabilities should be available in backend_state so that it can be used by scheduler filters? I can't seem to use my custom capabilities within the capabilities filter. Thanks, Sam -------------- next part -------------- An HTML attachment was scrubbed...
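For anyone hitting the same question, a minimal sketch of how custom capabilities are usually wired up to the scheduler's CapabilitiesFilter follows. The backend section name, capability key and volume type name are illustrative (not taken from Sam's deployment), and the exact matching semantics can vary a little between releases:

    # cinder.conf on the cinder-volume host: extra_capabilities is a JSON
    # dict that the volume manager merges into the capabilities the driver
    # reports to the scheduler (depending on release it may need to live in
    # [DEFAULT] rather than the per-backend section)
    [lvm-backend-1]
    volume_backend_name = lvm-backend-1
    extra_capabilities = {"qos_tier": "gold"}

    # Volume type whose (unscoped) extra spec should then be matched against
    # that reported capability by the CapabilitiesFilter
    openstack volume type create gold-tier
    openstack volume type set --property qos_tier=gold gold-tier

If a volume type carrying such an extra spec still cannot be scheduled, the scheduler's debug log around the CapabilitiesFilter is usually the quickest way to see which key failed to match.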
URL: From smooney at redhat.com Mon Feb 25 22:55:10 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 25 Feb 2019 22:55:10 +0000 Subject: [nova][neutron][os-vif] upcoming release of os-vif In-Reply-To: References: Message-ID: just a quick update on where we are. the brctl patches have now merged so we have reached our MVP for stein. we are going to hold the release for another 18 hours or so to allow the ovsdb python lib feature to merged. the new grouping looks like this required: Add native implementation OVSDB API https://review.openstack.org/482226 prefer to merge: make functional tests run on python 3 https://review.openstack.org/638053 docs: Add API docs for VIF types https://review.openstack.org/637009 docs: Add API docs for profile, datapath offload types https://review.openstack.org/638395 defer to train: modify functional base.py to allow using vscode https://review.openstack.org/638058 doc: Use sphinx.ext.todo for profile, datapath offload types https://review.openstack.org/638405 docs: Start using sphinx.ext.autodoc for VIF types https://review.openstack.org/638404 Add 'SUPPORT_BW_CONFIG' option to VIFs https://review.openstack.org/636933 note the docs changes can land after non-client lib freeze as they are not feature but of the others shoudl wait till rc1 i have propsoed an patch to release os-vif 1.15.0 here https://review.openstack.org/#/c/639214/ i will -1 that patch for now and respin if we merge the remain patches early tomorrow or remove teh -1 and we will release with that patch. regards sean. On Mon, 2019-02-25 at 12:22 +0000, Sean Mooney wrote: > hi everyone. > > As many people know the non-client lib freeze is thursday the 28th > so it is time to do the final os-vif release for stein. > > there are a number of pending patches to os-vif > https://review.openstack.org/#/q/project:openstack/os-vif+status:open+branch:master > > while i think we can merge several of them i have sorted them into groups below. > > required: > remove use of brctl from vif_plug_linux_bridge https://review.openstack.org/636822 > remove use of brctl from vif_plug_linux_bridge https://review.openstack.org/636821 > > prefer to merge: > Add native implementation OVSDB API https://review.openstack.org/482226 > make functional tests run on python 3 https://review.openstack.org/638053 > > nice to have: > modify functional base.py to allow using vscode https://review.openstack.org/638058 > docs: Add API docs for VIF types https://review.openstack.org/637009 > doc: Use sphinx.ext.todo for profile, datapath offload types https://review.openstack.org/638405 > docs: Start using sphinx.ext.autodoc for VIF types https://review.openstack.org/638404 > docs: Add API docs for profile, datapath offload types https://review.openstack.org/638395 > > defer: > Add 'SUPPORT_BW_CONFIG' option to VIFs https://review.openstack.org/636933 > > > the required patches are makeing there way though the gate. > i would hope we can merge most of the patches in the prefer and nice to have buckets > but my intent is to propose a patch to the release repo tonight with the head of master proably > at or after 20:00 UTC which will be around noon PST. that will give eu and us > folks that want to review these changes a resonably amount of time. > > ideally we can try and get the release out early tommorow. once that is done os-vif will go into a > a feature freeze until RC1 is released of nova and neutron at which point it will unfreeze. 
> once i submit the patch for the release until thursday i would like to do a full code freeze on os-vif > so that if there are any bugs that show up in the gate we can fix those without accepting any other changes. > once we pass the non-client lib freeze on thrusday non feature patches such as docs changes or testing changes > are fine but features should wait to RC1. > > regards > sean > From ekcs.openstack at gmail.com Mon Feb 25 23:57:48 2019 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 25 Feb 2019 15:57:48 -0800 Subject: Fw: [congress] Handling alarms that can be erroneous In-Reply-To: References: Message-ID: On Sun, Feb 24, 2019 at 7:14 PM AKHIL Jain wrote: > > Hi all, > > This discussion is about keeping, managing and executing actions based on old alarms. > > In Congress, when the policy is created the corresponding actions are executed based on data already existing in datasource tables and on the data that is received later in Congress datasource tables. > So the alarms raised by projects like aodh, monasca are polled by congress and even the webhook notifications for alarm are received and stored in congress. > In Congress, there are two scenarios of policy execution. One, execution based on data already existing before the policy is created and second, policy is created and action is executed at any time after the data is received Fundamentally the current policy formalism is based on state. Policy is evaluated on the latest state, whether that state is formed before or after a policy a created. Based on the emphasis on order, it feels like perhaps what you're looking for is a change-based formalism, where policy is evaluated on the change to state? For example, a state-based policy may say: if it *is* raining, make sure umbrella is used. A change-based policy may say: if it *starts* raining, deploy umbrella. Generally speaking, state-based formalism leads to simpler and more robust policies, but change-based formalism allows for greater control. But the use of one formalism does not necessarily preclude the other. > > Which can be harmful by keeping in mind that old alarms that are INVALID at present are still stored in Congress tables. So the user can trigger FALSE action based on that invalid alarm which can be very harmful to the environment. Just to clarify for someone coming to the discussion: under normal operations, alarms which have become inactive are also accurately reflected in Congress. Of course, as with any distributed system, there are issues with delivery and latency and timing. So we want to make sure Congress offers the right facilities in its policy formalism to enable policy writers to write robust policies that avoid unintended behaviors. (More details in the discussion in the quoted emails.) > > In order to tackle this, there can be multiple ways from the perspective of every OpenStack project handling alarms. > One of the solutions can be: As action needs to be taken immediately after the alarm is raised, so storing only those alarms that have corresponding actions or policies(that will use the alarm) and after the policy is executed on them just discard those alarms or mark those alarm with some field like old, executed, etc. Or there are use cases that require old alarms? > > Also, we need to provide Operator the ability to delete the rows in congress datasource table. This will not completely help in solving this issue but still, it's better functionality to have IMO. 
> > Above solution or any discussed better solution can lead to change in mechanism i.e currently followed that involves policy execution on both new alarm and existing alarm to only new alarm. > > I have added the previous discussion below and discussion in Congress weekly IRC meeting can be found here > http://eavesdrop.openstack.org/meetings/congressteammeeting/2019/congressteammeeting.2019-02-22-04.01.log.html > > Thanks and regards, > Akhil > ________________________________________ > From: Eric K > Sent: Tuesday, February 19, 2019 11:04 AM > To: AKHIL Jain > Subject: Re: Congress Demo and Output > > Thanks for the update! > > Yes of course if created_at field is needed by important use case then > please feel free to add it! Sample policy in the commit message would be > very helpful. > > > Regarding old alarms, I need a couple clarifications: > First, which categories of actions executions are we concerned about? > 1. Actions executed automatically by congress policy. > 2. Actions executed automatically by another service getting data from > Congress. > 3. Actions executed manually by operator based on data from Congress. > > Second, let's clarify exactly what we mean by "old". > There are several categories I can think of: > 1. Alarms which had been activated and then deactivated. > 2. Alarms which had been activated and remains active, but it has been > some time since it first became active. > 3. Alarms which had been activated and triggered some action, but the > alarm remains active because the action do not resolve the alarm. > 4. Alarms which had been activated and triggered some action, and the > action is in the process of resolving the alarm, but in the mean time the > alarm remains active. > > (1) should generally not show up in Congress as active in push update > case, but there are failure scenarios in which an update to deactivate can > fail to reach Congress. > (2) seems to be the thing option 1.1 would get rid of. But I am not clear > what problems (2) causes. Why is a bad idea to execute actions based on an > alarm that has been active for some time and remains active? An example > would help me =) > > I can see (4) causing problems. But I'd like to work through an example to > understand more concretely. In simple cases, Congress policy action > execution behavior actually works well. > > If we have simple case like: > execute[action(1)] :- alarm(1) > Then action(1) is not going to be executed twice by congress because the > behavior is that Congress executes only the NEWLY COMPUTED actions. > > If we have a more complex case like: > execute[action(1)] :- alarm(1) > > execute[action(2)] :- alarm(1), alarm(2) > If alarm (1) activates first, triggering action(1), then alarm (2) > activates before alarm(1) deactivates, action(2) would be triggered > because it is newly computed. Whether we WANT it executed may depend on > the use case. > > And I'd also like to add option 1.3: > Add a new table in (say monasca) called latest_alarm, which is the same as > the current alarms table, except that it contains only the most recently > received active alarm. That way, the policies which must avoid using older > alarms can refer to the latest_alarm table. Whereas policies which would > consider all currently active alarms can refer to the alarms table. > > Looking forward to more discussion! > > > On 2/17/19, 10:44 PM, "AKHIL Jain" wrote: > > >Hi Eric, > > > >There are some questions raised while working on FaultManagement usecase, > >mainly below ones: > >1. 
Keeping old alarms can be very harmful, the operator can execute > >actions based on alarms that are not even existing or valid. > >2. Adding a created_at field in Nova servers table can be useful. > > > >So for the first question, there can be multiple options: > >1.1 Do not store those alarms that do not have any policy created in > >Congress to execute on that alarm > >1.2 Add field in alarm that can tell if the policy is executed using that > >row or not. And giving the operator a command to delete them or > >automatically delete them. > > > >For 2nd question please tell me that its good to go and I will add it. > > > >Regards > >Akhil > > From openstack at fried.cc Tue Feb 26 00:09:57 2019 From: openstack at fried.cc (Eric Fried) Date: Mon, 25 Feb 2019 18:09:57 -0600 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: >> -1 to serializing jobs with stop-on-first-failure. Human time (having to >> iterate fixes one failed job at a time) is more valuable than computer >> time. That's why we make computers. Apologies, I had nova in my head when I said this. For the placement repo specifically (at least as it stands today), running full tox locally is very fast, so you really have no excuse for pushing broken py/func. I would tentatively support stop-on-first-failure in placement only; but we should be on the lookout for a time when this tips the balance. (I hope that never happens, and I'm guessing Chris would agree with that.) -efried From iwienand at redhat.com Tue Feb 26 00:20:48 2019 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 26 Feb 2019 11:20:48 +1100 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: <20190226002048.GA10439@fedora19.localdomain> On Mon, Feb 25, 2019 at 02:36:44PM -0600, Eric Fried wrote: > I would also support a commit message tag (or something) that tells zuul > not to bother running CI right now. Or a way to go to zuul.o.o and yank > a patch out. Note that because edits to zuul jobs (i.e. whatever is in .zuul.yaml) are applied to testing for that change, for WIP changes it's usually easy to just go in and edit out any and all "unrelated" jobs while you're in early iterations [1]. Obviously you put things back when things are ready for review. I think this covers your first point. If you get it wrong, you can upload a new change and Zuul will stop active jobs and start working on the new change, which I think covers the second. -i [1] e.g. https://review.openstack.org/#/c/623137/6/.zuul.d/jobs.yaml From cboylan at sapwetik.org Tue Feb 26 00:42:52 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 25 Feb 2019 19:42:52 -0500 Subject: [placement] zuul job dependencies for greater good? 
In-Reply-To: References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> On Mon, Feb 25, 2019, at 12:51 PM, Ben Nemec wrote: > snip > That said, I wouldn't push too hard in either direction until someone > crunched the numbers and figured out how much time it would have saved > to not run long tests on patch sets with failing unit tests. I feel like > it's probably possible to figure that out, and if so then we should do > it before making any big decisions on this. For numbers the elastic-recheck tool [0] gives us fairly accurate tracking of which issues in the system cause tests to fail. You can use this as a starting point to potentially figure out how expensive indentation errors caught by the pep8 jobs ends up being or how often unittests fail. You probably need to tweak the queries there to get that specific though. Periodically I also dump node resource utilization by project, repo, and job [1]. I haven't automated this because Tobiash has written a much better thing that has Zuul inject this into graphite and we should be able to set up a grafana dashboard for that in the future instead. These numbers won't tell a whole story, but should paint a fairly accurate high level picture of the types of things we should look at to be more node efficient and "time in gate" efficient. Looking at these two really quickly myself it seems that job timeouts are a big cost (anyone looking into why our jobs timeout?). [0] http://status.openstack.org/elastic-recheck/index.html [1] http://paste.openstack.org/show/746083/ Hope this helps, Clark From thierry at openstack.org Tue Feb 26 01:54:51 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 26 Feb 2019 02:54:51 +0100 Subject: [tc][election] New series of campaign questions In-Reply-To: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Message-ID: <5b8bc26f-a3d9-7ee8-a8ed-b908db5777c4@openstack.org> Jean-Philippe Evrard wrote: > [...] > A) In a world where "general" OpenStack issues/features are solved through community goals, do you think the TC should focus on "less interesting" technical issues across projects, like tech debt reduction? Or at the opposite, do you think the TC should tackle the hardest OpenStack wide problems? I'd say both. Community goals can be used to achieve a common level for "OpenStack" -- one way is to use them for user-visible change, but the other is to use them to set basic standards. Ideally it should always been ultimately beneficial to the user. > B) Do you think the TC must check and actively follow all the official projects' health and activities? Why? I found the "health check" exercise a bit time consuming, but interesting, especially for projects I'm not deeply familiar with. Maybe having two people assigned to every project every 6 months is a bit too much though. > C) Do you think the TC's role is to "empower" project and PTLs? If yes, how do you think the TC can help those? If no, do you think it would be the other way around, with PTLs empowering the TC to achieve more? How and why? I believe in servant leadership -- I think the part of the TC's role is to empower everyone else to do their part. By setting standards, resolving conflicts, adapting systems and processes to changing conditions. 
> D) Do you think the community goals should be converted to a "backlog"of time constrained OpenStack "projects", instead of being constrained per cycle? (with the ability to align some goals with releasing when necessary) I personally prefer to keep reasonable goals tied to release cycles, rather than have a constant backlog of TC-dictated objectives. > E) Do you think we should abandon projects' ML tags/IRC channels, to replace them by focus areas? For example, having [storage] to group people from [cinder] or [manila]. Do you think that would help new contributors, or communication in the community? If the focus area is important enough, it should probably be a SIG and use that as the channel / ML subject tag. For example, as bare metal concerns grew larger than just Ironic, a "Bare Metal" SIG was created. If there is a need to discuss common "storage" topics beyond "manila", "cinder" and "swift" topics, I suspect we'd use [storage] naturally. > [...] > G) What do you think of the elections process for the TC? Do you think it is good enough to gather a team to work on hard problems? Or do you think electing person per person have an opposite effect, highlighting individuals versus a common program/shared objectives? Corollary: Do you think we should now elect TC members by groups (of 2 or 3 persons for example), so that we would highlight their program vs highlight individual ideas/qualities? The general idea of behind electing individuals was to get a plurality of views. I feel like if we elected groups of people under a similar "party" or "program" that would (1) reduce the diversity of views and (2) encourage party politics instead of consensus decisions. -- Thierry Carrez (ttx) From zbitter at redhat.com Tue Feb 26 02:01:13 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 25 Feb 2019 21:01:13 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> On 20/02/19 9:46 AM, Chris Dent wrote: > > It's the Campaigning slot of the TC election process, where members > of the community (including the candidates) are encouraged to ask > the candidates questions and witness some debate. I have some > questions. > > First off, I'd like to thank all the candidates for running and > being willing to commit some of their time. I'd also like to that > group as a whole for being large enough to force an election. A > representative body that is not the result of an election would not > be very representing nor have much of a mandate. +1 > The questions follow. Don't feel obliged to answer all of these. The > point here is to inspire some conversation that flows to many > places. I hope other people will ask in the areas I've chosen to > skip. If you have a lot to say, it might make sense to create a > different message for each response. Beware, you might be judged on > your email etiquette and attention to good email technique! > > * How do you account for the low number of candidates? Do you >   consider this a problem? Why or why not? In retrospect, it appears the tradition of waiting until the last possible minute had a lot to do with it. But I definitely had some other thoughts the day before nominations closed. Looking at some of the folks who either decided not to run or who delayed their decision to run, I see a lot of standard, non-alarming reasons. Someone who is organising a wedding, someone who just changed jobs, someone who has a big internal project to work on at $DAYJOB, &c. 
However, those things have always happened. The pool of candidates has shrunk to the point where folks who are borderline on having enough time to run are now the difference between having an election and having unfilled seats. That's in part due to the lower number of folks employed to work on OpenStack, and to the extra workload being shouldered by those left. This is a problem for OpenStack, for at least the reason you mentioned above: TC members don't have much of a mandate if they didn't actually have an election. Shrinking the TC is one option for dealing with this, and one we should probably at least investigate. But we could also work harder on developing future leaders of the project and getting them involved in the work of the TC (I guess the Stewardship WG used to be a more formal expression of this) before they eventually win seats on the TC - in much the same way that projects should hope to turn contributors into core reviewers. > * Compare and contrast the role of the TC now to 4 years ago. If you >   weren't around 4 years ago, comment on the changes you've seen >   over the time you have been around. In either case: What do you >   think the TC role should be now? In some ways it's very different and in some ways depressingly familiar :) Four years ago, the TC was focused on managing explosive growth of contributions (particularly in the form of new projects). The TC of that era did a lot of work putting governance into place that would make it possible to deal with the flood of new projects. That involved the TC removing itself from the technical review of new projects, in part because it had failed to give at least one project *any answer at all* on two successive graduation applications due to a lack of time on the part of TC members to investigate what it was for. Fast-forwarding to the present, the flood has slowed to a trickle and the governance structures are mature, but the TC is still in many ways very focused on 'governance' to the exclusion of helping to get the technical community to work in concert. > * What, to you, is the single most important thing the OpenStack >   community needs to do to ensure that packagers, deployers, and >   hobbyist users of OpenStack are willing to consistently upstream >   their fixes and have a positive experience when they do? What is >   the TC's role in helping make that "important thing" happen? Since you're forcing me to pick only one thing, I am going to say 'education'. The economics of participating in an open source project are just not intuitive to anyone who isn't steeped in this stuff, and most people are not. Most companies are vulnerable to acting especially clueless, because the folks making the decisions are not the same folks who would even have the chance to work in the community. Even most of the companies working on OpenStack in the early days were pretty bad at it, though almost all improved with practice. Forking a rapidly-evolving project is one of the most expensive mistakes you'll ever make, and if people understood that I really don't believe they'd do it because the time-to-market gains or whatever are just not worth it, and soon evaporate when you're looking at the time-to-market for v2 of your now-unmaintainable product. I feel like it's part of OpenStack's role in the world to explain the open source way - both the benefits it confers and the responsibilities it entails - to folks who are receptive but who were not closely associated enough with projects like Linux to have learned the ins and outs. 
(If I get a second answer, it'd be the one I brought up at the Forum in Berlin: prioritising giving immediate feedback to any and all contributions from those folks. http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000052.html) > * If you had a magic wand and could inspire and make a single >   sweeping architectural or software change across the services, >   what would it be? For now, ignore legacy or upgrade concerns. Oh wow you went there. If I had to pick one and I had a magic wand, I would replace the current backend infrastructure (comprising MariaDB, RabbitMQ, and at least theoretically etcd3) with a single component, with requirements such as the following: - Efficiently scalable to some defined level - First-class support for messaging patterns (e.g. pub-sub) - Reliable to a passes-jepsen-or-better standard - Well-defined semantics for resolving conflicts/partitions/&c. - Support for large numbers of geographically distributed replicas (for edge use cases) - To the extent that any of this needs to be implemented in the OpenStack community, it only has to be implemented once In response to the inevitable complaints: - Yes, MySQL can be scaled using sharding. But we have 50+ projects and only 2 implementations of sharding. And to underscore just how difficult that is, both were in the same project. - Yes, RabbitMQ supports messaging patterns, but it's not something we've built our services around (e.g. Kubernetes is entirely event driven, but in OpenStack very little data escapes the compute node without querying for it) - in fact we mostly use it (inappropriately) for RPC. And it's a separate service from the database, so you can't e.g. have a transaction that includes sending a message. - RabbitMQ failed Jepsen spectacularly (https://aphyr.com/posts/315-call-me-maybe-rabbitmq) - if you can manage to keep it running at all. It's a constant source of complaints from OpenStack operators. Galera also failed BTW (https://aphyr.com/posts/327-call-me-maybe-mariadb-galera-cluster). - As an OpenStack developer, I can't even find any documentation to tell me what transaction isolation level I should expect from the DB. Not only is it not documented anywhere, but AFAICT until very recently we officially supported two *different* levels and at least tacitly encouraged operators to effectively choose for themselves which to use. How can I, as an OpenStack developer, even hope to write correct code under those circumstances? This is not theoretical; many times I've been in discussions that ended with us just guessing what the database would do and hoping for the best. I went searching fruitlessly for docs yet again because I literally spent half of Wednesday trying to implement something without having a race condition, and whether it was possible or not (spoiler: it's not) depended on the transaction isolation level. - Something something Edge. - See the first point above. I'm not wedded to any particular choice, but I believe such things exist. I'm particularly intrigued by FoundationDB, although extremely wary of their non-open development model for the core. Anyhow, with that in place I would wave my magic wand and build a messaging library and then a user-facing API that used it. Then I'd have everything in OpenStack work together as an event-driven system, with (select) events potentially routed out to userspace and back in again if the user so chose. 
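To make the isolation-level point above concrete, here is a deliberately tiny, purely illustrative example (the table and column names are invented) of a read-modify-write whose outcome depends entirely on the isolation level and locking strategy in use:

    -- Two sessions increment the same counter with a naive SELECT-then-UPDATE.
    T1: BEGIN; SELECT used FROM quota_usage WHERE project = 'p1';   -- reads 5
    T2: BEGIN; SELECT used FROM quota_usage WHERE project = 'p1';   -- also reads 5
    T1: UPDATE quota_usage SET used = 6 WHERE project = 'p1'; COMMIT;
    T2: UPDATE quota_usage SET used = 6 WHERE project = 'p1'; COMMIT;
    -- Under InnoDB's READ COMMITTED or REPEATABLE READ both commits succeed
    -- and one increment is silently lost; only SELECT ... FOR UPDATE, a
    -- compare-and-swap style "UPDATE ... WHERE used = 5", or SERIALIZABLE
    -- turns the conflict into something the application can detect and retry.

Without knowing which level a deployment runs, a developer cannot tell which of these outcomes their code will get, which is exactly the documentation gap described above.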
I don't think there's a path to this happening, but I think it's a very valuable exercise to imagine how things could be and think about what we're missing out on. >   What role should the TC have in inspiring and driving such >   changes? I don't think it should be the TC's job to necessarily come up with such ideas. Great ideas might come from anywhere. And I don't think it can be the TC's job to tell people what ideas to implement, because that's not how open source works. Plagiarising myself (from https://review.openstack.org/#/c/622400/1/reference/role-of-the-tc.rst at 73): "The ideas we need are out there in the community. The TC's job is to ensure that there's space for them to be heard, to amplify the best ones, to build consensus, to ensure that decisions get made when consensus is not possible (i.e. we don't default to paralysis when reasonable people disagree), and to be electorally accountable for getting it wrong." > * What can the TC do to make sure that the community (in its many >   dimensions) is informed of and engaged in the discussions and >   decisions of the TC? The TC is extremely open in how it conducts it's business. So the challenge, as always is in filtering the firehose. I think the weekly TC update email from the chair is a big improvement, because it's significantly easier to engage with than trying to follow everything. > * How do you counter people who assert the TC is not relevant? >   (Presumably you think it is, otherwise you would not have run. If >   you don't, why did you run?) Trick question! I wouldn't try to counter them. If somebody believes that the TC is not relevant *to them* then they're almost certainly right. I would be interested in listening to what things _would_ be relevant to them and exploring whether those things should be done, whether they should be done by the TC, or whether the TC should delegate them to some other group. But let's remember that there will always be people who are just heads-down in their own project getting stuff done and never needing to worry about the TC, and that's fine. OpenStack can accommodate an ~unlimited number of those folks. But it can't prosper with only those folks. We need people who keep the lines of communication open between projects and shape the system as a whole. If the TC can't be relevant to that latter group, then we really have a problem. cheers, Zane. From manuel.sb at garvan.org.au Tue Feb 26 05:07:49 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 26 Feb 2019 05:07:49 +0000 Subject: failing to create a vm with SR-IOV (through neutron) - device not found but not listed under libvirt devices Message-ID: <9D8A2486E35F0941A60430473E29F15B017E85F685@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community, I am facing an issue trying to setup neutron with SR-IOV and would like to ask for some help: openstack server show fault. My environment is Openstack Rocky deployed with kolla-ansible. I have edited the configuration files as suggested by the documentation but for some reason nova can't find the PCI device for pass-through. 
This is my setup [root at zeus-59 ~]# lspci -nn | grep -i mell 88:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] 88:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] 88:00.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:02.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] sriov_agent.ini in compute node content: [sriov_nic] physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 exclude_devices = [securitygroup] firewall_driver = neutron.agent.firewall.NoopFirewallDriver nova.conf (nova-compute): ... [pci] passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1015", "physical_network": "sriovtenant1" }] alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } ml2_conf.ini (neutron-server): [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = openvswitch,l2population,sriovnicswitch extension_drivers = port_security [ml2_type_vlan] network_vlan_ranges = physnet1, sriovtenant1 [ml2_type_flat] flat_networks = sriovtenant1 ... 
Sriov_agent.ini (compute node): [sriov_nic] physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 exclude_devices = [securitygroup] firewall_driver = neutron.agent.firewall.NoopFirewallDriver Create network and subnet: openstack network create \ --provider-physical-network sriovtenant1 \ --provider-network-type flat \ sriovnet1 openstack subnet create --network sriovnet1 \ --subnet-range=10.0.0.0/16 \ --allocation-pool start=10.0.32.10,end=10.0.32.20 \ sriovnet1_sub1 Create port: openstack port create --network sriovnet1 --vnic-type direct sriovnet1-port1 Create server: openstack server create --flavor m1.large \ --image centos7.5-image \ --nic port-id=373fe020-7b89-40ab-a8e4-76b82cb47490 \ --key-name mykey \ --availability-zone nova:zeus-59.localdomain \ vm-sriov-neutron-1 The server does not gets created with the following error: | fault | {u'message': u'PCI device not found for request ID 419c5fa0-1b7a-4a83-b691-2fcb0fba94cc.', u'code': 500, u'details': u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1940, in _do_build_and_run_instance\n filter_properties, request_spec)\n File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2229, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'created': u'2019-02-26T04:21:01Z'} | Nova logs 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager Traceback (most recent call last): 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7778, in _update_available_resource_for_node 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rt.update_available_resource(context, nodename) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 705, in update_available_resource 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6551, in get_available_resource 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager self._get_pci_passthrough_devices() 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5978, in _get_pci_passthrough_devices 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pci_info.append(self._get_pcidev_info(name)) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5939, in _get_pcidev_info 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager device.update(_get_device_capabilities(device, address)) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5910, in _get_device_capabilities 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5853, in _get_pcinet_info 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 873, in device_lookup_by_name 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) 2019-02-26 
15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager six.reraise(c, e, tb) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = meth(*args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4305, in nodeDeviceLookupByName 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager if ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp136s1f2_36_cc_24_fb_76_e3' Devices [root at zeus-59 ~]# docker exec nova_libvirt sudo virsh nodedev-list block_sda_SATA_SSD_7F2F0759012400117253 computer drm_card0 net_enp136s1_66_0f_f8_42_3b_bb net_enp136s1f1_82_e4_57_39_4c_bf net_enp136s1f2_6e_52_31_b0_04_b7???? net_enp136s1f3_e6_be_60_01_f3_9f net_enp136s1f4_be_d1_8e_9e_46_ef net_enp136s1f5_4e_61_1c_40_98_dc net_enp136s1f6_4a_75_ee_f7_c9_68 net_enp136s1f7_de_3e_da_36_48_02 net_enp136s2_3e_dc_23_b4_ca_c4 net_enp136s2f1_c6_12_aa_52_fa_34 net_enp1s0f0_0c_c4_7a_a4_82_ae net_enp1s0f1_0c_c4_7a_a4_82_af net_ens1f0_90_e2_ba_03_4c_c8 net_ens1f1_90_e2_ba_03_4c_c9 net_ens2f0_7c_fe_90_12_22_b4 net_ens2f1_7c_fe_90_12_22_b5 net_ens2f2_36_cc_24_fb_76_e3 net_ens2f3_82_42_06_38_9a_b7 net_ens2f4_7e_c0_bb_98_72_f4 net_ens2f5_be_9c_1c_25_ff_0d net_ens2f6_2e_01_90_f8_44_b5 net_ens2f7_7e_6a_6d_4e_89_1b pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_02_0 pci_0000_00_02_1 pci_0000_00_02_2 pci_0000_00_03_0 pci_0000_00_03_1 pci_0000_00_03_2 ... Note: documentation says to use vlan but I am using flat networks Why is nova looking for a device that libvirt doesn't know about? Thank you very much NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongli.he at intel.com Tue Feb 26 05:53:40 2019 From: yongli.he at intel.com (yonglihe) Date: Tue, 26 Feb 2019 13:53:40 +0800 Subject: [nova] nova spec show-server-group response format Message-ID: Hi, guys The approved spec show-server-group had 2 options for response. 1. 
First one(current spec):          "server": {             "server_groups": [ # not cached                    "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8"             ]         }    } related discuss: https://review.openstack.org/#/c/612255/11/specs/stein/approved/show-server-group.rst at 67 digest:  This  decouple the current  implementation of server groups then get a  generic API. 2 Second one:         "server": {             "server_group": {                 "name": "groupA",                 "id": "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8"             } related discuss: https://review.openstack.org/#/c/612255/4/specs/stein/approved/list-server-group.rst at 62 digest: people have tried to change the api to allow adding/removing servers to/from groups, but still not implement yet. we need align this for continuing this work. thanks. Reference: bp: https://blueprints.launchpad.net/nova/+spec/show-server-group spec: https://review.openstack.org/#/c/612255/13/specs/stein/approved/show-server-group.rst code:  https://review.openstack.org/#/c/621474/23 Regards Yongli He -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Tue Feb 26 06:45:32 2019 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 26 Feb 2019 14:45:32 +0800 Subject: [nova]Supporting volume_name when booting from volume Message-ID: Hi,all Currently, when creating a new boot-from-volume instance, the user can only control the name of the volume by pre-creating a bootable image-backed volume with the desired name in cinder and providing it to nova during the boot process. It is not friendly to the user when we want to use the desired name volume to boot instance. What do you think of this suggestion? Can you tell me more about this ?Thank you very much. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Tue Feb 26 06:41:36 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 26 Feb 2019 06:41:36 +0000 Subject: [nova][neutron][kolla]failing to create a vm with SR-IOV (through neutron) - device not found but not listed under libvirt devices Message-ID: <9D8A2486E35F0941A60430473E29F15B017E85F80D@MXDB2.ad.garvan.unsw.edu.au> I also tried the devices like this, but still getting the same error message passthrough_whitelist = [... {"devname": "ens2f0", "physical_network": "sriovtenant1"}, {"devname": "ens2f1", "physical_network": "sriovtenant1"}] thank you Manuel From: Manuel Sopena Ballesteros [mailto:manuel.sb at garvan.org.au] Sent: Tuesday, February 26, 2019 4:08 PM To: openstack at lists.openstack.org Cc: Adrian Chiris (adrianc at mellanox.com) Subject: failing to create a vm with SR-IOV (through neutron) - device not found but not listed under libvirt devices Dear Openstack community, I am facing an issue trying to setup neutron with SR-IOV and would like to ask for some help: openstack server show fault. My environment is Openstack Rocky deployed with kolla-ansible. I have edited the configuration files as suggested by the documentation but for some reason nova can't find the PCI device for pass-through. 
This is my setup [root at zeus-59 ~]# lspci -nn | grep -i mell 88:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] 88:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] 88:00.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:00.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:01.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] 88:02.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] sriov_agent.ini in compute node content: [sriov_nic] physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 exclude_devices = [securitygroup] firewall_driver = neutron.agent.firewall.NoopFirewallDriver nova.conf (nova-compute): ... [pci] passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1015", "physical_network": "sriovtenant1" }] alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } ml2_conf.ini (neutron-server): [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = openvswitch,l2population,sriovnicswitch extension_drivers = port_security [ml2_type_vlan] network_vlan_ranges = physnet1, sriovtenant1 [ml2_type_flat] flat_networks = sriovtenant1 ... 
Sriov_agent.ini (compute node): [sriov_nic] physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 exclude_devices = [securitygroup] firewall_driver = neutron.agent.firewall.NoopFirewallDriver Create network and subnet: openstack network create \ --provider-physical-network sriovtenant1 \ --provider-network-type flat \ sriovnet1 openstack subnet create --network sriovnet1 \ --subnet-range=10.0.0.0/16 \ --allocation-pool start=10.0.32.10,end=10.0.32.20 \ sriovnet1_sub1 Create port: openstack port create --network sriovnet1 --vnic-type direct sriovnet1-port1 Create server: openstack server create --flavor m1.large \ --image centos7.5-image \ --nic port-id=373fe020-7b89-40ab-a8e4-76b82cb47490 \ --key-name mykey \ --availability-zone nova:zeus-59.localdomain \ vm-sriov-neutron-1 The server does not gets created with the following error: | fault | {u'message': u'PCI device not found for request ID 419c5fa0-1b7a-4a83-b691-2fcb0fba94cc.', u'code': 500, u'details': u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1940, in _do_build_and_run_instance\n filter_properties, request_spec)\n File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2229, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'created': u'2019-02-26T04:21:01Z'} | Nova logs 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager Traceback (most recent call last): 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7778, in _update_available_resource_for_node 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rt.update_available_resource(context, nodename) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 705, in update_available_resource 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6551, in get_available_resource 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager self._get_pci_passthrough_devices() 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5978, in _get_pci_passthrough_devices 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pci_info.append(self._get_pcidev_info(name)) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5939, in _get_pcidev_info 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager device.update(_get_device_capabilities(device, address)) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5910, in _get_device_capabilities 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5853, in _get_pcinet_info 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 873, in device_lookup_by_name 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) 2019-02-26 
15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager six.reraise(c, e, tb) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = meth(*args, **kwargs) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4305, in nodeDeviceLookupByName 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager if ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self) 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp136s1f2_36_cc_24_fb_76_e3' Devices [root at zeus-59 ~]# docker exec nova_libvirt sudo virsh nodedev-list block_sda_SATA_SSD_7F2F0759012400117253 computer drm_card0 net_enp136s1_66_0f_f8_42_3b_bb net_enp136s1f1_82_e4_57_39_4c_bf net_enp136s1f2_6e_52_31_b0_04_b7???? net_enp136s1f3_e6_be_60_01_f3_9f net_enp136s1f4_be_d1_8e_9e_46_ef net_enp136s1f5_4e_61_1c_40_98_dc net_enp136s1f6_4a_75_ee_f7_c9_68 net_enp136s1f7_de_3e_da_36_48_02 net_enp136s2_3e_dc_23_b4_ca_c4 net_enp136s2f1_c6_12_aa_52_fa_34 net_enp1s0f0_0c_c4_7a_a4_82_ae net_enp1s0f1_0c_c4_7a_a4_82_af net_ens1f0_90_e2_ba_03_4c_c8 net_ens1f1_90_e2_ba_03_4c_c9 net_ens2f0_7c_fe_90_12_22_b4 net_ens2f1_7c_fe_90_12_22_b5 net_ens2f2_36_cc_24_fb_76_e3 net_ens2f3_82_42_06_38_9a_b7 net_ens2f4_7e_c0_bb_98_72_f4 net_ens2f5_be_9c_1c_25_ff_0d net_ens2f6_2e_01_90_f8_44_b5 net_ens2f7_7e_6a_6d_4e_89_1b pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_02_0 pci_0000_00_02_1 pci_0000_00_02_2 pci_0000_00_03_0 pci_0000_00_03_1 pci_0000_00_03_2 ... Note: documentation says to use vlan but I am using flat networks Why is nova looking for a device that libvirt doesn't know about? Thank you very much NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. 
This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Tue Feb 26 09:15:55 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 26 Feb 2019 17:15:55 +0800 Subject: [nova]Supporting volume_name when booting from volume In-Reply-To: References: Message-ID: Hi, I'd say it might be treated as a proxy feature, I guess one thing you can do is to have a sort of combination API in your product, first create the instance and then find the volume and update the name. On Tue, Feb 26, 2019 at 2:52 PM Rambo wrote: > Hi,all > > Currently, when creating a new boot-from-volume instance, the user > can only control the name of the volume by pre-creating a bootable > image-backed volume with the desired name in cinder and providing it to > nova during the boot process. It is not friendly to the user when we want > to use the desired name volume to boot instance. What do you think of this > suggestion? Can you tell me more about this ?Thank you very much. > > > > > > > > > > > Best Regards > Rambo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Feb 26 09:37:34 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 26 Feb 2019 10:37:34 +0100 Subject: [cinder] extra_capabilities for scheduler filters In-Reply-To: References: Message-ID: <20190226093734.5ochha6aobx5fa55@localhost> On 26/02, Sam Morrison wrote: > Hi, > Just wondering if extra_capabilities should be available in backend_state > to be able to be used by scheduler filters? > > I can't seem to use my custom capabilities within the capabilities filter. > > Thanks, > Sam Hi Sam, As far as I know this is supported. The volume manager is sending this information to the scheduler and the capabilities filter should be able to match the extra specs from the volume type to the extra capabilities you have set in the configuration of the cinder volume service using available operations: =, , , etc. Cheers, Gorka. From bdobreli at redhat.com Tue Feb 26 09:46:11 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 26 Feb 2019 10:46:11 +0100 Subject: [placement][TripleO] zuul job dependencies for greater good? In-Reply-To: <20190226002048.GA10439@fedora19.localdomain> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <20190226002048.GA10439@fedora19.localdomain> Message-ID: I attempted [0] to do that for tripleo-ci, but zuul was (and still does) complaining for some weird graphs building things :/ See also the related topic [1] from the past. [0] https://review.openstack.org/#/c/568543 [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/ 127869.html On 26.02.2019 1:20, Ian Wienand wrote: > On Mon, Feb 25, 2019 at 02:36:44PM -0600, Eric Fried wrote: >> I would also support a commit message tag (or something) that tells zuul >> not to bother running CI right now. Or a way to go to zuul.o.o and yank >> a patch out. > > Note that because edits to zuul jobs (i.e. whatever is in .zuul.yaml) > are applied to testing for that change, for WIP changes it's usually > easy to just go in and edit out any and all "unrelated" jobs while > you're in early iterations [1]. Obviously you put things back when > things are ready for review. > > I think this covers your first point. 
If you get it wrong, you can > upload a new change and Zuul will stop active jobs and start working > on the new change, which I think covers the second. > > -i > > [1] e.g. https://review.openstack.org/#/c/623137/6/.zuul.d/jobs.yaml > -- Best regards, Bogdan Dobrelya, Irc #bogdando From christian.zunker at codecentric.cloud Tue Feb 26 10:00:33 2019 From: christian.zunker at codecentric.cloud (Christian Zunker) Date: Tue, 26 Feb 2019 11:00:33 +0100 Subject: [ceilometer] radosgw pollster In-Reply-To: References: Message-ID: Hi Florian, which version of OpenStack are you using? The radosgw metric names were different in some versions: https://bugs.launchpad.net/ceilometer/+bug/1726458 Christian Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann < florian.engelmann at everyware.ch>: > Hi, > > I failed to poll any usage data from our radosgw. I get > > 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.containers.objects in the context of > radosgw_300s_pollsters > 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent > pollster radosgw.containers.objects from polling [ domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > [...] > > id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self': > u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source > radosgw_300s_pollsters anymore!: PollsterPermanentError > > Configurations like: > cat polling.yaml > --- > sources: > - name: radosgw_300s_pollsters > interval: 300 > meters: > - radosgw.usage > - radosgw.objects > - radosgw.objects.size > - radosgw.objects.containers > - radosgw.containers.objects > - radosgw.containers.objects.size > > > Also tried radosgw.api.requests instead of radowsgw.usage. > > ceilometer.conf > [...] 
> [service_types] > radosgw = object-store > > [rgw_admin_credentials] > access_key = xxxxx0Z0xxxxxxxxxxxx > secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > [rgw_client] > implicit_tenants = true > > Endpoints: > | xxxxxxx | region | swift | object-store | True | admin > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | internal > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | public > | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s | > > Ceilometer user: > { > "user_id": "ceilometer", > "display_name": "ceilometer", > "email": "", > "suspended": 0, > "max_buckets": 1000, > "auid": 0, > "subusers": [], > "keys": [ > { > "user": "ceilometer", > "access_key": "xxxxxxxxxxxxxxxxxx", > "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" > } > ], > "swift_keys": [], > "caps": [ > { > "type": "buckets", > "perm": "read" > }, > { > "type": "metadata", > "perm": "read" > }, > { > "type": "usage", > "perm": "read" > }, > { > "type": "users", > "perm": "read" > } > ], > "op_mask": "read, write, delete", > "default_placement": "", > "placement_tags": [], > "bucket_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "user_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "temp_url_keys": [], > "type": "rgw" > } > > > radosgw config: > [client.rgw.xxxxxxxxxxx] > host = somehost > rgw frontends = "civetweb port=7480 num_threads=512" > rgw num rados handles = 8 > rgw thread pool size = 512 > rgw cache enabled = true > rgw dns name = s3.xxxxxx.xxx > rgw enable usage log = true > rgw usage log tick interval = 30 > rgw realm = public > rgw zonegroup = xxx > rgw zone = xxxxx > rgw resolve cname = False > rgw usage log flush threshold = 1024 > rgw usage max user shards = 1 > rgw usage max shards = 32 > rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > rgw_keystone_admin_domain = default > rgw_keystone_admin_project = service > rgw_keystone_admin_user = swift > rgw_keystone_admin_password = > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > rgw_keystone_accepted_roles = member,_member_,admin > rgw_keystone_accepted_admin_roles = admin > rgw_keystone_api_version = 3 > rgw_keystone_verify_ssl = false > rgw_keystone_implicit_tenants = true > rgw_keystone_admin_tenant = default > rgw_keystone_revocation_interval = 0 > rgw_keystone_token_cache_size = 0 > rgw_s3_auth_use_keystone = true > rgw_max_attr_size = 1024 > rgw_max_attrs_num_in_req = 32 > rgw_max_attr_name_len = 64 > rgw_swift_account_in_url = true > rgw_swift_versioning_enabled = true > rgw_enable_apis = s3,swift,swift_auth,admin > rgw_swift_enforce_content_length = true > > > > > Any idea whats going on? > > All the best, > Florian > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Feb 26 10:45:02 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Feb 2019 10:45:02 +0000 Subject: [nova][neutron][kolla]failing to create a vm with SR-IOV (through neutron) - device not found but not listed under libvirt devices In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E85F80D@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E85F80D@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <57e661c50e49fc47e6052023c7ea6d8b7097fa5e.camel@redhat.com> On Tue, 2019-02-26 at 06:41 +0000, Manuel Sopena Ballesteros wrote: > I also tried the devices like this, but still getting the same error message > > passthrough_whitelist = [… {"devname": "ens2f0", "physical_network": "sriovtenant1"}, {"devname": "ens2f1", > "physical_network": "sriovtenant1"}] just a fyi using "devname": "ens2f0" is the most fragile way of whitelisting pci devices as teh devname can change if the device is attached to a guest and then detached and bound back to the host. i generally recommend not useing it and always prefering to use either vendor id and prodct id and/or pci address in the whitelist. > > thank you > > Manuel > > From: Manuel Sopena Ballesteros [mailto:manuel.sb at garvan.org.au] > Sent: Tuesday, February 26, 2019 4:08 PM > To: openstack at lists.openstack.org > Cc: Adrian Chiris (adrianc at mellanox.com) > Subject: failing to create a vm with SR-IOV (through neutron) - device not found but not listed under libvirt devices > > Dear Openstack community, > > I am facing an issue trying to setup neutron with SR-IOV and would like to ask for some help: > openstack server show fault. > > My environment is Openstack Rocky deployed with kolla-ansible. > > I have edited the configuration files as suggested by the documentation but for some reason nova can’t find the PCI > device for pass-through. 
> > This is my setup > > [root at zeus-59 ~]# lspci -nn | grep -i mell > 88:00.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] > 88:00.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] [15b3:1015] > 88:00.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:00.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:00.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:00.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:00.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:00.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.2 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.3 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.4 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.5 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.6 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:01.7 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > 88:02.1 Ethernet controller [0200]: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] [15b3:1016] > > > sriov_agent.ini in compute node content: > > [sriov_nic] > physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 > exclude_devices = > > [securitygroup] > firewall_driver = neutron.agent.firewall.NoopFirewallDriver > > > nova.conf (nova-compute): > > … > [pci] > passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1db1" }, { "vendor_id": "15b3", "product_id": "1015", > "physical_network": "sriovtenant1" }] > alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" } > > > ml2_conf.ini (neutron-server): > > [ml2] > type_drivers = flat,vlan,vxlan > tenant_network_types = vxlan > mechanism_drivers = openvswitch,l2population,sriovnicswitch > extension_drivers = port_security > > [ml2_type_vlan] > network_vlan_ranges = physnet1, sriovtenant1 > > [ml2_type_flat] > flat_networks = sriovtenant1 > … > > > Sriov_agent.ini (compute node): > > [sriov_nic] > physical_device_mappings = sriovtenant1:ens2f0,sriovtenant1:ens2f1 > exclude_devices = > > [securitygroup] > firewall_driver = neutron.agent.firewall.NoopFirewallDriver > > > Create network and subnet: > > openstack network create \ > --provider-physical-network sriovtenant1 \ > --provider-network-type flat \ > sriovnet1 > > openstack subnet create --network sriovnet1 \ > --subnet-range=10.0.0.0/16 \ > --allocation-pool 
start=10.0.32.10,end=10.0.32.20 \ > sriovnet1_sub1 > > Create port: > > openstack port create --network sriovnet1 --vnic-type direct sriovnet1-port1 > > > Create server: > > openstack server create --flavor m1.large \ > --image centos7.5-image \ > --nic port-id=373fe020-7b89-40ab-a8e4-76b82cb47490 \ > --key-name mykey \ > --availability-zone nova:zeus-59.localdomain \ > vm-sriov-neutron-1 > > > The server does not gets created with the following error: > > | fault | {u'message': u'PCI device not found for request ID 419c5fa0-1b7a-4a83-b691- > 2fcb0fba94cc.', u'code': 500, u'details': u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line > 1940, in _do_build_and_run_instance\n filter_properties, request_spec)\n File "/usr/lib/python2.7/site- > packages/nova/compute/manager.py", line 2229, in _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u'created': u'2019-02-26T04:21:01Z'} | > > > Nova logs > > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager Traceback (most recent call last): > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/compute/manager.py", line 7778, in _update_available_resource_for_node > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rt.update_available_resource(context, nodename) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/compute/resource_tracker.py", line 705, in update_available_resource > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/driver.py", line 6551, in get_available_resource > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager self._get_pci_passthrough_devices() > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/driver.py", line 5978, in _get_pci_passthrough_devices > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pci_info.append(self._get_pcidev_info(name)) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/driver.py", line 5939, in _get_pcidev_info > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager device.update(_get_device_capabilities(device, address)) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/driver.py", line 5910, in _get_device_capabilities > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/driver.py", line 5853, in _get_pcinet_info > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site- > packages/nova/virt/libvirt/host.py", line 873, in device_lookup_by_name > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line > 186, in doit > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line > 144, in proxy_call > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line > 125, in execute > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager six.reraise(c, e, tb) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line > 83, in tworker > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager rv = meth(*args, **kwargs) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager File "/usr/lib64/python2.7/site-packages/libvirt.py", line > 4305, in nodeDeviceLookupByName > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager if ret is None:raise > libvirtError('virNodeDeviceLookupByName() failed', conn=self) > 2019-02-26 15:36:42.595 7 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching > name 'net_enp136s1f2_36_cc_24_fb_76_e3' > > > Devices > > [root at zeus-59 ~]# docker exec nova_libvirt sudo virsh nodedev-list > block_sda_SATA_SSD_7F2F0759012400117253 > computer > drm_card0 > net_enp136s1_66_0f_f8_42_3b_bb > net_enp136s1f1_82_e4_57_39_4c_bf > net_enp136s1f2_6e_52_31_b0_04_b7???? > net_enp136s1f3_e6_be_60_01_f3_9f > net_enp136s1f4_be_d1_8e_9e_46_ef > net_enp136s1f5_4e_61_1c_40_98_dc > net_enp136s1f6_4a_75_ee_f7_c9_68 > net_enp136s1f7_de_3e_da_36_48_02 > net_enp136s2_3e_dc_23_b4_ca_c4 > net_enp136s2f1_c6_12_aa_52_fa_34 > net_enp1s0f0_0c_c4_7a_a4_82_ae > net_enp1s0f1_0c_c4_7a_a4_82_af > net_ens1f0_90_e2_ba_03_4c_c8 > net_ens1f1_90_e2_ba_03_4c_c9 > net_ens2f0_7c_fe_90_12_22_b4 > net_ens2f1_7c_fe_90_12_22_b5 > net_ens2f2_36_cc_24_fb_76_e3 > net_ens2f3_82_42_06_38_9a_b7 > net_ens2f4_7e_c0_bb_98_72_f4 > net_ens2f5_be_9c_1c_25_ff_0d > net_ens2f6_2e_01_90_f8_44_b5 > net_ens2f7_7e_6a_6d_4e_89_1b > pci_0000_00_00_0 > pci_0000_00_01_0 > pci_0000_00_02_0 > pci_0000_00_02_1 > pci_0000_00_02_2 > pci_0000_00_03_0 > pci_0000_00_03_1 > pci_0000_00_03_2 > ... > > Note: documentation says to use vlan but I am using flat networks > > Why is nova looking for a device that libvirt doesn’t know about? > > Thank you very much > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the > addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended > recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this > message in error please notify us at once by return email and then delete both messages. We accept no liability for > the distribution of viruses or similar in electronic communications. This notice should not be removed. > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the > addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended > recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this > message in error please notify us at once by return email and then delete both messages. We accept no liability for > the distribution of viruses or similar in electronic communications. This notice should not be removed. 
From cdent+os at anticdent.org Tue Feb 26 10:54:49 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 26 Feb 2019 10:54:49 +0000 (GMT) Subject: [placement] zuul job dependencies for greater good? In-Reply-To: References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: On Mon, 25 Feb 2019, Eric Fried wrote: >>> -1 to serializing jobs with stop-on-first-failure. Human time (having to >>> iterate fixes one failed job at a time) is more valuable than computer >>> time. That's why we make computers. > > Apologies, I had nova in my head when I said this. For the placement > repo specifically (at least as it stands today), running full tox > locally is very fast, so you really have no excuse for pushing broken > py/func. I would tentatively support stop-on-first-failure in placement > only; but we should be on the lookout for a time when this tips the > balance. (I hope that never happens, and I'm guessing Chris would agree > with that.) I'm still not certain that we're talking about exactly the same thing. My proposal was not stop-on-first-failure. It is: 1. Run all the short duration zuul jobs, in the exact same way they run now: run each individual test, gather all individual failures, any individual test failure annotates the entire job as failed, but all tests are run, all failures are reported. If there is a failure here, zuul quits, votes -1. 2. If (only if) all those short jobs run, automatically run the long duration zuul jobs. If there is a faiulre here, zuul is done, votes -1. 3. If we reach here, zuul is still done, votes +1. This is what https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies provides. In our case we would make the grenade and tempest jobs depend on the success of (most of) the others. (I agree that if the unit and functional tests in placement ever get too slow to be no big deal to run locally, we've made an error that needs to be fixed. Similarly if placement (in isolation) gets too complex to test (and experiment with) in an easy and local fashion, we've also made an error. Plenty of projects need to be more complex than placement and require different modes for experimentation and testing. At least for now, placement does not.) -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From smooney at redhat.com Tue Feb 26 11:04:50 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Feb 2019 11:04:50 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> Message-ID: <0b61cb519bfb7b45a855b2a59aba5cac1b19dead.camel@redhat.com> On Mon, 2019-02-25 at 19:42 -0500, Clark Boylan wrote: > On Mon, Feb 25, 2019, at 12:51 PM, Ben Nemec wrote: > > > > snip > > > That said, I wouldn't push too hard in either direction until someone > > crunched the numbers and figured out how much time it would have saved > > to not run long tests on patch sets with failing unit tests. I feel like > > it's probably possible to figure that out, and if so then we should do > > it before making any big decisions on this. > clark this sound like a interesting topic to dig into in person at the ptg/fourm. 
do you think we could do two things in parallel? 1. find a slot, maybe in the infra track, to discuss this. 2. can we create a new "fast-check" pipeline in zuul so we can do some experiments? if we have a second pipeline with almost identical triggers we can propose in-tree job changes, not merge them, and experiment with how this might work. i can submit a patch to do that to the project-config repo but wanted to check on the ml first. again, to be clear, my suggestion for an experiment is to modify the gate jobs to require approval from zuul in both the check and fast-check pipelines and kick off jobs in both pipelines in parallel, so initially the check pipeline jobs would not be conditional on the fast-check pipeline jobs. the intent is to run exactly the same amount of tests we do today but just to have zuul comment back in two batches, one from each pipeline. as a step two i would also be interested in merging all of the tox env jobs into one. i think that could be done by creating a new job that inherits from the base tox job and just invokes the run playbooks of all the tox- jobs from a single playbook. i can do experiment 2 entirely from the in-repo zuul.yaml file, and i think it would be interesting to do a test with "do not merge" patches to nova or placement and see how that works > For numbers the elastic-recheck tool [0] gives us fairly accurate tracking of which issues in the system cause tests > to fail. You can use this as a starting point to potentially figure out how expensive indentation errors caught by the > pep8 jobs ends up being or how often unittests fail. You probably need to tweak the queries there to get that specific > though. > > Periodically I also dump node resource utilization by project, repo, and job [1]. I haven't automated this because > Tobiash has written a much better thing that has Zuul inject this into graphite and we should be able to set up a > grafana dashboard for that in the future instead. > > These numbers won't tell a whole story, but should paint a fairly accurate high level picture of the types of things > we should look at to be more node efficient and "time in gate" efficient. Looking at these two really quickly myself > it seems that job timeouts are a big cost (anyone looking into why our jobs timeout?). > > [0] http://status.openstack.org/elastic-recheck/index.html > [1] http://paste.openstack.org/show/746083/ > > Hope this helps, > Clark > From smooney at redhat.com Tue Feb 26 11:13:13 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Feb 2019 11:13:13 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> Message-ID: On Tue, 2019-02-26 at 10:54 +0000, Chris Dent wrote: > On Mon, 25 Feb 2019, Eric Fried wrote: > > >>> -1 to serializing jobs with stop-on-first-failure. Human time (having to > >>> iterate fixes one failed job at a time) is more valuable than computer > >>> time. That's why we make computers. > > > > Apologies, I had nova in my head when I said this. For the placement > > repo specifically (at least as it stands today), running full tox > > locally is very fast, so you really have no excuse for pushing broken > > py/func. I would tentatively support stop-on-first-failure in placement > > only; but we should be on the lookout for a time when this tips the > > balance. 
(I hope that never happens, and I'm guessing Chris would agree > > with that.) > > I'm still not certain that we're talking about exactly the same thing. > My proposal was not stop-on-first-failure. It is: > > 1. Run all the short duration zuul jobs, in the exact same way they > run now: run each individual test, gather all individual > failures, any individual test failure annotates the entire job > as failed, but all tests are run, all failures are reported. > If there is a failure here, zuul quits, votes -1. > > 2. If (only if) all those short jobs run, automatically run the long > duration zuul jobs. If there is a faiulre here, zuul is done, > votes -1. ^ is where the stop on first failure comment came from. its technically not first failure but when i rasied this topic in the past there was a strong perfernece to not condtionally skip some jobs if other fail so that the developer gets as much feedback as possible. so the last sentence in the job.dependencies is the contovertial point "... and if one or more of them fail, this job will not be run." tempest jobs are the hardest set of things to run locally and people did not want to skip them for failures in things that are easy to run locally. > > 3. If we reach here, zuul is still done, votes +1. > > This is what > https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies > provides. In our case we would make the grenade and tempest jobs > depend on the success of (most of) the others. > > (I agree that if the unit and functional tests in placement ever get > too slow to be no big deal to run locally, we've made an error that > needs to be fixed. Similarly if placement (in isolation) gets too > complex to test (and experiment with) in an easy and local fashion, > we've also made an error. Plenty of projects need to be more complex > than placement and require different modes for experimentation and > testing. At least for now, placement does not.) > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent From florian.engelmann at everyware.ch Tue Feb 26 12:15:17 2019 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Tue, 26 Feb 2019 13:15:17 +0100 Subject: [ceilometer] radosgw pollster In-Reply-To: References: Message-ID: Hi Christian, Am 2/26/19 um 11:00 AM schrieb Christian Zunker: > Hi Florian, > > which version of OpenStack are you using? > The radosgw metric names were different in some versions: > https://bugs.launchpad.net/ceilometer/+bug/1726458 we do use Rocky and Ceilometer 11.0.1. I am still lost with that error. As far as I am able to understand python it looks like the error is happening in polling.manager line 222: https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 But I do not understand why. I tried to enable debug logging but the error does not log any additional information. The poller is not even trying to reach/poll our RadosGWs. Looks like that manger is blocking those polls. All the best, Florian > > Christian > > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann > >: > > Hi, > > I failed to poll any usage data from our radosgw. I get > > 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.containers.objects in the context of > radosgw_300s_pollsters > 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent > pollster radosgw.containers.objects from polling [ description=, > domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > [...] 
> > id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self': > u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source > radosgw_300s_pollsters anymore!: PollsterPermanentError > > Configurations like: > cat polling.yaml > --- > sources: >      - name: radosgw_300s_pollsters >        interval: 300 >        meters: >          - radosgw.usage >          - radosgw.objects >          - radosgw.objects.size >          - radosgw.objects.containers >          - radosgw.containers.objects >          - radosgw.containers.objects.size > > > Also tried radosgw.api.requests instead of radowsgw.usage. > > ceilometer.conf > [...] > [service_types] > radosgw = object-store > > [rgw_admin_credentials] > access_key = xxxxx0Z0xxxxxxxxxxxx > secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > [rgw_client] > implicit_tenants = true > > Endpoints: > | xxxxxxx | region | swift        | object-store    | True    | admin >   | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | > | xxxxxxx | region | swift        | object-store    | True    | > internal >   | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | > | xxxxxxx | region | swift        | object-store    | True    | public >   | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s       | > > Ceilometer user: > { >      "user_id": "ceilometer", >      "display_name": "ceilometer", >      "email": "", >      "suspended": 0, >      "max_buckets": 1000, >      "auid": 0, >      "subusers": [], >      "keys": [ >          { >              "user": "ceilometer", >              "access_key": "xxxxxxxxxxxxxxxxxx", >              "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" >          } >      ], >      "swift_keys": [], >      "caps": [ >          { >              "type": "buckets", >              "perm": "read" >          }, >          { >              "type": "metadata", >              "perm": "read" >          }, >          { >              "type": "usage", >              "perm": "read" >          }, >          { >              "type": "users", >              "perm": "read" >          } >      ], >      "op_mask": "read, write, delete", >      "default_placement": "", >      "placement_tags": [], >      "bucket_quota": { >          "enabled": false, >          "check_on_raw": false, >          "max_size": -1, >          "max_size_kb": 0, >          "max_objects": -1 >      }, >      "user_quota": { >          "enabled": false, >          "check_on_raw": false, >          "max_size": -1, >          "max_size_kb": 0, >          "max_objects": -1 >      }, >      "temp_url_keys": [], >      "type": "rgw" > } > > > radosgw config: > [client.rgw.xxxxxxxxxxx] > host = somehost > rgw frontends = "civetweb port=7480 num_threads=512" > rgw num rados handles = 8 > rgw thread pool size = 512 > rgw cache enabled = true > rgw dns name = s3.xxxxxx.xxx > rgw enable usage log = true > rgw usage log tick interval = 30 > rgw realm = public > rgw zonegroup = xxx > rgw zone = xxxxx > rgw resolve cname = False > rgw usage log flush threshold = 1024 > rgw usage max user shards = 1 > rgw usage max shards = 32 > rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > rgw_keystone_admin_domain = default > rgw_keystone_admin_project = service > rgw_keystone_admin_user = swift > rgw_keystone_admin_password = > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > rgw_keystone_accepted_roles = member,_member_,admin > rgw_keystone_accepted_admin_roles = admin > rgw_keystone_api_version = 3 > rgw_keystone_verify_ssl = false > 
rgw_keystone_implicit_tenants = true > rgw_keystone_admin_tenant = default > rgw_keystone_revocation_interval = 0 > rgw_keystone_token_cache_size = 0 > rgw_s3_auth_use_keystone = true > rgw_max_attr_size = 1024 > rgw_max_attrs_num_in_req = 32 > rgw_max_attr_name_len = 64 > rgw_swift_account_in_url = true > rgw_swift_versioning_enabled = true > rgw_enable_apis = s3,swift,swift_auth,admin > rgw_swift_enforce_content_length = true > > > > > Any idea whats going on? > > All the best, > Florian > > > -- EveryWare AG Florian Engelmann Senior UNIX Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From zhengzhenyulixi at gmail.com Tue Feb 26 12:40:25 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 26 Feb 2019 20:40:25 +0800 Subject: [nova] Updates about Detaching/Attaching root volumes Message-ID: Hi Nova, I'm working on a blueprint to support Detach/Attach root volumes. The blueprint has been proposed for quite a while since mitaka[1] in that version of proposal, we only talked about instances in shelved_offloaded status. And in Stein[2] the status of stopped was also added. But now we realized that support detach/attach root volume on a stopped instance could be problemastic since the underlying image could change which might invalidate the current host.[3] So Matt and Sean suggested maybe we could just do it for shelved_offloaded instances, and I have updated the patch according to this comment. And I will update the spec latter, so if anyone have thought on this, please let me know. Another thing I wanted to discuss is that in the proposal, we will reset some fields in the root_bdm instead of delete the whole record, among those fields, the tag field could be tricky. My idea was to reset it too. But there also could be cases that the users might think that it would not change[4]. Thoughts, BR, [1] http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/detach-boot-volume.html#proposed-change [2] http://specs.openstack.org/openstack/nova-specs/specs/stein/approved/detach-boot-volume.html#proposed-change [3] https://review.openstack.org/#/c/614750/34/nova/compute/manager.py at 5467 [4] https://review.openstack.org/#/c/614750/37/nova/objects/block_device.py -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Feb 26 12:43:34 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Feb 2019 21:43:34 +0900 Subject: [dev][qa][tempest-plugins] Fixing the setting and usage of service_availability config option in tempest plugins Message-ID: <16929d4ee94.12627d43146118.9136819799762230061@ghanshyammann.com> HI All, After we split all the tempest plugins into the separate repo, service_available. registration is in happening in the corresponding tempest plugins with default value as true. Example [1]. This leave service_available. config options might have incorrect value and its incorrect usage when it is used in cross tempest plugin. Bugs[2] Recently Octavia-tempest-plugin faced the issue of duplicating the registration of same config option[3]. 
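For context, the registration pattern in question (paraphrased from the barbican example in [1], not copied verbatim) looks roughly like this today, with the default hard-coded to True whether or not the service is actually deployed:

    from oslo_config import cfg

    # registered by the plugin under the [service_available] group,
    # so tests can skip based on CONF.service_available.barbican
    service_option = cfg.BoolOpt('barbican',
                                 default=True,
                                 help='Whether or not barbican is expected '
                                      'to be available')

The plan below is to flip that default to False and let each service's devstack plugin set it to True only when the service is really deployed.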
Previously Devstack used to set all the service_available config option which will be removed as devstack should be responsible to take care of 6 services only owned by Tempest and rest all goes on the service's devstack plugin side [4]. Current problem: 1. service_available. default value is True and not set based on actual service availability. 2. tempest plugins need required tempest plugin to installed which register service_available. config option but they are not in requirement.txt. To fix both problems, we need to do fixes in service and tempest plugin side. 1. Devstack should set only tempest own services - https://review.openstack.org/#/c/619973/ 2. each plugin to set their service as service_availability in their devstack plugin - In progress 3. Change the default value of service_available. config option default value to false. 4. Add used tempest plugins in tempest plugins's requirements.txt after adding it in g-r. step#3 which change the config option default value from True to False will be done after step#2, to avoid any backward incompatible changes in term of skip the tests in any job. Third party CI reply on that default value has to set its value based on actual service availability check if they do not do currently. I am going to push the patches on service and plugin side, please let me know your opinion, feedback or any concern on this approach. [1] https://git.openstack.org/cgit/openstack/barbican-tempest-plugin/tree/barbican_tempest_plugin/config.py#n19 [2] https://bugs.launchpad.net/congress/+bug/1743688 [3] https://bugs.launchpad.net/tripleo/+bug/1817154 [4] https://review.openstack.org/#/c/619973/ -gmann From gr at ham.ie Tue Feb 26 13:15:59 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 26 Feb 2019 13:15:59 +0000 Subject: [tc][election] New series of campaign questions In-Reply-To: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Message-ID: <4e96be2b-1abf-57f8-3e42-dfd50224ec8a@ham.ie> On 25/02/2019 09:28, Jean-Philippe Evrard wrote: > Hello, > > Here are my questions for the candidates. Keep in mind some might overlap with existing questions, so I would expect a little different answer there than what was said. Most questions are intentionally controversial and non-strategic, so please play this spiritual game openly as much as you can (no hard feelings!). > > The objective for me with those questions is not to corner you/force you implement x if you were elected (that would be using my TC hat for asking you questions, which I believe would be wrong), but instead have a glimpse on your mindset (which is important for me as an individual member in OpenStack). It's more like the "magic wand" questions. After this long introduction, here is my volley of questions. > > A) In a world where "general" OpenStack issues/features are solved through community goals, do you think the TC should focus on "less interesting" technical issues across projects, like tech debt reduction? Or at the opposite, do you think the TC should tackle the hardest OpenStack wide problems? Both - the TC should be helping to highlight where work needs to be done, and that can include debt cleanup, and cross project features. (that said, tech debt is probably one of the hardest OpenStack wide problems). 
I think I have seen this said somewhere in the mountain of election emails sent this year, but the TC needs to be a group that enables the community to do things, be that debt cleanup like py3 support, user documentation like api-ref, or cross project features like volume multi attach. > B) Do you think the TC must check and actively follow all the official projects' health and activities? Why? Yes, I think it is important to follow - we don't want the first time we know a project is in trouble is the "we have a single volunteer as a developer" email / blog post. > C) Do you think the TC's role is to "empower" project and PTLs? If yes, how do you think the TC can help those? If no, do you think it would be the other way around, with PTLs empowering the TC to achieve more? How and why? Yes - but it is also to empower users and other people in the community. > D) Do you think the community goals should be converted to a "backlog"of time constrained OpenStack "projects", instead of being constrained per cycle? (with the ability to align some goals with releasing when necessary) No. I think having these goals limited to a cycle means that there is a lot more of a chance that projects will actually get them done. If we allow for them to be 2,3,4 cycles long, I think we will loose the critical mass of projects completing them. That is not to say we can't have things that are important that run for 2+ cycles, but I do not think they should be goals. They could definitely have a goal as an endpoint to finish off a larger effort, but not as a multi cycle goal. > E) Do you think we should abandon projects' ML tags/IRC channels, to replace them by focus areas? For example, having [storage] to group people from [cinder] or [manila]. Do you think that would help new contributors, or communication in the community? Abandon, no. Use in addition to the current set, yes. - See the bare metal SIG for how we can do cross project, focused development. > F) There can be multiple years between a "user desired feature across OpenStack projects", and its actual implementation through the community goals. How do you think we can improve? This is a hard issue. Due to the size of the project, and differing priorities of each set of developers, getting a single unified roadmap for user requested features is hard. Combine this with the criticality of what our software is used for, and you have a perfect storm of long pipelines for new versions (on the order of years, not cycles), and the feedback loop is too long for a "move fast and break things" mentality. By the time the users have the feature (even if we did get it completed in a cycle), the people who worked on the feature, documented it, tested it and guided it, are possibly moved on to something else, or have lost the short term context to deal with the feedback. > G) What do you think of the elections process for the TC? Do you think it is good enough to gather a team to work on hard problems? Or do you think electing person per person have an opposite effect, highlighting individuals versus a common program/shared objectives? Corollary: Do you think we should now elect TC members by groups (of 2 or 3 persons for example), so that we would highlight their program vs highlight individual ideas/qualities? I think what we have is the best of a bad selection. Lists are inherently exclusionary, and tend to make sure that views and groups are entrenched (see mainland EU parties). 
While the current process can create name recognition based voting, the name recognition is usually associated with someone who did something cross project, or in a larger project, which can mean that they have a good view of how things are working. > Thanks for your patience, and thanks for your application! > > Regards, > Jean-Philippe Evrard (evrardjp) > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Tue Feb 26 13:21:22 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 26 Feb 2019 07:21:22 -0600 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: References: Message-ID: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> On 2/26/2019 6:40 AM, Zhenyu Zheng wrote: > I'm working on a blueprint to support Detach/Attach root volumes. The > blueprint has been proposed for quite a while since mitaka[1] in that > version of proposal, we only talked about instances in shelved_offloaded > status. And in Stein[2] the status of stopped was also added. But now we > realized that support detach/attach root volume on a stopped instance > could be problemastic since the underlying image could change which > might invalidate the current host.[3] > > So Matt and Sean suggested maybe we could just do it for > shelved_offloaded instances, and I have updated the patch according to > this comment. And I will update the spec latter, so if anyone have > thought on this, please let me know. I mentioned this during the spec review but didn't push on it I guess, or must have talked myself out of it. We will also have to handle the image potentially changing when attaching a new root volume so that when we unshelve, the scheduler filters based on the new image metadata rather than the image metadata stored in the RequestSpec from when the server was originally created. But for a stopped instance, there is no run through the scheduler again so I don't think we can support that case. Also, there is no real good way for us (right now) to even compare the image ID from the new root volume to what was used to originally create the server because for volume-backed servers the RequestSpec.image.id is not set (I'm not sure why, but that's the way it's always been, the image.id is pop'ed from the metadata [1]). And when we detach the root volume, we null out the BDM.volume_id so we can't get back to figure out what that previous root volume's image ID was to compare, i.e. for a stopped instance we can't enforce that the underlying image is the same to support detach/attach root volume. We could probably hack stuff up by stashing the old volume_id/image_id in system_metadata but I'd rather not play that game. It also occurs to me that the root volume attach code is also not verifying that the new root volume is bootable. So we really need to re-use this code on root volume attach [2]. tl;dr when we attach a new root volume, we need to update the RequestSpec.image (ImageMeta) object based on the new root volume's underlying volume_image_metadata so that when we unshelve we use that image rather than the original image. > > Another thing I wanted to discuss is that in the proposal, we will reset > some fields in the root_bdm instead of delete the whole record, among > those fields, the tag field could be tricky. My idea was to reset it > too. But there also could be cases that the users might think that it > would not change[4]. 
Yeah I am not sure what to do here. Here is a scenario: User boots from volume with a tag "ubuntu1604vol" to indicate it's the root volume with the operating system. Then they shelve offload the server and detach the root volume. At this point, the GET /servers/{server_id}/os-volume_attachments API is going to show None for the volume_id on that BDM but should it show the original tag or also show None for that. Kevin currently has the tag field being reset to None when the root volume is detached. When the user attaches a new root volume, they can provide a new tag so even if we did not reset the tag, the user can overwrite it. As a user, would you expect the tag to be reset when the root volume is detached or have it persist but be overwritable? If in this scenario the user then attaches a new root volume that is CentOS or Ubuntu 18.04 or something like that, but forgets to update the tag, then the old tag would be misleading. So it is probably safest to just reset the tag like Kevin's proposed code is doing, but we could use some wider feedback here. [1] https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943 [2] https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101 -- Thanks, Matt From wbedyk at suse.com Tue Feb 26 13:21:51 2019 From: wbedyk at suse.com (Witek Bedyk) Date: Tue, 26 Feb 2019 14:21:51 +0100 Subject: =?UTF-8?Q?Re=3a_=5bopenstack-dev=5d_=5bMonasca=5d_How_to_get_?= =?UTF-8?Q?=e2=80=9caggregated_value_of_one_metric_statistics=e2=80=9d_=3f?= In-Reply-To: References: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> Message-ID: <38b47017-7a26-bb6f-6aff-168d16ebc4b0@suse.com> > [1] In below example , trying to get last ~24.5 days of metrics  from > particular tenant , But I can see 2019-01-26 data . With UTC_START_TIME > “2019-02-01T00:00:00Z”____ Hi Mohankumar, the problem occurs when the period is (almost) >= then the evaluated dataset. Try moving the start point earlier in the past (e.g. one month before) or decrease the period and you'll get correct results. It might be a bug in InfluxDB. > [2] does *--merge_metrics ***not**Merge multiple metrics into a single > result ?____ `merge_metrics` can be understood as wildcarding on dimensions. Please note that in combination with `group_by` it has no effect. Please compare these examples: # monasca metric-statistics --merge_metrics disk.space_used_perc avg,max,min -2 Will result in just one value for all metrics with any combination of dimensions. # monasca metric-statistics --group_by mount_point,device disk.space_used_perc avg,max,min -2 Will result in one value for every unique combination of metric name and `mount_point`, `device` dimension values. I hope it helps, Witek From chkumar246 at gmail.com Tue Feb 26 13:41:09 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 26 Feb 2019 19:11:09 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update XII - Feb 26, 2019 Message-ID: Hello, Here is the 11th update (Feb 20 to Feb 26, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: * os_tempest got the support of enabling/disabling stackviz report generation It will help end-user to save them for generating stackviz report each time while testing locally. * If OVN is used always select tempest_private_net_provider_type to geneve while running os_tempest or tempest with different installer. 
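As a rough illustration of those two items, they surface as role variables (names taken from the reviews listed under "Things got merged" below; the values are just an example of a local override, not defaults):

    # example user variables for the os_tempest role
    tempest_run_stackviz: false                  # skip stackviz report generation
    tempest_private_net_provider_type: geneve    # needed when neutron is backed by OVN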
Things got merged os_tempest * Revert "Only init a workspace if doesn't exists" - https://review.openstack.org/637801 * Add iputils to use the ping command - https://review.openstack.org/638189 * Fix redhat iputtils - https://review.openstack.org/638444 * Remove the private option from include_role - https://review.openstack.org/638557 * use tempest_run_stackviz to generate stackviz report - https://review.openstack.org/638360 tripleo * Set tempest_private_net_provider_type to geneve for os_tempest - https://review.openstack.org/637838 Things in progress: os_tempest * update depenencies for os_tempest: https://review.openstack.org/#/q/topic:os_tempest_deps+(status:open+OR+status:merged) * Enable heat support in os_tempest: https://review.openstack.org/#/q/topic:os_tempest_heat+(status:open+OR+status:merged) * Update workspace tempest.conf on changes - https://review.openstack.org/638014 Some interesting stuff coming: * since os_tempest is integrated with tripleo, now we have os_tempest playbook where we have more than 19 vars passed to os_tempest in order to run it. Where we are going to keep those vars so that end user can run it easily. Related reviews: * os_tempest - move deployment tool related vars to vars dir - https://review.openstack.org/639258 * tripleo - Standardize os_tempest playbook and set vars dynamically - https://review.openstack.org/639310 * whether to ship config_template action plugin as pypi deps - https://review.openstack.org/#/c/638383/ Feel free to join today's OSA meeting on #openstack-ansible channel at 16:00 UTC, we will try to get some interesting discussions. Thanks to cjloader (for iputils patch), arxcruz & jrosser on stackviz, evrardjp on config_template as os_tempest deps and cloudnull for always reviewing our patches. Here is the 11th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002860.html Thanks, Chandan Kumar From mriedemos at gmail.com Tue Feb 26 13:45:40 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 26 Feb 2019 07:45:40 -0600 Subject: [nova] nova spec show-server-group response format In-Reply-To: References: Message-ID: This is coming up now because of my questions in the code review: https://review.openstack.org/#/c/621474/23/api-ref/source/parameters.yaml at 5869 On 2/25/2019 11:53 PM, yonglihe wrote: > The approved spec show-server-group had 2 options for response. > > 1. First one(current spec): > >          "server": { >             "server_groups": [ # not cached >                    "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8" >             ] >         } >    } > > related discuss: > https://review.openstack.org/#/c/612255/11/specs/stein/approved/show-server-group.rst at 67 > > digest:  This  decouple the current  implementation of server groups > then get a  generic API. Jay pushed for this on the spec review because it future-proofs the API in case a server can ever be in more than one group (currently it cannot). When I was reviewing the code this was the first thing that confused me (before I knew about the discussion on the spec) because I knew that a server can only be in at most one server group, and I think showing a list is misleading to the user. 
Similarly, before 2.64 the os-server-groups API had a "policies" parameter which could only ever have exactly one entry in it, and in 2.64 that was changed to just be "policy" to reflect the actual usage. I don't think we're going to have support for servers in multiple groups anytime soon, so I personally don't think we need to future-proof the servers API response with a potentially misleading type (array) when we know the server can only ever be in one group. If we were to add multi-group support in the future, we could revisit this at the same time but I'm not holding my breath given previous attempts. > > > 2 Second one: > >         "server": { >             "server_group": { >                 "name": "groupA", >                 "id": "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8" >             } > > related discuss: > https://review.openstack.org/#/c/612255/4/specs/stein/approved/list-server-group.rst at 62 This is the format I think we should use since it shows the actual cardinality of server to group we support today. By the way, I also think we should return this for GET /server/{server_id} responses for servers in down cells: https://review.openstack.org/#/c/621474/23/nova/api/openstack/compute/views/servers.py at 203 Since the group information is in the API DB there isn't much reason *not* to return that information in both the up and down cell cases. > > digest: people have tried to change the api to allow adding/removing > servers to/from groups, but still not implement yet. > > > we need align this for continuing this work. thanks. > > > Reference: > > bp: https://blueprints.launchpad.net/nova/+spec/show-server-group > > spec: > https://review.openstack.org/#/c/612255/13/specs/stein/approved/show-server-group.rst > > code: https://review.openstack.org/#/c/621474/23 > -- Thanks, Matt From smooney at redhat.com Tue Feb 26 13:50:11 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Feb 2019 13:50:11 +0000 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: <04072044e20fa2bc432e8ddd26790a152bac9308.camel@redhat.com> On Tue, 2019-02-26 at 07:21 -0600, Matt Riedemann wrote: > On 2/26/2019 6:40 AM, Zhenyu Zheng wrote: > > I'm working on a blueprint to support Detach/Attach root volumes. The > > blueprint has been proposed for quite a while since mitaka[1] in that > > version of proposal, we only talked about instances in shelved_offloaded > > status. And in Stein[2] the status of stopped was also added. But now we > > realized that support detach/attach root volume on a stopped instance > > could be problemastic since the underlying image could change which > > might invalidate the current host.[3] > > > > So Matt and Sean suggested maybe we could just do it for > > shelved_offloaded instances, and I have updated the patch according to > > this comment. And I will update the spec latter, so if anyone have > > thought on this, please let me know. > > I mentioned this during the spec review but didn't push on it I guess, > or must have talked myself out of it. We will also have to handle the > image potentially changing when attaching a new root volume so that when > we unshelve, the scheduler filters based on the new image metadata > rather than the image metadata stored in the RequestSpec from when the > server was originally created. 
But for a stopped instance, there is no > run through the scheduler again so I don't think we can support that > case. Also, there is no real good way for us (right now) to even compare > the image ID from the new root volume to what was used to originally > create the server because for volume-backed servers the > RequestSpec.image.id is not set (I'm not sure why, but that's the way > it's always been, the image.id is pop'ed from the metadata [1]). And > when we detach the root volume, we null out the BDM.volume_id so we > can't get back to figure out what that previous root volume's image ID > was to compare, i.e. for a stopped instance we can't enforce that the > underlying image is the same to support detach/attach root volume. We > could probably hack stuff up by stashing the old volume_id/image_id in > system_metadata but I'd rather not play that game. > > It also occurs to me that the root volume attach code is also not > verifying that the new root volume is bootable. So we really need to > re-use this code on root volume attach [2]. > > tl;dr when we attach a new root volume, we need to update the > RequestSpec.image (ImageMeta) object based on the new root volume's > underlying volume_image_metadata so that when we unshelve we use that > image rather than the original image. > > > > > Another thing I wanted to discuss is that in the proposal, we will reset > > some fields in the root_bdm instead of delete the whole record, among > > those fields, the tag field could be tricky. My idea was to reset it > > too. But there also could be cases that the users might think that it > > would not change[4]. > > Yeah I am not sure what to do here. Here is a scenario: > > User boots from volume with a tag "ubuntu1604vol" to indicate it's the > root volume with the operating system. Then they shelve offload the > server and detach the root volume. At this point, the GET > /servers/{server_id}/os-volume_attachments API is going to show None for > the volume_id on that BDM but should it show the original tag or also > show None for that. Kevin currently has the tag field being reset to > None when the root volume is detached. > > When the user attaches a new root volume, they can provide a new tag so > even if we did not reset the tag, the user can overwrite it. As a user, > would you expect the tag to be reset when the root volume is detached or > have it persist but be overwritable? > > If in this scenario the user then attaches a new root volume that is > CentOS or Ubuntu 18.04 or something like that, but forgets to update the > tag, then the old tag would be misleading. > > So it is probably safest to just reset the tag like Kevin's proposed > code is doing, but we could use some wider feedback here. the only thing i can tink of would be to have a standard "root" or "boot" tag that we apply to root volume and encourage uses to use that. but i dont know of a better way to do it generically so reseting is proably as sane as anything else. 
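For illustration, the "reset the root BDM instead of deleting it"
behaviour being discussed boils down to something like the following
rough sketch (hypothetical stand-in code, not Kevin's actual patch; only
the volume_id and tag fields mentioned above are modelled):

    # RootBDM is a stand-in for nova's BlockDeviceMapping object.
    class RootBDM:
        def __init__(self, volume_id, tag):
            self.volume_id = volume_id
            self.tag = tag

    def detach_root_volume(bdm: RootBDM) -> None:
        # Keep the BDM record (so a new root volume can be attached to
        # the shelved-offloaded server later) but clear the volume
        # reference; os-volume_attachments then shows None for volume_id.
        bdm.volume_id = None
        # The open question: also reset the tag. Clearing it avoids a
        # stale "ubuntu1604vol" tag becoming misleading once a different
        # image is attached, and the user can pass a fresh tag on the
        # next root volume attach anyway.
        bdm.tag = None

    bdm = RootBDM(volume_id='some-volume-uuid', tag='ubuntu1604vol')
    detach_root_volume(bdm)
    assert bdm.volume_id is None and bdm.tag is None
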
> > [1] > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943 > [2] > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101 > From ryan.beisner at canonical.com Tue Feb 26 13:57:01 2019 From: ryan.beisner at canonical.com (Ryan Beisner) Date: Tue, 26 Feb 2019 14:57:01 +0100 Subject: [charms] removal reminder Message-ID: Hi All, This is a courtesy reminder that the following charm projects were deprecated [1], received final releases with the 18.11 OpenStack Charms Release, and will be removed prior to the Stein / 19.04 OpenStack Charms releases. [1] https://docs.openstack.org/charm-guide/latest/1811.html charm-glusterfs (unmaintained) charm-interface-odl-controller-api (unmaintained) charm-manila-glusterfs (unmaintained) charm-murano (unmaintained) charm-neutron-api-odl (unmaintained) charm-nova-compute-proxy (unmaintained) charm-odl-controller (unmaintained) charm-openvswitch-odl (unmaintained) charm-trove (unmaintained) Cheers, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmohankumar1011 at gmail.com Mon Feb 25 14:19:43 2019 From: nmohankumar1011 at gmail.com (Mohan Kumar) Date: Mon, 25 Feb 2019 19:49:43 +0530 Subject: =?UTF-8?Q?=5Bopenstack=2Ddev=5D_=5BMonasca=5D_How_to_get_=E2=80=9Caggregated_v?= =?UTF-8?Q?alue_of_one_metric_statistics=E2=80=9D_=3F?= In-Reply-To: References: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> Message-ID: Hi Team, How to get “aggregated value of one metric statistics” from starting of month to till now . If I try to group metrics using * --period* based on timestamp it including data from previous month metrics as well [1] In below example , trying to get last ~24.5 days of metrics from particular tenant , But I can see 2019-01-26 data . With UTC_START_TIME “2019-02-01T00:00:00Z” [2] does *--merge_metrics * not Merge multiple metrics into a single result ? Please suggest how to customise my API call to get “AVG (aggregated) value of one metric statistics” from starting of month to till now . *Regards.,* Mohankumar N "*Confidentiality Warning*: This message and any attachments are intended only for the use of the intended recipient(s), are confidential and may be privileged. If you are not the intended recipient, you are hereby notified that any review, re-transmission, conversion to hard copy, copying, circulation or other use of this message and any attachments is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return email and delete this message and any attachments from your system. *Virus Warning:* Although the company has taken reasonable precautions to ensure no viruses are present in this email. The company cannot accept responsibility for any loss or damage arising from the use of this email or attachment." -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 60195 bytes Desc: not available URL: From wbedyk at suse.com Mon Feb 25 17:13:06 2019 From: wbedyk at suse.com (Witek Bedyk) Date: Mon, 25 Feb 2019 18:13:06 +0100 Subject: =?UTF-8?Q?Re=3a_=5bopenstack-dev=5d_=5bMonasca=5d_How_to_get_?= =?UTF-8?Q?=e2=80=9caggregated_value_of_one_metric_statistics=e2=80=9d_=3f?= In-Reply-To: References: <42188d5217d44601b282dbe78e50ff4f@SIDC1EXMBX27.in.ril.com> Message-ID: > [1] In below example , trying to get last ~24.5 days of metrics  from > particular tenant , But I can see 2019-01-26 data . With UTC_START_TIME > “2019-02-01T00:00:00Z”____ Hi Mohankumar, the problem occurs when the period is (almost) >= then the evaluated dataset. Try moving the start point earlier in the past (e.g. one month before) or decrease the period and you'll get correct results. It might be a bug in InfluxDB. > [2] does *--merge_metrics ***not**Merge multiple metrics into a single > result ?____ `merge_metrics` can be understood as wildcarding on dimensions. Please note that in combination with `group_by` it has no effect. Please compare these examples: # monasca metric-statistics --merge_metrics disk.space_used_perc avg,max,min -2 Will result in just one value for all metrics with any combination of dimensions. # monasca metric-statistics --group_by mount_point,device disk.space_used_perc avg,max,min -2 Will result in one value for every unique combination of metric name and `mount_point`, `device` dimension values. I hope it helps, Witek From yih.leong.sun at intel.com Mon Feb 25 17:50:53 2019 From: yih.leong.sun at intel.com (Sun, Yih Leong) Date: Mon, 25 Feb 2019 17:50:53 +0000 Subject: [User-committee] UC Feb 2019 Election results In-Reply-To: <5C741574.4080706@openstack.org> References: <39E08158-D3F9-41E1-9C93-1CF696737412@ieee.org> <5C741574.4080706@openstack.org> Message-ID: Congratulations to new UC members. Glad to be part of UC team over the years. ☺ From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, February 25, 2019 8:19 AM To: Amy Marrich Cc: Armstrong ; Mohamed Elsakhawy ; openstack-discuss at lists.openstack.org; user-committee ; Jonathan Proulx Subject: Re: [User-committee] UC Feb 2019 Election results Congratulations to all of our candidates! And yes, thank you Leong for doing such great work with the Financial WG, among many other things. Cheers, Jimmy Amy Marrich February 25, 2019 at 10:05 AM Thanks everyone who voted and I look forward to serving another term on the UC. I also wanted to say thank you to Leong for all the hard work he’s done on the UC in the past. Thanks, Amy On Feb 24, 2019, at 9:00 AM, Armstrong > wrote: A big congrats to Amy Marrich et al. Regards, Armstrong On Feb 24, 2019, at 07:11, Mohamed Elsakhawy > wrote: Good Afternoon all On behalf of the User Committee Elections officers, I am pleased to announce the results of the UC elections for Feb 2019. 
Please join me in congratulating the winners of the 3 seats : - Amy Marrich - Belmiro Moreira - John Studarus Thank you to all of the candidates and all of you who voted * https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee Mohamed Elsakhawy February 24, 2019 at 6:11 AM Good Afternoon all On behalf of the User Committee Elections officers, I am pleased to announce the results of the UC elections for Feb 2019. Please join me in congratulating the winners of the 3 seats : - Amy Marrich - Belmiro Moreira - John Studarus Thank you to all of the candidates and all of you who voted * https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_8760d5969c6275f1&rkey=75d7d496f7e50780 _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesse at odyssey4.me Tue Feb 26 14:20:56 2019 From: jesse at odyssey4.me (Jesse Pretorius) Date: Tue, 26 Feb 2019 14:20:56 +0000 Subject: [requirements][requests] security update for requests in stable branches In-Reply-To: <20190215202439.lestrzhp3vlryway@yuggoth.org> References: <20190215072749.k34tdrnapanietk5@mthode.org> <20190215180116.jhuuza7jdmpzmq6p@yuggoth.org> <20190215181711.7xjsdcoz2fcoe6vn@yuggoth.org> <20190215202439.lestrzhp3vlryway@yuggoth.org> Message-ID: > Updating dependencies on stable branches makes for a moving target, > and further destabilizes testing on releases which have a hard time > getting maintainers to keep their testing viable at all. FWIW, I've proposed a change to upper-constraints only: https://review.openstack.org/639340 If this is deemed something that should not be merged, then I'll submit a change to OSA which will achieve the same thing. However, I do believe that there is more value across the broader community to have this in upper-constraints where it can be consumed by everyone. From rahulgupta.jsr at gmail.com Tue Feb 26 14:42:13 2019 From: rahulgupta.jsr at gmail.com (rahul gupta) Date: Tue, 26 Feb 2019 20:12:13 +0530 Subject: Not able to collect cpu_util metric In-Reply-To: References: Message-ID: Hi All, I have created a question here: https://answers.launchpad.net/ceilometer/+question/678802 Can you please respond. I have been scanning through all the web to get info regarding how can I calculate / read cpu_util metric. I am using devstack on latest *master*. Please let me know if you need any other details. Question Posted: I am trying to setup autoscale stack using open stack. I am having problem reading cpu_util value from the instances. 
Error: Feb 26 07:21:43 rgupta-op-stack ceilometer-polling[8197]: 2019-02-26 07:21:43.146 8476 DEBUG ceilometer.compute.pollsters [-] 6e982e28-4cb4-42b2-b608-be4ec4ce1d35/cpu_util volume: Unavailable _stats_to_sample /opt/stack/ceilometer/ceilometer/compute/pollsters/__init__.py:113 Feb 26 07:21:43 rgupta-op-stack ceilometer-polling[8197]: 2019-02-26 07:21:43.146 8476 WARNING ceilometer.compute.pollsters [-] cpu_util statistic in not available for instance 6e982e28-4cb4-42b2-b608-be4ec4ce1d35: NoVolumeException rgupta at rgupta-op-stack:~$ openstack --version openstack 3.17.0 rgupta at rgupta-op-stack:~$ openstack metric list | grep cpu | 70b2d180-6902-42b7-abaf-e34f89aafd2e | ceilometer-low | cpu | ns | 6e982e28-4cb4-42b2-b608-be4ec4ce1d35 | | fa3239df-9bfe-4cf4-a267-106f23fbcb0b | ceilometer-low | vcpus | vcpu | 5fce3174-40ab-484d-8da1-6ccbc8017cc6 | How do I add a new metric cpu_util to this list? Can someone point to a documentation with example. I have read through the metrics.yaml documentation but not sure how to see the parameters that are part of payload? Thanks Rahul -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Feb 26 14:50:55 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 26 Feb 2019 09:50:55 -0500 Subject: [nova][neutron][os-vif] upcoming release of os-vif In-Reply-To: References: Message-ID: <2539100e-2142-fdc4-9e0f-03d1bc3a6112@gmail.com> quick update... On 02/25/2019 05:55 PM, Sean Mooney wrote: > just a quick update on where we are. > > the brctl patches have now merged so we have reached our MVP for stein. > > we are going to hold the release for another 18 hours or so to allow > the ovsdb python lib feature to merged. > > the new grouping looks like this > > required: > Add native implementation OVSDB API https://review.openstack.org/482226 Above is approved and sent to test pits. > prefer to merge: > make functional tests run on python 3 https://review.openstack.org/638053 > docs: Add API docs for VIF types https://review.openstack.org/637009 > docs: Add API docs for profile, datapath offload types https://review.openstack.org/638395 All of the above approved and off to test pits as well. Best, -jay From mihalis68 at gmail.com Tue Feb 26 14:55:28 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 26 Feb 2019 09:55:28 -0500 Subject: [ops] ops meetups team meeting in 5 minutes - Berlin NEXT WEEK! Message-ID: Those of you interested or attending, please join us on #openstack-operators to have a final chat about next week's ops meetup in Berlin. Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Feb 26 15:13:53 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 26 Feb 2019 23:13:53 +0800 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: On Wed, Feb 20, 2019 at 10:47 PM Chris Dent wrote: > * How do you account for the low number of candidates? Do you > consider this a problem? Why or why not? I mostly concern about why previous TCs stop to consider join the election again? Multiple reasons indeed, but none of them concerns me. We need whoever qualified as a candidate to keep jump out and help. Still, this is a problem IMO, because we keep running our election between those we know how OpenStack community works here. And I sense more new joiner didn't actually know what exactly is TC or UC. There just to much history. 
more committee report outside of ML might help with that situation? > > * Compare and contrast the role of the TC now to 4 years ago. If you > weren't around 4 years ago, comment on the changes you've seen > over the time you have been around. In either case: What do you > think the TC role should be now? I like some of the current tasks, including keep update our governance, but I would also like TC's role including reach out (Which I believe some TCs done a great job). We must keep reaching out with other parts of the community so we can actually make sure all flow is working. Like how can we get users/ops to provide experience, how can we get people from the global community to better understand what we needed now and how they can help, or how can we help with teams, UCs, etc. to fill up the gape between. > > * What, to you, is the single most important thing the OpenStack > community needs to do to ensure that packagers, deployers, and > hobbyist users of OpenStack are willing to consistently upstream > their fixes and have a positive experience when they do? What is > the TC's role in helping make that "important thing" happen? To guide them, and simplify their way through. Organization guideline [1] is definitely a star, if we actually plan to promote it. As for simplifying the way, we have too many ways to achieve too many things, but lack of plans to integrate or even lack documentation to show them why we got so many things. The most famous words we using is `that's because of some history`, but when we fail on provides a guideline or future plan, history will keep going. Which is up to TC to guiding teams to achieve that goal. > > * If you had a magic wand and could inspire and make a single > sweeping architectural or software change across the services, > what would it be? For now, ignore legacy or upgrade concerns. > What role should the TC have in inspiring and driving such > changes? remove rabbitmq dependancy, centralize API implementation (import oslo.api), integrate TCs with UCs into a single commitee(consider that as community architectural:) ). BTW, after I'm done with it, I might also think about selling it. :) > > * What can the TC do to make sure that the community (in its many > dimensions) is informed of and engaged in the discussions and > decisions of the TC? The best way is to post on the place people actually read, also on the place people might be interested to learn more about us. It requires collaboration all the way from TC to end users (like user groups, openInfra events, or enterprises). I mean TC+UC kind of collaboration. > > * How do you counter people who assert the TC is not relevant? > (Presumably you think it is, otherwise you would not have run. If > you don't, why did you run?) That's why we need to keep document things down and keep making OpenStack as success as it can be. So people get to know better if it's relevant to them or not. Try to figure out if we didn't provide enough information or action to let people understand the scope of TC's responsibility. > > That's probably more than enough. Thanks for your attention. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent [1] https://docs.openstack.org/contributors/organizations. -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
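To make the kwargs-to-positional behaviour described above concrete, here
is a minimal, self-contained reproduction sketch (it is not the oslo.cache
test itself; the in-memory backend and the function below are just
stand-ins):

    from dogpile.cache import make_region

    # Simple in-memory region, no external cache service needed.
    region = make_region().configure('dogpile.cache.memory')

    @region.cache_on_arguments()
    def cacheable_function(value):
        return value

    # dogpile.cache < 0.7.0: the default key generator refuses keyword
    # arguments, so this call raises ValueError.
    # dogpile.cache >= 0.7.0: the `decorator` module preserves the
    # function signature, `value=42` is bound as a positional argument
    # before the key generator ever sees it, and the call succeeds --
    # which is why the assertRaises(ValueError) in
    # test_function_key_generator_with_kwargs no longer passes.
    print(cacheable_function(value=42))
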
URL: From ed at leafe.com Tue Feb 26 15:19:15 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 26 Feb 2019 09:19:15 -0600 Subject: [tc] Questions for TC Candidates In-Reply-To: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> References: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> Message-ID: <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> On Feb 25, 2019, at 8:01 PM, Zane Bitter wrote: > > This is a problem for OpenStack, for at least the reason you mentioned above: TC members don't have much of a mandate if they didn't actually have an election. That’s a good point: do you (all candidates; not just Zane) see the election as being a mandate for specific things? Candidates run on different platforms, expressing different desires for changes they would like to make. Do you see the result of a TC election as a mandate to go out and do those things, and not to do the things that the losing candidates espoused? The counter, and extremely cynical, argument here is that people don’t really weigh the specific proposals of the individual candidates and choose those most in alignment with their feelings, but instead choose people who they either a) worked with at some point and didn’t find them to be a jerk, or b) have seen their name around for a while, and figure they must know what’s going on, or c) have the same employer, or d) some other non-issue-related reason. If this cynical point of view is closer to how you see reality, does that represent a mandate at all? -- Ed Leafe From hberaud at redhat.com Tue Feb 26 15:24:43 2019 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 26 Feb 2019 16:24:43 +0100 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors Message-ID: Hi, Just a heads up that the latest version of dogpile (0.7.0 onwards) have become incompatible with oslo.cache. This is causing a few issues for jobs. It's a little complex due to functional code and many decorated functions. The error you will see is: *oslo_cache.**tests.test_**cache.CacheRegi**onTest.**test_function_* *key_generator_* *with_kwargs -------**-------**-------**-------**-------**-------**-------* *-------**-------**-------**-------**------* *Captured traceback: ~~~~~~~~~~~~~~~~~~~ b'Traceback (most recent call last):' b' File "/tmp/oslo.**cache/oslo_**cache/tests/**test_cache.**py", line 324, in test_function_**key_generator_* *with_kwargs' b' value=self.**test_value)* *' b' File "/tmp/oslo.**cache/.**tox/py37/**lib/python3.**7/site-* *packages/**testtools/**testcase.* *py", line 485, in assertRaises' b' self.assertThat* *(our_callable, matcher)' b' File "/tmp/oslo.**cache/.**tox/py37/* *lib/python3.**7/site-**packages/**testtools/**testcase.* *py", line 498, in assertThat' b' raise mismatch_error' b'testtools* *.matchers.**_impl.MismatchE**rror: **.cacheable_**function at 0x7fec3f795400> returned '* The problem appear since we uncap dogpile.cache on oslo.cache: https://github.com/openstack/oslo.cache/commit/62b53099861134859482656dc92db81243b88bd9 The following unit test fail since we uncap dogpile => https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 The problem was introduced by: https://gerrit.sqlalchemy.org/#/c/sqlalchemy/dogpile.cache/+/996/ Your main issue on oslo.cache side is that keyword arguments are tranformed in positionnal arguments when we use dogpile.cache.region.cache_on_arguments. 
I've try to revert the changes introduced by the previous dogpile.cache change and everything works fine on the oslo.cache side when changes was reverted (reverted to revision https://github.com/sqlalchemy/dogpile.cache/blob/2762ada1f5e43075494d91c512f7c1ec68907258/dogpile/cache/region.py ). The expected behavior is that dogpile.cache.util.function_key_generator raise a ValueError if **kwargs founds, but our kwargs is empty and our `value=self.test_value was` is recognized as a positionnal argument. Our unit test looking for an assertRaise(ValueError) on cachable decorated function when we pass kwargs but it doesn't happen due to empty kwargs. For these reasons we guess that is an dogpile.cache issue and not an oslo.cache issue due to the changes introduced by `decorator` module. The following are related: - https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 : unit test where the problem occure - https://review.openstack.org/#/c/638788/ : possible fix but we don't think that is the right way - https://review.openstack.org/#/c/638732/ : possible remove of the unit test who fail The issue is being tracked in: https://bugs.launchpad.net/oslo.cache/+bug/1817032 If some dogpile expert can take a look and send feedback on this thread you are welcome. Thanks, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 26 15:47:08 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 26 Feb 2019 09:47:08 -0600 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors In-Reply-To: References: Message-ID: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> Copying Mike. More thoughts inline. On 2/26/19 9:24 AM, Herve Beraud wrote: > Hi, > > Just a heads up that the latest version of dogpile (0.7.0 onwards) > have become incompatible with oslo.cache.  This is causing a few > issues for jobs.  It's a little complex due to functional code and many > decorated functions. 
> > The error you will see is: > / > / > > /oslo_cache.//tests.test_//cache.CacheRegi//onTest.//test_function_//key_generator_//with_kwargs > -------//-------//-------//-------//-------//-------//-------//-------//-------//-------//-------//------/ > > // > > /Captured traceback: > ~~~~~~~~~~~~~~~~~~~ >     b'Traceback (most recent call last):' >     b' File "/tmp/oslo.//cache/oslo_//cache/tests///test_cache.//py", > line 324, in test_function_//key_generator_//with_kwargs' >     b' value=self.//test_value)//' >     b' File > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > line 485, in assertRaises' >     b' self.assertThat//(our_callable, matcher)' >     b' File > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > line 498, in assertThat' >     b' raise mismatch_error' >     b'testtools//.matchers.//_impl.MismatchE//rror: CacheRegionTest//._get_cacheable//_function.////.cacheable_//function > at 0x7fec3f795400> returned > 0x7fec3f792550>'/ > > > The problem appear since we uncap dogpile.cache on oslo.cache: > https://github.com/openstack/oslo.cache/commit/62b53099861134859482656dc92db81243b88bd9 > > The following unit test fail since we uncap dogpile => > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > > The problem was introduced by: > https://gerrit.sqlalchemy.org/#/c/sqlalchemy/dogpile.cache/+/996/ > > Your main issue on oslo.cache side is that keyword arguments are > tranformed in positionnal arguments when we use > dogpile.cache.region.cache_on_arguments. > > I've try to revert the changes introduced by the previous dogpile.cache > change and everything works fine on the oslo.cache side when changes was > reverted (reverted to revision > https://github.com/sqlalchemy/dogpile.cache/blob/2762ada1f5e43075494d91c512f7c1ec68907258/dogpile/cache/region.py). > > The expected behavior is that dogpile.cache.util.function_key_generator > raise a ValueError if **kwargs founds, but our kwargs is empty and our > `value=self.test_value was` is recognized as a positionnal argument. > Our unit test looking for an assertRaise(ValueError) on cachable > decorated function when we pass kwargs but it doesn't happen due to > empty kwargs. > > For these reasons we guess that is an dogpile.cache issue and not an > oslo.cache issue due to the changes introduced by `decorator` module. > > The following are related: > > - > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > : unit test where the problem occure > - https://review.openstack.org/#/c/638788/ : possible fix but we don't > think that is the right way > - https://review.openstack.org/#/c/638732/ : possible remove of the unit > test who fail As I noted in the reviews, I don't think this is something we should have been testing in oslo.cache in the first place. The failing test is testing the dogpile interface, not the oslo.cache one. I've seen no evidence that oslo.cache is doing anything wrong here, so our unit tests are clearly testing something that should be out of scope. And to be clear, I'm not even sure this is a bug in dogpile. It may be a happy side-effect of the decorator change that the regular decorator now works for kwargs too. I don't know dogpile well enough to make a definitive statement on that though. Hence cc'ing Mike. 
:-) > > The issue is being tracked in: > > https://bugs.launchpad.net/oslo.cache/+bug/1817032 > > If some dogpile expert can take a look and send feedback on this thread > you are welcome. > > Thanks, > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From rico.lin.guanyu at gmail.com Tue Feb 26 15:47:56 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 26 Feb 2019 23:47:56 +0800 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 7:14 PM Chris Dent wrote: > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? Too big, because we actually contain some dead projects IMO. We should contain those projects in some place until someone is ready to take over. > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > * Something else. I prefer we thinking about restructuring in long-term. Projects don't make much sense to me now a day. SIGs/WGs to making cross-project development might be a better approach in the feature. > > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? Not really, we could have multiple foundations trying on multiple projects. To have a single Foundation willing to take over the part and evolved to something else is pretty amazing. > > Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? Yes, you need extra time to check with projects. And since we're doing community goal approach, the bigger the number, the more complex the goal will be. > > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? To named with Open Infra developer might be even suitable IMO, to allow people to think and plan on a higher level. > > Thanks. 
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html > [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html > > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Feb 26 15:56:16 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Feb 2019 10:56:16 -0500 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On 21/02/19 6:13 AM, Chris Dent wrote: > > This is another set of questions for TC candidates, to look at a > different side of things from my first one [1] and somewhat related > to the one Doug has asked [2]. > > As Doug mentions, a continuing role of the TC is to evaluate > applicants to be official projects. These questions are about that. > > There are 63 teams in the official list of projects. How do you feel > about this size? Too big, too small, just right? Why? I have mixed feelings about this. On one hand, AWS launched 30+ services at their conference *last year alone*. (For those keeping score, that's more services than OpenStack had in total when people started complaining that OpenStack had too many services and users wouldn't be able to cope with the choice.) I'm sure many of those are pretty half-baked, and some will eventually amount to nothing, but I look at that and can't help but think: that should be us! With the power of open, we can have any developer in the world behind us. We should be able to out-innovate any one company, even a big one. It makes me sad that after 10 years we haven't built the base to make OpenStack attractive as *the* place to do those kinds of things. On the other hand, many of those services we do have are only lightly maintained. That's not hurting anybody (except perhaps the folks stuck maintaining them), but in many cases we might just be delaying the inevitable. And some of those services are a feature masquerading as a separate service, that operate as a separate team because they couldn't find another way to get code into where they needed it (usually on the compute node) - those might actually be hurting because they paper over problems with how our community works that might better be addressed head-on. > If you had to make a single declaration about growth in the number > of projects would you prefer to see (and why, of course): > > * More projects as required by demand. > * Slower or no growth to focus on what we've got. > * Trim the number of projects to "get back to our roots". > * Something else. I don't think I can pick one. It's all of the above, including the 'Something else'. > How has the relatively recent emergence of the open infrastructure > projects that are at the same "level" in the Foundation as OpenStack > changed your thoughts on the above questions? Not much, TBH. > Do you think the number of projects has any impact (positive or > negative) on our overall ability to get things done? Not really. People will work on the problems they have. If OpenStack doesn't have a project to solve their problem then they won't work on OpenStack - they're not going to go work on a different OpenStack project instead. To the extent that the number of projects has forced horizontal teams to adopt more scalable ways of working, it's probably had a positive impact. (e.g. 
the release management automation tools are great, and I don't know if they'd ever have been written if there were still only 6 projects.) > Recognizing that there are many types of contributors, not just > developers, this question is about developers: Throughout history > different members of the community have sometimes identified as an > "OpenStack developer", sometimes as a project developer (e.g., "Nova > developer"). Should we encourage contributors to think of themselves > as primarily OpenStack developers? If so, how do we do that? If not, > why not? I think that's to be encouraged. And it's worth noting that in our guiding principles we require community members to put the needs of OpenStack as a whole above those of their individual projects (and both of those above the needs of their employers)[1]. But I also think it's natural for folks to sometimes identify with the stuff they are directly working on. We all wear many hats in this community, and ultimately everyone will have to learn for themselves how best to juggle that. cheers, Zane. [1] https://governance.openstack.org/tc/reference/principles.html#openstack-first-project-team-second-company-third > Thanks. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002914.html > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002923.html > > > From mnaser at vexxhost.com Tue Feb 26 16:11:22 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 26 Feb 2019 11:11:22 -0500 Subject: [heat] keystone endpoint configuration In-Reply-To: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: On Wed, Feb 20, 2019 at 1:43 PM Jonathan Rosser wrote: > > In openstack-ansible we are trying to help a number of our end users > with their heat deployments, some of them in conjunction with magnum. > > There is some uncertainty with how the following heat.conf sections > should be configured: > > [clients_keystone] > auth_uri = ... > > [keystone_authtoken] > www_authenticate_uri = ... > > It does not appear to be possible to define a set of internal or > external keystone endpoints in heat.conf which allow the following: > > * The orchestration panels being functional in horizon > * Deployers isolating internal openstack from external networks > * Deployers using self signed/company cert on the external endpoint > * Magnum deployments completing > * Heat delivering an external endpoint at [1] > * Heat delivering an external endpoint at [2] > > There are a number of related bugs: > > https://bugs.launchpad.net/openstack-ansible/+bug/1814909 > https://bugs.launchpad.net/openstack-ansible/+bug/1811086 > https://storyboard.openstack.org/#!/story/2004808 > https://storyboard.openstack.org/#!/story/2004524 > > Any help we could get from the heat team to try to understand the root > cause of these issues would be really helpful. I think this is a really critical issue that Jonathan has spent a lot of time on to get to work. If we can't support this model, maybe we should consider dropping the whole idea of admin/internal/public if we can't commit to testing it properly. > Jon. > > > [1] > https://github.com/openstack/heat/blob/master/heat/engine/resources/server_base.py#L87 > > [2] > https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. 
mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Tue Feb 26 16:15:43 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 26 Feb 2019 11:15:43 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> References: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> Message-ID: On Tue, Feb 26, 2019 at 10:23 AM Ed Leafe wrote: > > On Feb 25, 2019, at 8:01 PM, Zane Bitter wrote: > > > > This is a problem for OpenStack, for at least the reason you mentioned above: TC members don't have much of a mandate if they didn't actually have an election. > > That’s a good point: do you (all candidates; not just Zane) see the election as being a mandate for specific things? Candidates run on different platforms, expressing different desires for changes they would like to make. Do you see the result of a TC election as a mandate to go out and do those things, and not to do the things that the losing candidates espoused? Yes. I do think however that sometimes it can be really hard for us to do things as an individual if the rest of the committee disagrees i.e.: i still think weekly meetings are useful, will get things done, will help us stay in sync and give an easily parse-able thing for the rest of the community to visit.. but, I haven't had success with that. > The counter, and extremely cynical, argument here is that people don’t really weigh the specific proposals of the individual candidates and choose those most in alignment with their feelings, but instead choose people who they either a) worked with at some point and didn’t find them to be a jerk, or b) have seen their name around for a while, and figure they must know what’s going on, or c) have the same employer, or d) some other non-issue-related reason. If this cynical point of view is closer to how you see reality, does that represent a mandate at all? > > > -- Ed Leafe > > > > > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From moguimar at redhat.com Tue Feb 26 16:28:19 2019 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Tue, 26 Feb 2019 17:28:19 +0100 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> Message-ID: So, at this point, is it OK to have projects running against both py35 and py37 and considering py36 covered as being included in the interval? Also about the lowest supported version, I think that is the one that should be stated in the envlist of tox.ini to fail fast during development. Em ter, 13 de nov de 2018 às 19:32, Corey Bryant escreveu: > > > On Wed, Nov 7, 2018 at 11:12 AM Clark Boylan wrote: > >> On Wed, Nov 7, 2018, at 4:47 AM, Mohammed Naser wrote: >> > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann >> wrote: >> > > >> > > Corey Bryant writes: >> > > >> > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant < >> corey.bryant at canonical.com> >> > > > wrote: >> > > > >> > > > I'd like to start moving forward with enabling py37 unit tests for >> a subset >> > > > of projects. Rather than putting too much load on infra by enabling >> 3 x py3 >> > > > unit tests for every project, this would just focus on enablement >> of py37 >> > > > unit tests for a subset of projects in the Stein cycle. 
And just to >> be >> > > > clear, I would not be disabling any unit tests (such as py35). I'd >> just be >> > > > enabling py37 unit tests. >> > > > >> > > > As some background, this ML thread originally led to updating the >> > > > python3-first governance goal ( >> https://review.openstack.org/#/c/610708/) >> > > > but has now led back to this ML thread for a +1 rather than >> updating the >> > > > governance goal. >> > > > >> > > > I'd like to get an official +1 here on the ML from parties such as >> the TC >> > > > and infra in particular but anyone else's input would be welcomed >> too. >> > > > Obviously individual projects would have the right to reject >> proposed >> > > > changes that enable py37 unit tests. Hopefully they wouldn't, of >> course, >> > > > but they could individually vote that way. >> > > > >> > > > Thanks, >> > > > Corey >> > > >> > > This seems like a good way to start. It lets us make incremental >> > > progress while we take the time to think about the python version >> > > management question more broadly. We can come back to the other >> projects >> > > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out. >> > >> > What's the impact on the number of consumption in upstream CI node >> usage? >> > >> >> For period from 2018-10-25 15:16:32,079 to 2018-11-07 15:59:04,994, >> openstack-tox-py35 jobs in aggregate represent 0.73% of our total capacity >> usage. >> >> I don't expect py37 to significantly deviate from that. Again the major >> resource consumption is dominated by a small number of projects/repos/jobs. >> Generally testing outside of that bubble doesn't represent a significant >> resource cost. >> >> I see no problem with adding python 3.7 unit testing from an >> infrastructure perspective. >> >> Clark >> >> >> > Thanks all for the input on this. It seems like we have no objections to > moving forward so I'll plan on getting started soon. > > Thanks, > Corey > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Feb 26 16:35:38 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 26 Feb 2019 11:35:38 -0500 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <0b61cb519bfb7b45a855b2a59aba5cac1b19dead.camel@redhat.com> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> <0b61cb519bfb7b45a855b2a59aba5cac1b19dead.camel@redhat.com> Message-ID: On Tue, Feb 26, 2019, at 3:04 AM, Sean Mooney wrote: > On Mon, 2019-02-25 at 19:42 -0500, Clark Boylan wrote: > > On Mon, Feb 25, 2019, at 12:51 PM, Ben Nemec wrote: > > > > > > > snip > > > > > That said, I wouldn't push too hard in either direction until someone > > > crunched the numbers and figured out how much time it would have saved > > > to not run long tests on patch sets with failing unit tests. I feel like > > > it's probably possible to figure that out, and if so then we should do > > > it before making any big decisions on this. > > > clark this sound like a interesting topic to dig into in person at the > ptg/fourm. 
> do you think we could do two things in parallel. > 1 find a slot maybe in the infra track to discuss this. > 2 can we create a new "fast-check" pipeline in zuul so we can do some > experiment > > if we have a second pipeline with almost identical trrigers we can > propose in tree job > changes and not merge them and experiment with how this might work. > i can submit a patch to do that to the project-config repo but wanted > to check on the ml first. > > again to be clear my suggestion for an experiment it to modify the gate > jobs to require approval > from zuul in both the check and fast check pipeline and kick off job in > both pipeline in parallel > so inially the check pipeline jobs would not be condtional on the > fast-check pipeline jobs. Currently zuul depends on the Gerrit vote data to determine if check has been satisfied for gating requirements. Zuul's verification voting options are currently [-2,-1,0,1,2] with +/-1 for check and +/-2 for gate. Where this gets complicated is how do you resolve different values from different check pipelines, and how do you keep them from racing on updates. This type of setup likely requires a new type of pipeline in zuul that can coordinate with another pipeline to ensure accurate vote posting. Another approach may be to update zuul's reporting capabilities to report intermediate results without votes. That said, is there something that the dashboard is failing to do that this would address? At any time you should be able to check the zuul dashboard for an up to date status of your in progress jobs. > > the intent is to run exactly the same amount of test we do today but > just to have zuul comment back in two batchs > one form each pipeline. > > as a step two i would also be interested with merging all of the tox > env jobs into one. > i think that could be done by creating a new job that inherits form the > base tox job and just invoke the run play book > of all the tox- jobs from a singel playbook. > > i can do experiment 2 without entirly form the in repo zuul.yaml file > > i think it would be interesting to do a test with "do not merge" > patches to nova or placement and > see how that works From sean.mcginnis at gmx.com Tue Feb 26 16:37:01 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 26 Feb 2019 10:37:01 -0600 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> Message-ID: <20190226163700.GA532@sm-workstation> On Tue, Feb 26, 2019 at 05:28:19PM +0100, Moises Guimaraes de Medeiros wrote: > So, at this point, is it OK to have projects running against both py35 and > py37 and considering py36 covered as being included in the interval? > > Also about the lowest supported version, I think that is the one that > should be stated in the envlist of tox.ini to fail fast during development. > In my opinion, the py35 jobs should all be dropped. The official runtime for Stein is py36, and the upcoming runtime is py37, so it doesn't add much value to be running py35 tests at this point. 
Sean From fungi at yuggoth.org Tue Feb 26 16:44:42 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Feb 2019 16:44:42 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> Message-ID: <20190226164442.khkz25ddjgwo6zxq@yuggoth.org> On 2019-02-26 17:28:19 +0100 (+0100), Moises Guimaraes de Medeiros wrote: > So, at this point, is it OK to have projects running against both > py35 and py37 and considering py36 covered as being included in > the interval? For Stein (current master development) "...all Python-based projects must target and test against, at a minimum: Python 2.7, Python 3.6." https://governance.openstack.org/tc/reference/runtimes/stein.html#python-runtime-for-stein Also keeping 3.5 around and/or adding 3.7 is of course fine, but we set the expectation that 2.7 and 3.6 are actually tested directly, not just indirectly inferred to work. > Also about the lowest supported version, I think that is the one > that should be stated in the envlist of tox.ini to fail fast > during development. [...] Any versions you're testing against in the CI system should also be listed in your tox.ini envlist, ideally. Developers can choose to run as many or as few of those locally as they desire of course. And be careful with the word "supported" as people often think it means something it doesn't. We do our best to avoid the term in our guidelines, though we did also publish a Technical Committee resolution clarifying what it means in cases where it does show up: https://governance.openstack.org/tc/resolutions/20170620-volunteer-support.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Tue Feb 26 16:53:25 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 26 Feb 2019 08:53:25 -0800 Subject: [placement][TripleO] zuul job dependencies for greater good? In-Reply-To: (Bogdan Dobrelya's message of "Tue, 26 Feb 2019 10:46:11 +0100") References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <20190226002048.GA10439@fedora19.localdomain> Message-ID: <8736oamqyi.fsf@meyer.lemoncheese.net> Bogdan Dobrelya writes: > I attempted [0] to do that for tripleo-ci, but zuul was (and still > does) complaining for some weird graphs building things :/ > > See also the related topic [1] from the past. > > [0] https://review.openstack.org/#/c/568543 > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127869.html Thank you for linking to [1]. It's worth re-reading. Especially the part at the end. -Jim From petebirley+openstack-dev at gmail.com Tue Feb 26 16:57:50 2019 From: petebirley+openstack-dev at gmail.com (Pete Birley) Date: Tue, 26 Feb 2019 10:57:50 -0600 Subject: [openstack-helm] Team Meeting (5th March 2019) Message-ID: Hey! The next OpenStack-Helm meeting will be held on the 5th March, at 3pm UTC in #openstack-meeting-4 in freenode IRC. 
Thanks to all those who could attend the one we held on the 26th Feb: * the agenda we used is here: https://etherpad.openstack.org/p/openstack-helm-meeting-2019-02-26 * the minutes logged here: http://eavesdrop.openstack.org/meetings/openstack_helm/2019/openstack_helm.2019-02-26-15.00.html It would be great if people interested in OSH could attend, though we appreciate that's not possible, or desirable, for many. The agenda for the next meeting is here: https://etherpad.openstack.org/p/openstack-helm-meeting-2019-03-05 please feel free to add to it, even if you cannot attend. Look forward to seeing you all either in IRC or here. Cheers Pete From zbitter at redhat.com Tue Feb 26 17:00:02 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Feb 2019 12:00:02 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> References: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> Message-ID: <0a272886-6e78-2ccd-90e5-7c14605e50d6@redhat.com> On 26/02/19 10:19 AM, Ed Leafe wrote: > On Feb 25, 2019, at 8:01 PM, Zane Bitter wrote: >> >> This is a problem for OpenStack, for at least the reason you mentioned above: TC members don't have much of a mandate if they didn't actually have an election. > > That’s a good point: do you (all candidates; not just Zane) see the election as being a mandate for specific things? Candidates run on different platforms, expressing different desires for changes they would like to make. Do you see the result of a TC election as a mandate to go out and do those things, and not to do the things that the losing candidates espoused? I explained my theory about that in a previous ML post a few weeks ago: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001841.html In short, in order to co-ordinate across a group of people everyone in the group needs to have some reason to think the other people in the group are going to move in the same direction. Relaying the direction through an elected group helps to do that, because everyone believes that everyone else voted for that group - and in the aggregate they're correct. I expect that effect would be significantly weakened (though not completely eliminated) if there was no election (which would mean that TC members effectively appointed themselves). > The counter, and extremely cynical, argument here is that people don’t really weigh the specific proposals of the individual candidates and choose those most in alignment with their feelings, but instead choose people who they either a) worked with at some point and didn’t find them to be a jerk, or b) have seen their name around for a while, and figure they must know what’s going on, or c) have the same employer, or d) some other non-issue-related reason. If this cynical point of view is closer to how you see reality, does that represent a mandate at all? Yes, sadly I think it's likely that a & b at least play an outsize role (I _hope_ that c doesn't play a big role, and I do think that people consider actual issues or at least general philosophies to some extent). But interestingly it doesn't matter! The above effect turns out to still work even though it's just a convenient fiction, like money, or Belgium. It still works even though you all just read this message where I called it a fiction :) cheers, Zane. 
From smooney at redhat.com Tue Feb 26 17:03:51 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Feb 2019 17:03:51 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> <0b61cb519bfb7b45a855b2a59aba5cac1b19dead.camel@redhat.com> Message-ID: <769131d02c8798b649ff77c6dc44c26757870fbf.camel@redhat.com> On Tue, 2019-02-26 at 11:35 -0500, Clark Boylan wrote: > On Tue, Feb 26, 2019, at 3:04 AM, Sean Mooney wrote: > > On Mon, 2019-02-25 at 19:42 -0500, Clark Boylan wrote: > > > On Mon, Feb 25, 2019, at 12:51 PM, Ben Nemec wrote: > > > > > > > > > > snip > > > > > > > That said, I wouldn't push too hard in either direction until someone > > > > crunched the numbers and figured out how much time it would have saved > > > > to not run long tests on patch sets with failing unit tests. I feel like > > > > it's probably possible to figure that out, and if so then we should do > > > > it before making any big decisions on this. > > > > clark this sound like a interesting topic to dig into in person at the > > ptg/fourm. > > do you think we could do two things in parallel. > > 1 find a slot maybe in the infra track to discuss this. > > 2 can we create a new "fast-check" pipeline in zuul so we can do some > > experiment > > > > if we have a second pipeline with almost identical trrigers we can > > propose in tree job > > changes and not merge them and experiment with how this might work. > > i can submit a patch to do that to the project-config repo but wanted > > to check on the ml first. > > > > again to be clear my suggestion for an experiment it to modify the gate > > jobs to require approval > > from zuul in both the check and fast check pipeline and kick off job in > > both pipeline in parallel > > so inially the check pipeline jobs would not be condtional on the > > fast-check pipeline jobs. > > Currently zuul depends on the Gerrit vote data to determine if check has been satisfied for gating requirements. > Zuul's verification voting options are currently [-2,-1,0,1,2] with +/-1 for check and +/-2 for gate. Where this gets > complicated is how do you resolve different values from different check pipelines, and how do you keep them from > racing on updates. This type of setup likely requires a new type of pipeline in zuul that can coordinate with another > pipeline to ensure accurate vote posting. oh right because there would only be one zuul user for both piplines so they would conflict. i had not thought about that aspect. > > Another approach may be to update zuul's reporting capabilities to report intermediate results without votes. That > said, is there something that the dashboard is failing to do that this would address? At any time you should be able > to check the zuul dashboard for an up to date status of your in progress jobs. for me know but i find that many people dont know about zuul.openstack.org and that you can view the jobs and there logs (once a job finishes) before zuul comments back. perhaps posting a comment when zuul starts contain a line ot zull.o.o would help the discoverability aspect. > > > > > the intent is to run exactly the same amount of test we do today but > > just to have zuul comment back in two batchs > > one form each pipeline. 
> > > > as a step two i would also be interested with merging all of the tox > > env jobs into one. > > i think that could be done by creating a new job that inherits form the > > base tox job and just invoke the run play book > > of all the tox- jobs from a singel playbook. > > > > i can do experiment 2 without entirly form the in repo zuul.yaml file > > > > i think it would be interesting to do a test with "do not merge" > > patches to nova or placement and > > see how that works > > From fungi at yuggoth.org Tue Feb 26 17:13:45 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Feb 2019 17:13:45 +0000 Subject: [placement] zuul job dependencies for greater good? In-Reply-To: <769131d02c8798b649ff77c6dc44c26757870fbf.camel@redhat.com> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <29e0f0b5-c25b-41c0-9fc3-732ee78f8b1c@www.fastmail.com> <0b61cb519bfb7b45a855b2a59aba5cac1b19dead.camel@redhat.com> <769131d02c8798b649ff77c6dc44c26757870fbf.camel@redhat.com> Message-ID: <20190226171345.73qtqd26ehwpxood@yuggoth.org> On 2019-02-26 17:03:51 +0000 (+0000), Sean Mooney wrote: > On Tue, 2019-02-26 at 11:35 -0500, Clark Boylan wrote: [...] > > is there something that the dashboard is failing to do that this > > would address? At any time you should be able to check the zuul > > dashboard for an up to date status of your in progress jobs. > > for me know but i find that many people dont know about > zuul.openstack.org and that you can view the jobs and there logs > (once a job finishes) before zuul comments back. > > perhaps posting a comment when zuul starts contain a line ot > zull.o.o would help the discoverability aspect. [...] We've had some semi-successful experiments in the past with exposing a filtered progress view of Zuul builds in the Gerrit WebUI. Previous attempts were stymied by the sheer volume of status API requests from hundreds of developers with dozens of open browser tabs to different Gerrit changes. Now that we've got the API better cached and separated out to its own service we may be able to weather the storm. There's also new support being worked on in Gerrit for improved CI reporting, and for which we'll hopefully be able to take advantage eventually. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Tue Feb 26 17:53:55 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 26 Feb 2019 12:53:55 -0500 Subject: [ops] berlin ops meetup content thread Message-ID: Hello Everyone, Here's the first etherpad for next week's Ops Meetup in Berlin, hosted by Deutsche Telekom: https://etherpad.openstack.org/p/BER19-OPS-DUDES This one is for a proposed group discussion amongst the openstack operators attending about all the new (or newly top-level) projects under the openstack foundation umbrella, or as I've nicknamed it "All the young dudes", kata,zuul, airship, starlingx. I also added a section about the changes to the foundation, stackforge, the summits etc.I haven't seen a prior effort to get feedback from openstack operators on all these changes. Feedback welcome Chris p.s. still tickets available, come join us if you can - especially if it's nearby to you there's still time! 
https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Feb 26 18:01:13 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 26 Feb 2019 19:01:13 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> References: <66646ba0-3829-534c-3782-db95ecceec27@redhat.com> <077A6BBD-48B0-4C1B-A86A-D5ABC55E4C1C@leafe.com> Message-ID: On Tue, Feb 26, 2019 at 4:25 PM Ed Leafe wrote: > On Feb 25, 2019, at 8:01 PM, Zane Bitter wrote: > > > > This is a problem for OpenStack, for at least the reason you mentioned > above: TC members don't have much of a mandate if they didn't actually have > an election. > > That’s a good point: do you (all candidates; not just Zane) see the > election as being a mandate for specific things? Candidates run on > different platforms, expressing different desires for changes they would > like to make. Do you see the result of a TC election as a mandate to go out > and do those things, and not to do the things that the losing candidates > espoused? > > The counter, and extremely cynical, argument here is that people don’t > really weigh the specific proposals of the individual candidates and choose > those most in alignment with their feelings, but instead choose people who > they either a) worked with at some point and didn’t find them to be a jerk, > or b) have seen their name around for a while, and figure they must know > what’s going on, or c) have the same employer, or d) some other > non-issue-related reason. If this cynical point of view is closer to how > you see reality, does that represent a mandate at all? > > What you mention is the exact reason why I claimed in my candidacy ballot to have more folks for the election [1]. I don't remember since when we started to have campaigns for the TC election (at least 3 or 4 IIRC) but i hope it will help the election to not be just a popularity contest, but rather a way for voters to better know the individuals before giving them a ticket (and actually pounding this ticket using the Condorcet system if they really want to favor a candidate close to their own opinions). If we were less or equal the number of TC seats, what would have been the representativity of such candidates who would have been elected by default ? Now, that's not because we have an election and that we have a campaign period that it will mean that those individuals will feel as 'members of parliament' (at least for me). One of the key values of OpenStack is Open Design on a consensus model. We all have great opinions and we care about our own opinions, but we also need a majority of contributors agreeing with. -Sylvain [1] https://git.openstack.org/cgit/openstack/election/plain/candidates/train/TC/sbauza%40redhat.com (last paragraph) > > -- Ed Leafe > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Tue Feb 26 18:35:08 2019 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 26 Feb 2019 19:35:08 +0100 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors In-Reply-To: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> References: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> Message-ID: FYI dogpile.cache issue was opened: https://github.com/sqlalchemy/dogpile.cache/issues/144 Come with a possible oslo.cache solution that I've introduce there => https://review.openstack.org/#/c/638788/8 Le mar. 26 févr. 2019 à 16:49, Ben Nemec a écrit : > Copying Mike. More thoughts inline. > > On 2/26/19 9:24 AM, Herve Beraud wrote: > > Hi, > > > > Just a heads up that the latest version of dogpile (0.7.0 onwards) > > have become incompatible with oslo.cache. This is causing a few > > issues for jobs. It's a little complex due to functional code and many > > decorated functions. > > > > The error you will see is: > > / > > / > > > > > /oslo_cache.//tests.test_//cache.CacheRegi//onTest.//test_function_//key_generator_//with_kwargs > > > -------//-------//-------//-------//-------//-------//-------//-------//-------//-------//-------//------/ > > > > // > > > > /Captured traceback: > > ~~~~~~~~~~~~~~~~~~~ > > b'Traceback (most recent call last):' > > b' File "/tmp/oslo.//cache/oslo_//cache/tests///test_cache.//py", > > line 324, in test_function_//key_generator_//with_kwargs' > > b' value=self.//test_value)//' > > b' File > > > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > > > line 485, in assertRaises' > > b' self.assertThat//(our_callable, matcher)' > > b' File > > > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > > > line 498, in assertThat' > > b' raise mismatch_error' > > b'testtools//.matchers.//_impl.MismatchE//rror: > > CacheRegionTest//._get_cacheable//_function.////.cacheable_//function > > > at 0x7fec3f795400> returned > > > 0x7fec3f792550>'/ > > > > > > The problem appear since we uncap dogpile.cache on oslo.cache: > > > https://github.com/openstack/oslo.cache/commit/62b53099861134859482656dc92db81243b88bd9 > > > > The following unit test fail since we uncap dogpile => > > > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > > > > The problem was introduced by: > > https://gerrit.sqlalchemy.org/#/c/sqlalchemy/dogpile.cache/+/996/ > > > > Your main issue on oslo.cache side is that keyword arguments are > > tranformed in positionnal arguments when we use > > dogpile.cache.region.cache_on_arguments. > > > > I've try to revert the changes introduced by the previous dogpile.cache > > change and everything works fine on the oslo.cache side when changes was > > reverted (reverted to revision > > > https://github.com/sqlalchemy/dogpile.cache/blob/2762ada1f5e43075494d91c512f7c1ec68907258/dogpile/cache/region.py > ). > > > > The expected behavior is that dogpile.cache.util.function_key_generator > > raise a ValueError if **kwargs founds, but our kwargs is empty and our > > `value=self.test_value was` is recognized as a positionnal argument. > > Our unit test looking for an assertRaise(ValueError) on cachable > > decorated function when we pass kwargs but it doesn't happen due to > > empty kwargs. > > > > For these reasons we guess that is an dogpile.cache issue and not an > > oslo.cache issue due to the changes introduced by `decorator` module. 
> > > > The following are related: > > > > - > > > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > > : unit test where the problem occure > > - https://review.openstack.org/#/c/638788/ : possible fix but we don't > > think that is the right way > > - https://review.openstack.org/#/c/638732/ : possible remove of the > unit > > test who fail > > As I noted in the reviews, I don't think this is something we should > have been testing in oslo.cache in the first place. The failing test is > testing the dogpile interface, not the oslo.cache one. I've seen no > evidence that oslo.cache is doing anything wrong here, so our unit tests > are clearly testing something that should be out of scope. > > And to be clear, I'm not even sure this is a bug in dogpile. It may be a > happy side-effect of the decorator change that the regular decorator now > works for kwargs too. I don't know dogpile well enough to make a > definitive statement on that though. Hence cc'ing Mike. :-) > > > > > The issue is being tracked in: > > > > https://bugs.launchpad.net/oslo.cache/+bug/1817032 > > > > If some dogpile expert can take a look and send feedback on this thread > > you are welcome. > > > > Thanks, > > > > -- > > Hervé Beraud > > Senior Software Engineer > > Red Hat - Openstack Oslo > > irc: hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
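To make the failure mode described above concrete, here is a minimal sketch against a plain dogpile.cache memory backend (the function and value names are invented for illustration; this is not the oslo.cache unit test itself):

    from dogpile.cache import make_region

    region = make_region().configure('dogpile.cache.memory')

    @region.cache_on_arguments()
    def cacheable_function(value):
        # The decorator builds the cache key from the call arguments using
        # dogpile's function_key_generator.
        return value

    # With dogpile.cache before 0.7.0 this call raised ValueError, because
    # the default key generator refuses keyword arguments.  With 0.7.0+ the
    # decorator-module based wrapper appears to bind value= as a positional
    # argument, so the call succeeds and simply returns 'some-value'.
    cacheable_function(value='some-value')

That matches the description above: keyword arguments reach the key generator already folded into the positional arguments, so the ValueError the oslo.cache test expects never fires.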
URL: From mihalis68 at gmail.com Tue Feb 26 18:43:54 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 26 Feb 2019 13:43:54 -0500 Subject: [ops] berlin ops meetup content thread In-Reply-To: References: Message-ID: Second topic proposal for berlin, Ceph for openstack : https://etherpad.openstack.org/p/BER19-OPS-CEPH On Tue, Feb 26, 2019 at 12:53 PM Chris Morgan wrote: > Hello Everyone, > > Here's the first etherpad for next week's Ops Meetup in Berlin, hosted by > Deutsche Telekom: > https://etherpad.openstack.org/p/BER19-OPS-DUDES > > This one is for a proposed group discussion amongst the openstack > operators attending about all the new (or newly top-level) projects under > the openstack foundation umbrella, or as I've nicknamed it "All the young > dudes", kata,zuul, airship, starlingx. > > I also added a section about the changes to the foundation, stackforge, > the summits etc.I haven't seen a prior effort to get feedback from > openstack operators on all these changes. > > Feedback welcome > > Chris > > p.s. still tickets available, come join us if you can - especially if it's > nearby to you there's still time! > https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > > -- > Chris Morgan > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From petebirley+openstack-dev at gmail.com Tue Feb 26 19:04:33 2019 From: petebirley+openstack-dev at gmail.com (Pete Birley) Date: Tue, 26 Feb 2019 13:04:33 -0600 Subject: [openstack-helm] Initial spec for multiple container distribution support Message-ID: Hi, One of the items that came up in the meeting this week was to get a spec together for supporting multiple distributions in containers managed via OpenStack-Helm. Currently, we ubiquitously use Ubuntu-based images, but would also like to support both CentOS and OpenSUSE in addition. As there's a lot to discuss here, I've created the following etherpad to allow us to work through these issues prior to the next meeting - at which point we should hopefully be at the point where we can start writing a formal spec. * https://etherpad.openstack.org/p/openstack-helm-container-distro-support Cheers Pete From hberaud at redhat.com Tue Feb 26 19:40:00 2019 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 26 Feb 2019 20:40:00 +0100 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors In-Reply-To: References: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> Message-ID: Submit a patch to dogpile.cache to add some related tests cases: https://github.com/sqlalchemy/dogpile.cache/pull/145/ Le mar. 26 févr. 2019 à 19:35, Herve Beraud a écrit : > FYI dogpile.cache issue was opened: > https://github.com/sqlalchemy/dogpile.cache/issues/144 > > Come with a possible oslo.cache solution that I've introduce there => > https://review.openstack.org/#/c/638788/8 > > Le mar. 26 févr. 2019 à 16:49, Ben Nemec a > écrit : > >> Copying Mike. More thoughts inline. >> >> On 2/26/19 9:24 AM, Herve Beraud wrote: >> > Hi, >> > >> > Just a heads up that the latest version of dogpile (0.7.0 onwards) >> > have become incompatible with oslo.cache. This is causing a few >> > issues for jobs. It's a little complex due to functional code and many >> > decorated functions. 
>> > >> > The error you will see is: >> > / >> > / >> > >> > >> /oslo_cache.//tests.test_//cache.CacheRegi//onTest.//test_function_//key_generator_//with_kwargs >> > >> -------//-------//-------//-------//-------//-------//-------//-------//-------//-------//-------//------/ >> > >> > // >> > >> > /Captured traceback: >> > ~~~~~~~~~~~~~~~~~~~ >> > b'Traceback (most recent call last):' >> > b' File "/tmp/oslo.//cache/oslo_//cache/tests///test_cache.//py", >> > line 324, in test_function_//key_generator_//with_kwargs' >> > b' value=self.//test_value)//' >> > b' File >> > >> "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", >> >> > line 485, in assertRaises' >> > b' self.assertThat//(our_callable, matcher)' >> > b' File >> > >> "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", >> >> > line 498, in assertThat' >> > b' raise mismatch_error' >> > b'testtools//.matchers.//_impl.MismatchE//rror: > > >> CacheRegionTest//._get_cacheable//_function.////.cacheable_//function >> >> > at 0x7fec3f795400> returned >> > > > 0x7fec3f792550>'/ >> > >> > >> > The problem appear since we uncap dogpile.cache on oslo.cache: >> > >> https://github.com/openstack/oslo.cache/commit/62b53099861134859482656dc92db81243b88bd9 >> > >> > The following unit test fail since we uncap dogpile => >> > >> https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 >> > >> > The problem was introduced by: >> > https://gerrit.sqlalchemy.org/#/c/sqlalchemy/dogpile.cache/+/996/ >> > >> > Your main issue on oslo.cache side is that keyword arguments are >> > tranformed in positionnal arguments when we use >> > dogpile.cache.region.cache_on_arguments. >> > >> > I've try to revert the changes introduced by the previous dogpile.cache >> > change and everything works fine on the oslo.cache side when changes >> was >> > reverted (reverted to revision >> > >> https://github.com/sqlalchemy/dogpile.cache/blob/2762ada1f5e43075494d91c512f7c1ec68907258/dogpile/cache/region.py >> ). >> > >> > The expected behavior is that dogpile.cache.util.function_key_generator >> > raise a ValueError if **kwargs founds, but our kwargs is empty and our >> > `value=self.test_value was` is recognized as a positionnal argument. >> > Our unit test looking for an assertRaise(ValueError) on cachable >> > decorated function when we pass kwargs but it doesn't happen due to >> > empty kwargs. >> > >> > For these reasons we guess that is an dogpile.cache issue and not an >> > oslo.cache issue due to the changes introduced by `decorator` module. >> > >> > The following are related: >> > >> > - >> > >> https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 >> > : unit test where the problem occure >> > - https://review.openstack.org/#/c/638788/ : possible fix but we don't >> > think that is the right way >> > - https://review.openstack.org/#/c/638732/ : possible remove of the >> unit >> > test who fail >> >> As I noted in the reviews, I don't think this is something we should >> have been testing in oslo.cache in the first place. The failing test is >> testing the dogpile interface, not the oslo.cache one. I've seen no >> evidence that oslo.cache is doing anything wrong here, so our unit tests >> are clearly testing something that should be out of scope. >> >> And to be clear, I'm not even sure this is a bug in dogpile. 
It may be a >> happy side-effect of the decorator change that the regular decorator now >> works for kwargs too. I don't know dogpile well enough to make a >> definitive statement on that though. Hence cc'ing Mike. :-) >> >> > >> > The issue is being tracked in: >> > >> > https://bugs.launchpad.net/oslo.cache/+bug/1817032 >> > >> > If some dogpile expert can take a look and send feedback on this thread >> > you are welcome. >> > >> > Thanks, >> > >> > -- >> > Hervé Beraud >> > Senior Software Engineer >> > Red Hat - Openstack Oslo >> > irc: hberaud >> > -----BEGIN PGP SIGNATURE----- >> > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> > v6rDpkeNksZ9fFSyoY2o >> > =ECSj >> > -----END PGP SIGNATURE----- >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
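If the goal is simply to keep caching working for callers that pass keyword arguments, one possible oslo.cache-side workaround is to plug a kwargs-aware key generator into the region. This is purely illustrative and not necessarily what the patch linked above does; recent dogpile releases also appear to ship a kwarg_function_key_generator helper serving the same purpose:

    from dogpile.cache import make_region

    def kwarg_tolerant_key_generator(namespace, fn, to_str=str):
        # Same signature dogpile expects for function_key_generator: it gets
        # the namespace and the decorated function, and returns a callable
        # that turns the call arguments into a cache key.
        prefix = '%s:%s|' % (fn.__module__, fn.__name__)
        if namespace is not None:
            prefix = '%s:%s|%s|' % (fn.__module__, fn.__name__, namespace)

        def generate_key(*args, **kwargs):
            # Fold keyword arguments into the key deterministically instead
            # of raising ValueError like the default generator does.
            parts = [to_str(a) for a in args]
            parts += ['%s=%s' % (k, to_str(kwargs[k])) for k in sorted(kwargs)]
            return prefix + ' '.join(parts)

        return generate_key

    region = make_region(
        function_key_generator=kwarg_tolerant_key_generator,
    ).configure('dogpile.cache.memory')

    @region.cache_on_arguments()
    def lookup(user_id, flavor=None):
        # Example function only; positional and keyword calls now both
        # produce a usable (if differently spelled) cache key.
        return (user_id, flavor)

Whether oslo.cache should hide something like this behind its own API, or simply stop testing the kwargs-rejection behaviour, is exactly the open question in the reviews above.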
URL: From dms at danplanet.com Tue Feb 26 20:34:32 2019 From: dms at danplanet.com (Dan Smith) Date: Tue, 26 Feb 2019 12:34:32 -0800 Subject: [nova] nova spec show-server-group response format In-Reply-To: (Matt Riedemann's message of "Tue, 26 Feb 2019 07:45:40 -0600") References: Message-ID: > On 2/25/2019 11:53 PM, yonglihe wrote: >> The approved spec show-server-group had 2 options for response. >> >> 1. First one(current spec): >> >>          "server": { >>             "server_groups": [ # not cached >>                    "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8" >>             ] >>         } >>    } >> >> related discuss: >> https://review.openstack.org/#/c/612255/11/specs/stein/approved/show-server-group.rst at 67 >> >> digest:  This  decouple the current  implementation of server groups >> then get a  generic API. > > Jay pushed for this on the spec review because it future-proofs the > API in case a server can ever be in more than one group (currently it > cannot). When I was reviewing the code this was the first thing that > confused me (before I knew about the discussion on the spec) because I > knew that a server can only be in at most one server group, and I > think showing a list is misleading to the user. Similarly, before 2.64 > the os-server-groups API had a "policies" parameter which could only > ever have exactly one entry in it, and in 2.64 that was changed to > just be "policy" to reflect the actual usage. I don't think we're > going to have support for servers in multiple groups anytime soon, so > I personally don't think we need to future-proof the servers API > response with a potentially misleading type (array) when we know the > server can only ever be in one group. If we were to add multi-group > support in the future, we could revisit this at the same time but I'm > not holding my breath given previous attempts. Personally, I agree with Jay and think that going for the list is better. We know people have asked for multiple-group membership before, and we have existing warts in our APIs where we've had to convert from a singleton to a list, which are still ugly today. >> 2 Second one: >> >>         "server": { >>             "server_group": { >>                 "name": "groupA", >>                 "id": "0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8" >>             } >> >> related discuss: >> https://review.openstack.org/#/c/612255/4/specs/stein/approved/list-server-group.rst at 62 > > This is the format I think we should use since it shows the actual > cardinality of server to group we support today. I totally get your reasoning for this, I just think that using a list for a thing that can only be one-long currently is a fairly common thing, and is worth the potential for minor confusion over the work required to maybe someday expand it to N. I don't feel overly strong about it and wouldn't spend much energy trying to convince people. If you (or others) feel strongly, then that's fine. --Dan From zbitter at redhat.com Tue Feb 26 20:39:12 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Feb 2019 15:39:12 -0500 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: References: Message-ID: <89c16c68-050a-4b4e-97a3-d780a739ebd5@redhat.com> On 20/02/19 12:58 PM, Doug Hellmann wrote: > > One of the key responsibilities of the Technical Committee is still > evaluating projects and teams that want to become official OpenStack > projects. 
The Foundation Open Infrastructure Project approval process > has recently produced a different set of criteria for the Board to use > for approving projects [1] than the TC uses for approving teams [2]. This is an apples-to-oranges comparison, because if the OIP criteria were to be applied to OpenStack, they'd be applied to it as a whole. There's actually not that much difference though, except for the last point about a diversified developer base with active engagement from users &c. The OpenStack project as a whole has that but we don't require individual (sub)projects to do that to become official. > What parts, if any, of the OIP approval criteria do you think should > apply to OpenStack teams? At the moment, only the parts that already apply (like e.g. requiring open development). > What other changes, if any, would you propose to the official team > approval process or criteria? Are we asking the right questions and > setting the minimum requirements high enough? Are there any criteria > that are too hard to meet? So there's always a chicken-and-egg tradeoff where ideally we'd only take projects with lots of people working on them, but lots of people only want to work on projects that are official. At the moment, we've decided to err on the side of letting projects in. There may be a time when that's no longer appropriate, but for now it seems fine. I am pleased that we now have the Vision for OpenStack Clouds document to refer to when dealing with applications. The idea behind that is not only that it gives us better guidance on how to vote, but also that it telegraphs where there might be gaps in the offering in which we would welcome new projects, and encourages prospective projects that might be outside of the vision to engage with us earlier. We will see over the next little while how that plays out in practice. cheers, Zane. > How would you apply those rule changes to existing teams? > > [1] http://lists.openstack.org/pipermail/foundation/2019-February/002708.html > [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html > From njha1999 at gmail.com Tue Feb 26 21:23:21 2019 From: njha1999 at gmail.com (Namrata Jha) Date: Wed, 27 Feb 2019 02:53:21 +0530 Subject: Introducing myself - Namrata Jha. Message-ID: Hello everyone! I am Namrata Jha, from India and I am an Outreachy 2019 aspirant. I am extremely excited to learn a lot and benefit from this community! :) I looked through the GitHub repository for the project "Storyboard Database Query Optimizations" and found that there are no existing issues on the repository yet. It would be really kind if someone could guide me how do I go about code contribution. Thank you. Regards, Namrata Jha. P.S. I was having some issues in joining the IRC. The error message says, "Disconnected: Failed to connect - Invalid hostname (Your hostname looks invalid: freenode Check your host, port and ssl settings );" I might be missing something already stated, but please help me out I'd be really grateful to you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zbitter at redhat.com Tue Feb 26 21:28:00 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Feb 2019 16:28:00 -0500 Subject: [tc][election] New series of campaign questions In-Reply-To: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> Message-ID: <68433edf-2a6a-9b67-233c-997af0ce5964@redhat.com> On 25/02/19 4:28 AM, Jean-Philippe Evrard wrote: > Hello, > > Here are my questions for the candidates. Keep in mind some might overlap with existing questions, so I would expect a little different answer there than what was said. Most questions are intentionally controversial and non-strategic, so please play this spiritual game openly as much as you can (no hard feelings!). > > The objective for me with those questions is not to corner you/force you implement x if you were elected (that would be using my TC hat for asking you questions, which I believe would be wrong), but instead have a glimpse on your mindset (which is important for me as an individual member in OpenStack). It's more like the "magic wand" questions. After this long introduction, here is my volley of questions. > > A) In a world where "general" OpenStack issues/features are solved through community goals, do you think the TC should focus on "less interesting" technical issues across projects, like tech debt reduction? Or at the opposite, do you think the TC should tackle the hardest OpenStack wide problems? The purpose of community goals to me is to co-ordinate stuff that *everyone* has to do before *anyone* can get the benefit. That's a necessary thing to have, but probably most of the hardest OpenStack-wide problems don't necessarily fall into that category (at least at first). Neither does tech debt reduction for the most part, because reducing tech debt is often its own reward. I do think that the TC has a role to play in co-ordinating the community to tackle the hardest problems, but project-wide goals are not going to be the only mechanism. > B) Do you think the TC must check and actively follow all the official projects' health and activities? Why? We've been experimenting with this for nearly a year, and to be honest I am personally yet to see any value from it. I'm not even sure we know what a success would look like from it. It's time-consuming (and, at least for me, highly unenjoyable) to do well and pointless to do perfunctorily. So I wouldn't be sad if we called time on the experiment. > C) Do you think the TC's role is to "empower" project and PTLs? If yes, how do you think the TC can help those? If no, do you think it would be the other way around, with PTLs empowering the TC to achieve more? How and why? I don't want to suggest that projects shouldn't be "empowered" - they should. But if the TC didn't exist, projects would already be completely empowered. The purpose of the TC is that projects relinquish some power to it in exchange for the support of the Foundation. > D) Do you think the community goals should be converted to a "backlog"of time constrained OpenStack "projects", instead of being constrained per cycle? (with the ability to align some goals with releasing when necessary) I don't. It's hard enough to co-ordinate 60ish projects as it is, I think a long-running tasks without a release cadence to discipline them is a recipe for unhappy surprises at the end. If goal champions can't break them down into chunks manageable in a release cycle then they're probably not going to happen. 
What I *do* think we need is to be able to have a roadmap for chunks of larger goals, to say we're going to do this chunk in T, this one in U, this one in V and it's going to deliver X benefit at the end even if the first part doesn't seem that useful. > E) Do you think we should abandon projects' ML tags/IRC channels, to replace them by focus areas? For example, having [storage] to group people from [cinder] or [manila]. Do you think that would help new contributors, or communication in the community? No, I don't. I know it's just an example but there's no actual similarity between Cinder and Manila. They don't share common code, common people, common use cases... the only similarity is that they both can be considered types of 'storage'. That's pure sophistry. Labels mean nothing. > F) There can be multiple years between a "user desired feature across OpenStack projects", and its actual implementation through the community goals. How do you think we can improve? There's probably a different answer for every feature. I really like where Rico's suggestions about making SIGs more visible in the community have been leading: get each SIG to pick their #1 priority and give them a chance to publicise it to a captive audience (e.g. give them each a 3 minute speaking slot during lunch at the PTG). > G) What do you think of the elections process for the TC? Do you think it is good enough to gather a team to work on hard problems? Or do you think electing person per person have an opposite effect, highlighting individuals versus a common program/shared objectives? Corollary: Do you think we should now elect TC members by groups (of 2 or 3 persons for example), so that we would highlight their program vs highlight individual ideas/qualities? So, political parties? General those arise when you have electoral systems that are constrained by the need to be able to count votes by hand. I'm actually very happy with Condorcet as a system. cheers, Zane. From kennelson11 at gmail.com Tue Feb 26 21:28:47 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 26 Feb 2019 13:28:47 -0800 Subject: Introducing myself - Namrata Jha. In-Reply-To: References: Message-ID: Hello :) On Tue, Feb 26, 2019 at 1:24 PM Namrata Jha wrote: > Hello everyone! I am Namrata Jha, from India and I am an Outreachy 2019 > aspirant. I am extremely excited to learn a lot and benefit from this > community! :) > I looked through the GitHub repository for the project "Storyboard > Database Query Optimizations" and found that there are no existing issues > on the repository yet. It would be really kind if someone could guide me > how do I go about code contribution. Thank you. > Github is just a mirror of our code, we don't actually use it for development. We use gerrit instead. As for where we track issues-- we use Storyboard actually :) You can find starter issues here[1]. > > Regards, > Namrata Jha. > > P.S. I was having some issues in joining the IRC. The error message says, "Disconnected: > Failed to connect - Invalid hostname (Your hostname looks invalid: > freenode Check your host, port and ssl settings > );" > I might be missing something already stated, but please help me out I'd be > really grateful to you. > As for joining IRC issues, if you follow the documentation here[2]. You should be able to connect after that. 
-Kendall (diablo_rojo) [1] https://storyboard.openstack.org/#!/worklist/492 [2] https://docs.openstack.org/contributors/common/irc.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Feb 26 21:34:54 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 26 Feb 2019 21:34:54 +0000 (GMT) Subject: [placement][nova] What is the plan for tracking placement blueprints post-extraction? In-Reply-To: <953ec1f3-b110-bd01-cc42-a08a4f1f1e0f@gmail.com> References: <325410cc-69dd-3979-933a-287af4d73e3a@gmail.com> <8282ef0a-57e9-5391-9e1b-884dd2780e3e@gmail.com> <2AB6284B-D820-4A0B-9CE7-B2E76C4285D6@fried.cc> <953ec1f3-b110-bd01-cc42-a08a4f1f1e0f@gmail.com> Message-ID: On Mon, 25 Feb 2019, Matt Riedemann wrote: > On 2/25/2019 5:53 AM, Chris Dent wrote: >> So, to take it back to Matt's question, my opinion is: Let's let >> existing stuff run its course to the end of the cycle, decide >> together how and when we want to implement storyboard. > > FWIW I agree with this. I'm fine with no separate specs repo or core team. I > do, however, think that having some tracking tool is important for project > management to get an idea of what is approved for a release and what is left > (and what was completed). Etherpads are not kanban boards nor are they > indexed by search engines so they are fine for notes and such but not great > for long-term documentation or tracking. I've gone head and proposed [1] putting all four of placement, os-traits, os-resource-classes and osc-placement under storyboard, but not yet start any migrations. The commit message on there has an explanation but to recapitulate: * There's stuff we'd like to start tracking, storyboard seems a good place to remember those things. * It's safe to do migrations later. * We have some stuff (e.g., blueprints) that we're already tracking in nova's launchpad and don't really care to carry over. * The updating of bugs from gerrit changes is no longer working for these projects, probably because of acl changes since somewhere in the past few months. The review (and the above) is a proposal: If you don't like it, say so, we'll figure something out. However, no matter what we do and how much care we take to make sure people are not confused through this transition and that history is preserved, some stuff will get dropped and we're going to have to, as humans, pay extra attention and do a bit of extra work to keep everything, including ourselves, happy. We'll need to decide what to do about new bugs relatively soon but that doesn't have to be in lock step with this change. Thoughts? [1] https://review.openstack.org/639445 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From njha1999 at gmail.com Tue Feb 26 21:51:25 2019 From: njha1999 at gmail.com (Namrata Jha) Date: Wed, 27 Feb 2019 03:21:25 +0530 Subject: Introducing myself - Namrata Jha. In-Reply-To: References: Message-ID: Thank you so much! I'll check these out and get back to you in case of any further questions. :) On Wed, 27 Feb 2019 at 02:58, Kendall Nelson wrote: > Hello :) > > On Tue, Feb 26, 2019 at 1:24 PM Namrata Jha wrote: > >> Hello everyone! I am Namrata Jha, from India and I am an Outreachy 2019 >> aspirant. I am extremely excited to learn a lot and benefit from this >> community! :) >> I looked through the GitHub repository for the project "Storyboard >> Database Query Optimizations" and found that there are no existing issues >> on the repository yet. 
It would be really kind if someone could guide me >> how do I go about code contribution. Thank you. >> > > Github is just a mirror of our code, we don't actually use it for > development. We use gerrit instead. As for where we track issues-- we use > Storyboard actually :) You can find starter issues here[1]. > > >> >> Regards, >> Namrata Jha. >> >> P.S. I was having some issues in joining the IRC. The error message says, >> "Disconnected: Failed to connect - Invalid hostname (Your hostname looks >> invalid: freenode Check your host, port and ssl settings >> );" >> I might be missing something already stated, but please help me out I'd >> be really grateful to you. >> > > As for joining IRC issues, if you follow the documentation here[2]. You > should be able to connect after that. > > -Kendall (diablo_rojo) > > [1] https://storyboard.openstack.org/#!/worklist/492 > [2] https://docs.openstack.org/contributors/common/irc.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 26 22:11:14 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Feb 2019 22:11:14 +0000 Subject: [tc][election] New series of campaign questions In-Reply-To: <68433edf-2a6a-9b67-233c-997af0ce5964@redhat.com> References: <0aae11d6-7db0-420a-a0ff-7cbf92ff9e1e@www.fastmail.com> <68433edf-2a6a-9b67-233c-997af0ce5964@redhat.com> Message-ID: <20190226221114.n24azulyxx2jnpq5@yuggoth.org> On 2019-02-26 16:28:00 -0500 (-0500), Zane Bitter wrote: [...] > What I *do* think we need is to be able to have a roadmap for chunks of > larger goals, to say we're going to do this chunk in T, this one in U, this > one in V and it's going to deliver X benefit at the end even if the first > part doesn't seem that useful. [...] And not that it's the only possible example, but a good recent one is the Python 3 transition we've been undertaking for several cycles already and have at least a couple more ahead of us before it's done. We didn't say "switch to Python3 is a multi-cycle goal" and instead broke it up into goal phases each manageable with a cycle and with their own distinct completion criteria. This also provides increased opportunity for retrospection at the end of each phase and adjustment of the longer-term effort when we see a need. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sorrison at gmail.com Tue Feb 26 22:41:32 2019 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 27 Feb 2019 09:41:32 +1100 Subject: [cinder] extra_capabilities for scheduler filters In-Reply-To: <20190226093734.5ochha6aobx5fa55@localhost> References: <20190226093734.5ochha6aobx5fa55@localhost> Message-ID: <770622DC-6795-465E-8B02-5FB1902AE650@gmail.com> OK thanks, looks like there might be a bug here so have created https://bugs.launchpad.net/cinder/+bug/1817802 Thanks, Sam > On 26 Feb 2019, at 8:37 pm, Gorka Eguileor wrote: > > On 26/02, Sam Morrison wrote: >> Hi, >> Just wondering if extra_capabilities should be available in backend_state >> to be able to be used by scheduler filters? >> >> I can't seem to use my custom capabilities within the capabilities filter. >> >> Thanks, >> Sam > > Hi Sam, > > As far as I know this is supported. 
> > The volume manager is sending this information to the scheduler and the > capabilities filter should be able to match the extra specs from the > volume type to the extra capabilities you have set in the configuration > of the cinder volume service using available operations: =, , , > etc. > > Cheers, > Gorka. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilles.mocellin at nuagelibre.org Tue Feb 26 22:56:13 2019 From: gilles.mocellin at nuagelibre.org (Gilles Mocellin) Date: Tue, 26 Feb 2019 23:56:13 +0100 Subject: [openstack-ansible] Compute nodes with mixed system releases Message-ID: <4117207.Ezq4iH3xk8@gillesxps> Hello, I can ot find a real answer in the OpenStack-Ansible docs. Can I add Ubuntu 18.04 compute nodes to my actueal all Ubuntu 16.04 cluster ? Ubuntu 18.04 needs Rocky, so I will first migrate from Queens to Rocky. But then, do I need to stick to 16.04 and plan an overall upgrade after ? Of course, I understand that mixing Ubuntu release, will also mix kernel and qemu versions and can pose problems, for migrations for example. From kennelson11 at gmail.com Tue Feb 26 23:46:22 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 26 Feb 2019 15:46:22 -0800 Subject: [all] 'Train' TC Election Voting Period Begins! Message-ID: Hello All! The poll for the TC Election is now open and will remain open until Mar 05, 2019 23:45 UTC. We are selecting 7 TC members, please rank all candidates in your order of preference. You are eligible to vote if you are a Foundation individual member[1] that also has committed to one of the official programs projects[2] over the Feb 09, 2018 00:00 UTC - Feb 19, 2019 00:00 UTC timeframe (Rocky to Stein) or if you are one of the extra-atcs.[3] What to do if you don't see the email and have a commit in at least one of the official programs projects[2]: * check the trash or spam folder of your gerrit Preferred Email address[4], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[2] and email the election officials[1]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Our democratic process is important to the health of OpenStack, please exercise your right to vote. Candidate statements/platforms can be found linked to Candidate names[6]. Happy voting! Thank you, -Kendall Nelson (diablo_rojo) [1] http://www.openstack.org/community/members/ [2] https://git.openstack.org/cgit/openstack/governance/plain/reference/projects.yaml?id=feb-2019-elections [3] Look for the extra-atcs element in [2] [4] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your preferred email. That is where the ballot has been sent. [5] http://governance.openstack.org/election/#election-officials [6] http://governance.openstack.org/election/#stein-tc-candidates -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Feb 27 00:18:27 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Feb 2019 00:18:27 +0000 Subject: [all] 'Train' TC Election Voting Period Begins! In-Reply-To: References: Message-ID: <20190227001827.uhswo24vg55rnh3s@yuggoth.org> On 2019-02-26 15:46:22 -0800 (-0800), Kendall Nelson wrote: > The poll for the TC Election is now open and will remain open > until Mar 05, 2019 23:45 UTC. [...] 
Also note that, since we got some requests to put the release name 'Train' in quotes for clarity, CIVS has helpfully escaped it from the title string used in the E-mail subject line and introductory sentence. As a result the messages will appear thusly: Subject: Poll: 'Train' TC Election Please pardon the aesthetic shortcomings of the system. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From chris at openstack.org Wed Feb 27 00:35:12 2019 From: chris at openstack.org (Chris Hoge) Date: Tue, 26 Feb 2019 16:35:12 -0800 Subject: [puppet] Re: NDSU Capstone Introduction! In-Reply-To: References: Message-ID: <6E895DDA-E451-416B-83D9-A89E801BA0CE@openstack.org> Welcome Eduardo, and Hunter and Jason. For the initial work, we will be looking at replacing GPL licensed modules in the Puppet-OpenStack project with Apache licensed alternatives. Some of the candidate module transitions include: antonlindstrom/puppet-powerdns -> sensson/powerdns duritong/puppet-sysctl -> thias/puppet-sysctl puppetlabs/puppetlabs-vcsrepo -> voxpupuli/puppet-git_resource Feedback and support on this is welcome, but where possible I would like for the students to be sending the patches up and collaborating to to help make these transitions (where possible, it’s my understanding that sysctl may pose serious challenges). Much of it should be good introductory work to our community workflow, and I'd like for them to have an opportunity to have a successful set of initial patches and contributions that have a positive lasting impact on the community. Thanks in advance, and my apologies for not communicating these efforts to the mailing list sooner. -Chris > On Feb 19, 2019, at 6:40 PM, Urbano Moreno, Eduardo wrote: > > Hello OpenStack community, > > I just wanted to go ahead and introduce myself, as I am a part of the NDSU Capstone group! > > My name is Eduardo Urbano and I am a Jr/Senior at NDSU. I am currently majoring in Computer Science, with no minor although that could change towards graduation. I am currently an intern at an electrical supply company here in Fargo, North Dakota known as Border States. I am an information security intern and I am enjoying it so far. I have learned many interesting security things and have also became a little paranoid of how easily someone can get hacked haha. Anyways, I am so excited to be on board and be working with OpenStack for this semester. So far I have learned many new things and I can’t wait to continue on learning. > > Thank you! > > > -Eduardo From zbitter at redhat.com Wed Feb 27 00:39:50 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Feb 2019 19:39:50 -0500 Subject: [tc] Questions for TC Candidates In-Reply-To: References: Message-ID: <622d787a-0b2c-6edd-4299-891c56751742@redhat.com> On 21/02/19 12:28 PM, Sylvain Bauza wrote: > > * If you had a magic wand and could inspire and make a single > >   sweeping architectural or software change across the services, > >   what would it be? For now, ignore legacy or upgrade concerns. > >   What role should the TC have in inspiring and driving such > >   changes? > > 1: Single agent on each compute node that allows for plugins to do >    all the work required. (Nova / Neutron / Vitrage / watcher / etc) > > 2: Remove RMQ where it makes sense - e.g. for nova-api -> nova-compute >    using something like HTTP(S) would make a lot of sense. 
> > 3: Unified Error codes, with a central registry, but at the very least >    each time we raise an error, and it gets returned a user can see >    where in the code base it failed. e.g. a header that has >    OS-ERROR-COMPUTE-3142, which means that someone can google for >    something more informative than the VM failed scheduling > > 4: OpenTracing support in all projects. > > 5: Possibly something with pub / sub where each project can listen for >    events and not create something like designate did using >    notifications. > > > That's the exact reason why I tried to avoid to answer about > architectural changes I'd like to see it done. Because when I read the > above lines, I'm far off any consensus on those. > To answer 1. and 2. from my Nova developer's hat, I'd just say that we > invented Cells v2 and Placement. > To be clear, the redesign wasn't coming from any other sources but our > users, complaining about scale. IMHO If we really want to see some > comittee driving us about feature requests, this should be the UC and > not the TC. > > Whatever it is, at the end of the day, we're all paid by our sponsors. > Meaning that any architectural redesign always hits the reality wall > where you need to convince your respective Product Managers of the great > benefit of the redesign. I'm maybe too pragmatic, but I remember so many > discussions we had about redesigns that I now feel we just need hands, > not ideas. C'mon, the question explicitly stipulated use of a magic wand, ignoring path dependence and throwing out backwards compat, but you're worried about the practicalities of convincing product managers??!? We need to stop reflexively stifling these discussions. An 'open' community where nobody is allowed to so much as spitball ideas in case somebody disagrees with them is unworthy of the name. - ZB From liliueecg at gmail.com Wed Feb 27 02:20:20 2019 From: liliueecg at gmail.com (Li Liu) Date: Tue, 26 Feb 2019 21:20:20 -0500 Subject: [Cyborg][IRC] The Cyborg IRC meeting will be held Wednesday at 0300 UTC Message-ID: Sorry for the late reminder The IRC meeting will be held Wednesday at 0300 UTC, which is 10:00 pm est(Tuesday) / 7:00 pm pst(Tuesday) /11 am Beijing time (Wednesday) -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Feb 27 03:35:56 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 27 Feb 2019 12:35:56 +0900 Subject: OpenInfra Day Vietnam 2019 - pre-CFP Message-ID: Hello, This is the pre-calling for presentations at the OpenInfra Day in Vietnam this year. If you love to visit Hanoi , the capital of Vietnam, and share your passion for the Open Infrastructure of any topic, please let me know by replying to this email. Below is the tentative information of the event: - Date: 31 August 2019 - Location: Hanoi, Vietnam We are working with the OpenStack Foundation to organize the Upstream Institute at the day so this will be a great opportunity for potential contributors to come and learn. There is also a couple of PTLs and projects core members have shown their interest in visiting Hanoi for this event. We will send out the official call-for-presentations after we've done with the logistic vendors and It would be around the beginning of May or sooner. If you have any questions, please do not hesitate to contact me. See you in Hanoi :) Yours, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dangtrinhnt at gmail.com Wed Feb 27 03:40:58 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 27 Feb 2019 12:40:58 +0900 Subject: OpenInfra Day Vietnam 2019 - pre-CFP In-Reply-To: References: Message-ID: P/S: you can have a look at our last year OpenInfra Day: https://2018.vietopenstack.org/ On Wed, Feb 27, 2019 at 12:35 PM Trinh Nguyen wrote: > Hello, > > This is the pre-calling for presentations at the OpenInfra Day in Vietnam > this year. If you love to visit Hanoi > , the capital of Vietnam, and share > your passion for the Open Infrastructure of any topic, please let me know > by replying to this email. Below is the tentative information of the event: > > - Date: 31 August 2019 > - Location: Hanoi, Vietnam > > We are working with the OpenStack Foundation to organize the Upstream > Institute at the day so this will be a great opportunity for potential > contributors to come and learn. There is also a couple of PTLs and projects > core members have shown their interest in visiting Hanoi for this event. > > We will send out the official call-for-presentations after we've done with > the logistic vendors and It would be around the beginning of May or sooner. > > If you have any questions, please do not hesitate to contact me. > > See you in Hanoi :) > > Yours, > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedarthambharath at gmail.com Wed Feb 27 04:26:59 2019 From: vedarthambharath at gmail.com (Vedartham Bharath) Date: Wed, 27 Feb 2019 09:56:59 +0530 Subject: [dev][swift] Regarding fstab entries in object servers Message-ID: Hi all, This is with regards to an issue in the documentation of Openstack Swift and mostly an issue to system administrators operating Swift.(Sorry if the subject tags are wrong!!) In the Swift Multiserver docs, When setting up an object storage server, the docs tell to use the disk labels in the disk's /etc/fstab entries. eg: /dev/sda /srv/node/sda xfs noatime............. I feel that we should encourage people to use UUIDs rather than disk labels. I have had a lot of issues with my Swift storage servers crashing whenever I reboot them. I have found out that the issue is with the /etc/fstab as the disk labels change whenever a disk is removed or added(depends on the OS's "mood" i.e boot order). I don't want to change my ring configuration every time I reboot my storage servers. Thank you Bharath -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Wed Feb 27 04:57:38 2019 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Wed, 27 Feb 2019 11:57:38 +0700 Subject: [tc] [election] Candidate question: growth of projects In-Reply-To: References: Message-ID: On 26 Feb 2019, 22:59 +0700, Zane Bitter , wrote: > We should be able to out-innovate any one company, even a big one. > It makes me sad that after 10 years we haven't built the base to make > OpenStack attractive as *the* place to do those kinds of things. +1 > On the other hand, many of those services we do have are only lightly > maintained. That's not hurting anybody (except perhaps the folks stuck > maintaining them), but in many cases we might just be delaying the > inevitable. 
And some of those services are a feature masquerading as a > separate service, that operate as a separate team because they couldn't > find another way to get code into where they needed it (usually on the > compute node) - those might actually be hurting because they paper over > problems with how our community works that might better be addressed > head-on. I think we need to be very careful with the definition of “lightly maintained”. IMHO, the number of patches isn't always a good indicator. I can tell about my project, Mistral. Yes, we haven’t had an impressive number of patches merged in the last 3 months, but mainly because the key contributors (mainly from Nokia, Red Hat, NetCraker and OVH) were focused downstream tasks around it. There were also some internal changes in the companies from which we have contributors and now we’re trying to deal with that and find a new contribution model that would keep moving the project forward. But that all *doesn’t* mean that the project is not needed anywhere. Several huge corporates and lots of smaller companies use it in production successfully. They make money on it. I didn’t want it to sound as a commercial though, I wanted to deliver the message that “lightly maintained” thing can really be subtle. > > If you had to make a single declaration about growth in the number > > of projects would you prefer to see (and why, of course): > > > > * More projects as required by demand. > > * Slower or no growth to focus on what we've got. > > * Trim the number of projects to "get back to our roots". > > * Something else. Just want to clarify this. What’s our main criteria to make this decision upon? What’s the main pain point that triggers thinking about that? Has the (subjectively) big number of projects made it hard to maintain infrastructure (CI, releases etc.), i.e. it led to technical issues and labor costs? Or it’s just image, or discomfort that not all of these projects are well maintained anymore? > > Do you think the number of projects has any impact (positive or > > negative) on our overall ability to get things done? > > Not really. People will work on the problems they have. If OpenStack > doesn't have a project to solve their problem then they won't work on > OpenStack - they're not going to go work on a different OpenStack > project instead. +1. Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Wed Feb 27 05:53:23 2019 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 27 Feb 2019 16:53:23 +1100 Subject: [nova][keystone] project tags in context for scheduling Message-ID: <37E79D0F-D085-4758-84BC-158798055522@gmail.com> Hi nova and keystone devs, We have a use case where we want to schedule a bunch of projects to specific compute nodes only. The aggregate_multitenancy_isolation isn’t viable because in some cases we will want thousands of projects to go to some hardware and it isn’t manageable/scaleable to do this in nova and aggregates. (Maybe it is and I’m being silly?) The best way I can think of doing this is to tag the keystone projects (or possibly set a custom property) and then write a custom nova scheduler filter to use the tag/property. The only issue I have is that tags/properties aren’t available to nova in it’s RequestContext. Can you think of a better way or a way that would work now? If this does in fact sound like a good way forward does authtoken middleware send this data downwards so nova could consume? 
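As a very rough sketch of what such an out-of-tree filter could look like (hypothetical code: the 'dedicated-pool' tag and the lookup_project_tags() helper are made up, only the BaseHostFilter plumbing is the standard nova filter API):

    # purely illustrative out-of-tree filter, not something nova ships
    from nova.scheduler import filters

    class ProjectTagFilter(filters.BaseHostFilter):
        """Send projects tagged 'dedicated-pool' to dedicated hosts only."""

        def host_passes(self, host_state, spec_obj):
            # spec_obj.project_id is already available to filters; the tag
            # lookup helper is hypothetical and would need a cached keystone
            # client, which is exactly the missing piece being asked about.
            tags = lookup_project_tags(spec_obj.project_id)
            if 'dedicated-pool' not in tags:
                return True
            return any(agg.metadata.get('dedicated') == 'true'
                       for agg in host_state.aggregates)

The awkward part is the tag lookup itself: nova does not hand the scheduler any keystone tag/property data today, so a filter like this would need its own (cached) keystone round trip.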
If it does I assume it would be then as simple as adding these to the nova RequestContext? Thanks, Sam From manuel.sb at garvan.org.au Wed Feb 27 07:00:54 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Wed, 27 Feb 2019 07:00:54 +0000 Subject: sriov bonding Message-ID: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> Hi, Is there a documentation that explains how to setup bonding on SR-IOV neutron? Thank you Manuel NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Wed Feb 27 07:13:06 2019 From: saphi070 at gmail.com (Sa Pham) Date: Wed, 27 Feb 2019 16:13:06 +0900 Subject: sriov bonding In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> Message-ID: As I remember, Currently there is no mechanism to setup bonding with SR-IOV. On Wed, Feb 27, 2019 at 4:09 PM Manuel Sopena Ballesteros < manuel.sb at garvan.org.au> wrote: > Hi, > > > > Is there a documentation that explains how to setup bonding on SR-IOV > neutron? > > > > Thank you > > > > Manuel > NOTICE > Please consider the environment before printing this email. This message > and any attachments are intended for the addressee named and may contain > legally privileged/confidential/copyright information. If you are not the > intended recipient, you should not read, use, disclose, copy or distribute > this communication. If you have received this message in error please > notify us at once by return email and then delete both messages. We accept > no liability for the distribution of viruses or similar in electronic > communications. This notice should not be removed. > -- Sa Pham Dang Cloud RnD Team - VCCloud Phone/Telegram: 0986.849.582 Skype: great_bn -------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Wed Feb 27 07:38:22 2019 From: bence.romsics at gmail.com (Bence Romsics) Date: Wed, 27 Feb 2019 08:38:22 +0100 Subject: sriov bonding In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> Message-ID: Hi, On Wed, Feb 27, 2019 at 8:00 AM Manuel Sopena Ballesteros wrote: > Is there a documentation that explains how to setup bonding on SR-IOV neutron? Not right now to my knowledge, but I remember seeing effort to design and introduce this feature. 
I think there may have been multiple rounds of design already, this is maybe the last one that's still ongoing: Neutron side: https://bugs.launchpad.net/neutron/+bug/1809037 Nova side: https://blueprints.launchpad.net/nova/+spec/schedule-vm-nics-to-different-pf https://blueprints.launchpad.net/nova/+spec/sriov-bond Hope that helps, Bence From eumel at arcor.de Wed Feb 27 08:00:07 2019 From: eumel at arcor.de (Frank Kloeker) Date: Wed, 27 Feb 2019 09:00:07 +0100 Subject: [I18n] Team meeting Feb 28 2019 16:00 UTC Message-ID: <8b30c70b5b6c638fa0f612a69152c98e@arcor.de> Hello, after my vacation I want to make the I18n team meeting on Thursday, 28th of Feb and hopefully I'll be there on time. There are several topics on the agenda [1]. Feel free to add your topics too. kind regards Frank [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting From christian.zunker at codecentric.cloud Wed Feb 27 08:09:24 2019 From: christian.zunker at codecentric.cloud (Christian Zunker) Date: Wed, 27 Feb 2019 09:09:24 +0100 Subject: [ceilometer] radosgw pollster In-Reply-To: References: Message-ID: Hi Florian, have you tried different permissions for your ceilometer user in radosgw? According to the docs you need an admin user: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage Our user has these caps: usage=read,write;metadata=read,write;users=read,write;buckets=read,write We also had to add the requests-aws pip package to query radosgw from ceilometer: https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html Christian Am Di., 26. Feb. 2019 um 13:15 Uhr schrieb Florian Engelmann < florian.engelmann at everyware.ch>: > Hi Christian, > > Am 2/26/19 um 11:00 AM schrieb Christian Zunker: > > Hi Florian, > > > > which version of OpenStack are you using? > > The radosgw metric names were different in some versions: > > https://bugs.launchpad.net/ceilometer/+bug/1726458 > > we do use Rocky and Ceilometer 11.0.1. I am still lost with that error. > As far as I am able to understand python it looks like the error is > happening in polling.manager line 222: > > > https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 > > But I do not understand why. I tried to enable debug logging but the > error does not log any additional information. > The poller is not even trying to reach/poll our RadosGWs. Looks like > that manger is blocking those polls. > > All the best, > Florian > > > > > > Christian > > > > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann > > >>: > > > > Hi, > > > > I failed to poll any usage data from our radosgw. I get > > > > 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] > Polling > > pollster radosgw.containers.objects in the context of > > radosgw_300s_pollsters > > 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] > Prevent > > pollster radosgw.containers.objects from polling [ > description=, > > domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > > > [...] 
> > > > id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self': > > u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source > > radosgw_300s_pollsters anymore!: PollsterPermanentError > > > > Configurations like: > > cat polling.yaml > > --- > > sources: > > - name: radosgw_300s_pollsters > > interval: 300 > > meters: > > - radosgw.usage > > - radosgw.objects > > - radosgw.objects.size > > - radosgw.objects.containers > > - radosgw.containers.objects > > - radosgw.containers.objects.size > > > > > > Also tried radosgw.api.requests instead of radowsgw.usage. > > > > ceilometer.conf > > [...] > > [service_types] > > radosgw = object-store > > > > [rgw_admin_credentials] > > access_key = xxxxx0Z0xxxxxxxxxxxx > > secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > > > [rgw_client] > > implicit_tenants = true > > > > Endpoints: > > | xxxxxxx | region | swift | object-store | True | admin > > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > > | xxxxxxx | region | swift | object-store | True | > > internal > > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > > | xxxxxxx | region | swift | object-store | True | > public > > | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s | > > > > Ceilometer user: > > { > > "user_id": "ceilometer", > > "display_name": "ceilometer", > > "email": "", > > "suspended": 0, > > "max_buckets": 1000, > > "auid": 0, > > "subusers": [], > > "keys": [ > > { > > "user": "ceilometer", > > "access_key": "xxxxxxxxxxxxxxxxxx", > > "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" > > } > > ], > > "swift_keys": [], > > "caps": [ > > { > > "type": "buckets", > > "perm": "read" > > }, > > { > > "type": "metadata", > > "perm": "read" > > }, > > { > > "type": "usage", > > "perm": "read" > > }, > > { > > "type": "users", > > "perm": "read" > > } > > ], > > "op_mask": "read, write, delete", > > "default_placement": "", > > "placement_tags": [], > > "bucket_quota": { > > "enabled": false, > > "check_on_raw": false, > > "max_size": -1, > > "max_size_kb": 0, > > "max_objects": -1 > > }, > > "user_quota": { > > "enabled": false, > > "check_on_raw": false, > > "max_size": -1, > > "max_size_kb": 0, > > "max_objects": -1 > > }, > > "temp_url_keys": [], > > "type": "rgw" > > } > > > > > > radosgw config: > > [client.rgw.xxxxxxxxxxx] > > host = somehost > > rgw frontends = "civetweb port=7480 num_threads=512" > > rgw num rados handles = 8 > > rgw thread pool size = 512 > > rgw cache enabled = true > > rgw dns name = s3.xxxxxx.xxx > > rgw enable usage log = true > > rgw usage log tick interval = 30 > > rgw realm = public > > rgw zonegroup = xxx > > rgw zone = xxxxx > > rgw resolve cname = False > > rgw usage log flush threshold = 1024 > > rgw usage max user shards = 1 > > rgw usage max shards = 32 > > rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > > rgw_keystone_admin_domain = default > > rgw_keystone_admin_project = service > > rgw_keystone_admin_user = swift > > rgw_keystone_admin_password = > > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > > rgw_keystone_accepted_roles = member,_member_,admin > > rgw_keystone_accepted_admin_roles = admin > > rgw_keystone_api_version = 3 > > rgw_keystone_verify_ssl = false > > rgw_keystone_implicit_tenants = true > > rgw_keystone_admin_tenant = default > > rgw_keystone_revocation_interval = 0 > > rgw_keystone_token_cache_size = 0 > > rgw_s3_auth_use_keystone = true > > rgw_max_attr_size = 1024 > > rgw_max_attrs_num_in_req = 32 > > rgw_max_attr_name_len = 64 > > 
rgw_swift_account_in_url = true > > rgw_swift_versioning_enabled = true > > rgw_enable_apis = s3,swift,swift_auth,admin > > rgw_swift_enforce_content_length = true > > > > > > > > > > Any idea whats going on? > > > > All the best, > > Florian > > > > > > > > -- > > EveryWare AG > Florian Engelmann > Senior UNIX Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > web: http://www.everyware.ch > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Wed Feb 27 08:33:15 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 27 Feb 2019 09:33:15 +0100 Subject: [ops] [nova] Wrong reported memory hypervisor usage Message-ID: In the dashboard of my OpenStack Ocata cloud I see that the reported memory usage is wrong for the hypervisors. As far as I understand that information should correspond to the "used_now" field of the "nova host-describe" command. And indeed considering an hypervisor: # nova host-describe cld-blu-01.cloud.pd.infn.it +-----------------------------+----------------------------------+-----+-----------+---------+ | HOST | PROJECT | cpu | memory_mb | disk_gb | +-----------------------------+----------------------------------+-----+-----------+---------+ | cld-blu-01.cloud.pd.infn.it | (total) | 8 | 32722 | 241 | | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 | 16259 | 23 | | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 | 38912 | 95 | | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 | 2048 | 20 | | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 | 32768 | 50 | | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 | 4096 | 25 | +-----------------------------+----------------------------------+-----+-----------+---------+ So for the memory it reports 16259 for used_now, while as far as far I understand it should be 38912 + the memory used by the hypervisor Am I missing something ? Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Wed Feb 27 09:16:09 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Wed, 27 Feb 2019 10:16:09 +0100 Subject: [charms] Proposing Pete VanderGiessen to the Charms core team Message-ID: Hello all, I would like to propose Pete VanderGiessen as a member of the Charms core team. -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.ames at canonical.com Wed Feb 27 09:27:10 2019 From: david.ames at canonical.com (David Ames) Date: Wed, 27 Feb 2019 10:27:10 +0100 Subject: [charms] Proposing Pete VanderGiessen to the Charms core team In-Reply-To: References: Message-ID: On Wed, Feb 27, 2019 at 10:17 AM Frode Nordahl wrote: > > Hello all, > > I would like to propose Pete VanderGiessen as a member of the Charms core team. > > -- > Frode Nordahl +1 Welcome, Pete. -- David Ames From james.page at canonical.com Wed Feb 27 09:33:20 2019 From: james.page at canonical.com (James Page) Date: Wed, 27 Feb 2019 10:33:20 +0100 Subject: [charms] Proposing Pete VanderGiessen to the Charms core team In-Reply-To: References: Message-ID: +1 On Wed, Feb 27, 2019 at 10:23 AM Frode Nordahl wrote: > Hello all, > > I would like to propose Pete VanderGiessen as a member of the Charms core > team. > > -- > Frode Nordahl > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.macnaughton at canonical.com Wed Feb 27 09:44:16 2019 From: chris.macnaughton at canonical.com (Chris MacNaughton) Date: Wed, 27 Feb 2019 10:44:16 +0100 Subject: [charms] Proposing Pete VanderGiessen to the Charms core team In-Reply-To: References: Message-ID: +1 from me, Welcome Pete! On Wed, Feb 27, 2019 at 10:23 AM Frode Nordahl wrote: > Hello all, > > I would like to propose Pete VanderGiessen as a member of the Charms core > team. > > -- > Frode Nordahl > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Wed Feb 27 09:46:56 2019 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Wed, 27 Feb 2019 10:46:56 +0100 Subject: [charms] Proposing Pete VanderGiessen to the Charms core team In-Reply-To: References: Message-ID: Yup, +1 from me too. On Wed, Feb 27, 2019 at 10:23 AM Frode Nordahl wrote: > Hello all, > > I would like to propose Pete VanderGiessen as a member of the Charms core > team. > > -- > Frode Nordahl > -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Feb 27 09:51:52 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 27 Feb 2019 09:51:52 +0000 Subject: [scientific-sig] IRC meeting 1100 UTC: Edge computing and Scientific use cases Message-ID: <3775939A-22A3-43CC-AF07-4660753047FE@telfer.org> Hi All - We have an IRC meeting at 1100 UTC (about an hour’s time) in channel #openstack-meeting. Everyone is welcome. Today’s agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_27th_2019 Today’s main event is that we have Ildikó Vancsa joining us to talk about edge computing use cases and find overlap with Scientific SIG use cases. Cheers, Stig From bdobreli at redhat.com Wed Feb 27 10:15:03 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 27 Feb 2019 11:15:03 +0100 Subject: [scientific-sig] IRC meeting 1100 UTC: Edge computing and Scientific use cases In-Reply-To: <3775939A-22A3-43CC-AF07-4660753047FE@telfer.org> References: <3775939A-22A3-43CC-AF07-4660753047FE@telfer.org> Message-ID: On 27.02.2019 10:51, Stig Telfer wrote: > Hi All - > > We have an IRC meeting at 1100 UTC (about an hour’s time) in channel #openstack-meeting. Everyone is welcome. > > Today’s agenda is here: > > https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_27th_2019 > > Today’s main event is that we have Ildikó Vancsa joining us to talk about edge computing use cases and find overlap with Scientific SIG use cases. > > Cheers, > Stig > > Edge computing is a *great* topic for lot of future research work indeed. Thank you for this useful announcement, appreciated! I'll attend at least the first half of it, and in case I'll miss the agenda item that I placed at the end of the list, please be kind to review that research request with my absence as well. Thanks! -- Best regards, Bogdan Dobrelya, Irc #bogdando From cdent+os at anticdent.org Wed Feb 27 10:28:11 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 27 Feb 2019 10:28:11 +0000 (GMT) Subject: [tc] Questions for TC Candidates In-Reply-To: <622d787a-0b2c-6edd-4299-891c56751742@redhat.com> References: <622d787a-0b2c-6edd-4299-891c56751742@redhat.com> Message-ID: On Tue, 26 Feb 2019, Zane Bitter wrote: > We need to stop reflexively stifling these discussions. 
An 'open' community > where nobody is allowed to so much as spitball ideas in case somebody > disagrees with them is unworthy of the name. Bless you. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From bdobreli at redhat.com Wed Feb 27 10:31:38 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 27 Feb 2019 11:31:38 +0100 Subject: [placement][TripleO] zuul job dependencies for greater good? In-Reply-To: <8736oamqyi.fsf@meyer.lemoncheese.net> References: <179B50E1-0FA9-4801-AAB5-B65832BF4DFB@redhat.com> <1c67f431ebceb62e7d867af530201f5836531b1f.camel@redhat.com> <5be33695-5554-1d40-b899-06dfaf3b0a24@fried.cc> <20190226002048.GA10439@fedora19.localdomain> <8736oamqyi.fsf@meyer.lemoncheese.net> Message-ID: On 26.02.2019 17:53, James E. Blair wrote: > Bogdan Dobrelya writes: > >> I attempted [0] to do that for tripleo-ci, but zuul was (and still >> does) complaining for some weird graphs building things :/ >> >> See also the related topic [1] from the past. >> >> [0] https://review.openstack.org/#/c/568543 >> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127869.html > > Thank you for linking to [1]. It's worth re-reading. Especially the > part at the end. > > -Jim > Yes, the part at the end is the best indeed. I'd amend the time priorities graph though like that: CPU-time < a developer time < developers time That means burning some CPU and nodes in a pool for a waste might benefit a developer, but saving some CPU and nodes in a pool would benefit *developers* in many projects as they'd get the jobs results off the waiting check queues faster :) -- Best regards, Bogdan Dobrelya, Irc #bogdando From smooney at redhat.com Wed Feb 27 10:36:21 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 27 Feb 2019 10:36:21 +0000 Subject: sriov bonding In-Reply-To: References: <9D8A2486E35F0941A60430473E29F15B017E860EDF@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <553f272f8be80ef6c29b0cc47a548e52bd7a1c2a.camel@redhat.com> On Wed, 2019-02-27 at 08:38 +0100, Bence Romsics wrote: > Hi, > > On Wed, Feb 27, 2019 at 8:00 AM Manuel Sopena Ballesteros > wrote: > > Is there a documentation that explains how to setup bonding on SR-IOV neutron? > > Not right now to my knowledge, but I remember seeing effort to design > and introduce this feature. I think there may have been multiple > rounds of design already, this is maybe the last one that's still > ongoing: > > Neutron side: > https://bugs.launchpad.net/neutron/+bug/1809037 > > Nova side: > https://blueprints.launchpad.net/nova/+spec/schedule-vm-nics-to-different-pf > https://blueprints.launchpad.net/nova/+spec/sriov-bond most of the previous attempts have not proceeded as they have tried to hide the bonding from nova's and neutron's data models vai configs or opaque strings. for bonding to really be supported at the openstack level we will need to take a similar a approch to trunk ports. e.g. we create a logical bond port and a set of bond peer ports at the neutron api level then we attach the bond port to the vm. currently im not aware of any proposal that really has tracktion. you can manually create a bond in the guest but you cannot today guarentee that the vf will come from different pfs. one of the reason i think we need to go the logical bond port direction is that it will allow us to constuct resouce requests using the fucntionality that is being added for bandwidth based schduleing. that will make expressing affinity and anti affinty simpler useing the request groups syntax. 
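purely as an illustration of that request-groups syntax (this is the existing placement granular-groups API, not an implemented bonding interface), two bandwidth-providing groups that must land on different resource providers, i.e. the shape a VF anti-affinity request across PFs would take, can already be expressed as:

    GET /allocation_candidates?resources1=NET_BW_EGR_KILOBIT_PER_SEC:1000
        &resources2=NET_BW_EGR_KILOBIT_PER_SEC:1000
        &group_policy=isolate

(assuming a placement microversion with granular groups, 1.25 or later). group_policy=isolate is what forces the numbered groups onto separate providers; what is still missing is the neutron/nova modelling of a logical bond port that would generate such a request.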
i personally have stopped trying to add bond support untill that bandwith based schduling effort is finished. > > Hope that helps, > Bence > From sbauza at redhat.com Wed Feb 27 10:38:46 2019 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 27 Feb 2019 11:38:46 +0100 Subject: [tc] Questions for TC Candidates In-Reply-To: <622d787a-0b2c-6edd-4299-891c56751742@redhat.com> References: <622d787a-0b2c-6edd-4299-891c56751742@redhat.com> Message-ID: Le mer. 27 févr. 2019 à 01:44, Zane Bitter a écrit : > On 21/02/19 12:28 PM, Sylvain Bauza wrote: > > > * If you had a magic wand and could inspire and make a single > > > sweeping architectural or software change across the services, > > > what would it be? For now, ignore legacy or upgrade concerns. > > > What role should the TC have in inspiring and driving such > > > changes? > > > > 1: Single agent on each compute node that allows for plugins to do > > all the work required. (Nova / Neutron / Vitrage / watcher / etc) > > > > 2: Remove RMQ where it makes sense - e.g. for nova-api -> > nova-compute > > using something like HTTP(S) would make a lot of sense. > > > > 3: Unified Error codes, with a central registry, but at the very > least > > each time we raise an error, and it gets returned a user can see > > where in the code base it failed. e.g. a header that has > > OS-ERROR-COMPUTE-3142, which means that someone can google for > > something more informative than the VM failed scheduling > > > > 4: OpenTracing support in all projects. > > > > 5: Possibly something with pub / sub where each project can listen > for > > events and not create something like designate did using > > notifications. > > > > > > That's the exact reason why I tried to avoid to answer about > > architectural changes I'd like to see it done. Because when I read the > > above lines, I'm far off any consensus on those. > > To answer 1. and 2. from my Nova developer's hat, I'd just say that we > > invented Cells v2 and Placement. > > To be clear, the redesign wasn't coming from any other sources but our > > users, complaining about scale. IMHO If we really want to see some > > comittee driving us about feature requests, this should be the UC and > > not the TC. > > > > Whatever it is, at the end of the day, we're all paid by our sponsors. > > Meaning that any architectural redesign always hits the reality wall > > where you need to convince your respective Product Managers of the great > > benefit of the redesign. I'm maybe too pragmatic, but I remember so many > > discussions we had about redesigns that I now feel we just need hands, > > not ideas. > > C'mon, the question explicitly stipulated use of a magic wand, ignoring > path dependence and throwing out backwards compat, but you're worried > about the practicalities of convincing product managers??!? > > We need to stop reflexively stifling these discussions. An 'open' > community where nobody is allowed to so much as spitball ideas in case > somebody disagrees with them is unworthy of the name. > We are post the campaign period so I won't argue but see my other emails after this one, hopefully you will see that I'm not against discussing about architectural concerns, just not here and not by only the TC members. Sylvain (stopping now the campaign) > - ZB > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Wed Feb 27 10:47:10 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 27 Feb 2019 10:47:10 +0000 Subject: [ops] [nova] Wrong reported memory hypervisor usage In-Reply-To: References: Message-ID: <2f0e39c939630d6f009f146a7d2f6f6ce6d7a12f.camel@redhat.com> On Wed, 2019-02-27 at 09:33 +0100, Massimo Sgaravatto wrote: > In the dashboard of my OpenStack Ocata cloud I see that the reported memory usage is wrong for the hypervisors. > As far as I understand that information should correspond to the "used_now" field of the "nova host-describe" nova host-describe has been removed in later version of nova. > command. And indeed considering an hypervisor: > > > # nova host-describe cld-blu-01.cloud.pd.infn.it > +-----------------------------+----------------------------------+-----+-----------+---------+ > | HOST | PROJECT | cpu | memory_mb | disk_gb | > +-----------------------------+----------------------------------+-----+-----------+---------+ > | cld-blu-01.cloud.pd.infn.it | (total) | 8 | 32722 | 241 | > | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 | 16259 | 23 | > | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 | 38912 | 95 | > | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 | 2048 | 20 | > | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 | 32768 | 50 | > | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 | 4096 | 25 | > +-----------------------------+----------------------------------+-----+-----------+---------+ > > > So for the memory it reports 16259 for used_now, while as far as far I understand it should be 38912 + the memory used > by the hypervisor > > Am I missing something ? the hypervisors api's memory_mb_used is (reserved memory + total of all memory associtate with instance on the host). i would have expected used_now to reflect that. what does "openstack hypervisor show cld-blu-01.cloud.pd.infn.it" print. > > Thanks, Massimo > From massimo.sgaravatto at gmail.com Wed Feb 27 11:16:01 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 27 Feb 2019 12:16:01 +0100 Subject: [ops] [nova] Wrong reported memory hypervisor usage In-Reply-To: <2f0e39c939630d6f009f146a7d2f6f6ce6d7a12f.camel@redhat.com> References: <2f0e39c939630d6f009f146a7d2f6f6ce6d7a12f.camel@redhat.com> Message-ID: Thanks for the prompt feedback. 
This [*] is the output of "openstack hypervisor show cld-blu-01.cloud.pd.infn.it" Let me also add that if I restart openstack-nova-compute on the hypervisor, then "nova host-describe" shows the right information for a few seconds: [root at cld-ctrl-01 ~]# nova host-describe cld-blu-01.cloud.pd.infn.it +-----------------------------+----------------------------------+-----+-----------+---------+ | HOST | PROJECT | cpu | memory_mb | disk_gb | +-----------------------------+----------------------------------+-----+-----------+---------+ | cld-blu-01.cloud.pd.infn.it | (total) | 8 | 32722 | 241 | | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 | 39424 | 95 | | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 | 38912 | 95 | | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 | 2048 | 20 | | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 | 32768 | 50 | | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 | 4096 | 25 | +-----------------------------+----------------------------------+-----+-----------+---------+ but after a while it goes back publishing 16256 MB as used_now Thanks, Massimo [*] # openstack hypervisor show cld-blu-01.cloud.pd.infn.it +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | aggregates | [u'Unipd-AdminTesting-Unipd', u'Unipd-HPC-Physics', u'Unipd-model-glasses---DISC', u'Unipd-DECAMP', u'Unipd-Biodiversity-Macro-biomes', u'Unipd- | | | CMB4PrimordialNonGaussianity', u'Unipd-Links-in-Channel', u'Unipd-TC4SAP', u'Unipd-Diabetes_Risk', u'Unipd-LabTrasporti', u'Unipd-Hydrological_DA', u'Unipd- | | | Smart_Enterprise_in_Cloud', u'Unipd-DEM_Sim_ICEA', u'Unipd-Plasmonics', u'Unipd-QuBiMM', u'Unipd-MedComp', u'Unipd-BigDataComputingCourse', u'Unipd-DSB---Sci.-Biomed', u | | | 'Unipd-Notion', u'Unipd-The_role_of_trade-offs_in_competitive_ecosystems', u'Unipd-MMS_Cloud', u'Unipd-SID2016', u'Unipd-QuantumFuture', u'Unipd-Link_Translocation-DFA', | | | u'Unipd-Hopping_Transport_in_Lithium_Niobate', u'Unipd-Negapedia', u'Unipd-SIGNET-ns3', u'Unipd-SIGNET-MATLAB', u'Unipd-QST', u'Unipd-AbinitioTransport', u'Unipd- | | | CalcStat', u'Unipd-PhysicsOfData-students', u'Unipd-DiSePaM', u'Unipd-Few-mode-optical-fibers', u'Unipd-cleva'] | | cpu_info | {"vendor": "Intel", "model": "SandyBridge-IBRS", "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", "syscall", "tsc-deadline", "dtes64", "stibp", | | | "msr", "xsave", "vmx", "xtpr", "cmov", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "tsc", "nx", "fxsr", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "cx16", | | | "pcid", "vme", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "dca", "lahf_lm", "pdcm", "mca", "pdpe1gb", "apic", "sse", "pse", "ds", "invtsc", "pni", "tm2", | | | "aes", "sse2", "ss", "ds_cpl", "arat", "acpi", "spec-ctrl", "fpu", "ssbd", "pse36", "mtrr", "popcnt", "x2apic"], "topology": {"cores": 4, "cells": 2, "threads": 1, | | | "sockets": 1}} | | current_workload | 0 | | disk_available_least | 124 | | free_disk_gb | 146 | | free_ram_mb | -6702 | | host_ip | 192.168.60.150 | | host_time | 11:56:28 | | hypervisor_hostname | cld-blu-01.cloud.pd.infn.it | | hypervisor_type | QEMU | | 
hypervisor_version | 2010000 | | id | 132 | | load_average | 0.11, 0.19, 0.21 | | local_gb | 241 | | local_gb_used | 23 | | memory_mb | 32722 | | memory_mb_used | 16255 | | running_vms | 4 | | service_host | cld-blu-01.cloud.pd.infn.it | | service_id | 171 | | state | up | | status | enabled | | uptime | 85 days, 34 min | | users | 1 | | vcpus | 8 | | vcpus_used | 19 | +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ [root at cld-ctrl-01 ~]# On Wed, Feb 27, 2019 at 11:47 AM Sean Mooney wrote: > On Wed, 2019-02-27 at 09:33 +0100, Massimo Sgaravatto wrote: > > In the dashboard of my OpenStack Ocata cloud I see that the reported > memory usage is wrong for the hypervisors. > > As far as I understand that information should correspond to the > "used_now" field of the "nova host-describe" > nova host-describe has been removed in later version of nova. > > command. And indeed considering an hypervisor: > > > > > > # nova host-describe cld-blu-01.cloud.pd.infn.it > > > +-----------------------------+----------------------------------+-----+-----------+---------+ > > | HOST | PROJECT | cpu | > memory_mb | disk_gb | > > > +-----------------------------+----------------------------------+-----+-----------+---------+ > > | cld-blu-01.cloud.pd.infn.it | (total) | 8 > | 32722 | 241 | > > | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 > | 16259 | 23 | > > | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 > | 38912 | 95 | > > | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 > | 2048 | 20 | > > | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 > | 32768 | 50 | > > | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 > | 4096 | 25 | > > > +-----------------------------+----------------------------------+-----+-----------+---------+ > > > > > > So for the memory it reports 16259 for used_now, while as far as far I > understand it should be 38912 + the memory used > > by the hypervisor > > > > Am I missing something ? > the hypervisors api's memory_mb_used is (reserved memory + total of all > memory associtate with instance on the host). > i would have expected used_now to reflect that. > what does "openstack hypervisor show cld-blu-01.cloud.pd.infn.it" print. > > > > > Thanks, Massimo > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Feb 27 12:33:43 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 27 Feb 2019 13:33:43 +0100 Subject: [neutron][ironic] Remove deprecated option 'external_network_bridge' from neutron config Message-ID: Hi Ironic devs, Some time ago I started removing of very old and deprecated since long time option ‚external_network_bridge’ from Neutron. Main patch for that is in [1]. All needed work to remove that is almost done. It was blocked by Tony Breeds because Ironic is still using this option. So I would like to ask You if You have any plans to remove usage of this option that we will be able to remove it completely? 
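For anyone who has not looked at it in a while, the option lives in the L3 agent config, and the long-standing recommendation has been to leave it empty and map the external network to a provider bridge in the L2 agent instead, roughly like this (assuming the usual ML2/OVS file names):

    # l3_agent.ini (deprecated setting being removed)
    [DEFAULT]
    external_network_bridge =

    # openvswitch_agent.ini (the replacement: a provider bridge mapping)
    [ovs]
    bridge_mappings = public:br-ex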
Thx in advance for any info about that :) [1] https://review.openstack.org/#/c/567369/ — Slawek Kaplonski Senior software engineer Red Hat From derekh at redhat.com Wed Feb 27 12:50:33 2019 From: derekh at redhat.com (Derek Higgins) Date: Wed, 27 Feb 2019 12:50:33 +0000 Subject: [neutron][ironic] Remove deprecated option 'external_network_bridge' from neutron config In-Reply-To: References: Message-ID: On Wed, 27 Feb 2019 at 12:37, Slawomir Kaplonski wrote: > > Hi Ironic devs, > > Some time ago I started removing of very old and deprecated since long time option ‚external_network_bridge’ from Neutron. > Main patch for that is in [1]. > All needed work to remove that is almost done. It was blocked by Tony Breeds because Ironic is still using this option. > So I would like to ask You if You have any plans to remove usage of this option that we will be able to remove it completely? We stopped using it a few weeks ago[1] at the time I tested it with a depends-on on the patch that removed the option from devstack[2], so assuming this was the correct patch to depend on we should be good to go, I also commented on the devstack patch to say we were good to move forward, sorry it looks like the message didn't get where it needed to be. 1 - https://review.openstack.org/#/c/621146/17 2 - https://review.openstack.org/#/c/619067/ > Thx in advance for any info about that :) > > [1] https://review.openstack.org/#/c/567369/ > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From pierre at stackhpc.com Wed Feb 27 13:18:03 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 27 Feb 2019 13:18:03 +0000 Subject: [scientific-sig] IRC meeting 1100 UTC: Edge computing and Scientific use cases In-Reply-To: References: <3775939A-22A3-43CC-AF07-4660753047FE@telfer.org> Message-ID: Hello Bogdan, In the context of data management in edge computing, you may be interested by research done by the Discovery Initiative: http://beyondtheclouds.github.io/publications.html I am adding Adrien Lèbre to the thread, he may be able to point you to the publications most relevant to your whitepaper draft. Best wishes, Pierre Riteau On Wed, 27 Feb 2019 at 10:21, Bogdan Dobrelya wrote: > > On 27.02.2019 10:51, Stig Telfer wrote: > > Hi All - > > > > We have an IRC meeting at 1100 UTC (about an hour’s time) in channel #openstack-meeting. Everyone is welcome. > > > > Today’s agenda is here: > > > > https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_27th_2019 > > > > Today’s main event is that we have Ildikó Vancsa joining us to talk about edge computing use cases and find overlap with Scientific SIG use cases. > > > > Cheers, > > Stig > > > > > > Edge computing is a *great* topic for lot of future research work indeed. > Thank you for this useful announcement, appreciated! > I'll attend at least the first half of it, and in case I'll miss the > agenda item that I placed at the end of the list, please be kind to > review that research request with my absence as well. Thanks! > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > From skaplons at redhat.com Wed Feb 27 13:20:15 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 27 Feb 2019 14:20:15 +0100 Subject: [neutron][ironic] Remove deprecated option 'external_network_bridge' from neutron config In-Reply-To: References: Message-ID: Thx Derek for this info. That is good news because now we can finally move forward with this in Neutron :) > Wiadomość napisana przez Derek Higgins w dniu 27.02.2019, o godz. 
13:50: > > On Wed, 27 Feb 2019 at 12:37, Slawomir Kaplonski wrote: >> >> Hi Ironic devs, >> >> Some time ago I started removing of very old and deprecated since long time option ‚external_network_bridge’ from Neutron. >> Main patch for that is in [1]. >> All needed work to remove that is almost done. It was blocked by Tony Breeds because Ironic is still using this option. >> So I would like to ask You if You have any plans to remove usage of this option that we will be able to remove it completely? > > We stopped using it a few weeks ago[1] at the time I tested it with a > depends-on on the patch that removed the option > from devstack[2], so assuming this was the correct patch to depend on > we should be good to go, I also commented on > the devstack patch to say we were good to move forward, sorry it looks > like the message didn't get where it needed to be. > > 1 - https://review.openstack.org/#/c/621146/17 > 2 - https://review.openstack.org/#/c/619067/ > >> Thx in advance for any info about that :) >> >> [1] https://review.openstack.org/#/c/567369/ >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> > — Slawek Kaplonski Senior software engineer Red Hat From lebre.adrien at gmail.com Wed Feb 27 13:36:36 2019 From: lebre.adrien at gmail.com (Adrien Gmail) Date: Wed, 27 Feb 2019 14:36:36 +0100 Subject: [scientific-sig] IRC meeting 1100 UTC: Edge computing and Scientific use cases In-Reply-To: References: <3775939A-22A3-43CC-AF07-4660753047FE@telfer.org> Message-ID: <11313C72-9F0B-4E36-8874-246EF752FEE0@gmail.com> Hi, Getting a look to this post could also make sense: http://beyondtheclouds.github.io/blog/openstack/cockroachdb/2018/06/04/evaluation-of-openstack-multi-region-keystone-deployments.html Best regards, Adrien PS: Thanks Pierre. -- Prof. IMT Atlantique / Inria / LS2N Head of the STACK Research Group www.emn.fr/x-info/alebre08 > On 27 Feb 2019, at 14:18, Pierre Riteau > wrote: > > Hello Bogdan, > > In the context of data management in edge computing, you may be > interested by research done by the Discovery Initiative: > http://beyondtheclouds.github.io/publications.html > I am adding Adrien Lèbre to the thread, he may be able to point you to > the publications most relevant to your whitepaper draft. > > Best wishes, > Pierre Riteau > > On Wed, 27 Feb 2019 at 10:21, Bogdan Dobrelya wrote: >> >> On 27.02.2019 10:51, Stig Telfer wrote: >>> Hi All - >>> >>> We have an IRC meeting at 1100 UTC (about an hour’s time) in channel #openstack-meeting. Everyone is welcome. >>> >>> Today’s agenda is here: >>> >>> https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_27th_2019 >>> >>> Today’s main event is that we have Ildikó Vancsa joining us to talk about edge computing use cases and find overlap with Scientific SIG use cases. >>> >>> Cheers, >>> Stig >>> >>> >> >> Edge computing is a *great* topic for lot of future research work indeed. >> Thank you for this useful announcement, appreciated! >> I'll attend at least the first half of it, and in case I'll miss the >> agenda item that I placed at the end of the list, please be kind to >> review that research request with my absence as well. Thanks! >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed Feb 27 14:25:14 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 27 Feb 2019 08:25:14 -0600 Subject: [nova][keystone] project tags in context for scheduling In-Reply-To: <37E79D0F-D085-4758-84BC-158798055522@gmail.com> References: <37E79D0F-D085-4758-84BC-158798055522@gmail.com> Message-ID: <55a7132e-1573-b29c-efbe-9c48226cc964@gmail.com> On 2/26/2019 11:53 PM, Sam Morrison wrote: > We have a use case where we want to schedule a bunch of projects to specific compute nodes only. > The aggregate_multitenancy_isolation isn’t viable because in some cases we will want thousands of projects to go to some hardware and it isn’t manageable/scaleable to do this in nova and aggregates. (Maybe it is and I’m being silly?) Is the issue because of this? https://bugs.launchpad.net/nova/+bug/1802111 Or just in general. Because https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement fixes that problem, but is only available since Rocky. Also, I can't find it now but there was a public cloud workgroup bug in launchpad at one point where it was asking that the AggregateMultiTenancyIsolation filter work on keystone domains rather than a list of projects, so if those projects were all in the same domain you'd just specify the domain in the aggregate metadata than the thousands of projects which is your scaling issue. Tobias might remember that bug. -- Thanks, Matt From dev.faz at gmail.com Wed Feb 27 14:25:26 2019 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 27 Feb 2019 15:25:26 +0100 Subject: [neutron] auto shedule explained? Message-ID: <29f9b428-da38-bbd1-9158-f835b87935e0@gmail.com> Hi, im trying to understand the use-cases of # Allow auto scheduling networks to DHCP agent. (boolean value) network_auto_schedule = false # Allow auto scheduling of routers to L3 agent. (boolean value) router_auto_schedule = false if I read the code correctly, both options should only be used if there is (during runtime) an network without any l3 or dhcp agents, so neutron would trigger an automatic reschedule to new agents, isnt it? I disabled both options and created a new network, which automatically got l3 and dhcp agents assigned, so it seems like my assumption is correct? Thanks a lot, Fabian Zimmermann From gaudenz at durcheinandertal.ch Wed Feb 27 14:49:54 2019 From: gaudenz at durcheinandertal.ch (Gaudenz Steinlin) Date: Wed, 27 Feb 2019 15:49:54 +0100 Subject: [ops] [nova] Wrong reported memory hypervisor usage In-Reply-To: References: <2f0e39c939630d6f009f146a7d2f6f6ce6d7a12f.camel@redhat.com> Message-ID: <87y361e165.fsf@meteor.durcheinandertal.bofh> Hi Massimo Sgaravatto writes: > Thanks for the prompt feedback. > > This [*] is the output of "openstack hypervisor show > cld-blu-01.cloud.pd.infn.it" > > Let me also add that if I restart openstack-nova-compute on the > hypervisor, then "nova host-describe" shows the right > information for a few seconds: Could it be that you are hitting this bug: https://bugs.launchpad.net/nova/+bug/1733034 If you are affected by this bug you will see the memory usage change between the "sum of all instances + reserved memory" and the actual usage on the hypervisor as reported by libvirt. Just call "nova hypervisor-show " a few times to see the values change. depending on which version you are running, you have to create or destroy an instance to see the changing values. 
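A quick way to check whether you are hitting that bug is to keep polling the field and watch it flip between the two figures quoted earlier in this thread (sketch, run with admin credentials loaded):

    # repeat every minute and watch the value flip
    watch -n 60 "openstack hypervisor show cld-blu-01.cloud.pd.infn.it -c memory_mb_used -f value"

On an affected deployment memory_mb_used alternates between the 39424 MB "instances plus reserved" value and the 16255 MB libvirt-reported value, possibly only after an instance is created or deleted, as noted above.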
Gaudenz > > > [root at cld-ctrl-01 ~]# nova host-describe cld-blu-01.cloud.pd.infn.it > +-----------------------------+----------------------------------+-----+-----------+---------+ > | HOST | PROJECT | cpu | > memory_mb | disk_gb | > +-----------------------------+----------------------------------+-----+-----------+---------+ > | cld-blu-01.cloud.pd.infn.it | (total) | 8 | > 32722 | 241 | > | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 | > 39424 | 95 | > | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 | > 38912 | 95 | > | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 | > 2048 | 20 | > | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 | > 32768 | 50 | > | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 | > 4096 | 25 | > +-----------------------------+----------------------------------+-----+-----------+---------+ > > > but after a while it goes back publishing 16256 MB as used_now > > Thanks, Massimo > > [*] > # openstack hypervisor show cld-blu-01.cloud.pd.infn.it > +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > | > +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | aggregates | [u'Unipd-AdminTesting-Unipd', > u'Unipd-HPC-Physics', u'Unipd-model-glasses---DISC', u'Unipd-DECAMP', > u'Unipd-Biodiversity-Macro-biomes', u'Unipd- | > | | CMB4PrimordialNonGaussianity', > u'Unipd-Links-in-Channel', u'Unipd-TC4SAP', u'Unipd-Diabetes_Risk', > u'Unipd-LabTrasporti', u'Unipd-Hydrological_DA', u'Unipd- | > | | Smart_Enterprise_in_Cloud', u'Unipd-DEM_Sim_ICEA', > u'Unipd-Plasmonics', u'Unipd-QuBiMM', u'Unipd-MedComp', > u'Unipd-BigDataComputingCourse', u'Unipd-DSB---Sci.-Biomed', u | > | | 'Unipd-Notion', > u'Unipd-The_role_of_trade-offs_in_competitive_ecosystems', > u'Unipd-MMS_Cloud', u'Unipd-SID2016', u'Unipd-QuantumFuture', > u'Unipd-Link_Translocation-DFA', | > | | u'Unipd-Hopping_Transport_in_Lithium_Niobate', > u'Unipd-Negapedia', u'Unipd-SIGNET-ns3', u'Unipd-SIGNET-MATLAB', > u'Unipd-QST', u'Unipd-AbinitioTransport', u'Unipd- | > | | CalcStat', u'Unipd-PhysicsOfData-students', > u'Unipd-DiSePaM', u'Unipd-Few-mode-optical-fibers', u'Unipd-cleva'] > | > | cpu_info | {"vendor": "Intel", "model": "SandyBridge-IBRS", > "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", > "syscall", "tsc-deadline", "dtes64", "stibp", | > | | "msr", "xsave", "vmx", "xtpr", "cmov", "ssse3", > "est", "pat", "monitor", "smx", "pbe", "lm", "tsc", "nx", "fxsr", "tm", > "sse4.1", "pae", "sse4.2", "pclmuldq", "cx16", | > | | "pcid", "vme", "mmx", "osxsave", "cx8", "mce", > "de", "rdtscp", "ht", "dca", "lahf_lm", "pdcm", "mca", "pdpe1gb", "apic", > "sse", "pse", "ds", "invtsc", "pni", "tm2", | > | | "aes", "sse2", "ss", "ds_cpl", "arat", "acpi", > "spec-ctrl", "fpu", "ssbd", "pse36", "mtrr", "popcnt", "x2apic"], > "topology": {"cores": 4, "cells": 2, "threads": 1, | > | | "sockets": 1}} > > | > | current_workload | 0 > > | > | disk_available_least | 124 > > | > | free_disk_gb | 146 > > | > | free_ram_mb | -6702 > > | > | host_ip | 192.168.60.150 > > | > | host_time | 11:56:28 > > | > | hypervisor_hostname | cld-blu-01.cloud.pd.infn.it > > | > | hypervisor_type | QEMU > > | > | hypervisor_version | 2010000 > > | > | id | 
132 > > | > | load_average | 0.11, 0.19, 0.21 > > | > | local_gb | 241 > > | > | local_gb_used | 23 > > | > | memory_mb | 32722 > > | > | memory_mb_used | 16255 > > | > | running_vms | 4 > > | > | service_host | cld-blu-01.cloud.pd.infn.it > > | > | service_id | 171 > > | > | state | up > > | > | status | enabled > > | > | uptime | 85 days, 34 min > > | > | users | 1 > > | > | vcpus | 8 > > | > | vcpus_used | 19 > > | > +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > [root at cld-ctrl-01 ~]# > > > > > > On Wed, Feb 27, 2019 at 11:47 AM Sean Mooney wrote: > >> On Wed, 2019-02-27 at 09:33 +0100, Massimo Sgaravatto wrote: >> > In the dashboard of my OpenStack Ocata cloud I see that the reported >> memory usage is wrong for the hypervisors. >> > As far as I understand that information should correspond to the >> "used_now" field of the "nova host-describe" >> nova host-describe has been removed in later version of nova. >> > command. And indeed considering an hypervisor: >> > >> > >> > # nova host-describe cld-blu-01.cloud.pd.infn.it >> > >> +-----------------------------+----------------------------------+-----+-----------+---------+ >> > | HOST | PROJECT | cpu | >> memory_mb | disk_gb | >> > >> +-----------------------------+----------------------------------+-----+-----------+---------+ >> > | cld-blu-01.cloud.pd.infn.it | (total) | 8 >> | 32722 | 241 | >> > | cld-blu-01.cloud.pd.infn.it | (used_now) | 19 >> | 16259 | 23 | >> > | cld-blu-01.cloud.pd.infn.it | (used_max) | 19 >> | 38912 | 95 | >> > | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1 >> | 2048 | 20 | >> > | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16 >> | 32768 | 50 | >> > | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2 >> | 4096 | 25 | >> > >> +-----------------------------+----------------------------------+-----+-----------+---------+ >> > >> > >> > So for the memory it reports 16259 for used_now, while as far as far I >> understand it should be 38912 + the memory used >> > by the hypervisor >> > >> > Am I missing something ? >> the hypervisors api's memory_mb_used is (reserved memory + total of all >> memory associtate with instance on the host). >> i would have expected used_now to reflect that. >> what does "openstack hypervisor show cld-blu-01.cloud.pd.infn.it" print. >> >> > >> > Thanks, Massimo >> > >> >> -- PGP: 836E 4F81 EFBB ADA7 0852 79BF A97A 7702 BAF9 1EF5 From aschultz at redhat.com Wed Feb 27 14:59:35 2019 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 27 Feb 2019 07:59:35 -0700 Subject: [puppet][tripleo][starlingx] Re: NDSU Capstone Introduction! In-Reply-To: <6E895DDA-E451-416B-83D9-A89E801BA0CE@openstack.org> References: <6E895DDA-E451-416B-83D9-A89E801BA0CE@openstack.org> Message-ID: On Tue, Feb 26, 2019 at 5:38 PM Chris Hoge wrote: > > Welcome Eduardo, and Hunter and Jason. > > For the initial work, we will be looking at replacing GPL licensed modules in > the Puppet-OpenStack project with Apache licensed alternatives. 
Some of the > candidate module transitions include: > > antonlindstrom/puppet-powerdns -> sensson/powerdns > > duritong/puppet-sysctl -> thias/puppet-sysctl > > puppetlabs/puppetlabs-vcsrepo -> voxpupuli/puppet-git_resource > > Feedback and support on this is welcome, but where possible I would like for > the students to be sending the patches up and collaborating to to help make these > transitions (where possible, it’s my understanding that sysctl may pose serious > challenges). Much of it should be good introductory work to our community > workflow, and I'd like for them to have an opportunity to have a > successful set of initial patches and contributions that have a positive > lasting impact on the community. > Please note that this also has an impact on TripleO and any other downstream consumers of the puppet modules. Specifically for TripleO we'll need to consider how packaging these will come into play and if they aren't 1:1 compatible it may break us. StarlingX might also be impacted as well. > Thanks in advance, and my apologies for not communicating these efforts > to the mailing list sooner. > > -Chris > > > On Feb 19, 2019, at 6:40 PM, Urbano Moreno, Eduardo wrote: > > > > Hello OpenStack community, > > > > I just wanted to go ahead and introduce myself, as I am a part of the NDSU Capstone group! > > > > My name is Eduardo Urbano and I am a Jr/Senior at NDSU. I am currently majoring in Computer Science, with no minor although that could change towards graduation. I am currently an intern at an electrical supply company here in Fargo, North Dakota known as Border States. I am an information security intern and I am enjoying it so far. I have learned many interesting security things and have also became a little paranoid of how easily someone can get hacked haha. Anyways, I am so excited to be on board and be working with OpenStack for this semester. So far I have learned many new things and I can’t wait to continue on learning. > > > > Thank you! > > > > > > -Eduardo > > From dabarren at gmail.com Wed Feb 27 15:09:12 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 27 Feb 2019 16:09:12 +0100 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: Vote is over, Welcome to the core team Michal! El lun., 25 feb. 2019 a las 16:13, Jeffrey Zhang (< zhang.lei.fly+os-discuss at gmail.com>) escribió: > +1 > > On Mon, Feb 25, 2019 at 8:30 PM Martin André wrote: > >> On Fri, Feb 15, 2019 at 11:21 AM Eduardo Gonzalez >> wrote: >> > >> > Hi, is my pleasure to propose Michal Nasiadka for the core team in >> kolla-ansible. >> >> +1 >> I'd also be happy to welcome Michal to the kolla-core group (not just >> kolla-ansible) as he's done a great job reviewing the kolla patches >> too. >> >> Martin >> >> > Michal has been active reviewer in the last relases ( >> https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), has >> been keeping an eye on the bugs and being active help on IRC. >> > He has also made efforts in community interactions in Rocky and Stein >> releases, including PTG attendance. >> > >> > His main interest is NFV and Edge clouds and brings valuable couple of >> years experience as OpenStack/Kolla operator with good knowledge of Kolla >> code base. >> > >> > Planning to work on extending Kolla CI scenarios, Edge use cases and >> improving NFV-related functions ease of deployment. >> > >> > Consider this email as my +1 vote. 
Vote ends in 7 days (22 feb 2019) >> > >> > Regards >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed Feb 27 15:18:33 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 27 Feb 2019 09:18:33 -0600 Subject: [nova][keystone] project tags in context for scheduling In-Reply-To: <55a7132e-1573-b29c-efbe-9c48226cc964@gmail.com> References: <37E79D0F-D085-4758-84BC-158798055522@gmail.com> <55a7132e-1573-b29c-efbe-9c48226cc964@gmail.com> Message-ID: <1458809b-afb0-febd-a575-649d7fc58d7a@gmail.com> On 2/27/19 8:25 AM, Matt Riedemann wrote: > On 2/26/2019 11:53 PM, Sam Morrison wrote: >> We have a use case where we want to schedule a bunch of projects to >> specific compute nodes only. >> The aggregate_multitenancy_isolation isn’t viable because in some >> cases we will want thousands of projects to go to some hardware and >> it isn’t manageable/scaleable to do this in nova and aggregates. >> (Maybe it is and I’m being silly?) > > Is the issue because of this? > > https://bugs.launchpad.net/nova/+bug/1802111 > > Or just in general. Because > https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement > fixes that problem, but is only available since Rocky. > > Also, I can't find it now but there was a public cloud workgroup bug > in launchpad at one point where it was asking that the > AggregateMultiTenancyIsolation filter work on keystone domains rather > than a list of projects, so if those projects were all in the same > domain you'd just specify the domain in the aggregate metadata than > the thousands of projects which is your scaling issue. Tobias might > remember that bug. > I can't find this either, but the working group does have a couple generic domain support bugs, but they aren't very specific to this issue. I think Matt brings up an interesting point about domain support. Currently, it's pretty limited to keystone. Not a lot of other services rely on domain-scoped tokens for anything [0]. As far as context and middleware goes, keystonemiddleware validates tokens, sets headers, and oslo.context has the ability to convert those request headers [1] to attributes of the context object from loading the request environment [2]. Long story short, if the context is getting handled property and if you have access to it, you can pull the project_id off the context object and query more information about it from keystone directly (also assuming you have keystoneclient handy). [0] https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes [1] http://git.openstack.org/cgit/openstack/oslo.context/tree/oslo_context/context.py?id=76a07f9022f0fa967707c9f6cb5a4a24aac6b3ef#n44 [2] http://git.openstack.org/cgit/openstack/oslo.context/tree/oslo_context/context.py?id=76a07f9022f0fa967707c9f6cb5a4a24aac6b3ef#n426 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Wed Feb 27 15:36:26 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 27 Feb 2019 09:36:26 -0600 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: <6abe3c9d-909d-fd40-b8a9-70415eb9e83e@gmail.com> On 2/26/2019 7:21 AM, Matt Riedemann wrote: > Yeah I am not sure what to do here. 
Here is a scenario: > > User boots from volume with a tag "ubuntu1604vol" to indicate it's the > root volume with the operating system. Then they shelve offload the > server and detach the root volume. At this point, the GET > /servers/{server_id}/os-volume_attachments API is going to show None for > the volume_id on that BDM but should it show the original tag or also > show None for that. Kevin currently has the tag field being reset to > None when the root volume is detached. > > When the user attaches a new root volume, they can provide a new tag so > even if we did not reset the tag, the user can overwrite it. As a user, > would you expect the tag to be reset when the root volume is detached or > have it persist but be overwritable? > > If in this scenario the user then attaches a new root volume that is > CentOS or Ubuntu 18.04 or something like that, but forgets to update the > tag, then the old tag would be misleading. > > So it is probably safest to just reset the tag like Kevin's proposed > code is doing, but we could use some wider feedback here. I just realized that the user providing a new tag when attaching the new root volume won't work, because we are only going to allow attaching a new root volume to a shelved offloaded instance, which explicitly rejects providing a tag in that case [1]. So we likely need to lift that restriction in this microversion and then on unshelve in the compute service we need to check if the compute supports device tags like during server create and if not, the unshelve will fail. Now that I think about that, that's likely already a bug today, i.e. if I create a server with device tags at server create time and land on a host that supports them, but then shelve offload and unshelve to a compute that does not support them, the unshelve won't fail even though the compute doesn't support the device tags on my attached volumes/ports. [1] https://review.openstack.org/#/c/623981/18/nova/compute/api.py at 4264 -- Thanks, Matt From alifshit at redhat.com Wed Feb 27 16:02:41 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 27 Feb 2019 11:02:41 -0500 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: On Tue, Feb 26, 2019 at 8:23 AM Matt Riedemann wrote: > > On 2/26/2019 6:40 AM, Zhenyu Zheng wrote: > > I'm working on a blueprint to support Detach/Attach root volumes. The > > blueprint has been proposed for quite a while since mitaka[1] in that > > version of proposal, we only talked about instances in shelved_offloaded > > status. And in Stein[2] the status of stopped was also added. But now we > > realized that support detach/attach root volume on a stopped instance > > could be problemastic since the underlying image could change which > > might invalidate the current host.[3] > > > > So Matt and Sean suggested maybe we could just do it for > > shelved_offloaded instances, and I have updated the patch according to > > this comment. And I will update the spec latter, so if anyone have > > thought on this, please let me know. > > I mentioned this during the spec review but didn't push on it I guess, > or must have talked myself out of it. 
We will also have to handle the > image potentially changing when attaching a new root volume so that when > we unshelve, the scheduler filters based on the new image metadata > rather than the image metadata stored in the RequestSpec from when the > server was originally created. But for a stopped instance, there is no > run through the scheduler again so I don't think we can support that > case. Also, there is no real good way for us (right now) to even compare > the image ID from the new root volume to what was used to originally > create the server because for volume-backed servers the > RequestSpec.image.id is not set (I'm not sure why, but that's the way > it's always been, the image.id is pop'ed from the metadata [1]). And > when we detach the root volume, we null out the BDM.volume_id so we > can't get back to figure out what that previous root volume's image ID > was to compare, i.e. for a stopped instance we can't enforce that the > underlying image is the same to support detach/attach root volume. We > could probably hack stuff up by stashing the old volume_id/image_id in > system_metadata but I'd rather not play that game. > > It also occurs to me that the root volume attach code is also not > verifying that the new root volume is bootable. So we really need to > re-use this code on root volume attach [2]. > > tl;dr when we attach a new root volume, we need to update the > RequestSpec.image (ImageMeta) object based on the new root volume's > underlying volume_image_metadata so that when we unshelve we use that > image rather than the original image. > > > > > Another thing I wanted to discuss is that in the proposal, we will reset > > some fields in the root_bdm instead of delete the whole record, among > > those fields, the tag field could be tricky. My idea was to reset it > > too. But there also could be cases that the users might think that it > > would not change[4]. > > Yeah I am not sure what to do here. Here is a scenario: > > User boots from volume with a tag "ubuntu1604vol" to indicate it's the > root volume with the operating system. Then they shelve offload the > server and detach the root volume. At this point, the GET > /servers/{server_id}/os-volume_attachments API is going to show None for > the volume_id on that BDM but should it show the original tag or also > show None for that. Kevin currently has the tag field being reset to > None when the root volume is detached. > > When the user attaches a new root volume, they can provide a new tag so > even if we did not reset the tag, the user can overwrite it. As a user, > would you expect the tag to be reset when the root volume is detached or > have it persist but be overwritable? > > If in this scenario the user then attaches a new root volume that is > CentOS or Ubuntu 18.04 or something like that, but forgets to update the > tag, then the old tag would be misleading. The tag is a Nova concept on the attachment. If you detach a volume (root or not) then attach a different one (root or not), to me that's a new attachment, with a new (potentially None) tag. I have no idea who that fits into the semantics around root volume detach, but that's my 2 cents. > > So it is probably safest to just reset the tag like Kevin's proposed > code is doing, but we could use some wider feedback here. 
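To make the scenario concrete, here is a rough sketch of the reset semantics being debated. The class below is a simplified stand-in, not nova's real BlockDeviceMapping object model and not Kevin's actual patch:

    # Illustrative only -- a simplified stand-in for the root BDM record.
    class RootBDM:
        def __init__(self, volume_id, tag=None):
            self.boot_index = 0        # boot_index 0 marks the root disk
            self.volume_id = volume_id
            self.tag = tag             # e.g. "ubuntu1604vol"

    def detach_root_volume(bdm):
        # Proposed behaviour: clear the volume and the now possibly stale tag,
        # so GET /servers/{server_id}/os-volume_attachments shows None for both.
        bdm.volume_id = None
        bdm.tag = None

    def attach_new_root_volume(bdm, new_volume_id, tag=None):
        # Open question: whether the user can (or will remember to) supply a
        # fresh tag here, given that tagged attach is currently rejected for
        # shelved offloaded servers.
        bdm.volume_id = new_volume_id
        bdm.tag = tag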
> > [1] > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943 > [2] > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101 > > -- > > Thanks, > > Matt > -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From zbitter at redhat.com Wed Feb 27 16:23:19 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Feb 2019 11:23:19 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20190226163700.GA532@sm-workstation> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> Message-ID: <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> On 26/02/19 11:37 AM, Sean McGinnis wrote: > On Tue, Feb 26, 2019 at 05:28:19PM +0100, Moises Guimaraes de Medeiros wrote: >> So, at this point, is it OK to have projects running against both py35 and >> py37 and considering py36 covered as being included in the interval? According to the resolution we should unit test "[e]ach Python 3 version that is the default in any of the Linux distributions specifically identified in the Project Testing Interface at the beginning of the development cycle." https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html#unit-tests Python 3.6 is the default in Ubuntu Bionic and openSUSE Leap 15.0, so we should keep py36 jobs running. (Unit test jobs are not particularly expensive, so we shouldn't worry too much about running 4 of them.) >> Also about the lowest supported version, I think that is the one that >> should be stated in the envlist of tox.ini to fail fast during development. >> > > In my opinion, the py35 jobs should all be dropped. I'm all for getting rid of py35 ASAP, but a prerequisite for that would be that we move all integration test jobs to Bionic (from Xenial). Apparently gmann has a plan for that to happen in Stein, but we haven't set it as a goal or anything. If that ended up working out we could drop it in Train though (the resolution says we should unit test "[e]ach Python 3 version that was still used in any integration tests at the beginning of the development cycle"). > The official runtime for Stein is py36, and the upcoming runtime is py37, so it > doesn't add much value to be running py35 tests at this point. It prevents a project from breaking another project's Xenial-based integration tests by committing some trivial non-py35-compatible code (e.g. f-strings). cheers, Zane. From fungi at yuggoth.org Wed Feb 27 16:36:29 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Feb 2019 16:36:29 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> Message-ID: <20190227163629.azdmd3hvhryuet7g@yuggoth.org> On 2019-02-27 11:23:19 -0500 (-0500), Zane Bitter wrote: [...] > the resolution says we should unit test "[e]ach Python 3 version > that was still used in any integration tests at the beginning of the > development cycle" [...] Now I'm getting worried that the phrasing we settled on is leading to misinterpretation. The entire point, I thought, was that we decide at the beginning of the development cycle on which platforms we're testing, and so choose the most recent releases of those LTS distros. 
If some projects were still running jobs on an earlier platform at the start of the cycle, I don't think we need to be stuck maintaining that testing. The beginning of the cycle is the point at which it's safe for them to switch to the agreed-upon current platform for our upcoming release under development. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Wed Feb 27 16:40:46 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 27 Feb 2019 10:40:46 -0600 Subject: [all] [forum] Forum Submissions are open! Message-ID: <5C76BD8E.4070504@openstack.org> Hi Everyone - A quick reminder that we are accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas through the Summit CFP tool [3] through March 8th. Don't forget to put your brainstorming etherpad up on the Denver Forum page [4]. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. If you have questions or concerns, please reach out to speakersupport at openstack.org . Cheers, Jimmy [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/denver-2019/ [3] https://www.openstack.org/summit/denver-2019/call-for-presentations [4] https://wiki.openstack.org/wiki/Forum/Denver2019 ___________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Feb 27 17:07:44 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 27 Feb 2019 11:07:44 -0600 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20190227163629.azdmd3hvhryuet7g@yuggoth.org> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> <20190227163629.azdmd3hvhryuet7g@yuggoth.org> Message-ID: <86fa7dc7-aa78-c81f-11cc-03ec329c469f@nemebean.com> On 2/27/19 10:36 AM, Jeremy Stanley wrote: > On 2019-02-27 11:23:19 -0500 (-0500), Zane Bitter wrote: > [...] >> the resolution says we should unit test "[e]ach Python 3 version >> that was still used in any integration tests at the beginning of the >> development cycle" > [...] > > Now I'm getting worried that the phrasing we settled on is leading > to misinterpretation. The entire point, I thought, was that we > decide at the beginning of the development cycle on which platforms > we're testing, and so choose the most recent releases of those LTS > distros. If some projects were still running jobs on an earlier > platform at the start of the cycle, I don't think we need to be > stuck maintaining that testing. The beginning of the cycle is the > point at which it's safe for them to switch to the agreed-upon > current platform for our upcoming release under development. > Part of the problem is that this didn't actually happen at the beginning of the cycle. The current plan is not to finish the legacy job migration until Apr. 1[0], which means until then projects may have a dependency on py35. 
I'm not sure whether this necessarily indicates that the resolution is flawed though - we only update the distro for functional tests once every couple of years, and in this case we didn't adopt the resolution until partway through the cycle so we got a late start. On the other hand, it sounds like the distro migration is going to take around 4 months total (started in Dec., ending early April). Maybe expecting everyone to be on the current distro release right at the start of the cycle is overly ambitious? 0: https://etherpad.openstack.org/p/legacy-job-bionic From gmann at ghanshyammann.com Wed Feb 27 17:08:37 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 Feb 2019 02:08:37 +0900 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20190227163629.azdmd3hvhryuet7g@yuggoth.org> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> <20190227163629.azdmd3hvhryuet7g@yuggoth.org> Message-ID: <1692fedf433.12a6a175393613.6060908281008487623@ghanshyammann.com> ---- On Thu, 28 Feb 2019 01:36:29 +0900 Jeremy Stanley wrote ---- > On 2019-02-27 11:23:19 -0500 (-0500), Zane Bitter wrote: > [...] > > the resolution says we should unit test "[e]ach Python 3 version > > that was still used in any integration tests at the beginning of the > > development cycle" > [...] > > Now I'm getting worried that the phrasing we settled on is leading > to misinterpretation. The entire point, I thought, was that we > decide at the beginning of the development cycle on which platforms > we're testing, and so choose the most recent releases of those LTS > distros. If some projects were still running jobs on an earlier > platform at the start of the cycle, I don't think we need to be > stuck maintaining that testing. The beginning of the cycle is the > point at which it's safe for them to switch to the agreed-upon > current platform for our upcoming release under development. Main point is, it's all happening together in stein :), migration of LTS distro, mixed of legacy and zuulv3 jobs. Maybe from next cycle (or when we will have new dtiro), it will be easy to migrate the latest distro when all jobs are zuul v3 native and counting the all python 3 versions we need to support as per resolution. But yes, let's keep the py35 testing until we move all integration jobs to bionic. -gmann > -- > Jeremy Stanley > From zbitter at redhat.com Wed Feb 27 17:25:17 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Feb 2019 12:25:17 -0500 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <20190227163629.azdmd3hvhryuet7g@yuggoth.org> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> <20190227163629.azdmd3hvhryuet7g@yuggoth.org> Message-ID: <3f9ca71e-d087-bf4f-73e7-a46e9772dff6@redhat.com> On 27/02/19 11:36 AM, Jeremy Stanley wrote: > On 2019-02-27 11:23:19 -0500 (-0500), Zane Bitter wrote: > [...] >> the resolution says we should unit test "[e]ach Python 3 version >> that was still used in any integration tests at the beginning of the >> development cycle" > [...] > > Now I'm getting worried that the phrasing we settled on is leading > to misinterpretation. 
The entire point, I thought, was that we > decide at the beginning of the development cycle on which platforms > we're testing, and so choose the most recent releases of those LTS > distros. That's a separate bullet point - the first one I quoted. There's two other bullet points, one of which is the one above. > If some projects were still running jobs on an earlier > platform at the start of the cycle, Note that when the platform changes (as it has from Rocky->Stein), it's inevitable that *all* projects will still be running jobs on an earlier platform at the start of the cycle. > I don't think we need to be > stuck maintaining that testing. The beginning of the cycle is the > point at which it's safe for them to switch to the agreed-upon > current platform for our upcoming release under development. Right, but until they do it's not safe for other projects to drop their unit tests for the old platforms that are still being tested: https://review.openstack.org/#/c/613145/2..3/resolutions/20181024-python-update-process.rst at 34 In the final version we did say that "Support for these versions can be dropped once all integration tests have migrated", so if everything moved to Bionic before the end of Stein then we could drop py35 unit tests then, and not keep running them on stable/stein. You commented that you'd like to require that to happen within one release cycle: https://review.openstack.org/#/c/613145/4/resolutions/20181024-python-update-process.rst at 38 but we decided we'd leave it up to the goal champions to decide case-by-case. Of course right now we're trying to retrospectively apply guidelines that we set up for Train and beyond to Stein, where we didn't set a goal, we don't have a goal champion, and the configs aren't managed centrally so different projects could theoretically make different choices. cheers, Zane. From fungi at yuggoth.org Wed Feb 27 17:29:57 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Feb 2019 17:29:57 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <86fa7dc7-aa78-c81f-11cc-03ec329c469f@nemebean.com> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> <20190227163629.azdmd3hvhryuet7g@yuggoth.org> <86fa7dc7-aa78-c81f-11cc-03ec329c469f@nemebean.com> Message-ID: <20190227172957.j2fut7wxc5e4ywq7@yuggoth.org> On 2019-02-27 11:07:44 -0600 (-0600), Ben Nemec wrote: > On 2/27/19 10:36 AM, Jeremy Stanley wrote: > > On 2019-02-27 11:23:19 -0500 (-0500), Zane Bitter wrote: > > [...] > > > the resolution says we should unit test "[e]ach Python 3 version > > > that was still used in any integration tests at the beginning of the > > > development cycle" > > [...] > > > > Now I'm getting worried that the phrasing we settled on is leading > > to misinterpretation. The entire point, I thought, was that we > > decide at the beginning of the development cycle on which platforms > > we're testing, and so choose the most recent releases of those LTS > > distros. If some projects were still running jobs on an earlier > > platform at the start of the cycle, I don't think we need to be > > stuck maintaining that testing. The beginning of the cycle is the > > point at which it's safe for them to switch to the agreed-upon > > current platform for our upcoming release under development. > > > > Part of the problem is that this didn't actually happen at the > beginning of the cycle. 
The current plan is not to finish the > legacy job migration until Apr. 1[0], which means until then > projects may have a dependency on py35. [...] In the past, the way it worked was at the beginning of a new cycle the Infra team said "there's a new Ubuntu LTS release and we have working images for it, you all need to move your jobs to it before the end of the cycle." (Or we just set a date when we were going to switch everyone's jobs over and it was up to them to fix them if they didn't do so before the deadline.) This time the Infra team left it up to the TC to decide on the plan and messaging, so as committee design tradition dictates we spent half the cycle just coming to the conclusion we'd do pretty much what we've done in past cycles. This of course has left teams with far less time to actually implement a transition, and as a result it's overlapping with their attempts to prepare the release. The key is determination of the platform occurs as early in the cycle as possible so teams have an opportunity to get it all working before doing so impacts finalizing their release. Hopefully now that the plan is baked, we can avoid similar delays for instating future testing platform transitions. I don't think that the delay this time should necessarily inform our policy for future iterations. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bdobreli at redhat.com Wed Feb 27 17:31:37 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 27 Feb 2019 18:31:37 +0100 Subject: [placement][TripleO][infra] zuul job dependencies for greater good? Message-ID: <449d51d5-de35-3e45-cecf-1678a49f9a06@redhat.com> I think we can still consider the middle-ground, where only deprecated multinode jobs, which tripleo infra team is in progress of migrating into standalone jobs, could be made depending on unit and pep8 checks? And some basic jobs will keep being depending on nothing. I expanded that idea in WIP topic [0]. Commit messages explain how the ordering was reworked. PS. I'm sorry I missed the submitted stats for zuul projects posted earlier in this topic, I'll take a look into that. [0] https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) > Bogdan Dobrelya writes: >> On 26.02.2019 17:53, James E. Blair wrote: >>> Bogdan Dobrelya writes: >>> >>>> I attempted [0] to do that for tripleo-ci, but zuul was (and still >>>> does) complaining for some weird graphs building things :/ >>>> >>>> See also the related topic [1] from the past. >>>> >>>> [0] https://review.openstack.org/#/c/568543 >>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127869.html >>> >>> Thank you for linking to [1]. It's worth re-reading. Especially the >>> part at the end. >>> >>> -Jim >>> >> > > Yes, the part at the end is the best indeed. > I'd amend the time priorities graph though like that: > > CPU-time < a developer time < developers time > > That means burning some CPU and nodes in a pool for a waste might > benefit a developer, but saving some CPU and nodes in a pool would > benefit *developers* in many projects as they'd get the jobs results off > the waiting check queues faster :) -- Best regards, Bogdan Dobrelya, Irc #bogdando From emccormick at cirrusseven.com Wed Feb 27 17:31:33 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 27 Feb 2019 12:31:33 -0500 Subject: [all] [forum] Forum Submissions are open! 
In-Reply-To: <5C76BD8E.4070504@openstack.org> References: <5C76BD8E.4070504@openstack.org> Message-ID: Would it be possible to push the deadline back a couple weeks? I expect there to be a few session proposals that will come out of the Ops Meetup which ends the day before the deadline. It would be helpful to have a little time to organize and submit things afterwards. Thanks, Erik On Wed, Feb 27, 2019, 11:42 AM Jimmy McArthur wrote: > Hi Everyone - > > A quick reminder that we are accepting Forum [1] submissions for the 2019 > Open Infrastructure Summit in Denver [2]. Please submit your ideas through > the Summit CFP tool [3] through March 8th. Don't forget to put your > brainstorming etherpad up on the Denver Forum page [4]. > > This is not a classic conference track with speakers and presentations. > OSF community members (participants in development teams, operators, > working groups, SIGs, and other interested individuals) discuss the topics > they want to cover and get alignment on and we welcome your participation. > The Forum is your opportunity to help shape the development of future > project releases. More information about the Forum [1]. > > If you have questions or concerns, please reach out to > speakersupport at openstack.org. > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/denver-2019/ > [3] https://www.openstack.org/summit/denver-2019/call-for-presentations > [4] https://wiki.openstack.org/wiki/Forum/Denver2019 > ___________________________________________ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Feb 27 17:44:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Feb 2019 17:44:11 +0000 Subject: [openstack-dev] [python3] Enabling py37 unit tests In-Reply-To: <1692fedf433.12a6a175393613.6060908281008487623@ghanshyammann.com> References: <1541607064.1040711.1568901040.6867B704@webmail.messagingengine.com> <20190226163700.GA532@sm-workstation> <25179e07-9a8c-9279-9399-f4f46a82510c@redhat.com> <20190227163629.azdmd3hvhryuet7g@yuggoth.org> <1692fedf433.12a6a175393613.6060908281008487623@ghanshyammann.com> Message-ID: <20190227174410.okufzhkpaiob33tm@yuggoth.org> On 2019-02-28 02:08:37 +0900 (+0900), Ghanshyam Mann wrote: [...] > Main point is, it's all happening together in stein :), migration > of LTS distro, mixed of legacy and zuulv3 jobs. Maybe from next > cycle (or when we will have new dtiro), it will be easy to migrate > the latest distro when all jobs are zuul v3 native and counting > the all python 3 versions we need to support as per resolution. Well, to be fair Zuul v3 job migration has been happening over the course of the past several cycles. I'd love to be able to say that 1.5 years is enough warning and any teams that don't have enough understanding of the new system and their existing jobs to have rewritten them in that time should probably just stop running those jobs instead because they're more of a liability than a benefit. I'm sure that's not a popular view for many, however, so I doubt it's what's actually going to happen. > But yes, let's keep the py35 testing until we move all integration > jobs to bionic. This is less of a question of Python3.5 testing in a vacuum and more a question of platform-specific system interfaces and dependencies. Retaining py35 unit tests does little to ensure that you won't break DevStack on platforms which have Python3.5 as their default Python3 interpreter. 
If that's the driving concern, then projects need to continue running DevStack jobs on both platforms until all projects have added equivalent jobs on the newer platform. As we saw last time (in the Trusty to Xenial transition), projects who were still gating on integration tests for the old platform got wedged by projects who had switched to only gating their integration tests on the new platform even though they were both using Python2.7 (so it had nothing to do with dropping support for older Python interpreters). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Feb 27 18:02:21 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 Feb 2019 03:02:21 +0900 Subject: [infra][qa] installing required projects from source in functional/devstack jobs In-Reply-To: References: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> <169127b779a.c2a7cc0895597.8824954749040304365@ghanshyammann.com> Message-ID: <169301f2631.f5d8dfeb94878.7601023862679057989@ghanshyammann.com> ---- On Fri, 22 Feb 2019 22:11:33 +0900 Boden Russell wrote ---- > On 2/21/19 4:54 PM, Ghanshyam Mann wrote: > > In addition to what Clark mentioned, all repo defined in "required-projects" > variable in zuul v3 job gets appended to devstack's LIBS_FROM_GIT > variable by > default. > > Thanks for the info. > > However, based on trial and error, using LIBS_FROM_GIT only works if > those projects are not in requirements.txt. If the projects used in > LIBS_FROM_GIT are also in requirements.txt; the versions from > requirements.txt are used; not the source from git. > > For example the tricircle-functional job passes when neutron and > networking-sfc are removed from requirements.txt [1], but fails if they > are in requirements.txt [2]. I've also tried moving those required > projects into their own requirements file [3], but that does not work > either. > > That said; the only solution I see at the moment is to remove those > required projects from requirements.txt until we are ready to release > the given project and then specify the versions for these source projects. > > Am I missing something here; it seems there must be a better solution? I do not think LIBS_FROM_GIT and requirement.txt are two conflict entity. repo mentioned in LIBS_FROM_GIT will be checked against the requirement.txt version and they should satisfy with the latest master version of repo mentioned in LIBS_FROM_GIT. For example, in your case neutron is mentioned in LIBS_FROM_GIT so devstack will pickup the neutron master verison which should be compatible with the requirement.txt (>=neutron-released-version) I saw in your patch (taking the example of PS7), neutron is installed form master[4] and it did satisfy the requirement.txt version [5]. So the final installed version of neutron is 14.0.0.0b2.dev243 which is the latest master. similar case with required_project which end up appending in LIBS_FROM_GIT by devstack so all repo mentioned in required_projects are installed from source until installing project has explicitly constrained them by upper_constarinted etc. The problem I see in your patch is networking-sfc latest version is not picked up even that is installed from source. 
i found networking-sfc-7.0.0 has neutron.db.api imported which has been changed to neutron_lib.db.api in networking-sfc-8.0.0 In the failure, networking-sfc-7.0.0 is being picked up[6] instead of networking-sfc-8.0.0 which fail with the latest neutron 14.0.0.0b2.dev243. All other PS8-11 [7], I cannot find the networking-sfc installed in that job and so does no error. I am not sure how that is passing without networking-sfc. But in term of installation, devstack pickup the source version and then apply constraint according to what installation repo has. > > > [1] https://review.openstack.org/#/c/638099/8 > [2] https://review.openstack.org/#/c/638099/7 > [3] https://review.openstack.org/#/c/638099/9 [4] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_35_571 [5] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_31_906 [6] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_39_418 [7] http://logs.openstack.org/99/638099/8/check/tricircle-functional/92841bb/logs/devstacklog.txt.gz -gmann > > From jimmy at openstack.org Wed Feb 27 18:04:21 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 27 Feb 2019 12:04:21 -0600 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76BD8E.4070504@openstack.org> Message-ID: <5C76D125.2040404@openstack.org> Hi Erik, We are able to extend the deadline to 11:59PM Pacific, March 10th. That should give the weekend to get any additional stragglers in and still allow the Forum Programming Committee enough time to manage the rest of the approval and publishing process in time for people's travel needs, etc... For the Ops Meetup specifically, I'd suggest going a bit broader with the proposals and offering to fill in the blanks later. For example, if something comes up and everyone agrees it should go to the Forum, just submit before the end of the Ops session. Kendall or myself would be happy to help you add details a bit later in the process, should clarification be necessary. We typically have enough spots for the majority of proposed Forum sessions. That's not a guarantee, but food for thought. Cheers, Jimmy > Erik McCormick > February 27, 2019 at 11:31 AM > Would it be possible to push the deadline back a couple weeks? I > expect there to be a few session proposals that will come out of the > Ops Meetup which ends the day before the deadline. It would be helpful > to have a little time to organize and submit things afterwards. > > Thanks, > Erik > > Jimmy McArthur > February 27, 2019 at 10:40 AM > Hi Everyone - > > A quick reminder that we are accepting Forum [1] submissions for the > 2019 Open Infrastructure Summit in Denver [2]. Please submit your > ideas through the Summit CFP tool [3] through March 8th. Don't forget > to put your brainstorming etherpad up on the Denver Forum page [4]. > > This is not a classic conference track with speakers and > presentations. OSF community members (participants in development > teams, operators, working groups, SIGs, and other interested > individuals) discuss the topics they want to cover and get alignment > on and we welcome your participation. The Forum is your opportunity > to help shape the development of future project releases. More > information about the Forum [1]. > > If you have questions or concerns, please reach out to > speakersupport at openstack.org . 
> > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/denver-2019/ > [3] https://www.openstack.org/summit/denver-2019/call-for-presentations > [4] https://wiki.openstack.org/wiki/Forum/Denver2019 > ___________________________________________ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Feb 27 18:13:01 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 Feb 2019 03:13:01 +0900 Subject: [infra][qa] installing required projects from source in functional/devstack jobs In-Reply-To: <169301f2631.f5d8dfeb94878.7601023862679057989@ghanshyammann.com> References: <5d1ebc25-4530-4a93-a640-b30e93f0a424@www.fastmail.com> <169127b779a.c2a7cc0895597.8824954749040304365@ghanshyammann.com> <169301f2631.f5d8dfeb94878.7601023862679057989@ghanshyammann.com> Message-ID: <1693028ead7.c631c70e95038.8437622958825110143@ghanshyammann.com> ---- On Thu, 28 Feb 2019 03:02:21 +0900 Ghanshyam Mann wrote ---- > ---- On Fri, 22 Feb 2019 22:11:33 +0900 Boden Russell wrote ---- > > On 2/21/19 4:54 PM, Ghanshyam Mann wrote: > > > In addition to what Clark mentioned, all repo defined in "required-projects" > > variable in zuul v3 job gets appended to devstack's LIBS_FROM_GIT > > variable by > > default. > > > > Thanks for the info. > > > > However, based on trial and error, using LIBS_FROM_GIT only works if > > those projects are not in requirements.txt. If the projects used in > > LIBS_FROM_GIT are also in requirements.txt; the versions from > > requirements.txt are used; not the source from git. > > > > For example the tricircle-functional job passes when neutron and > > networking-sfc are removed from requirements.txt [1], but fails if they > > are in requirements.txt [2]. I've also tried moving those required > > projects into their own requirements file [3], but that does not work > > either. > > > > That said; the only solution I see at the moment is to remove those > > required projects from requirements.txt until we are ready to release > > the given project and then specify the versions for these source projects. > > > > Am I missing something here; it seems there must be a better solution? > > I do not think LIBS_FROM_GIT and requirement.txt are two conflict entity. repo > mentioned in LIBS_FROM_GIT will be checked against the requirement.txt version and > they should satisfy with the latest master version of repo mentioned in LIBS_FROM_GIT. > > For example, in your case neutron is mentioned in LIBS_FROM_GIT so devstack will > pickup the neutron master verison which should be compatible with the requirement.txt (>=neutron-released-version) > > I saw in your patch (taking the example of PS7), neutron is installed form master[4] and it did satisfy the > requirement.txt version [5]. So the final installed version of neutron is 14.0.0.0b2.dev243 which is the latest master. > > similar case with required_project which end up appending in LIBS_FROM_GIT by devstack so all repo > mentioned in required_projects are installed from source until installing project has explicitly constrained > them by upper_constarinted etc. > > The problem I see in your patch is networking-sfc latest version is not picked up even that is installed from source. 
> i found networking-sfc-7.0.0 has neutron.db.api imported which has been changed to neutron_lib.db.api in networking-sfc-8.0.0 > In the failure, networking-sfc-7.0.0 is being picked up[6] instead of networking-sfc-8.0.0 which fail with the latest neutron 14.0.0.0b2.dev243. Even I checked the master gate job on networking-sfc side and the latest networking-sfc version installed there is 7.1.0.dev45. which confirm that networking-sfc and neutron version in tricircle job is from source. There is some issue on networking-sfc side. - http://logs.openstack.org/52/637852/1/check/networking-sfc-tempest-dsvm/2aa75a5/job-output.txt.gz#_2019-02-19_14_43_25_098614 > > > All other PS8-11 [7], I cannot find the networking-sfc installed in that job and so does no error. I am not sure how that is passing without networking-sfc. > > But in term of installation, devstack pickup the source version and then apply constraint according to what installation repo has. > > > > > > > [1] https://review.openstack.org/#/c/638099/8 > > [2] https://review.openstack.org/#/c/638099/7 > > [3] https://review.openstack.org/#/c/638099/9 > > [4] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_35_571 > [5] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_31_906 > [6] http://logs.openstack.org/99/638099/7/check/tricircle-functional/5b20269/logs/devstacklog.txt.gz#_2019-02-21_19_23_39_418 > [7] http://logs.openstack.org/99/638099/8/check/tricircle-functional/92841bb/logs/devstacklog.txt.gz > > -gmann > > > > > > > From emccormick at cirrusseven.com Wed Feb 27 18:43:18 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 27 Feb 2019 13:43:18 -0500 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <5C76D125.2040404@openstack.org> References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> Message-ID: Jimmy, I won't even get home until the 10th much less have time to follow up with anyone. The formation of those sessions often come from discussions spawned at the meetup and expanded upon later with folks who could not attend. Could we at least get until 3/17? I understand your desire to finalize the schedule, but 6 weeks out should be more than enough time, no? Thanks, Erik On Wed, Feb 27, 2019 at 1:04 PM Jimmy McArthur wrote: > > Hi Erik, > > We are able to extend the deadline to 11:59PM Pacific, March 10th. That should give the weekend to get any additional stragglers in and still allow the Forum Programming Committee enough time to manage the rest of the approval and publishing process in time for people's travel needs, etc... > > For the Ops Meetup specifically, I'd suggest going a bit broader with the proposals and offering to fill in the blanks later. For example, if something comes up and everyone agrees it should go to the Forum, just submit before the end of the Ops session. Kendall or myself would be happy to help you add details a bit later in the process, should clarification be necessary. We typically have enough spots for the majority of proposed Forum sessions. That's not a guarantee, but food for thought. > > Cheers, > Jimmy > > Erik McCormick February 27, 2019 at 11:31 AM > Would it be possible to push the deadline back a couple weeks? I expect there to be a few session proposals that will come out of the Ops Meetup which ends the day before the deadline. 
It would be helpful to have a little time to organize and submit things afterwards. > > Thanks, > Erik > > Jimmy McArthur February 27, 2019 at 10:40 AM > Hi Everyone - > > A quick reminder that we are accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas through the Summit CFP tool [3] through March 8th. Don't forget to put your brainstorming etherpad up on the Denver Forum page [4]. > > This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. > > If you have questions or concerns, please reach out to speakersupport at openstack.org. > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/denver-2019/ > [3] https://www.openstack.org/summit/denver-2019/call-for-presentations > [4] https://wiki.openstack.org/wiki/Forum/Denver2019 > ___________________________________________ > > From jungleboyj at gmail.com Wed Feb 27 18:50:54 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 27 Feb 2019 12:50:54 -0600 Subject: [cinder] Forum and PTG Etherpads Available Message-ID: All, I just wanted to share the fact that we now have etherpads created to propose topics for the Denver Forum [1] and Denver PTG [2]. Please take a few minutes to add topics for the forum ASAP as those topics need to be proposed by 3/8.  Remember that the Forum topics are supposed to be focused on things where we would like wider user/operator feedback while the PTG subjects should be more deeply technical discussion. Thanks! Jay (jungleboyj) [1] https://etherpad.openstack.org/p/cinder-denver-forum-brainstorming [2] https://etherpad.openstack.org/p/cinder-train-ptg-planning From openstack at nemebean.com Wed Feb 27 19:04:38 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 27 Feb 2019 13:04:38 -0600 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors In-Reply-To: References: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> Message-ID: <5d6d643e-720a-0ab9-b86d-dd47ec37dc43@nemebean.com> To close the loop on this, we just merged a unit test fix that unblocks oslo.cache ci. We'll continue to work on sorting out where these tests should live as a followup. On 2/26/19 1:40 PM, Herve Beraud wrote: > Submit a patch to dogpile.cache to add some related tests cases: > > https://github.com/sqlalchemy/dogpile.cache/pull/145/ > > Le mar. 26 févr. 2019 à 19:35, Herve Beraud > a écrit : > > FYI dogpile.cache issue was opened: > https://github.com/sqlalchemy/dogpile.cache/issues/144 > > Come with a possible oslo.cache solution that I've introduce there > => https://review.openstack.org/#/c/638788/8 > > Le mar. 26 févr. 2019 à 16:49, Ben Nemec > a écrit : > > Copying Mike. More thoughts inline. > > On 2/26/19 9:24 AM, Herve Beraud wrote: > > Hi, > > > > Just a heads up that the latest version of dogpile (0.7.0 > onwards) > > have become incompatible with oslo.cache.  This is causing a few > > issues for jobs.  It's a little complex due to functional > code and many > > decorated functions. 
> > > > The error you will see is: > > / > > / > > > > > /oslo_cache.//tests.test_//cache.CacheRegi//onTest.//test_function_//key_generator_//with_kwargs > > > -------//-------//-------//-------//-------//-------//-------//-------//-------//-------//-------//------/ > > > > // > > > > /Captured traceback: > > ~~~~~~~~~~~~~~~~~~~ > >      b'Traceback (most recent call last):' > >      b' File > "/tmp/oslo.//cache/oslo_//cache/tests///test_cache.//py", > > line 324, in test_function_//key_generator_//with_kwargs' > >      b' value=self.//test_value)//' > >      b' File > > > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > > > line 485, in assertRaises' > >      b' self.assertThat//(our_callable, matcher)' > >      b' File > > > "/tmp/oslo.//cache/.//tox/py37///lib/python3.//7/site-//packages///testtools///testcase.//py", > > > line 498, in assertThat' > >      b' raise mismatch_error' > >      b'testtools//.matchers.//_impl.MismatchE//rror: > > CacheRegionTest//._get_cacheable//_function.////.cacheable_//function > > > at 0x7fec3f795400> returned > > > 0x7fec3f792550>'/ > > > > > > The problem appear since we uncap dogpile.cache on oslo.cache: > > > https://github.com/openstack/oslo.cache/commit/62b53099861134859482656dc92db81243b88bd9 > > > > The following unit test fail since we uncap dogpile => > > > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > > > > The problem was introduced by: > > https://gerrit.sqlalchemy.org/#/c/sqlalchemy/dogpile.cache/+/996/ > > > > Your main issue on oslo.cache side is that keyword arguments are > > tranformed in positionnal arguments when we use > > dogpile.cache.region.cache_on_arguments. > > > > I've try to revert the changes introduced by the previous > dogpile.cache > > change and everything works fine on the oslo.cache side when > changes was > > reverted (reverted to revision > > > https://github.com/sqlalchemy/dogpile.cache/blob/2762ada1f5e43075494d91c512f7c1ec68907258/dogpile/cache/region.py). > > > > The expected behavior is that > dogpile.cache.util.function_key_generator > > raise a ValueError if **kwargs founds, but our kwargs is > empty and our > > `value=self.test_value was` is recognized as a positionnal > argument. > > Our unit test looking for an assertRaise(ValueError) on cachable > > decorated function when we pass kwargs but it doesn't happen > due to > > empty kwargs. > > > > For these reasons we guess that is an dogpile.cache issue and > not an > > oslo.cache issue due to the changes introduced by `decorator` > module. > > > > The following are related: > > > > - > > > https://github.com/openstack/oslo.cache/blob/master/oslo_cache/tests/test_cache.py#L318 > > > : unit test where the problem occure > > - https://review.openstack.org/#/c/638788/ : possible fix but > we don't > > think that is the right way > > - https://review.openstack.org/#/c/638732/ : possible remove > of the unit > > test who fail > > As I noted in the reviews, I don't think this is something we > should > have been testing in oslo.cache in the first place. The failing > test is > testing the dogpile interface, not the oslo.cache one. I've seen no > evidence that oslo.cache is doing anything wrong here, so our > unit tests > are clearly testing something that should be out of scope. > > And to be clear, I'm not even sure this is a bug in dogpile. It > may be a > happy side-effect of the decorator change that the regular > decorator now > works for kwargs too. 
I don't know dogpile well enough to make a > definitive statement on that though. Hence cc'ing Mike. :-) > > > > > The issue is being tracked in: > > > > https://bugs.launchpad.net/oslo.cache/+bug/1817032 > > > > If some dogpile expert can take a look and send feedback on > this thread > > you are welcome. > > > > Thanks, > > > > -- > > Hervé Beraud > > Senior Software Engineer > > Red Hat - Openstack Oslo > > irc: hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From jimmy at openstack.org Wed Feb 27 19:08:31 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 27 Feb 2019 13:08:31 -0600 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> Message-ID: <5C76E02F.4010906@openstack.org> Erik, I definitely understand the timeline is tight. 
One of the reasons that we publish the schedule so early is to enable community members to plan their schedule early, especially as there is more overlap with the main Summit Schedule in Denver. Additionally, travel approval is often predicated upon someone showing they're leading/moderating a session. Before publishing the schedule, we print a draft Forum schedule for community feedback and start promotion of the schedule, which we have to put up on the OpenStack website and apps at 5 weeks out. Extending the date beyond the 10th won't give the Forum Selection Committee enough time to complete those tasks. I think if the Ops team can come up with some high level discussion topics, we'll be happy to put some holds in the Forum schedule for Ops-specific content. diablo_rojo has also offered to attend some of the Ops sessions remotely as well, if that would help you all shape some things into actual sessions. I wish I could offer a further extension, but extending it another week would push too far into the process. Cheers, Jimmy > Erik McCormick > February 27, 2019 at 12:43 PM > Jimmy, > > I won't even get home until the 10th much less have time to follow up > with anyone. The formation of those sessions often come from > discussions spawned at the meetup and expanded upon later with folks > who could not attend. Could we at least get until 3/17? I understand > your desire to finalize the schedule, but 6 weeks out should be more > than enough time, no? > > Thanks, > Erik > Jimmy McArthur > February 27, 2019 at 12:04 PM > Hi Erik, > > We are able to extend the deadline to 11:59PM Pacific, March 10th. > That should give the weekend to get any additional stragglers in and > still allow the Forum Programming Committee enough time to manage the > rest of the approval and publishing process in time for people's > travel needs, etc... > > For the Ops Meetup specifically, I'd suggest going a bit broader with > the proposals and offering to fill in the blanks later. For example, > if something comes up and everyone agrees it should go to the Forum, > just submit before the end of the Ops session. Kendall or myself > would be happy to help you add details a bit later in the process, > should clarification be necessary. We typically have enough spots for > the majority of proposed Forum sessions. That's not a guarantee, but > food for thought. > > Cheers, > Jimmy > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > Erik McCormick > February 27, 2019 at 11:31 AM > Would it be possible to push the deadline back a couple weeks? I > expect there to be a few session proposals that will come out of the > Ops Meetup which ends the day before the deadline. It would be helpful > to have a little time to organize and submit things afterwards. > > Thanks, > Erik > > Jimmy McArthur > February 27, 2019 at 10:40 AM > Hi Everyone - > > A quick reminder that we are accepting Forum [1] submissions for the > 2019 Open Infrastructure Summit in Denver [2]. Please submit your > ideas through the Summit CFP tool [3] through March 8th. Don't forget > to put your brainstorming etherpad up on the Denver Forum page [4]. > > This is not a classic conference track with speakers and > presentations. 
OSF community members (participants in development > teams, operators, working groups, SIGs, and other interested > individuals) discuss the topics they want to cover and get alignment > on and we welcome your participation. The Forum is your opportunity > to help shape the development of future project releases. More > information about the Forum [1]. > > If you have questions or concerns, please reach out to > speakersupport at openstack.org . > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/denver-2019/ > [3] https://www.openstack.org/summit/denver-2019/call-for-presentations > [4] https://wiki.openstack.org/wiki/Forum/Denver2019 > ___________________________________________ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 27 19:14:54 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Feb 2019 11:14:54 -0800 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <5C76E02F.4010906@openstack.org> References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> Message-ID: Hello :) On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur wrote: > Erik, > > I definitely understand the timeline is tight. One of the reasons that we > publish the schedule so early is to enable community members to plan their > schedule early, especially as there is more overlap with the main Summit > Schedule in Denver. Additionally, travel approval is often predicated upon > someone showing they're leading/moderating a session. > > Before publishing the schedule, we print a draft Forum schedule for > community feedback and start promotion of the schedule, which we have to > put up on the OpenStack website and apps at 5 weeks out. Extending the date > beyond the 10th won't give the Forum Selection Committee enough time to > complete those tasks. > > I think if the Ops team can come up with some high level discussion > topics, we'll be happy to put some holds in the Forum schedule for > Ops-specific content. diablo_rojo has also offered to attend some of the > Ops sessions remotely as well, if that would help you all shape some things > into actual sessions. > I'm definitely happy to help as much as I can. If you'll have something set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I definitely will. I could also read through etherpads you take notes in and help summarize things into forum proposals. Another thing to note is that whatever you/we submit, it doesn't have to be award winning :) Its totally possible to change session descriptions and edit who the speaker is later. Other random thought, I know Sean McGinnis has attended a lot of the Operators stuff in the past so maybe he could help narrow things down too? Not to sign him up for more work, but I know he's written a forum propsal or two in the past ;) > > I wish I could offer a further extension, but extending it another week > would push too far into the process. > > Cheers, > Jimmy > > Erik McCormick > February 27, 2019 at 12:43 PM > > Jimmy, > > I won't even get home until the 10th much less have time to follow up > with anyone. The formation of those sessions often come from > discussions spawned at the meetup and expanded upon later with folks > who could not attend. Could we at least get until 3/17? I understand > your desire to finalize the schedule, but 6 weeks out should be more > than enough time, no? 
> > Thanks, > Erik > > Jimmy McArthur > February 27, 2019 at 12:04 PM > > Hi Erik, > > We are able to extend the deadline to 11:59PM Pacific, March 10th. That > should give the weekend to get any additional stragglers in and still allow > the Forum Programming Committee enough time to manage the rest of the > approval and publishing process in time for people's travel needs, etc... > > For the Ops Meetup specifically, I'd suggest going a bit broader with the > proposals and offering to fill in the blanks later. For example, if > something comes up and everyone agrees it should go to the Forum, just > submit before the end of the Ops session. Kendall or myself would be happy > to help you add details a bit later in the process, should clarification be > necessary. We typically have enough spots for the majority of proposed > Forum sessions. That's not a guarantee, but food for thought. > > Cheers, > Jimmy > > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > > Erik McCormick > February 27, 2019 at 11:31 AM > Would it be possible to push the deadline back a couple weeks? I expect > there to be a few session proposals that will come out of the Ops Meetup > which ends the day before the deadline. It would be helpful to have a > little time to organize and submit things afterwards. > > Thanks, > Erik > > Jimmy McArthur > February 27, 2019 at 10:40 AM > Hi Everyone - > > A quick reminder that we are accepting Forum [1] submissions for the 2019 > Open Infrastructure Summit in Denver [2]. Please submit your ideas through > the Summit CFP tool [3] through March 8th. Don't forget to put your > brainstorming etherpad up on the Denver Forum page [4]. > > This is not a classic conference track with speakers and presentations. > OSF community members (participants in development teams, operators, > working groups, SIGs, and other interested individuals) discuss the topics > they want to cover and get alignment on and we welcome your participation. > The Forum is your opportunity to help shape the development of future > project releases. More information about the Forum [1]. > > If you have questions or concerns, please reach out to > speakersupport at openstack.org. > > Cheers, > Jimmy > > [1] https://wiki.openstack.org/wiki/Forum > [2] https://www.openstack.org/summit/denver-2019/ > [3] https://www.openstack.org/summit/denver-2019/call-for-presentations > [4] https://wiki.openstack.org/wiki/Forum/Denver2019 > ___________________________________________ > > Hopefully that helps! -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 27 19:19:54 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Feb 2019 11:19:54 -0800 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> Message-ID: Another- nother thought: You could take a look at what is submitted by project teams closer to the deadline and see if your ideas might fit well with theirs since they are looking for feedback from operators anyway. In the past I have always hoped for more engagement in the forum sessions I've submitted but only ever had one or two operators able to join us. 
-Kendall (diablo_rojo) On Wed, Feb 27, 2019 at 11:14 AM Kendall Nelson wrote: > Hello :) > > On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur > wrote: > >> Erik, >> >> I definitely understand the timeline is tight. One of the reasons that >> we publish the schedule so early is to enable community members to plan >> their schedule early, especially as there is more overlap with the main >> Summit Schedule in Denver. Additionally, travel approval is often >> predicated upon someone showing they're leading/moderating a session. >> >> Before publishing the schedule, we print a draft Forum schedule for >> community feedback and start promotion of the schedule, which we have to >> put up on the OpenStack website and apps at 5 weeks out. Extending the date >> beyond the 10th won't give the Forum Selection Committee enough time to >> complete those tasks. >> >> I think if the Ops team can come up with some high level discussion >> topics, we'll be happy to put some holds in the Forum schedule for >> Ops-specific content. diablo_rojo has also offered to attend some of the >> Ops sessions remotely as well, if that would help you all shape some things >> into actual sessions. >> > > I'm definitely happy to help as much as I can. If you'll have something > set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I > definitely will. I could also read through etherpads you take notes in and > help summarize things into forum proposals. > > Another thing to note is that whatever you/we submit, it doesn't have to > be award winning :) Its totally possible to change session descriptions and > edit who the speaker is later. > > Other random thought, I know Sean McGinnis has attended a lot of the > Operators stuff in the past so maybe he could help narrow things down too? > Not to sign him up for more work, but I know he's written a forum propsal > or two in the past ;) > > >> >> I wish I could offer a further extension, but extending it another week >> would push too far into the process. >> >> Cheers, >> Jimmy >> >> Erik McCormick >> February 27, 2019 at 12:43 PM >> >> Jimmy, >> >> I won't even get home until the 10th much less have time to follow up >> with anyone. The formation of those sessions often come from >> discussions spawned at the meetup and expanded upon later with folks >> who could not attend. Could we at least get until 3/17? I understand >> your desire to finalize the schedule, but 6 weeks out should be more >> than enough time, no? >> >> Thanks, >> Erik >> >> Jimmy McArthur >> February 27, 2019 at 12:04 PM >> >> Hi Erik, >> >> We are able to extend the deadline to 11:59PM Pacific, March 10th. That >> should give the weekend to get any additional stragglers in and still allow >> the Forum Programming Committee enough time to manage the rest of the >> approval and publishing process in time for people's travel needs, etc... >> >> For the Ops Meetup specifically, I'd suggest going a bit broader with the >> proposals and offering to fill in the blanks later. For example, if >> something comes up and everyone agrees it should go to the Forum, just >> submit before the end of the Ops session. Kendall or myself would be happy >> to help you add details a bit later in the process, should clarification be >> necessary. We typically have enough spots for the majority of proposed >> Forum sessions. That's not a guarantee, but food for thought. 
>> >> Cheers, >> Jimmy >> >> >> _______________________________________________ >> Airship-discuss mailing list >> Airship-discuss at lists.airshipit.org >> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >> >> Erik McCormick >> February 27, 2019 at 11:31 AM >> Would it be possible to push the deadline back a couple weeks? I expect >> there to be a few session proposals that will come out of the Ops Meetup >> which ends the day before the deadline. It would be helpful to have a >> little time to organize and submit things afterwards. >> >> Thanks, >> Erik >> >> Jimmy McArthur >> February 27, 2019 at 10:40 AM >> Hi Everyone - >> >> A quick reminder that we are accepting Forum [1] submissions for the 2019 >> Open Infrastructure Summit in Denver [2]. Please submit your ideas through >> the Summit CFP tool [3] through March 8th. Don't forget to put your >> brainstorming etherpad up on the Denver Forum page [4]. >> >> This is not a classic conference track with speakers and presentations. >> OSF community members (participants in development teams, operators, >> working groups, SIGs, and other interested individuals) discuss the topics >> they want to cover and get alignment on and we welcome your participation. >> The Forum is your opportunity to help shape the development of future >> project releases. More information about the Forum [1]. >> >> If you have questions or concerns, please reach out to >> speakersupport at openstack.org. >> >> Cheers, >> Jimmy >> >> [1] https://wiki.openstack.org/wiki/Forum >> [2] https://www.openstack.org/summit/denver-2019/ >> [3] https://www.openstack.org/summit/denver-2019/call-for-presentations >> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >> ___________________________________________ >> >> > Hopefully that helps! > > -Kendall (diablo_rojo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.bishop at beyondhosting.net Wed Feb 27 19:30:14 2019 From: tyler.bishop at beyondhosting.net (Tyler Bishop) Date: Wed, 27 Feb 2019 14:30:14 -0500 Subject: [kolla] - Ceph bootstrap monitor is not properly configured Message-ID: Trying to deploy a new cluster using the bootstrapping for ceph but running into issues with the admin keys being incorrectly deployed. 
Successful ansible deploy up until: TASK [ceph : Getting ceph mgr keyring] ****************************************************************************************************************************** failed: [osctlr.home.visualbits.net -> osctlr.home.visualbits.net] (item= osctlr.home.visualbits.net) => {"changed": false, "item": " osctlr.home.visualbits.net", "msg": "Failed to call command: ['docker', 'exec', 'ceph_mon', 'ceph', '--format', 'json', 'auth', 'get-or-create', ' mgr.osctlr.home.visualbits.net', 'mds', 'allow *', 'mon', 'allow profile mgr', 'osd', 'allow *'] returncode: 1 output: stdout: \"\", stderr: \"[errno 1] error connecting to the cluster\n\""} Errors in log from ceph: TASK [ceph : Getting ceph mgr keyring] ****************************************************************************************************************************** failed: [osctlr.home.visualbits.net -> osctlr.home.visualbits.net] (item= osctlr.home.visualbits.net) => {"changed": false, "item": " osctlr.home.visualbits.net", "msg": "Failed to call command: ['docker', 'exec', 'ceph_mon', 'ceph', '--format', 'json', 'auth', 'get-or-create', ' mgr.osctlr.home.visualbits.net', 'mds', 'allow *', 'mon', 'allow profile mgr', 'osd', 'allow *'] returncode: 1 output: stdout: \"\", stderr: \"[errno 1] error connecting to the cluster\n\""} keyrings look proper: (openstack) [root at osctlr ~]# md5sum /etc/kolla/ceph-mon/ceph.client.admin.keyring 4658c01282c791bce9c75678df9e21c9 /etc/kolla/ceph-mon/ceph.client.admin.keyring (openstack) [root at osctlr ~]# md5sum /var/lib/docker/volumes/ceph_mon_config/_data/ceph.client.admin.keyring 4658c01282c791bce9c75678df9e21c9 /var/lib/docker/volumes/ceph_mon_config/_data/ceph.client.admin.keyring I've removed the docker container, volume and kolla config directories multple times with the same error. I can't even run ceph status from the container bash itself. Any ideas? -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Wed Feb 27 19:33:10 2019 From: florian.engelmann at everyware.ch (Engelmann Florian) Date: Wed, 27 Feb 2019 19:33:10 +0000 Subject: [ceilometer] radosgw pollster In-Reply-To: References: , Message-ID: <1551295990725.66562@everyware.ch> Hi Christian, thank you for your feedback and help! Permissions are fine as I tried to poll the Endpoint successfully with curl and the user (key + secret) we created (and is configured in ceilometer.conf). I saw the requests-aws is used in OSA and it is indeed missing in the kolla container (we use "source" not binary). https://github.com/openstack/kolla/blob/master/docker/ceilometer/ceilometer-base/Dockerfile.j2 I will build a new ceilometer container including requests-aws tomorrow to see if this fixes the problem. All the best, Florian ________________________________ From: Christian Zunker Sent: Wednesday, February 27, 2019 9:09 AM To: Engelmann Florian Cc: openstack-discuss at lists.openstack.org Subject: Re: [ceilometer] radosgw pollster Hi Florian, have you tried different permissions for your ceilometer user in radosgw? According to the docs you need an admin user: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage Our user has these caps: usage=read,write;metadata=read,write;users=read,write;buckets=read,write We also had to add the requests-aws pip package to query radosgw from ceilometer: https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html Christian Am Di., 26. Feb. 
2019 um 13:15 Uhr schrieb Florian Engelmann >: Hi Christian, Am 2/26/19 um 11:00 AM schrieb Christian Zunker: > Hi Florian, > > which version of OpenStack are you using? > The radosgw metric names were different in some versions: > https://bugs.launchpad.net/ceilometer/+bug/1726458 we do use Rocky and Ceilometer 11.0.1. I am still lost with that error. As far as I am able to understand python it looks like the error is happening in polling.manager line 222: https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 But I do not understand why. I tried to enable debug logging but the error does not log any additional information. The poller is not even trying to reach/poll our RadosGWs. Looks like that manger is blocking those polls. All the best, Florian > > Christian > > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann > >>: > > Hi, > > I failed to poll any usage data from our radosgw. I get > > 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.containers.objects in the context of > radosgw_300s_pollsters > 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent > pollster radosgw.containers.objects from polling [ description=, > domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > [...] > > id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self': > u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source > radosgw_300s_pollsters anymore!: PollsterPermanentError > > Configurations like: > cat polling.yaml > --- > sources: > - name: radosgw_300s_pollsters > interval: 300 > meters: > - radosgw.usage > - radosgw.objects > - radosgw.objects.size > - radosgw.objects.containers > - radosgw.containers.objects > - radosgw.containers.objects.size > > > Also tried radosgw.api.requests instead of radowsgw.usage. > > ceilometer.conf > [...] 
> [service_types] > radosgw = object-store > > [rgw_admin_credentials] > access_key = xxxxx0Z0xxxxxxxxxxxx > secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > [rgw_client] > implicit_tenants = true > > Endpoints: > | xxxxxxx | region | swift | object-store | True | admin > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | > internal > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | public > | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s | > > Ceilometer user: > { > "user_id": "ceilometer", > "display_name": "ceilometer", > "email": "", > "suspended": 0, > "max_buckets": 1000, > "auid": 0, > "subusers": [], > "keys": [ > { > "user": "ceilometer", > "access_key": "xxxxxxxxxxxxxxxxxx", > "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" > } > ], > "swift_keys": [], > "caps": [ > { > "type": "buckets", > "perm": "read" > }, > { > "type": "metadata", > "perm": "read" > }, > { > "type": "usage", > "perm": "read" > }, > { > "type": "users", > "perm": "read" > } > ], > "op_mask": "read, write, delete", > "default_placement": "", > "placement_tags": [], > "bucket_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "user_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "temp_url_keys": [], > "type": "rgw" > } > > > radosgw config: > [client.rgw.xxxxxxxxxxx] > host = somehost > rgw frontends = "civetweb port=7480 num_threads=512" > rgw num rados handles = 8 > rgw thread pool size = 512 > rgw cache enabled = true > rgw dns name = s3.xxxxxx.xxx > rgw enable usage log = true > rgw usage log tick interval = 30 > rgw realm = public > rgw zonegroup = xxx > rgw zone = xxxxx > rgw resolve cname = False > rgw usage log flush threshold = 1024 > rgw usage max user shards = 1 > rgw usage max shards = 32 > rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > rgw_keystone_admin_domain = default > rgw_keystone_admin_project = service > rgw_keystone_admin_user = swift > rgw_keystone_admin_password = > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > rgw_keystone_accepted_roles = member,_member_,admin > rgw_keystone_accepted_admin_roles = admin > rgw_keystone_api_version = 3 > rgw_keystone_verify_ssl = false > rgw_keystone_implicit_tenants = true > rgw_keystone_admin_tenant = default > rgw_keystone_revocation_interval = 0 > rgw_keystone_token_cache_size = 0 > rgw_s3_auth_use_keystone = true > rgw_max_attr_size = 1024 > rgw_max_attrs_num_in_req = 32 > rgw_max_attr_name_len = 64 > rgw_swift_account_in_url = true > rgw_swift_versioning_enabled = true > rgw_enable_apis = s3,swift,swift_auth,admin > rgw_swift_enforce_content_length = true > > > > > Any idea whats going on? > > All the best, > Florian > > > -- EveryWare AG Florian Engelmann Senior UNIX Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From njha1999 at gmail.com Wed Feb 27 19:33:09 2019 From: njha1999 at gmail.com (Namrata Jha) Date: Thu, 28 Feb 2019 01:03:09 +0530 Subject: Help with contributing to Storyboard for Outreachy 2019. Message-ID: I want to make contributions to the low hanging fruits in the Storyboard project, and want to make changes in the REST API documentation as is required in one of the stories, however, I have no idea how to go about it. I have even commented on the story in concern: https://storyboard.openstack.org/#!/story/298. Any help on this would be greatly appreciated. Moreover, the skills required mentioned on the Outreachy project listings page primarily includes SQL but there are no SQL related bugs to solve on the storyboard. I will contribute to improve the documentations wherever I can but how will that determine my selection as an Outreachy intern. P.S. I'm sorry @Kendall Nelson , I had not intended to personally mail you this issue earlier, I replied to a previous thread and didn't realize that the mailing list wasn't included. Sorry for that. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Wed Feb 27 19:38:56 2019 From: florian.engelmann at everyware.ch (Engelmann Florian) Date: Wed, 27 Feb 2019 19:38:56 +0000 Subject: [ceilometer] radosgw pollster In-Reply-To: <1551295990725.66562@everyware.ch> References: , , <1551295990725.66562@everyware.ch> Message-ID: <1551296336596.95229@everyware.ch> Hi Christian, looks like a hit: https://github.com/openstack/ceilometer/commit/c9eb2d44df7cafde1294123d66445ebef4cfb76d You made my day! I will test tomorrow and report back! ​ All the best, Florian ________________________________ From: Engelmann Florian Sent: Wednesday, February 27, 2019 8:33 PM To: Christian Zunker Cc: openstack-discuss at lists.openstack.org Subject: Re: [ceilometer] radosgw pollster Hi Christian, thank you for your feedback and help! Permissions are fine as I tried to poll the Endpoint successfully with curl and the user (key + secret) we created (and is configured in ceilometer.conf). I saw the requests-aws is used in OSA and it is indeed missing in the kolla container (we use "source" not binary). https://github.com/openstack/kolla/blob/master/docker/ceilometer/ceilometer-base/Dockerfile.j2 I will build a new ceilometer container including requests-aws tomorrow to see if this fixes the problem. All the best, Florian ________________________________ From: Christian Zunker Sent: Wednesday, February 27, 2019 9:09 AM To: Engelmann Florian Cc: openstack-discuss at lists.openstack.org Subject: Re: [ceilometer] radosgw pollster Hi Florian, have you tried different permissions for your ceilometer user in radosgw? According to the docs you need an admin user: https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage Our user has these caps: usage=read,write;metadata=read,write;users=read,write;buckets=read,write We also had to add the requests-aws pip package to query radosgw from ceilometer: https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html Christian Am Di., 26. Feb. 2019 um 13:15 Uhr schrieb Florian Engelmann >: Hi Christian, Am 2/26/19 um 11:00 AM schrieb Christian Zunker: > Hi Florian, > > which version of OpenStack are you using? 
> The radosgw metric names were different in some versions: > https://bugs.launchpad.net/ceilometer/+bug/1726458 we do use Rocky and Ceilometer 11.0.1. I am still lost with that error. As far as I am able to understand python it looks like the error is happening in polling.manager line 222: https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 But I do not understand why. I tried to enable debug logging but the error does not log any additional information. The poller is not even trying to reach/poll our RadosGWs. Looks like that manger is blocking those polls. All the best, Florian > > Christian > > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann > >>: > > Hi, > > I failed to poll any usage data from our radosgw. I get > > 2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.containers.objects in the context of > radosgw_300s_pollsters > 2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager [-] Prevent > pollster radosgw.containers.objects from polling [ description=, > domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > [...] > > id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, links={u'self': > u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on source > radosgw_300s_pollsters anymore!: PollsterPermanentError > > Configurations like: > cat polling.yaml > --- > sources: > - name: radosgw_300s_pollsters > interval: 300 > meters: > - radosgw.usage > - radosgw.objects > - radosgw.objects.size > - radosgw.objects.containers > - radosgw.containers.objects > - radosgw.containers.objects.size > > > Also tried radosgw.api.requests instead of radowsgw.usage. > > ceilometer.conf > [...] > [service_types] > radosgw = object-store > > [rgw_admin_credentials] > access_key = xxxxx0Z0xxxxxxxxxxxx > secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > [rgw_client] > implicit_tenants = true > > Endpoints: > | xxxxxxx | region | swift | object-store | True | admin > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | > internal > | http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s | > | xxxxxxx | region | swift | object-store | True | public > | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s | > > Ceilometer user: > { > "user_id": "ceilometer", > "display_name": "ceilometer", > "email": "", > "suspended": 0, > "max_buckets": 1000, > "auid": 0, > "subusers": [], > "keys": [ > { > "user": "ceilometer", > "access_key": "xxxxxxxxxxxxxxxxxx", > "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" > } > ], > "swift_keys": [], > "caps": [ > { > "type": "buckets", > "perm": "read" > }, > { > "type": "metadata", > "perm": "read" > }, > { > "type": "usage", > "perm": "read" > }, > { > "type": "users", > "perm": "read" > } > ], > "op_mask": "read, write, delete", > "default_placement": "", > "placement_tags": [], > "bucket_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "user_quota": { > "enabled": false, > "check_on_raw": false, > "max_size": -1, > "max_size_kb": 0, > "max_objects": -1 > }, > "temp_url_keys": [], > "type": "rgw" > } > > > radosgw config: > [client.rgw.xxxxxxxxxxx] > host = somehost > rgw frontends = "civetweb port=7480 num_threads=512" > rgw num rados handles = 8 > rgw thread pool size = 512 > rgw cache enabled = true > rgw dns name = s3.xxxxxx.xxx > rgw enable usage log = true > rgw usage log tick interval = 30 > rgw realm = public > rgw 
zonegroup = xxx > rgw zone = xxxxx > rgw resolve cname = False > rgw usage log flush threshold = 1024 > rgw usage max user shards = 1 > rgw usage max shards = 32 > rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > rgw_keystone_admin_domain = default > rgw_keystone_admin_project = service > rgw_keystone_admin_user = swift > rgw_keystone_admin_password = > xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > rgw_keystone_accepted_roles = member,_member_,admin > rgw_keystone_accepted_admin_roles = admin > rgw_keystone_api_version = 3 > rgw_keystone_verify_ssl = false > rgw_keystone_implicit_tenants = true > rgw_keystone_admin_tenant = default > rgw_keystone_revocation_interval = 0 > rgw_keystone_token_cache_size = 0 > rgw_s3_auth_use_keystone = true > rgw_max_attr_size = 1024 > rgw_max_attrs_num_in_req = 32 > rgw_max_attr_name_len = 64 > rgw_swift_account_in_url = true > rgw_swift_versioning_enabled = true > rgw_enable_apis = s3,swift,swift_auth,admin > rgw_swift_enforce_content_length = true > > > > > Any idea whats going on? > > All the best, > Florian > > > -- EveryWare AG Florian Engelmann Senior UNIX Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From zbitter at redhat.com Wed Feb 27 20:49:33 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Feb 2019 15:49:33 -0500 Subject: [heat] keystone endpoint configuration In-Reply-To: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: On 20/02/19 1:40 PM, Jonathan Rosser wrote: > In openstack-ansible we are trying to help a number of our end users > with their heat deployments, some of them in conjunction with magnum. > > There is some uncertainty with how the following heat.conf sections > should be configured: > > [clients_keystone] > auth_uri = ... > > [keystone_authtoken] > www_authenticate_uri = ... > > It does not appear to be possible to define a set of internal or > external keystone endpoints in heat.conf which allow the following: > >  * The orchestration panels being functional in horizon >  * Deployers isolating internal openstack from external networks >  * Deployers using self signed/company cert on the external endpoint >  * Magnum deployments completing >  * Heat delivering an external endpoint at [1] >  * Heat delivering an external endpoint at [2] > > There are a number of related bugs: > > https://bugs.launchpad.net/openstack-ansible/+bug/1814909 > https://bugs.launchpad.net/openstack-ansible/+bug/1811086 > https://storyboard.openstack.org/#!/story/2004808 > https://storyboard.openstack.org/#!/story/2004524 Based on this and your comment on IRC[1] - and correct me if I'm misunderstanding here - the crux of the issue is that the Keystone auth_url must be accessed via different addresses depending on which network the request is coming from? I don't think this was ever contemplated as a use case in developing Heat. For my part, I certainly always assumed that while the Keystone catalog could contain different Public/Internal/Admin endpoints for each service, there was only a single place to access the catalog (i.e. each cloud had a single unique auth_url). 
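To make the question concrete, the kind of split people seem to be reaching for looks roughly like the sketch below. The hostnames are invented by me, the [keystone_authtoken]/auth_url line is keystonemiddleware's usual token-validation setting rather than anything from your mail, and I can't vouch that heat actually honours the split in every code path - treat it as an illustration of the requirement, not a tested recommendation:

[clients_keystone]
# keystone URL used by heat's own clients and, per the links in Jon's
# mail, reflected in some of the URLs heat hands back to instances -
# so it would need to be the externally reachable (public) endpoint:
auth_uri = https://keystone.public.example.com:5000

[keystone_authtoken]
# URL advertised to unauthenticated API callers in WWW-Authenticate
# responses, again the public endpoint:
www_authenticate_uri = https://keystone.public.example.com:5000
# keystonemiddleware's own token-validation traffic could stay on the
# isolated internal network:
auth_url = http://keystone.internal.example.com:5000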
It's entirely possible this wasn't a valid assumption about the how clouds would/should be deployed in practice. If that's the case then we likely need some richer configuration options. The design of the Keystone catalog predates both the existence of Heat and the idea that cloud workloads might have reason to access the OpenStack APIs, and nobody is really an expert on both although we've gotten better at communicating. [1] http://eavesdrop.openstack.org/irclogs/%23heat/%23heat.2019-02-26.log.html#t2019-02-26T17:14:14 > Any help we could get from the heat team to try to understand the root > cause of these issues would be really helpful. > > Jon. > > > [1] > https://github.com/openstack/heat/blob/master/heat/engine/resources/server_base.py#L87 > > > [2] > https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 > > From mnaser at vexxhost.com Wed Feb 27 20:54:26 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 27 Feb 2019 15:54:26 -0500 Subject: [openstack-ansible] Compute nodes with mixed system releases In-Reply-To: <4117207.Ezq4iH3xk8@gillesxps> References: <4117207.Ezq4iH3xk8@gillesxps> Message-ID: Hi Gilles, You will run into a few interesting issues such as how we build our repo based on the OS that the controller runs which means that there will be no pre-built venvs for those systems. I'd strongly suggest sticking to 16.04 and upgrading it all at once, for the other issues that you mentioned as well. Thanks, Mohammed On Tue, Feb 26, 2019 at 5:58 PM Gilles Mocellin wrote: > > Hello, > > I can ot find a real answer in the OpenStack-Ansible docs. > Can I add Ubuntu 18.04 compute nodes to my actueal all Ubuntu 16.04 cluster ? > > Ubuntu 18.04 needs Rocky, so I will first migrate from Queens to Rocky. > But then, do I need to stick to 16.04 and plan an overall upgrade after ? > > Of course, I understand that mixing Ubuntu release, will also mix kernel and > qemu versions and can pose problems, for migrations for example. > > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From skaplons at redhat.com Wed Feb 27 21:02:12 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 27 Feb 2019 22:02:12 +0100 Subject: [neutron] Issue in neutron-tempest-iptables_hybrid gate job Message-ID: <68617E57-4EA0-498C-9803-869BB9C5842A@redhat.com> Hi Neutrinos, Just FYI, we have bug in os-vif [1] which cause failures of (at least) neutron-tempest-iptables_hybrid. So if Zuul is failing on Your patch because of this job, please don’t recheck it until patch [2] will be merged and new os-vif will be released. [1] https://bugs.launchpad.net/os-vif/+bug/1817919 [2] https://review.openstack.org/#/c/639702/ — Slawek Kaplonski Senior software engineer Red Hat From colleen at gazlene.net Wed Feb 27 21:13:58 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 27 Feb 2019 16:13:58 -0500 Subject: [heat] [keystone] keystone endpoint configuration In-Reply-To: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: <87e0c889-ab02-4e97-b5ff-bd93e7d9d53f@www.fastmail.com> Hi, On Wed, Feb 20, 2019, at 7:40 PM, Jonathan Rosser wrote: > In openstack-ansible we are trying to help a number of our end users > with their heat deployments, some of them in conjunction with magnum. 
> > There is some uncertainty with how the following heat.conf sections > should be configured: > > [clients_keystone] > auth_uri = ... > > [keystone_authtoken] > www_authenticate_uri = ... I know very little about heat, but I think there's some confusion about what [keystone_authtoken]/www_authenticate_uri is for, and after grepping a bit I think heat is misusing it. www_authenticate_uri (formerly known as auth_uri) is meant to be used by keystonemiddleware to set the WWW-Authenticate header in its response when a client request fails to present a valid keystone token. It's a wsgi middleware, so heat shouldn't be using it or even be aware of it. You would normally set it to keystone's public endpoint since it's what the server would present to an end user to help them retry their request. If the client already knows the right auth URL and grabs a token beforehand, it will never see what's in www_authenticate_uri. Heat appears to be using it for its own purposes, which I think is not advisable. If heat needs to provide other subsystems or services with a URL for keystone, it should do that in a separate config. I don't really have enough heat knowledge to make a recommendation for the issues below. While keystoneauth provides a way to filter the service catalog by admin/internal/public endpoint, it obviously doesn't help much when you don't know where keystone is yet. Colleen > > It does not appear to be possible to define a set of internal or > external keystone endpoints in heat.conf which allow the following: > > * The orchestration panels being functional in horizon > * Deployers isolating internal openstack from external networks > * Deployers using self signed/company cert on the external endpoint > * Magnum deployments completing > * Heat delivering an external endpoint at [1] > * Heat delivering an external endpoint at [2] > > There are a number of related bugs: > > https://bugs.launchpad.net/openstack-ansible/+bug/1814909 > https://bugs.launchpad.net/openstack-ansible/+bug/1811086 > https://storyboard.openstack.org/#!/story/2004808 > https://storyboard.openstack.org/#!/story/2004524 > > Any help we could get from the heat team to try to understand the root > cause of these issues would be really helpful. > > Jon. > > > [1] > https://github.com/openstack/heat/blob/master/heat/engine/resources/server_base.py#L87 > > [2] > https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 > > From mihalis68 at gmail.com Wed Feb 27 21:25:55 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 27 Feb 2019 16:25:55 -0500 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> Message-ID: I think the issue is that forum submissions building on what gets discussed in Berlin can't be expected to be finalised whilst attendees to the berlin meetup are still traveling. It's not that Erik can't pull these things together, in fact he's an old hand at this, it's more that this process isn't reasonable if there's so little time to collate what we learn in Berlin and feed it forward to Denver. Frankly it sounds like because the planning committee needs 5 weeks, Erik can have two days. Seem unfair. 
Chris On Wed, Feb 27, 2019 at 2:29 PM Kendall Nelson wrote: > Another- nother thought: You could take a look at what is submitted by > project teams closer to the deadline and see if your ideas might fit well > with theirs since they are looking for feedback from operators anyway. In > the past I have always hoped for more engagement in the forum sessions I've > submitted but only ever had one or two operators able to join us. > > -Kendall (diablo_rojo) > > On Wed, Feb 27, 2019 at 11:14 AM Kendall Nelson > wrote: > >> Hello :) >> >> On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur >> wrote: >> >>> Erik, >>> >>> I definitely understand the timeline is tight. One of the reasons that >>> we publish the schedule so early is to enable community members to plan >>> their schedule early, especially as there is more overlap with the main >>> Summit Schedule in Denver. Additionally, travel approval is often >>> predicated upon someone showing they're leading/moderating a session. >>> >>> Before publishing the schedule, we print a draft Forum schedule for >>> community feedback and start promotion of the schedule, which we have to >>> put up on the OpenStack website and apps at 5 weeks out. Extending the date >>> beyond the 10th won't give the Forum Selection Committee enough time to >>> complete those tasks. >>> >>> I think if the Ops team can come up with some high level discussion >>> topics, we'll be happy to put some holds in the Forum schedule for >>> Ops-specific content. diablo_rojo has also offered to attend some of the >>> Ops sessions remotely as well, if that would help you all shape some things >>> into actual sessions. >>> >> >> I'm definitely happy to help as much as I can. If you'll have something >> set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I >> definitely will. I could also read through etherpads you take notes in and >> help summarize things into forum proposals. >> >> Another thing to note is that whatever you/we submit, it doesn't have to >> be award winning :) Its totally possible to change session descriptions and >> edit who the speaker is later. >> >> Other random thought, I know Sean McGinnis has attended a lot of the >> Operators stuff in the past so maybe he could help narrow things down too? >> Not to sign him up for more work, but I know he's written a forum propsal >> or two in the past ;) >> >> >>> >>> I wish I could offer a further extension, but extending it another week >>> would push too far into the process. >>> >>> Cheers, >>> Jimmy >>> >>> Erik McCormick >>> February 27, 2019 at 12:43 PM >>> >>> Jimmy, >>> >>> I won't even get home until the 10th much less have time to follow up >>> with anyone. The formation of those sessions often come from >>> discussions spawned at the meetup and expanded upon later with folks >>> who could not attend. Could we at least get until 3/17? I understand >>> your desire to finalize the schedule, but 6 weeks out should be more >>> than enough time, no? >>> >>> Thanks, >>> Erik >>> >>> Jimmy McArthur >>> February 27, 2019 at 12:04 PM >>> >>> Hi Erik, >>> >>> We are able to extend the deadline to 11:59PM Pacific, March 10th. That >>> should give the weekend to get any additional stragglers in and still allow >>> the Forum Programming Committee enough time to manage the rest of the >>> approval and publishing process in time for people's travel needs, etc... >>> >>> For the Ops Meetup specifically, I'd suggest going a bit broader with >>> the proposals and offering to fill in the blanks later. 
For example, if >>> something comes up and everyone agrees it should go to the Forum, just >>> submit before the end of the Ops session. Kendall or myself would be happy >>> to help you add details a bit later in the process, should clarification be >>> necessary. We typically have enough spots for the majority of proposed >>> Forum sessions. That's not a guarantee, but food for thought. >>> >>> Cheers, >>> Jimmy >>> >>> >>> _______________________________________________ >>> Airship-discuss mailing list >>> Airship-discuss at lists.airshipit.org >>> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >>> >>> Erik McCormick >>> February 27, 2019 at 11:31 AM >>> Would it be possible to push the deadline back a couple weeks? I expect >>> there to be a few session proposals that will come out of the Ops Meetup >>> which ends the day before the deadline. It would be helpful to have a >>> little time to organize and submit things afterwards. >>> >>> Thanks, >>> Erik >>> >>> Jimmy McArthur >>> February 27, 2019 at 10:40 AM >>> Hi Everyone - >>> >>> A quick reminder that we are accepting Forum [1] submissions for the >>> 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas >>> through the Summit CFP tool [3] through March 8th. Don't forget to put >>> your brainstorming etherpad up on the Denver Forum page [4]. >>> >>> This is not a classic conference track with speakers and presentations. >>> OSF community members (participants in development teams, operators, >>> working groups, SIGs, and other interested individuals) discuss the topics >>> they want to cover and get alignment on and we welcome your participation. >>> The Forum is your opportunity to help shape the development of future >>> project releases. More information about the Forum [1]. >>> >>> If you have questions or concerns, please reach out to >>> speakersupport at openstack.org. >>> >>> Cheers, >>> Jimmy >>> >>> [1] https://wiki.openstack.org/wiki/Forum >>> [2] https://www.openstack.org/summit/denver-2019/ >>> [3] https://www.openstack.org/summit/denver-2019/call-for-presentations >>> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >>> ___________________________________________ >>> >>> >> Hopefully that helps! >> >> -Kendall (diablo_rojo) >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Wed Feb 27 21:47:48 2019 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 27 Feb 2019 13:47:48 -0800 Subject: Fw: [congress] Handling alarms that can be erroneous In-Reply-To: References: Message-ID: To facilitate further discussion, I have begun an etherpad [1] to write out in more detail the cases to consider as well as the desired behaviors and potential solutions. Feel free to add/elaborate/correct the cases! [1] https://etherpad.openstack.org/p/congress-exec-semantics-cases On Mon, Feb 25, 2019 at 3:57 PM Eric K wrote: > > On Sun, Feb 24, 2019 at 7:14 PM AKHIL Jain wrote: > > > > Hi all, > > > > This discussion is about keeping, managing and executing actions based on old alarms. > > > > In Congress, when the policy is created the corresponding actions are executed based on data already existing in datasource tables and on the data that is received later in Congress datasource tables. > > So the alarms raised by projects like aodh, monasca are polled by congress and even the webhook notifications for alarm are received and stored in congress. > > In Congress, there are two scenarios of policy execution. 
One, execution based on data already existing before the policy is created and second, policy is created and action is executed at any time after the data is received > Fundamentally the current policy formalism is based on state. Policy > is evaluated on the latest state, whether that state is formed before > or after a policy a created. > Based on the emphasis on order, it feels like perhaps what you're > looking for is a change-based formalism, where policy is evaluated on > the change to state? > For example, a state-based policy may say: if it *is* raining, make > sure umbrella is used. > A change-based policy may say: if it *starts* raining, deploy umbrella. > Generally speaking, state-based formalism leads to simpler and more > robust policies, but change-based formalism allows for greater > control. But the use of one formalism does not necessarily preclude > the other. > > > > Which can be harmful by keeping in mind that old alarms that are INVALID at present are still stored in Congress tables. So the user can trigger FALSE action based on that invalid alarm which can be very harmful to the environment. > Just to clarify for someone coming to the discussion: under normal > operations, alarms which have become inactive are also accurately > reflected in Congress. Of course, as with any distributed system, > there are issues with delivery and latency and timing. So we want to > make sure Congress offers the right facilities in its policy formalism > to enable policy writers to write robust policies that avoid > unintended behaviors. (More details in the discussion in the quoted > emails.) > > > > In order to tackle this, there can be multiple ways from the perspective of every OpenStack project handling alarms. > > One of the solutions can be: As action needs to be taken immediately after the alarm is raised, so storing only those alarms that have corresponding actions or policies(that will use the alarm) and after the policy is executed on them just discard those alarms or mark those alarm with some field like old, executed, etc. Or there are use cases that require old alarms? > > > > Also, we need to provide Operator the ability to delete the rows in congress datasource table. This will not completely help in solving this issue but still, it's better functionality to have IMO. > > > > Above solution or any discussed better solution can lead to change in mechanism i.e currently followed that involves policy execution on both new alarm and existing alarm to only new alarm. > > > > I have added the previous discussion below and discussion in Congress weekly IRC meeting can be found here > > http://eavesdrop.openstack.org/meetings/congressteammeeting/2019/congressteammeeting.2019-02-22-04.01.log.html > > > > Thanks and regards, > > Akhil > > ________________________________________ > > From: Eric K > > Sent: Tuesday, February 19, 2019 11:04 AM > > To: AKHIL Jain > > Subject: Re: Congress Demo and Output > > > > Thanks for the update! > > > > Yes of course if created_at field is needed by important use case then > > please feel free to add it! Sample policy in the commit message would be > > very helpful. > > > > > > Regarding old alarms, I need a couple clarifications: > > First, which categories of actions executions are we concerned about? > > 1. Actions executed automatically by congress policy. > > 2. Actions executed automatically by another service getting data from > > Congress. > > 3. Actions executed manually by operator based on data from Congress. 
> > > > Second, let's clarify exactly what we mean by "old". > > There are several categories I can think of: > > 1. Alarms which had been activated and then deactivated. > > 2. Alarms which had been activated and remains active, but it has been > > some time since it first became active. > > 3. Alarms which had been activated and triggered some action, but the > > alarm remains active because the action do not resolve the alarm. > > 4. Alarms which had been activated and triggered some action, and the > > action is in the process of resolving the alarm, but in the mean time the > > alarm remains active. > > > > (1) should generally not show up in Congress as active in push update > > case, but there are failure scenarios in which an update to deactivate can > > fail to reach Congress. > > (2) seems to be the thing option 1.1 would get rid of. But I am not clear > > what problems (2) causes. Why is a bad idea to execute actions based on an > > alarm that has been active for some time and remains active? An example > > would help me =) > > > > I can see (4) causing problems. But I'd like to work through an example to > > understand more concretely. In simple cases, Congress policy action > > execution behavior actually works well. > > > > If we have simple case like: > > execute[action(1)] :- alarm(1) > > Then action(1) is not going to be executed twice by congress because the > > behavior is that Congress executes only the NEWLY COMPUTED actions. > > > > If we have a more complex case like: > > execute[action(1)] :- alarm(1) > > > > execute[action(2)] :- alarm(1), alarm(2) > > If alarm (1) activates first, triggering action(1), then alarm (2) > > activates before alarm(1) deactivates, action(2) would be triggered > > because it is newly computed. Whether we WANT it executed may depend on > > the use case. > > > > And I'd also like to add option 1.3: > > Add a new table in (say monasca) called latest_alarm, which is the same as > > the current alarms table, except that it contains only the most recently > > received active alarm. That way, the policies which must avoid using older > > alarms can refer to the latest_alarm table. Whereas policies which would > > consider all currently active alarms can refer to the alarms table. > > > > Looking forward to more discussion! > > > > > > On 2/17/19, 10:44 PM, "AKHIL Jain" wrote: > > > > >Hi Eric, > > > > > >There are some questions raised while working on FaultManagement usecase, > > >mainly below ones: > > >1. Keeping old alarms can be very harmful, the operator can execute > > >actions based on alarms that are not even existing or valid. > > >2. Adding a created_at field in Nova servers table can be useful. > > > > > >So for the first question, there can be multiple options: > > >1.1 Do not store those alarms that do not have any policy created in > > >Congress to execute on that alarm > > >1.2 Add field in alarm that can tell if the policy is executed using that > > >row or not. And giving the operator a command to delete them or > > >automatically delete them. > > > > > >For 2nd question please tell me that its good to go and I will add it. 
> > > > > >Regards > > >Akhil > > > > From smooney at redhat.com Wed Feb 27 21:48:41 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 27 Feb 2019 21:48:41 +0000 Subject: [neutron] Issue in neutron-tempest-iptables_hybrid gate job In-Reply-To: <68617E57-4EA0-498C-9803-869BB9C5842A@redhat.com> References: <68617E57-4EA0-498C-9803-869BB9C5842A@redhat.com> Message-ID: On Wed, 2019-02-27 at 22:02 +0100, Slawomir Kaplonski wrote: > Hi Neutrinos, > > Just FYI, we have bug in os-vif [1] which cause failures of (at least) neutron-tempest-iptables_hybrid. So if Zuul is > failing on Your patch because of this job, please don’t recheck it until patch [2] will be merged and new os-vif will > be released. > > [1] https://bugs.launchpad.net/os-vif/+bug/1817919 > [2] https://review.openstack.org/#/c/639702/ yes sorry for the delay im creating the patch to the release repo currently and i have added new gate jobs help catch this in the future. https://review.openstack.org/#/c/639732/6 os-vif-ovs-iptables is effectivly a clone of the neutron-tempest-iptables_hybrid gate job that uses os-vif for git instead of from packages and similarly the new os-vif-linuxbridge is a clone of the neutron linux bridge job. i will be moving both to python 3 soon and optimising the test suite to better match os-vifs needs bug going forword we will be testing ml2/ovs with contrack, ml2/ovs with iptables and linux bridge in the os-vif gate. > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From gilles.mocellin at nuagelibre.org Wed Feb 27 21:55:13 2019 From: gilles.mocellin at nuagelibre.org (Gilles Mocellin) Date: Wed, 27 Feb 2019 22:55:13 +0100 Subject: [openstack-ansible] Compute nodes with mixed system releases In-Reply-To: References: <4117207.Ezq4iH3xk8@gillesxps> Message-ID: <3234422.G8zt2Rua2d@gillesxps> Le mercredi 27 février 2019, 21:54:26 CET Mohammed Naser a écrit : > Hi Gilles, > > You will run into a few interesting issues such as how we build our > repo based on the OS that the controller runs which means that there > will be no pre-built venvs for those systems. Ah ! I forget this repo conainer... > I'd strongly suggest sticking to 16.04 and upgrading it all at once, > for the other issues that you mentioned as well. I'll do that for now, but I really wonder how to upgrade all at once when we have lots of servers. I wonder which server OS is the base for the OS venvs to build in the repo container. I'll check, but if someone already knows... > Thanks, > Mohammed Thank you Mohammed. > > On Tue, Feb 26, 2019 at 5:58 PM Gilles Mocellin > > wrote: > > Hello, > > > > I can ot find a real answer in the OpenStack-Ansible docs. > > Can I add Ubuntu 18.04 compute nodes to my actueal all Ubuntu 16.04 > > cluster ? > > > > Ubuntu 18.04 needs Rocky, so I will first migrate from Queens to Rocky. > > But then, do I need to stick to 16.04 and plan an overall upgrade after ? > > > > Of course, I understand that mixing Ubuntu release, will also mix kernel > > and qemu versions and can pose problems, for migrations for example. From kennelson11 at gmail.com Wed Feb 27 21:57:42 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Feb 2019 13:57:42 -0800 Subject: [all] [forum] Forum Submissions are open! 
In-Reply-To: References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> Message-ID: Hello :) On Wed, Feb 27, 2019 at 1:26 PM Chris Morgan wrote: > I think the issue is that forum submissions building on what gets > discussed in Berlin can't be expected to be finalised whilst attendees to > the berlin meetup are still traveling. It's not that Erik can't pull these > things together, in fact he's an old hand at this, it's more that this > process isn't reasonable if there's so little time to collate what we learn > in Berlin and feed it forward to Denver. Frankly it sounds like because the > planning committee needs 5 weeks, Erik can have two days. Seem unfair. > Honestly, the decision process doesn't take much time, aside from organizing a time that all 10 people can meet across x timezones (a thing unto itself). Its the community feedback period, giving people enough time to secure travel approval from their management, loading the sessions into the actual schedule app, and other print deadlines that force us to have everything set this far out. I will definitely help the ops community in whatever way I can! Do you have remote attendance set up for the meetup? > Chris > > On Wed, Feb 27, 2019 at 2:29 PM Kendall Nelson > wrote: > >> Another- nother thought: You could take a look at what is submitted by >> project teams closer to the deadline and see if your ideas might fit well >> with theirs since they are looking for feedback from operators anyway. In >> the past I have always hoped for more engagement in the forum sessions I've >> submitted but only ever had one or two operators able to join us. >> >> -Kendall (diablo_rojo) >> >> On Wed, Feb 27, 2019 at 11:14 AM Kendall Nelson >> wrote: >> >>> Hello :) >>> >>> On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur >>> wrote: >>> >>>> Erik, >>>> >>>> I definitely understand the timeline is tight. One of the reasons that >>>> we publish the schedule so early is to enable community members to plan >>>> their schedule early, especially as there is more overlap with the main >>>> Summit Schedule in Denver. Additionally, travel approval is often >>>> predicated upon someone showing they're leading/moderating a session. >>>> >>>> Before publishing the schedule, we print a draft Forum schedule for >>>> community feedback and start promotion of the schedule, which we have to >>>> put up on the OpenStack website and apps at 5 weeks out. Extending the date >>>> beyond the 10th won't give the Forum Selection Committee enough time to >>>> complete those tasks. >>>> >>>> I think if the Ops team can come up with some high level discussion >>>> topics, we'll be happy to put some holds in the Forum schedule for >>>> Ops-specific content. diablo_rojo has also offered to attend some of the >>>> Ops sessions remotely as well, if that would help you all shape some things >>>> into actual sessions. >>>> >>> >>> I'm definitely happy to help as much as I can. If you'll have something >>> set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I >>> definitely will. I could also read through etherpads you take notes in and >>> help summarize things into forum proposals. >>> >>> Another thing to note is that whatever you/we submit, it doesn't have to >>> be award winning :) Its totally possible to change session descriptions and >>> edit who the speaker is later. 
>>> >>> Other random thought, I know Sean McGinnis has attended a lot of the >>> Operators stuff in the past so maybe he could help narrow things down too? >>> Not to sign him up for more work, but I know he's written a forum propsal >>> or two in the past ;) >>> >>> >>>> >>>> I wish I could offer a further extension, but extending it another week >>>> would push too far into the process. >>>> >>>> Cheers, >>>> Jimmy >>>> >>>> Erik McCormick >>>> February 27, 2019 at 12:43 PM >>>> >>>> Jimmy, >>>> >>>> I won't even get home until the 10th much less have time to follow up >>>> with anyone. The formation of those sessions often come from >>>> discussions spawned at the meetup and expanded upon later with folks >>>> who could not attend. Could we at least get until 3/17? I understand >>>> your desire to finalize the schedule, but 6 weeks out should be more >>>> than enough time, no? >>>> >>>> Thanks, >>>> Erik >>>> >>>> Jimmy McArthur >>>> February 27, 2019 at 12:04 PM >>>> >>>> Hi Erik, >>>> >>>> We are able to extend the deadline to 11:59PM Pacific, March 10th. >>>> That should give the weekend to get any additional stragglers in and still >>>> allow the Forum Programming Committee enough time to manage the rest of the >>>> approval and publishing process in time for people's travel needs, etc... >>>> >>>> For the Ops Meetup specifically, I'd suggest going a bit broader with >>>> the proposals and offering to fill in the blanks later. For example, if >>>> something comes up and everyone agrees it should go to the Forum, just >>>> submit before the end of the Ops session. Kendall or myself would be happy >>>> to help you add details a bit later in the process, should clarification be >>>> necessary. We typically have enough spots for the majority of proposed >>>> Forum sessions. That's not a guarantee, but food for thought. >>>> >>>> Cheers, >>>> Jimmy >>>> >>>> >>>> _______________________________________________ >>>> Airship-discuss mailing list >>>> Airship-discuss at lists.airshipit.org >>>> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >>>> >>>> Erik McCormick >>>> February 27, 2019 at 11:31 AM >>>> Would it be possible to push the deadline back a couple weeks? I expect >>>> there to be a few session proposals that will come out of the Ops Meetup >>>> which ends the day before the deadline. It would be helpful to have a >>>> little time to organize and submit things afterwards. >>>> >>>> Thanks, >>>> Erik >>>> >>>> Jimmy McArthur >>>> February 27, 2019 at 10:40 AM >>>> Hi Everyone - >>>> >>>> A quick reminder that we are accepting Forum [1] submissions for the >>>> 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas >>>> through the Summit CFP tool [3] through March 8th. Don't forget to put >>>> your brainstorming etherpad up on the Denver Forum page [4]. >>>> >>>> This is not a classic conference track with speakers and presentations. >>>> OSF community members (participants in development teams, operators, >>>> working groups, SIGs, and other interested individuals) discuss the topics >>>> they want to cover and get alignment on and we welcome your participation. >>>> The Forum is your opportunity to help shape the development of future >>>> project releases. More information about the Forum [1]. >>>> >>>> If you have questions or concerns, please reach out to >>>> speakersupport at openstack.org. 
>>>> >>>> Cheers, >>>> Jimmy >>>> >>>> [1] https://wiki.openstack.org/wiki/Forum >>>> [2] https://www.openstack.org/summit/denver-2019/ >>>> [3] https://www.openstack.org/summit/denver-2019/call-for-presentations >>>> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >>>> ___________________________________________ >>>> >>>> >>> Hopefully that helps! >>> >>> -Kendall (diablo_rojo) >>> >> > > -- > Chris Morgan > - Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 27 22:08:02 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 27 Feb 2019 16:08:02 -0600 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> Message-ID: <20190227220801.GA12980@sm-workstation> On Wed, Feb 27, 2019 at 01:57:42PM -0800, Kendall Nelson wrote: > > > > I think the issue is that forum submissions building on what gets > > discussed in Berlin can't be expected to be finalised whilst attendees to > > the berlin meetup are still traveling. It's not that Erik can't pull these > > things together, in fact he's an old hand at this, it's more that this > > process isn't reasonable if there's so little time to collate what we learn > > in Berlin and feed it forward to Denver. Frankly it sounds like because the > > planning committee needs 5 weeks, Erik can have two days. Seem unfair. > > > > Honestly, the decision process doesn't take much time, aside from > organizing a time that all 10 people can meet across x timezones (a thing > unto itself). Its the community feedback period, giving people enough time > to secure travel approval from their management, loading the sessions into > the actual schedule app, and other print deadlines that force us to have > everything set this far out. > > I will definitely help the ops community in whatever way I can! Do you have > remote attendance set up for the meetup? > To be clear, the issue isn't needing help writing up the submission. So great if someone can attend or watch for topics coming up that can be pulled out into Forum ideas, but the crux is that there are a lot of things discussed at these events and it may take several days after it is over to realize, "hey, that would be really useful if we could discuss that at the Forum." I don't think what Erik asked for is unreasonable. The Ops Meetup event is exactly the target we want feeding into Forum discussions. If we can give a week after the event for the Ops Meetup ends for this processing to happen, I think we increase the odds of an effective and useful Forum. Can we please extend that deadline out a few more days to make sure we get this valuable input? Sean From jonathan.rosser at rd.bbc.co.uk Wed Feb 27 22:29:40 2019 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 27 Feb 2019 22:29:40 +0000 Subject: [heat] keystone endpoint configuration In-Reply-To: References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: On 27/02/2019 20:49, Zane Bitter wrote: > On 20/02/19 1:40 PM, Jonathan Rosser wrote: >> In openstack-ansible we are trying to help a number of our end users >> with their heat deployments, some of them in conjunction with magnum. >> >> There is some uncertainty with how the following heat.conf sections >> should be configured: >> >> [clients_keystone] >> auth_uri = ... >> >> [keystone_authtoken] >> www_authenticate_uri = ... 
>>
>> It does not appear to be possible to define a set of internal or
>> external keystone endpoints in heat.conf which allow the following:
>>
>>   * The orchestration panels being functional in horizon
>>   * Deployers isolating internal openstack from external networks
>>   * Deployers using self signed/company cert on the external endpoint
>>   * Magnum deployments completing
>>   * Heat delivering an external endpoint at [1]
>>   * Heat delivering an external endpoint at [2]
>>
>> There are a number of related bugs:
>>
>> https://bugs.launchpad.net/openstack-ansible/+bug/1814909
>> https://bugs.launchpad.net/openstack-ansible/+bug/1811086
>> https://storyboard.openstack.org/#!/story/2004808
>> https://storyboard.openstack.org/#!/story/2004524
>
> Based on this and your comment on IRC[1] - and correct me if I'm
> misunderstanding here - the crux of the issue is that the Keystone
> auth_url must be accessed via different addresses depending on which
> network the request is coming from?
>

The most concrete example I can give is that of a Magnum k8s deployment,
where heat is used to create several VMs and deploy software. Callback
URLs are embedded into those VMs and SoftwareDeployments, and the callback
URL must be accessible from the VM, so it would always need to be
something that could reasonably be called a "Public" endpoint.

Conversely, heat itself needs to be able to talk to many other openstack
components, defined in the [clients_*] config sections. It is reasonable
to describe these interactions as being "Internal" - I may misunderstand
some of this though.

So here lies the issue - appropriate entries in heat.conf to make internal
interactions between heat and horizon (one example) work in real-world
deployments result in the keystone internal URL being placed in callbacks,
and then SoftwareDeployments never complete because the internal keystone
URL is not usually accessible to a VM.

I suspect that there is not much coverage for this kind of network
separation in gate tests.

> I don't think this was ever contemplated as a use case in developing
> Heat. For my part, I certainly always assumed that while the Keystone
> catalog could contain different Public/Internal/Admin endpoints for
> each service, there was only a single place to access the catalog
> (i.e. each cloud had a single unique auth_url).
>

I think that as far as heat itself interacting with other openstack
components is concerned there does not need to be more than one auth_url.
However it is very important to make a distinction between the context in
which the heat code runs and the context of a VM created by heat - any
callback URL created must be valid for the context of the VM, not the
heat code.

> It's entirely possible this wasn't a valid assumption about how
> clouds would/should be deployed in practice. If that's the case then
> we likely need some richer configuration options. The design of the
> Keystone catalog predates both the existence of Heat and the idea that
> cloud workloads might have reason to access the OpenStack APIs, and
> nobody is really an expert on both although we've gotten better at
> communicating.
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23heat/%23heat.2019-02-26.log.html#t2019-02-26T17:14:14
>

Colleen makes some observations about the use of keystone config in heat -
and interestingly suggests a separate config entry for cases where a
keystone URL should be handed on to another service.
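For concreteness, a minimal sketch of the two sections under discussion -
the endpoint URLs are illustrative placeholders only, assuming a
deployment where the internal and public keystone endpoints differ:

  [keystone_authtoken]
  # Used by keystonemiddleware to set the WWW-Authenticate header when a
  # request arrives without a valid token, so it normally points at the
  # public keystone endpoint.
  www_authenticate_uri = https://keystone.public.example.com:5000/v3

  [clients_keystone]
  # Used by heat itself; if left unset, heat falls back to the
  # [keystone_authtoken]/www_authenticate_uri value above.
  auth_uri = https://keystone.internal.example.com:5000/v3

With a layout like this heat's own traffic stays on the internal network,
but any keystone URL derived from [clients_keystone]/auth_uri and handed
on to a created server points at an address the server cannot reach -
which is the failure mode described above.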
Mohammed and I have already discussed additional config options being a
potential solution whilst trying to debug this.

There are already examples of similar config options in heat.conf, such as
"heat_waitcondition_server_url" - would additional config items such as
server_base_auth_url and signal_responder_auth_url be appropriate so that
we can be totally explicit about the endpoints handed on to created VMs?

From sorrison at gmail.com  Wed Feb 27 22:39:21 2019
From: sorrison at gmail.com (Sam Morrison)
Date: Thu, 28 Feb 2019 09:39:21 +1100
Subject: [nova][keystone] project tags in context for scheduling
In-Reply-To: <55a7132e-1573-b29c-efbe-9c48226cc964@gmail.com>
References: <37E79D0F-D085-4758-84BC-158798055522@gmail.com>
 <55a7132e-1573-b29c-efbe-9c48226cc964@gmail.com>
Message-ID: <286B267B-E095-4C3F-8BED-696DC5993EE2@gmail.com>

> On 28 Feb 2019, at 1:25 am, Matt Riedemann wrote:
>
> On 2/26/2019 11:53 PM, Sam Morrison wrote:
>> We have a use case where we want to schedule a bunch of projects to
>> specific compute nodes only.
>> The aggregate_multitenancy_isolation isn't viable because in some cases
>> we will want thousands of projects to go to some hardware and it isn't
>> manageable/scalable to do this in nova and aggregates. (Maybe it is and
>> I'm being silly?)
>
> Is the issue because of this?
>
> https://bugs.launchpad.net/nova/+bug/1802111
> Or just in general. Because
> https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
> fixes that problem, but is only available since Rocky.

Yeah, essentially it is, however it would be nice to manage this in
keystone where it's all in one place, but this I think would work. Just
upgraded to Queens so not far off too!

> Also, I can't find it now but there was a public cloud workgroup bug in
> launchpad at one point where it was asking that the
> AggregateMultiTenancyIsolation filter work on keystone domains rather
> than a list of projects, so if those projects were all in the same domain
> you'd just specify the domain in the aggregate metadata rather than the
> thousands of projects which is your scaling issue. Tobias might remember
> that bug.

We actually already have a domain scheduler filter [1] but we have
multiple levels of projects that intersect. E.g. we use domains to
separate our Australian and NZ projects, but we also have needs to
schedule our projects based on their funding source.

Thanks,
Sam

[1] https://github.com/NeCTAR-RC/nova/blob/nectar/queens/nova/cells/filters/restrict_domain.py

>
> --
>
> Thanks,
>
> Matt
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zbitter at redhat.com  Wed Feb 27 22:42:36 2019
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 27 Feb 2019 17:42:36 -0500
Subject: [heat] [keystone] keystone endpoint configuration
In-Reply-To: <87e0c889-ab02-4e97-b5ff-bd93e7d9d53f@www.fastmail.com>
References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk>
 <87e0c889-ab02-4e97-b5ff-bd93e7d9d53f@www.fastmail.com>
Message-ID: <89c53e06-ffa7-6b35-7f35-6a061cfe2d59@redhat.com>

On 27/02/19 4:13 PM, Colleen Murphy wrote:
> Hi,
>
> On Wed, Feb 20, 2019, at 7:40 PM, Jonathan Rosser wrote:
>> In openstack-ansible we are trying to help a number of our end users
>> with their heat deployments, some of them in conjunction with magnum.
>>
>> There is some uncertainty with how the following heat.conf sections
>> should be configured:
>>
>> [clients_keystone]
>> auth_uri = ...
>>
>> [keystone_authtoken]
>> www_authenticate_uri = ...
> > I know very little about heat, but I think there's some confusion about what [keystone_authtoken]/www_authenticate_uri is for, and after grepping a bit I think heat is misusing it. www_authenticate_uri (formerly known as auth_uri) is meant to be used by keystonemiddleware to set the WWW-Authenticate header in its response when a client request fails to present a valid keystone token. It's a wsgi middleware, so heat shouldn't be using it or even be aware of it. You would normally set it to keystone's public endpoint since it's what the server would present to an end user to help them retry their request. If the client already knows the right auth URL and grabs a token beforehand, it will never see what's in www_authenticate_uri. Thanks Colleen! > Heat appears to be using it for its own purposes, which I think is not advisable. If heat needs to provide other subsystems or services with a URL for keystone, it should do that in a separate config. Ooh, I know this one because Johannes schooled me on it a couple of months back :) We actually do have a separate config, it's [clients_keystone]/auth_uri. If, and only if, the user does not provide a value for that, we fall back to using the value in [keystone_authtoken]/www_authenticate_uri. https://git.openstack.org/cgit/openstack/heat/tree/heat/common/endpoint_utils.py#n25 > I don't really have enough heat knowledge to make a recommendation for the issues below. While keystoneauth provides a way to filter the service catalog by admin/internal/public endpoint, it obviously doesn't help much when you don't know where keystone is yet. Yeah, that seems to be the heart of the problem. [clients_keystone]/auth_uri is being used in multiple different ways for different things that may have different access to the various networks (some of which Heat has no control over - we give the user the URL and it's up to them where it is called from). cheers, Zane. > Colleen > >> >> It does not appear to be possible to define a set of internal or >> external keystone endpoints in heat.conf which allow the following: >> >> * The orchestration panels being functional in horizon >> * Deployers isolating internal openstack from external networks >> * Deployers using self signed/company cert on the external endpoint >> * Magnum deployments completing >> * Heat delivering an external endpoint at [1] >> * Heat delivering an external endpoint at [2] >> >> There are a number of related bugs: >> >> https://bugs.launchpad.net/openstack-ansible/+bug/1814909 >> https://bugs.launchpad.net/openstack-ansible/+bug/1811086 >> https://storyboard.openstack.org/#!/story/2004808 >> https://storyboard.openstack.org/#!/story/2004524 >> >> Any help we could get from the heat team to try to understand the root >> cause of these issues would be really helpful. >> >> Jon. 
>> >> >> [1] >> https://github.com/openstack/heat/blob/master/heat/engine/resources/server_base.py#L87 >> >> [2] >> https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 >> >> > From iwienand at redhat.com Wed Feb 27 22:42:57 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 28 Feb 2019 09:42:57 +1100 Subject: [dev][oslo] oslo.cache and dogpile 0.7.0+ cache errors In-Reply-To: <5d6d643e-720a-0ab9-b86d-dd47ec37dc43@nemebean.com> References: <4fec7479-22f8-e49a-5732-5ddfa914831b@nemebean.com> <5d6d643e-720a-0ab9-b86d-dd47ec37dc43@nemebean.com> Message-ID: <20190227224257.GA18739@fedora19.localdomain> On Wed, Feb 27, 2019 at 01:04:38PM -0600, Ben Nemec wrote: > To close the loop on this, we just merged a unit test fix that unblocks > oslo.cache ci. We'll continue to work on sorting out where these tests > should live as a followup. I haven't really followed this problem, but just to point out that we added dogpile.cache to zuul's projects and added tests against master to the nodepool "-src" functional tests [1] after the interesting issues we had between it and openstacksdk a while ago. Thus I'm certain you could get some form of smoke-test job running against dogpile master, and that would respect depends-on for pull requests, etc. This can help avoid fire-drills on release days. Myself or anyone in #openstack-infra would be happy to help, I'm sure :) -i [1] https://git.openstack.org/cgit/openstack-infra/nodepool/tree/devstack/plugin.sh#n52 From jimmy at openstack.org Wed Feb 27 22:31:00 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 27 Feb 2019 16:31:00 -0600 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <20190227220801.GA12980@sm-workstation> References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> Message-ID: <5C770FA4.6010409@openstack.org> Sean, > Sean McGinnis > February 27, 2019 at 4:08 PM > > To be clear, the issue isn't needing help writing up the submission. > So great > if someone can attend or watch for topics coming up that can be pulled > out into > Forum ideas, but the crux is that there are a lot of things discussed > at these > events and it may take several days after it is over to realize, "hey, > that > would be really useful if we could discuss that at the Forum." Totally understood. That's why we're suggesting to put a few (I counted five Ops specific Forum talks in Berlin) placeholder sessions in for Ops with a couple of high level sentences, to be defined later. We're happy to help update the Forum suggestions after y'all have had a chance to parse Meetup outcomes into Forum topics that makes sense. > > I don't think what Erik asked for is unreasonable. The Ops Meetup event is > exactly the target we want feeding into Forum discussions. If we can > give a > week after the event for the Ops Meetup ends for this processing to > happen, I > think we increase the odds of an effective and useful Forum. I understand the POV, but keep in mind we're also trying to program against many other interests and we have deadlines that we have to reach with regards to getting comms out to the rest of the community. We've extended the deadline by two days, which I hope will give the Ops Community enough time to parse the info from the Meetup into a few bite-size bits that we can use as placeholders on the schedule, with details TBD. 
I want to re-emphasize, Kendall and I are happy to help in any way possible on this. If you want to pass along some napkin notes, we can do help get those into the schedule :) It's critical that we provide at least a straw man schedule to the community as early as possible, so we can identify conflicts and travel considerations for our attendees. Thanks, Jimmy > > Can we please extend that deadline out a few more days to make sure we > get this > valuable input? > > Sean > Kendall Nelson > February 27, 2019 at 3:57 PM > Hello :) > > On Wed, Feb 27, 2019 at 1:26 PM Chris Morgan > wrote: > > I think the issue is that forum submissions building on what gets > discussed in Berlin can't be expected to be finalised whilst > attendees to the berlin meetup are still traveling. It's not that > Erik can't pull these things together, in fact he's an old hand at > this, it's more that this process isn't reasonable if there's so > little time to collate what we learn in Berlin and feed it forward > to Denver. Frankly it sounds like because the planning committee > needs 5 weeks, Erik can have two days. Seem unfair. > > > Honestly, the decision process doesn't take much time, aside from > organizing a time that all 10 people can meet across x timezones (a > thing unto itself). Its the community feedback period, giving people > enough time to secure travel approval from their management, loading > the sessions into the actual schedule app, and other print deadlines > that force us to have everything set this far out. > > I will definitely help the ops community in whatever way I can! Do you > have remote attendance set up for the meetup? > > > Chris > > On Wed, Feb 27, 2019 at 2:29 PM Kendall Nelson > > wrote: > > Another- nother thought: You could take a look at what is > submitted by project teams closer to the deadline and see if > your ideas might fit well with theirs since they are looking > for feedback from operators anyway. In the past I have always > hoped for more engagement in the forum sessions I've submitted > but only ever had one or two operators able to join us. > > -Kendall (diablo_rojo) > > On Wed, Feb 27, 2019 at 11:14 AM Kendall Nelson > > wrote: > > Hello :) > > On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur > > wrote: > > Erik, > > I definitely understand the timeline is tight. One of > the reasons that we publish the schedule so early is > to enable community members to plan their schedule > early, especially as there is more overlap with the > main Summit Schedule in Denver. Additionally, travel > approval is often predicated upon someone showing > they're leading/moderating a session. > > Before publishing the schedule, we print a draft > Forum schedule for community feedback and start > promotion of the schedule, which we have to put up on > the OpenStack website and apps at 5 weeks out. > Extending the date beyond the 10th won't give the > Forum Selection Committee enough time to complete > those tasks. > > I think if the Ops team can come up with some high > level discussion topics, we'll be happy to put some > holds in the Forum schedule for Ops-specific content. > diablo_rojo has also offered to attend some of the Ops > sessions remotely as well, if that would help you all > shape some things into actual sessions. > > > I'm definitely happy to help as much as I can. If you'll > have something set up that I can call into (zoom, webex, > bluejeans, hangout, whatever), I definitely will. 
I could > also read through etherpads you take notes in and help > summarize things into forum proposals. > > Another thing to note is that whatever you/we submit, it > doesn't have to be award winning :) Its totally possible > to change session descriptions and edit who the speaker is > later. > > Other random thought, I know Sean McGinnis has attended a > lot of the Operators stuff in the past so maybe he could > help narrow things down too? Not to sign him up for more > work, but I know he's written a forum propsal or two in > the past ;) > > > I wish I could offer a further extension, but > extending it another week would push too far into the > process. > > Cheers, > Jimmy >> Erik McCormick >> February 27, 2019 at 12:43 PM >> Jimmy, >> >> I won't even get home until the 10th much less have >> time to follow up >> with anyone. The formation of those sessions often >> come from >> discussions spawned at the meetup and expanded upon >> later with folks >> who could not attend. Could we at least get until >> 3/17? I understand >> your desire to finalize the schedule, but 6 weeks out >> should be more >> than enough time, no? >> >> Thanks, >> Erik >> Jimmy McArthur >> February 27, 2019 at 12:04 PM >> Hi Erik, >> >> We are able to extend the deadline to 11:59PM >> Pacific, March 10th. That should give the weekend to >> get any additional stragglers in and still allow the >> Forum Programming Committee enough time to manage the >> rest of the approval and publishing process in time >> for people's travel needs, etc... >> >> For the Ops Meetup specifically, I'd suggest going a >> bit broader with the proposals and offering to fill >> in the blanks later. For example, if something comes >> up and everyone agrees it should go to the Forum, >> just submit before the end of the Ops session. >> Kendall or myself would be happy to help you add >> details a bit later in the process, should >> clarification be necessary. We typically have enough >> spots for the majority of proposed Forum sessions. >> That's not a guarantee, but food for thought. >> >> Cheers, >> Jimmy >> >> >> _______________________________________________ >> Airship-discuss mailing list >> Airship-discuss at lists.airshipit.org >> >> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >> Erik McCormick >> February 27, 2019 at 11:31 AM >> Would it be possible to push the deadline back a >> couple weeks? I expect there to be a few session >> proposals that will come out of the Ops Meetup which >> ends the day before the deadline. It would be helpful >> to have a little time to organize and submit things >> afterwards. >> >> Thanks, >> Erik >> >> Jimmy McArthur >> February 27, 2019 at 10:40 AM >> Hi Everyone - >> >> A quick reminder that we are accepting Forum [1] >> submissions for the 2019 Open Infrastructure Summit >> in Denver [2]. Please submit your ideas through the >> Summit CFP tool [3] through March 8th. Don't forget >> to put your brainstorming etherpad up on the Denver >> Forum page [4]. >> >> This is not a classic conference track with speakers >> and presentations. OSF community members >> (participants in development teams, operators, >> working groups, SIGs, and other interested >> individuals) discuss the topics they want to cover >> and get alignment on and we welcome your >> participation. The Forum is your opportunity to help >> shape the development of future project releases. >> More information about the Forum [1]. 
>> >> If you have questions or concerns, please reach out >> to speakersupport at openstack.org >> . >> >> Cheers, >> Jimmy >> >> [1] https://wiki.openstack.org/wiki/Forum >> [2] https://www.openstack.org/summit/denver-2019/ >> [3] >> https://www.openstack.org/summit/denver-2019/call-for-presentations >> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >> ___________________________________________ >> > > Hopefully that helps! > > -Kendall (diablo_rojo) > > > > -- > Chris Morgan > > > > - Kendall Nelson (diablo_rojo) > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > Chris Morgan > February 27, 2019 at 3:25 PM > I think the issue is that forum submissions building on what gets > discussed in Berlin can't be expected to be finalised whilst attendees > to the berlin meetup are still traveling. It's not that Erik can't > pull these things together, in fact he's an old hand at this, it's > more that this process isn't reasonable if there's so little time to > collate what we learn in Berlin and feed it forward to Denver. Frankly > it sounds like because the planning committee needs 5 weeks, Erik can > have two days. Seem unfair. > > Chris > > > > -- > Chris Morgan > > Kendall Nelson > February 27, 2019 at 1:19 PM > Another- nother thought: You could take a look at what is submitted by > project teams closer to the deadline and see if your ideas might fit > well with theirs since they are looking for feedback from operators > anyway. In the past I have always hoped for more engagement in the > forum sessions I've submitted but only ever had one or two operators > able to join us. > > -Kendall (diablo_rojo) > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > Kendall Nelson > February 27, 2019 at 1:14 PM > Hello :) > > On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur > wrote: > > Erik, > > I definitely understand the timeline is tight. One of the reasons > that we publish the schedule so early is to enable community > members to plan their schedule early, especially as there is more > overlap with the main Summit Schedule in Denver. Additionally, > travel approval is often predicated upon someone showing they're > leading/moderating a session. > > Before publishing the schedule, we print a draft Forum schedule > for community feedback and start promotion of the schedule, which > we have to put up on the OpenStack website and apps at 5 weeks > out. Extending the date beyond the 10th won't give the Forum > Selection Committee enough time to complete those tasks. > > I think if the Ops team can come up with some high level > discussion topics, we'll be happy to put some holds in the Forum > schedule for Ops-specific content. diablo_rojo has also offered > to attend some of the Ops sessions remotely as well, if that would > help you all shape some things into actual sessions. > > > I'm definitely happy to help as much as I can. If you'll have > something set up that I can call into (zoom, webex, bluejeans, > hangout, whatever), I definitely will. I could also read through > etherpads you take notes in and help summarize things into forum > proposals. 
> > Another thing to note is that whatever you/we submit, it doesn't have > to be award winning :) Its totally possible to change session > descriptions and edit who the speaker is later. > > Other random thought, I know Sean McGinnis has attended a lot of the > Operators stuff in the past so maybe he could help narrow things down > too? Not to sign him up for more work, but I know he's written a forum > propsal or two in the past ;) > > > I wish I could offer a further extension, but extending it another > week would push too far into the process. > > Cheers, > Jimmy >> Erik McCormick >> February 27, 2019 at 12:43 PM >> Jimmy, >> >> I won't even get home until the 10th much less have time to follow up >> with anyone. The formation of those sessions often come from >> discussions spawned at the meetup and expanded upon later with folks >> who could not attend. Could we at least get until 3/17? I understand >> your desire to finalize the schedule, but 6 weeks out should be more >> than enough time, no? >> >> Thanks, >> Erik >> Jimmy McArthur >> February 27, 2019 at 12:04 PM >> Hi Erik, >> >> We are able to extend the deadline to 11:59PM Pacific, March >> 10th. That should give the weekend to get any additional >> stragglers in and still allow the Forum Programming Committee >> enough time to manage the rest of the approval and publishing >> process in time for people's travel needs, etc... >> >> For the Ops Meetup specifically, I'd suggest going a bit broader >> with the proposals and offering to fill in the blanks later. For >> example, if something comes up and everyone agrees it should go >> to the Forum, just submit before the end of the Ops session. >> Kendall or myself would be happy to help you add details a bit >> later in the process, should clarification be necessary. We >> typically have enough spots for the majority of proposed Forum >> sessions. That's not a guarantee, but food for thought. >> >> Cheers, >> Jimmy >> >> >> _______________________________________________ >> Airship-discuss mailing list >> Airship-discuss at lists.airshipit.org >> >> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >> Erik McCormick >> February 27, 2019 at 11:31 AM >> Would it be possible to push the deadline back a couple weeks? I >> expect there to be a few session proposals that will come out of >> the Ops Meetup which ends the day before the deadline. It would >> be helpful to have a little time to organize and submit things >> afterwards. >> >> Thanks, >> Erik >> >> Jimmy McArthur >> February 27, 2019 at 10:40 AM >> Hi Everyone - >> >> A quick reminder that we are accepting Forum [1] submissions for >> the 2019 Open Infrastructure Summit in Denver [2]. Please submit >> your ideas through the Summit CFP tool [3] through March 8th. >> Don't forget to put your brainstorming etherpad up on the Denver >> Forum page [4]. >> >> This is not a classic conference track with speakers and >> presentations. OSF community members (participants in development >> teams, operators, working groups, SIGs, and other interested >> individuals) discuss the topics they want to cover and get >> alignment on and we welcome your participation. The Forum is >> your opportunity to help shape the development of future project >> releases. More information about the Forum [1]. >> >> If you have questions or concerns, please reach out to >> speakersupport at openstack.org . 
>> >> Cheers, >> Jimmy >> >> [1] https://wiki.openstack.org/wiki/Forum >> [2] https://www.openstack.org/summit/denver-2019/ >> [3] >> https://www.openstack.org/summit/denver-2019/call-for-presentations >> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >> ___________________________________________ >> > > Hopefully that helps! > > -Kendall (diablo_rojo) > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 27 23:50:49 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Feb 2019 15:50:49 -0800 Subject: Help with contributing to Storyboard for Outreachy 2019. In-Reply-To: References: Message-ID: Hello :) On Wed, Feb 27, 2019 at 11:34 AM Namrata Jha wrote: > I want to make contributions to the low hanging fruits in the Storyboard > project, and want to make changes in the REST API documentation as is > required in one of the stories, however, I have no idea how to go about it. > I have even commented on the story in concern: > https://storyboard.openstack.org/#!/story/298. Any help on this would be > greatly appreciated. > If you have questions about how to actually implement the solution you should come join us in #storyboard on IRC and we should be able to get you started. > > Moreover, the skills required mentioned on the Outreachy project listings > page primarily includes SQL but there are no SQL related bugs to solve on > the storyboard. I will contribute to improve the documentations wherever I > can but how will that determine my selection as an Outreachy intern. > Yeah I understand how this can be confusing. I chose SQL as one of the required skills because that will be the primary thing you need to know to complete the work that we want done. That being said, the majority of storyboard is written in Python and Javascript and so that is what you will see when looking at the low hanging bugs. There were only three skills we could say were required so I stuck to the ones relevant to the project you would be working on and didn't include other skills that were somewhat relevant. > P.S. I'm sorry @Kendall Nelson , I had not > intended to personally mail you this issue earlier, I replied to a previous > thread and didn't realize that the mailing list wasn't included. Sorry for > that. :) > No worries :) You could have kept replying there. I'll see it either place :) -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Feb 27 23:51:49 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 27 Feb 2019 18:51:49 -0500 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <5C770FA4.6010409@openstack.org> References: <5C76BD8E.4070504@openstack.org> <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> <5C770FA4.6010409@openstack.org> Message-ID: On Wed, Feb 27, 2019 at 5:31 PM Jimmy McArthur wrote: > > Sean, > > Sean McGinnis February 27, 2019 at 4:08 PM > > To be clear, the issue isn't needing help writing up the submission. 
So great > if someone can attend or watch for topics coming up that can be pulled out into > Forum ideas, but the crux is that there are a lot of things discussed at these > events and it may take several days after it is over to realize, "hey, that > would be really useful if we could discuss that at the Forum." > > Totally understood. That's why we're suggesting to put a few (I counted five Ops specific Forum talks in Berlin) placeholder sessions in for Ops with a couple of high level sentences, to be defined later. We're happy to help update the Forum suggestions after y'all have had a chance to parse Meetup outcomes into Forum topics that makes sense. OK, I surrender. I submitted 3 generic topics (as that's my limit) as placeholders for topics TBD. Additionally, Chris Morgan will submit two long-standing regular sessions for Ceph-related topics and the Ops Meetup Team gathering sometime soon. If I need to do anything else with them right now, please let me know. > > > I don't think what Erik asked for is unreasonable. The Ops Meetup event is > exactly the target we want feeding into Forum discussions. If we can give a > week after the event for the Ops Meetup ends for this processing to happen, I > think we increase the odds of an effective and useful Forum. > > I understand the POV, but keep in mind we're also trying to program against many other interests and we have deadlines that we have to reach with regards to getting comms out to the rest of the community. We've extended the deadline by two days, which I hope will give the Ops Community enough time to parse the info from the Meetup into a few bite-size bits that we can use as placeholders on the schedule, with details TBD. I put in the placeholders and am happy to update them if you can show me how. I won't know the bite-sized tidbits until I get back and talk to those who were not in attendance. We are short quite a number of regular attendees for this one due to other conflicts and it being an expensive week to be in Berlin. I hope this is acceptable. > > I want to re-emphasize, Kendall and I are happy to help in any way possible on this. If you want to pass along some napkin notes, we can do help get those into the schedule :) It's critical that we provide at least a straw man schedule to the community as early as possible, so we can identify conflicts and travel considerations for our attendees. In future, we would love to know cutoff dates as soon as you know what they are. We don't have a lot of control over exact dates of the meetup as it's based primarily on the availability of the host organization. However, if it comes soon enough we can try to influence the date and plan for it as best we can. I would also be curious to hear from anyone who is basing their travel choices on the approval status of a Forum session. I believe that this number is 0, but I'm ready to be proven wrong. > > Thanks, > Jimmy > > > Can we please extend that deadline out a few more days to make sure we get this > valuable input? > > > Sean > Kendall Nelson February 27, 2019 at 3:57 PM > Hello :) > > On Wed, Feb 27, 2019 at 1:26 PM Chris Morgan wrote: >> >> I think the issue is that forum submissions building on what gets discussed in Berlin can't be expected to be finalised whilst attendees to the berlin meetup are still traveling. 
It's not that Erik can't pull these things together, in fact he's an old hand at this, it's more that this process isn't reasonable if there's so little time to collate what we learn in Berlin and feed it forward to Denver. Frankly it sounds like because the planning committee needs 5 weeks, Erik can have two days. Seem unfair. > > > Honestly, the decision process doesn't take much time, aside from organizing a time that all 10 people can meet across x timezones (a thing unto itself). Its the community feedback period, giving people enough time to secure travel approval from their management, loading the sessions into the actual schedule app, and other print deadlines that force us to have everything set this far out. > > I will definitely help the ops community in whatever way I can! Do you have remote attendance set up for the meetup? > We have no capacity for remote attendees other than etherpads. I'm happy to Skype you in for things as needed, but there's no good way to have multiple attendees there 100% of the time. We have no resources for such things :(. >> >> Chris >> >> On Wed, Feb 27, 2019 at 2:29 PM Kendall Nelson wrote: >>> >>> Another- nother thought: You could take a look at what is submitted by project teams closer to the deadline and see if your ideas might fit well with theirs since they are looking for feedback from operators anyway. In the past I have always hoped for more engagement in the forum sessions I've submitted but only ever had one or two operators able to join us. >>> I have always made a habit of going through other session topics and scrapping ones of our own that were redundant. For Sydney, I had put up an FFU session that conflicted with one Arkady proposed. We chose to combine them and worked together on it. That's how it should be. I always prefer devs and ops to be in collective sessions as opposed to silo'd off on their own. That should be what the forum is all about. -Erik >>> -Kendall (diablo_rojo) >>> >>> On Wed, Feb 27, 2019 at 11:14 AM Kendall Nelson wrote: >>>> >>>> Hello :) >>>> >>>> On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur wrote: >>>>> >>>>> Erik, >>>>> >>>>> I definitely understand the timeline is tight. One of the reasons that we publish the schedule so early is to enable community members to plan their schedule early, especially as there is more overlap with the main Summit Schedule in Denver. Additionally, travel approval is often predicated upon someone showing they're leading/moderating a session. >>>>> >>>>> Before publishing the schedule, we print a draft Forum schedule for community feedback and start promotion of the schedule, which we have to put up on the OpenStack website and apps at 5 weeks out. Extending the date beyond the 10th won't give the Forum Selection Committee enough time to complete those tasks. >>>>> >>>>> I think if the Ops team can come up with some high level discussion topics, we'll be happy to put some holds in the Forum schedule for Ops-specific content. diablo_rojo has also offered to attend some of the Ops sessions remotely as well, if that would help you all shape some things into actual sessions. >>>> >>>> >>>> I'm definitely happy to help as much as I can. If you'll have something set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I definitely will. I could also read through etherpads you take notes in and help summarize things into forum proposals. 
>>>> >>>> Another thing to note is that whatever you/we submit, it doesn't have to be award winning :) Its totally possible to change session descriptions and edit who the speaker is later. >>>> >>>> Other random thought, I know Sean McGinnis has attended a lot of the Operators stuff in the past so maybe he could help narrow things down too? Not to sign him up for more work, but I know he's written a forum propsal or two in the past ;) >>>> >>>>> >>>>> >>>>> I wish I could offer a further extension, but extending it another week would push too far into the process. >>>>> >>>>> Cheers, >>>>> Jimmy >>>>> >>>>> Erik McCormick February 27, 2019 at 12:43 PM >>>>> >>>>> Jimmy, >>>>> >>>>> I won't even get home until the 10th much less have time to follow up >>>>> with anyone. The formation of those sessions often come from >>>>> discussions spawned at the meetup and expanded upon later with folks >>>>> who could not attend. Could we at least get until 3/17? I understand >>>>> your desire to finalize the schedule, but 6 weeks out should be more >>>>> than enough time, no? >>>>> >>>>> Thanks, >>>>> Erik >>>>> >>>>> Jimmy McArthur February 27, 2019 at 12:04 PM >>>>> >>>>> Hi Erik, >>>>> >>>>> We are able to extend the deadline to 11:59PM Pacific, March 10th. That should give the weekend to get any additional stragglers in and still allow the Forum Programming Committee enough time to manage the rest of the approval and publishing process in time for people's travel needs, etc... >>>>> >>>>> For the Ops Meetup specifically, I'd suggest going a bit broader with the proposals and offering to fill in the blanks later. For example, if something comes up and everyone agrees it should go to the Forum, just submit before the end of the Ops session. Kendall or myself would be happy to help you add details a bit later in the process, should clarification be necessary. We typically have enough spots for the majority of proposed Forum sessions. That's not a guarantee, but food for thought. >>>>> >>>>> Cheers, >>>>> Jimmy >>>>> >>>>> >>>>> _______________________________________________ >>>>> Airship-discuss mailing list >>>>> Airship-discuss at lists.airshipit.org >>>>> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >>>>> >>>>> Erik McCormick February 27, 2019 at 11:31 AM >>>>> Would it be possible to push the deadline back a couple weeks? I expect there to be a few session proposals that will come out of the Ops Meetup which ends the day before the deadline. It would be helpful to have a little time to organize and submit things afterwards. >>>>> >>>>> Thanks, >>>>> Erik >>>>> >>>>> Jimmy McArthur February 27, 2019 at 10:40 AM >>>>> Hi Everyone - >>>>> >>>>> A quick reminder that we are accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas through the Summit CFP tool [3] through March 8th. Don't forget to put your brainstorming etherpad up on the Denver Forum page [4]. >>>>> >>>>> This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. >>>>> >>>>> If you have questions or concerns, please reach out to speakersupport at openstack.org. 
>>>>> >>>>> Cheers, >>>>> Jimmy >>>>> >>>>> [1] https://wiki.openstack.org/wiki/Forum >>>>> [2] https://www.openstack.org/summit/denver-2019/ >>>>> [3] https://www.openstack.org/summit/denver-2019/call-for-presentations >>>>> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >>>>> ___________________________________________ >>>>> >>>> >>>> Hopefully that helps! >>>> >>>> -Kendall (diablo_rojo) >> >> >> >> -- >> Chris Morgan > > > - Kendall Nelson (diablo_rojo) > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > > Chris Morgan February 27, 2019 at 3:25 PM > I think the issue is that forum submissions building on what gets discussed in Berlin can't be expected to be finalised whilst attendees to the berlin meetup are still traveling. It's not that Erik can't pull these things together, in fact he's an old hand at this, it's more that this process isn't reasonable if there's so little time to collate what we learn in Berlin and feed it forward to Denver. Frankly it sounds like because the planning committee needs 5 weeks, Erik can have two days. Seem unfair. > > Chris > > > > -- > Chris Morgan > Kendall Nelson February 27, 2019 at 1:19 PM > Another- nother thought: You could take a look at what is submitted by project teams closer to the deadline and see if your ideas might fit well with theirs since they are looking for feedback from operators anyway. In the past I have always hoped for more engagement in the forum sessions I've submitted but only ever had one or two operators able to join us. > > -Kendall (diablo_rojo) > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > Kendall Nelson February 27, 2019 at 1:14 PM > Hello :) > > On Wed, Feb 27, 2019 at 11:08 AM Jimmy McArthur wrote: >> >> Erik, >> >> I definitely understand the timeline is tight. One of the reasons that we publish the schedule so early is to enable community members to plan their schedule early, especially as there is more overlap with the main Summit Schedule in Denver. Additionally, travel approval is often predicated upon someone showing they're leading/moderating a session. >> >> Before publishing the schedule, we print a draft Forum schedule for community feedback and start promotion of the schedule, which we have to put up on the OpenStack website and apps at 5 weeks out. Extending the date beyond the 10th won't give the Forum Selection Committee enough time to complete those tasks. >> >> I think if the Ops team can come up with some high level discussion topics, we'll be happy to put some holds in the Forum schedule for Ops-specific content. diablo_rojo has also offered to attend some of the Ops sessions remotely as well, if that would help you all shape some things into actual sessions. > > > I'm definitely happy to help as much as I can. If you'll have something set up that I can call into (zoom, webex, bluejeans, hangout, whatever), I definitely will. I could also read through etherpads you take notes in and help summarize things into forum proposals. > > Another thing to note is that whatever you/we submit, it doesn't have to be award winning :) Its totally possible to change session descriptions and edit who the speaker is later. 
> > Other random thought, I know Sean McGinnis has attended a lot of the Operators stuff in the past so maybe he could help narrow things down too? Not to sign him up for more work, but I know he's written a forum propsal or two in the past ;) > >> >> >> I wish I could offer a further extension, but extending it another week would push too far into the process. >> >> Cheers, >> Jimmy >> >> Erik McCormick February 27, 2019 at 12:43 PM >> >> Jimmy, >> >> I won't even get home until the 10th much less have time to follow up >> with anyone. The formation of those sessions often come from >> discussions spawned at the meetup and expanded upon later with folks >> who could not attend. Could we at least get until 3/17? I understand >> your desire to finalize the schedule, but 6 weeks out should be more >> than enough time, no? >> >> Thanks, >> Erik >> >> Jimmy McArthur February 27, 2019 at 12:04 PM >> >> Hi Erik, >> >> We are able to extend the deadline to 11:59PM Pacific, March 10th. That should give the weekend to get any additional stragglers in and still allow the Forum Programming Committee enough time to manage the rest of the approval and publishing process in time for people's travel needs, etc... >> >> For the Ops Meetup specifically, I'd suggest going a bit broader with the proposals and offering to fill in the blanks later. For example, if something comes up and everyone agrees it should go to the Forum, just submit before the end of the Ops session. Kendall or myself would be happy to help you add details a bit later in the process, should clarification be necessary. We typically have enough spots for the majority of proposed Forum sessions. That's not a guarantee, but food for thought. >> >> Cheers, >> Jimmy >> >> >> _______________________________________________ >> Airship-discuss mailing list >> Airship-discuss at lists.airshipit.org >> http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss >> >> Erik McCormick February 27, 2019 at 11:31 AM >> Would it be possible to push the deadline back a couple weeks? I expect there to be a few session proposals that will come out of the Ops Meetup which ends the day before the deadline. It would be helpful to have a little time to organize and submit things afterwards. >> >> Thanks, >> Erik >> >> Jimmy McArthur February 27, 2019 at 10:40 AM >> Hi Everyone - >> >> A quick reminder that we are accepting Forum [1] submissions for the 2019 Open Infrastructure Summit in Denver [2]. Please submit your ideas through the Summit CFP tool [3] through March 8th. Don't forget to put your brainstorming etherpad up on the Denver Forum page [4]. >> >> This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. More information about the Forum [1]. >> >> If you have questions or concerns, please reach out to speakersupport at openstack.org. >> >> Cheers, >> Jimmy >> >> [1] https://wiki.openstack.org/wiki/Forum >> [2] https://www.openstack.org/summit/denver-2019/ >> [3] https://www.openstack.org/summit/denver-2019/call-for-presentations >> [4] https://wiki.openstack.org/wiki/Forum/Denver2019 >> ___________________________________________ >> > > Hopefully that helps! 
> > -Kendall (diablo_rojo) > > _______________________________________________ > Airship-discuss mailing list > Airship-discuss at lists.airshipit.org > http://lists.airshipit.org/cgi-bin/mailman/listinfo/airship-discuss > > From fungi at yuggoth.org Thu Feb 28 00:05:07 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 28 Feb 2019 00:05:07 +0000 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> <5C770FA4.6010409@openstack.org> Message-ID: <20190228000507.5xgffdsnns6xnxzw@yuggoth.org> On 2019-02-27 18:51:49 -0500 (-0500), Erik McCormick wrote: [...] > I would also be curious to hear from anyone who is basing their travel > choices on the approval status of a Forum session. I believe that this > number is 0, but I'm ready to be proven wrong. [...] We had someone in the Diversity WG meeting this week express that their employer won't send them for all 3 days of the Summit so they need to know which day the WG's forum session will be scheduled as far in advance as possible to be able to make travel arrangements. I know that's just one example, but there are likely more. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From emccormick at cirrusseven.com Thu Feb 28 00:10:08 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 27 Feb 2019 19:10:08 -0500 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <20190228000507.5xgffdsnns6xnxzw@yuggoth.org> References: <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> <5C770FA4.6010409@openstack.org> <20190228000507.5xgffdsnns6xnxzw@yuggoth.org> Message-ID: On Wed, Feb 27, 2019 at 7:06 PM Jeremy Stanley wrote: > > On 2019-02-27 18:51:49 -0500 (-0500), Erik McCormick wrote: > [...] > > I would also be curious to hear from anyone who is basing their travel > > choices on the approval status of a Forum session. I believe that this > > number is 0, but I'm ready to be proven wrong. > [...] > > We had someone in the Diversity WG meeting this week express that > their employer won't send them for all 3 days of the Summit so they > need to know which day the WG's forum session will be scheduled as > far in advance as possible to be able to make travel arrangements. I > know that's just one example, but there are likely more. > -- > Jeremy Stanley That is super useful to know. Thanks! From tony at bakeyournoodle.com Thu Feb 28 00:10:26 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 28 Feb 2019 11:10:26 +1100 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: References: <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> <5C770FA4.6010409@openstack.org> Message-ID: <20190228001024.GD13081@thor.bakeyournoodle.com> On Wed, Feb 27, 2019 at 06:51:49PM -0500, Erik McCormick wrote: > In future, we would love to know cutoff dates as soon as you know what > they are. We don't have a lot of control over exact dates of the > meetup as it's based primarily on the availability of the host > organization. However, if it comes soon enough we can try to influence > the date and plan for it as best we can. Given it's the same format every time they're known now? See: https://wiki.openstack.org/wiki/Forum for the format. 
So we know the post Denver summit is in Shnaghai: [tony at thor ~]$ python3 Python 3.7.2 (default, Jan 16 2019, 19:49:22) [GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datetime >>> T=datetime.date(2019,11,4) >>> print('Etherpads %s' % (T - datetime.timedelta(weeks=11))) Etherpads 2019-08-19 >>> print('Deadline %s' % (T - datetime.timedelta(weeks=7))) Deadline 2019-09-16 >>> > I would also be curious to hear from anyone who is basing their travel > choices on the approval status of a Forum session. I believe that this > number is 0, but I'm ready to be proven wrong. There are companies that only fund 'speakers' for travel and there are some developer types (like myself) that will be moderating forum sessions (hopefully) but not talking at the summit. So there certainly *are* people in this situation. Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From emccormick at cirrusseven.com Thu Feb 28 00:21:54 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 27 Feb 2019 19:21:54 -0500 Subject: [all] [forum] Forum Submissions are open! In-Reply-To: <20190228001024.GD13081@thor.bakeyournoodle.com> References: <5C76D125.2040404@openstack.org> <5C76E02F.4010906@openstack.org> <20190227220801.GA12980@sm-workstation> <5C770FA4.6010409@openstack.org> <20190228001024.GD13081@thor.bakeyournoodle.com> Message-ID: On Wed, Feb 27, 2019 at 7:10 PM Tony Breeds wrote: > > On Wed, Feb 27, 2019 at 06:51:49PM -0500, Erik McCormick wrote: > > > In future, we would love to know cutoff dates as soon as you know what > > they are. We don't have a lot of control over exact dates of the > > meetup as it's based primarily on the availability of the host > > organization. However, if it comes soon enough we can try to influence > > the date and plan for it as best we can. > > Given it's the same format every time they're known now? > See: https://wiki.openstack.org/wiki/Forum for the format. > > So we know the post Denver summit is in Shnaghai: > > [tony at thor ~]$ python3 > Python 3.7.2 (default, Jan 16 2019, 19:49:22) > [GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux > Type "help", "copyright", "credits" or "license" for more information. > >>> import datetime > >>> T=datetime.date(2019,11,4) > >>> print('Etherpads %s' % (T - datetime.timedelta(weeks=11))) > Etherpads 2019-08-19 > >>> print('Deadline %s' % (T - datetime.timedelta(weeks=7))) > Deadline 2019-09-16 > >>> > Deadline for Berlin was 44 days before the summit. Deadline for this one was 51 days (now 49). The extra week I asked for would bring it in line exactly, so +1 to your code and I get the extra week. > > I would also be curious to hear from anyone who is basing their travel > > choices on the approval status of a Forum session. I believe that this > > number is 0, but I'm ready to be proven wrong. > > There are companies that only fund 'speakers' for travel and there are > some developer types (like myself) that will be moderating forum > sessions (hopefully) but not talking at the summit. > > So there certainly *are* people in this situation. > As per my previous comments, I submit to penance for underestimating the varied travel situations attendees face. My apologies > Tony. 
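For readers following the timeline arithmetic above: Tony's snippet encodes the recurring format he points to on the Forum wiki page, with brainstorming etherpads opening roughly 11 weeks before the summit starts and submissions closing roughly 7 weeks before. A small generalization of that snippet is sketched below, reusing the same assumed Shanghai start date of 2019-11-04; the resulting dates are only as reliable as that assumed start date.

import datetime

def forum_dates(summit_start):
    # Forum schedule used in Tony's example above:
    # etherpads open ~11 weeks out, submission deadline ~7 weeks out.
    etherpads_open = summit_start - datetime.timedelta(weeks=11)
    deadline = summit_start - datetime.timedelta(weeks=7)
    return etherpads_open, deadline

opens, deadline = forum_dates(datetime.date(2019, 11, 4))
print('Etherpads %s' % opens)    # 2019-08-19
print('Deadline %s' % deadline)  # 2019-09-16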
From zbitter at redhat.com Thu Feb 28 00:48:57 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Feb 2019 19:48:57 -0500 Subject: [heat] keystone endpoint configuration In-Reply-To: References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: On 26/02/19 11:11 AM, Mohammed Naser wrote: > If we can't support this model, maybe we should consider dropping the whole > idea of admin/internal/public I wish to subscribe to your newsletter ;) Did we ever support that model though? We have those different endpoints in the catalog, but whether operators could require the user to use different URIs for the catalog itself depending on where they're calling from is not controlled by the catalog code. I doubt we've said anything about it either way (though we should). - ZB From kennelson11 at gmail.com Thu Feb 28 00:58:52 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Feb 2019 16:58:52 -0800 Subject: [first contact] Meeting Time Moving! Message-ID: Hello :) Since two of our regular meeting attendees are moving/have moved to very different timezones, here is a poll to pick a new time to meet! We also won't actually enact the new time until April-ish but if you have a preference on if we want to move it for the first meeting in April or the second, please voice your opinions here :) -Kendall (diablo_rojo) [1] https://doodle.com/poll/h5m5n6za9hbiv9pr -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Thu Feb 28 01:23:59 2019 From: chris at openstack.org (Chris Hoge) Date: Wed, 27 Feb 2019 17:23:59 -0800 Subject: [puppet][tripleo] NDSU Capstone Introduction! In-Reply-To: References: <6E895DDA-E451-416B-83D9-A89E801BA0CE@openstack.org> Message-ID: <458302FF-356B-4EC8-B38F-C81A36C3632C@openstack.org> > On Feb 27, 2019, at 6:59 AM, Alex Schultz wrote: > > On Tue, Feb 26, 2019 at 5:38 PM Chris Hoge wrote: >> >> Welcome Eduardo, and Hunter and Jason. >> >> For the initial work, we will be looking at replacing GPL licensed modules in >> the Puppet-OpenStack project with Apache licensed alternatives. Some of the >> candidate module transitions include: >> >> antonlindstrom/puppet-powerdns -> sensson/powerdns >> >> duritong/puppet-sysctl -> thias/puppet-sysctl >> >> puppetlabs/puppetlabs-vcsrepo -> voxpupuli/puppet-git_resource >> >> Feedback and support on this is welcome, but where possible I would like for >> the students to be sending the patches up and collaborating to to help make these >> transitions (where possible, it’s my understanding that sysctl may pose serious >> challenges). Much of it should be good introductory work to our community >> workflow, and I'd like for them to have an opportunity to have a >> successful set of initial patches and contributions that have a positive >> lasting impact on the community. >> > > Please note that this also has an impact on TripleO and any other > downstream consumers of the puppet modules. Specifically for TripleO > we'll need to consider how packaging these will come into play and if > they aren't 1:1 compatible it may break us. StarlingX might also be > impacted as well. Given the difficulties of coordinating the limited scope of the capstone project and release schedule constraints of RDO and TripleO, we've decided to focus on work related to other OpenStack projects. Thanks for your feedback. -Chris > >> Thanks in advance, and my apologies for not communicating these efforts >> to the mailing list sooner. 
>> >> -Chris >> >>> On Feb 19, 2019, at 6:40 PM, Urbano Moreno, Eduardo wrote: >>> >>> Hello OpenStack community, >>> >>> I just wanted to go ahead and introduce myself, as I am a part of the NDSU Capstone group! >>> >>> My name is Eduardo Urbano and I am a Jr/Senior at NDSU. I am currently majoring in Computer Science, with no minor although that could change towards graduation. I am currently an intern at an electrical supply company here in Fargo, North Dakota known as Border States. I am an information security intern and I am enjoying it so far. I have learned many interesting security things and have also became a little paranoid of how easily someone can get hacked haha. Anyways, I am so excited to be on board and be working with OpenStack for this semester. So far I have learned many new things and I can’t wait to continue on learning. >>> >>> Thank you! >>> >>> >>> -Eduardo >> >> > From alifshit at redhat.com Thu Feb 28 01:25:41 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 27 Feb 2019 20:25:41 -0500 Subject: [nova] NUMA live migration - mostly how it's tested Message-ID: Hey all, There won't be much new here for those who've reviewed the patches [1] already, but I wanted to address the testing situation. Until recently, the last patch was WIP because I had functional tests but no unit tests. Even without NUMA anywhere, the claims part of the new code could be tested in functional tests. With the new and improved implementation proposed by Dan Smith [2], this is no longer the case. Any test more involved than unit testing will need "real" NUMA instances on "real" NUMA hosts to trigger the new code. Because of that, I've dropped functional testing altogether, have added unit tests, and have taken the WIP tag off. What I've been using for testing is this: [3]. It's a series of patches to whitebox_tempest_plugin, a Tempest plugin used by a bunch of us Nova Red Hatters to automate testing that's outside of Tempest's scope. Same idea as the intel-nfv-ci plugin [4]. The tests I currently have check that: * CPU pin mapping is updated if the destination has an instance pinned to the same CPUs as the incoming instance * emulator thread pins are updated if the destination has a different cpu_shared_set value and the instance has the hw:emulator_threads_policy set to `share` * NUMA node pins are updated for a hugepages instance if the destination has a hugepages instances consuming the same NUMA node as the incoming instance It's not exhaustive by any means, but I've made sure that all iterations pass those 3 tests. It should be fairly easy to add new tests, as most of the necessary scaffolding is already in place. [1] https://review.openstack.org/#/c/634606/ [2] https://review.openstack.org/#/c/634828/28/nova/virt/driver.py at 1147 [3] https://review.rdoproject.org/r/#/c/18832/ [4] https://github.com/openstack/intel-nfv-ci-tests/ From zhengzhenyulixi at gmail.com Thu Feb 28 01:41:18 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 28 Feb 2019 09:41:18 +0800 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: As for this case, and what Matt mentioned in the patch review: > restriction for attaching volumes with a tag because while it's > true we don't know if the compute that the instance is unshelved on > will support device tags, we also don't know that during initial > server create but we still allow bdm tags in that case. 
In fact, in > the server create case, if you try to create a server with bdm tags > on a compute host that does not support them, it will fail the > build (not even try to reschedule) There is something I don't quite understand, what will be different for the volumes that are newly attached and the existing volumes in case you mentioned? I mean, the existing volumes could also have tags, and when we unshelve, we still have to handle the tags in bdms, no matter it is existing bdms or newly atteched when the instance is in ``shelved_offloaded`` status. What is the difference? BR, On Thu, Feb 28, 2019 at 12:08 AM Artom Lifshitz wrote: > On Tue, Feb 26, 2019 at 8:23 AM Matt Riedemann > wrote: > > > > On 2/26/2019 6:40 AM, Zhenyu Zheng wrote: > > > I'm working on a blueprint to support Detach/Attach root volumes. The > > > blueprint has been proposed for quite a while since mitaka[1] in that > > > version of proposal, we only talked about instances in > shelved_offloaded > > > status. And in Stein[2] the status of stopped was also added. But now > we > > > realized that support detach/attach root volume on a stopped instance > > > could be problemastic since the underlying image could change which > > > might invalidate the current host.[3] > > > > > > So Matt and Sean suggested maybe we could just do it for > > > shelved_offloaded instances, and I have updated the patch according to > > > this comment. And I will update the spec latter, so if anyone have > > > thought on this, please let me know. > > > > I mentioned this during the spec review but didn't push on it I guess, > > or must have talked myself out of it. We will also have to handle the > > image potentially changing when attaching a new root volume so that when > > we unshelve, the scheduler filters based on the new image metadata > > rather than the image metadata stored in the RequestSpec from when the > > server was originally created. But for a stopped instance, there is no > > run through the scheduler again so I don't think we can support that > > case. Also, there is no real good way for us (right now) to even compare > > the image ID from the new root volume to what was used to originally > > create the server because for volume-backed servers the > > RequestSpec.image.id is not set (I'm not sure why, but that's the way > > it's always been, the image.id is pop'ed from the metadata [1]). And > > when we detach the root volume, we null out the BDM.volume_id so we > > can't get back to figure out what that previous root volume's image ID > > was to compare, i.e. for a stopped instance we can't enforce that the > > underlying image is the same to support detach/attach root volume. We > > could probably hack stuff up by stashing the old volume_id/image_id in > > system_metadata but I'd rather not play that game. > > > > It also occurs to me that the root volume attach code is also not > > verifying that the new root volume is bootable. So we really need to > > re-use this code on root volume attach [2]. > > > > tl;dr when we attach a new root volume, we need to update the > > RequestSpec.image (ImageMeta) object based on the new root volume's > > underlying volume_image_metadata so that when we unshelve we use that > > image rather than the original image. > > > > > > > > Another thing I wanted to discuss is that in the proposal, we will > reset > > > some fields in the root_bdm instead of delete the whole record, among > > > those fields, the tag field could be tricky. My idea was to reset it > > > too. 
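For context on the tag field being discussed: it is the block device tag a user may supply when attaching a volume, which the guest can later read back via the metadata API or config drive. A rough sketch of such an attach request is below; the endpoint, token and UUIDs are placeholders, and compute API microversion 2.49 or later is assumed for tagged volume attach.

import requests

compute = 'http://compute.example.com/v2.1'   # placeholder endpoint
headers = {
    'X-Auth-Token': 'TOKEN',                  # placeholder token
    'OpenStack-API-Version': 'compute 2.49',  # tagged volume attach needs >= 2.49
    'Content-Type': 'application/json',
}
body = {'volumeAttachment': {'volumeId': 'VOLUME_UUID', 'tag': 'ubuntu1604vol'}}
resp = requests.post(compute + '/servers/SERVER_UUID/os-volume_attachments',
                     json=body, headers=headers)
resp.raise_for_status()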
But there also could be cases that the users might think that it > > > would not change[4]. > > > > Yeah I am not sure what to do here. Here is a scenario: > > > > User boots from volume with a tag "ubuntu1604vol" to indicate it's the > > root volume with the operating system. Then they shelve offload the > > server and detach the root volume. At this point, the GET > > /servers/{server_id}/os-volume_attachments API is going to show None for > > the volume_id on that BDM but should it show the original tag or also > > show None for that. Kevin currently has the tag field being reset to > > None when the root volume is detached. > > > > When the user attaches a new root volume, they can provide a new tag so > > even if we did not reset the tag, the user can overwrite it. As a user, > > would you expect the tag to be reset when the root volume is detached or > > have it persist but be overwritable? > > > > If in this scenario the user then attaches a new root volume that is > > CentOS or Ubuntu 18.04 or something like that, but forgets to update the > > tag, then the old tag would be misleading. > > The tag is a Nova concept on the attachment. If you detach a volume > (root or not) then attach a different one (root or not), to me that's > a new attachment, with a new (potentially None) tag. I have no idea > who that fits into the semantics around root volume detach, but that's > my 2 cents. > > > > > So it is probably safest to just reset the tag like Kevin's proposed > > code is doing, but we could use some wider feedback here. > > > > [1] > > > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943 > > [2] > > > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101 > > > > -- > > > > Thanks, > > > > Matt > > > > > -- > -- > Artom Lifshitz > Software Engineer, OpenStack Compute DFG > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Feb 28 01:58:19 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 28 Feb 2019 12:58:19 +1100 Subject: [first contact] Meeting Time Moving! In-Reply-To: References: Message-ID: <20190228015818.GE13081@thor.bakeyournoodle.com> On Wed, Feb 27, 2019 at 04:58:52PM -0800, Kendall Nelson wrote: > Hello :) > > Since two of our regular meeting attendees are moving/have moved to very > different timezones, here is a poll to pick a new time to meet! > > We also won't actually enact the new time until April-ish but if you have a > preference on if we want to move it for the first meeting in April or the > second, please voice your opinions here :) The first meeting in April is April 9th/10th. By that time the US, EU and AU have all completed the awkward DST transition so I'd say that's a great transition point. We can test it once before the summit and then have a break ;P Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mriedemos at gmail.com Thu Feb 28 02:10:52 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 27 Feb 2019 20:10:52 -0600 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: On 2/27/2019 7:41 PM, Zhenyu Zheng wrote: > There is something I don't quite understand, what will be different for > the volumes that are newly attached and > the existing volumes in case you mentioned? I mean, the existing volumes > could also have tags, and when > we unshelve,  we still have to handle the tags in bdms, no matter it is > existing bdms or newly atteched when the > instance is in ``shelved_offloaded`` status. What is the difference? There isn't, it's a bug: https://bugs.launchpad.net/nova/+bug/1817927 Which is why I think we should probably lift the restriction in the API so that users can attach volumes with tags to a shelved offloaded instance. I'm not really comfortable with adding root volume detach/attach support if the user cannot specify a new tag when attaching a new root volume, and to do that we have to remove that restriction on tags + shelved offloaded servers in the API. -- Thanks, Matt From mriedemos at gmail.com Thu Feb 28 02:25:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 27 Feb 2019 20:25:36 -0600 Subject: [nova] NUMA live migration - mostly how it's tested In-Reply-To: References: Message-ID: On 2/27/2019 7:25 PM, Artom Lifshitz wrote: > What I've been using for testing is this: [3]. It's a series of > patches to whitebox_tempest_plugin, a Tempest plugin used by a bunch > of us Nova Red Hatters to automate testing that's outside of Tempest's > scope. And where is that pulling in your nova series of changes and posting test results (like a 3rd party CI) so anyone can see it? Or do you mean here are tests, but you need to provide your own environment if you want to verify the code prior to merging it. Can we really not even have functional tests with the fake libvirt driver and fake numa resources to ensure the flow doesn't blow up? -- Thanks, Matt From alifshit at redhat.com Thu Feb 28 02:33:31 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 27 Feb 2019 21:33:31 -0500 Subject: [nova] NUMA live migration - mostly how it's tested In-Reply-To: References: Message-ID: On Wed, Feb 27, 2019, 21:27 Matt Riedemann, wrote: > On 2/27/2019 7:25 PM, Artom Lifshitz wrote: > > What I've been using for testing is this: [3]. It's a series of > > patches to whitebox_tempest_plugin, a Tempest plugin used by a bunch > > of us Nova Red Hatters to automate testing that's outside of Tempest's > > scope. > > And where is that pulling in your nova series of changes and posting > test results (like a 3rd party CI) so anyone can see it? Or do you mean > here are tests, but you need to provide your own environment if you want > to verify the code prior to merging it. > Sorry, wasn't clear. It's the latter. The test code exists, and has run against my devstack environment with my patches checked out, but there's no CI or public posting of test results. Getting CI coverage for these NUMA things (like the old Intel one) is a whole other topic. > Can we really not even have functional tests with the fake libvirt > driver and fake numa resources to ensure the flow doesn't blow up? > That's something I have to look into. 
We have live migration functional tests, and we have NUMA functional tests, but I'm not sure how we can combine the two. > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wouter.bommel at canonical.com Thu Feb 28 03:35:56 2019 From: wouter.bommel at canonical.com (Wouter van Bommel) Date: Thu, 28 Feb 2019 13:35:56 +1000 Subject: Issue with ceph and vault Message-ID: <01f1b1ff-7b50-c665-0580-a9a630a520c2@canonical.com> Hi, I currently bumped into a situation in a cloud using both ceph and vault. While adding a new node to the cloud all seemed fine, until I noticed that on one node juju failed with the message: hook failed: "secrets-storage-relation-changed" Upon investigation it turned out that this node had a broken network configuration that was not picket up earlier. The problem was, that the vault instance could not be reached. This was fixed, but I could not un-wedge the error. A juju resolve on the node does not seem to have any effect. Using the debug-log option from juju I do see the following lines that do indicate the problem. INFO:vaultlocker.dmcrypt:LUKS formatting /dev/disk/by-dname/osd1-part1 using UUID:4448f9fa-d291-403e-9826-7318e57cf1a4 Cannot format device /dev/disk/by-dname/osd1-part1 which is still in use. vaultlocker: Command '['cryptsetup', '--batch-mode', '--uuid', '4448f9fa-d291-403e-9826-7318e57cf1a4', '--key-file', '-', 'luksFormat', '/dev/disk/by-dname/osd1-part1']' returned non-zero exit status 5 So the disk is in use. Which looks strange to me, as this machine could never complete the hooks as the vault was unreachable. I would like to get some advice on how to proceed. Regrds, Wouter From singh.surya64mnnit at gmail.com Thu Feb 28 05:21:37 2019 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Thu, 28 Feb 2019 10:51:37 +0530 Subject: [kolla] Proposing Michal Nasiadka to the core team In-Reply-To: References: Message-ID: +1 Good Work Michal, Welcome !! to team On Wed, Feb 27, 2019 at 8:44 PM Eduardo Gonzalez wrote: > Vote is over, > > Welcome to the core team Michal! > > > El lun., 25 feb. 2019 a las 16:13, Jeffrey Zhang (< > zhang.lei.fly+os-discuss at gmail.com>) escribió: > >> +1 >> >> On Mon, Feb 25, 2019 at 8:30 PM Martin André wrote: >> >>> On Fri, Feb 15, 2019 at 11:21 AM Eduardo Gonzalez >>> wrote: >>> > >>> > Hi, is my pleasure to propose Michal Nasiadka for the core team in >>> kolla-ansible. >>> >>> +1 >>> I'd also be happy to welcome Michal to the kolla-core group (not just >>> kolla-ansible) as he's done a great job reviewing the kolla patches >>> too. >>> >>> Martin >>> >>> > Michal has been active reviewer in the last relases ( >>> https://www.stackalytics.com/?module=kolla-group&user_id=mnasiadka), >>> has been keeping an eye on the bugs and being active help on IRC. >>> > He has also made efforts in community interactions in Rocky and Stein >>> releases, including PTG attendance. >>> > >>> > His main interest is NFV and Edge clouds and brings valuable couple of >>> years experience as OpenStack/Kolla operator with good knowledge of Kolla >>> code base. >>> > >>> > Planning to work on extending Kolla CI scenarios, Edge use cases and >>> improving NFV-related functions ease of deployment. >>> > >>> > Consider this email as my +1 vote. Vote ends in 7 days (22 feb 2019) >>> > >>> > Regards >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
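Regarding the vaultlocker error a couple of messages up ("Cannot format device ... which is still in use"): one way to see what is holding the partition open before retrying the luksFormat is to check the sysfs holders directory for that device. The diagnostic sketch below is not part of vaultlocker or the charm, and the device name is illustrative only.

import os

dev = 'sdb1'  # hypothetical kernel name behind /dev/disk/by-dname/osd1-part1
holders_dir = '/sys/class/block/%s/holders' % dev
if os.path.isdir(holders_dir):
    holders = os.listdir(holders_dir)
    if holders:
        print('%s is held by: %s' % (dev, ', '.join(holders)))
    else:
        print('%s has no holders recorded in sysfs' % dev)
else:
    print('no sysfs entry found for %s' % dev)

If a device-mapper holder shows up there, the partition was already LUKS-opened or otherwise claimed, which would be consistent with cryptsetup exiting with status 5 (device exists or busy).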
URL: From ramishra at redhat.com Thu Feb 28 05:41:55 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 28 Feb 2019 11:11:55 +0530 Subject: [heat] keystone endpoint configuration In-Reply-To: References: <22a164a6-73c9-5c6f-cfd0-6f29b0bae47a@rd.bbc.co.uk> Message-ID: On Thu, Feb 28, 2019 at 4:02 AM Jonathan Rosser < jonathan.rosser at rd.bbc.co.uk> wrote: > > On 27/02/2019 20:49, Zane Bitter wrote: > > On 20/02/19 1:40 PM, Jonathan Rosser wrote: > >> In openstack-ansible we are trying to help a number of our end users > >> with their heat deployments, some of them in conjunction with magnum. > >> > >> There is some uncertainty with how the following heat.conf sections > >> should be configured: > >> > >> [clients_keystone] > >> auth_uri = ... > >> > >> [keystone_authtoken] > >> www_authenticate_uri = ... > >> > >> It does not appear to be possible to define a set of internal or > >> external keystone endpoints in heat.conf which allow the following: > >> > >> * The orchestration panels being functional in horizon > >> * Deployers isolating internal openstack from external networks > >> * Deployers using self signed/company cert on the external endpoint > >> * Magnum deployments completing > >> * Heat delivering an external endpoint at [1] > >> * Heat delivering an external endpoint at [2] > >> > >> There are a number of related bugs: > >> > >> https://bugs.launchpad.net/openstack-ansible/+bug/1814909 > >> https://bugs.launchpad.net/openstack-ansible/+bug/1811086 > >> https://storyboard.openstack.org/#!/story/2004808 > >> https://storyboard.openstack.org/#!/story/2004524 > > > > Based on this and your comment on IRC[1] - and correct me if I'm > > misunderstanding here - the crux of the issue is that the Keystone > > auth_url must be accessed via different addresses depending on which > > network the request is coming from? > > > The most concrete example I can give is that of a Magnum k8s deployment, > where heat is used to create several VM and deploy software. Callback > URLs are embedded into those VM and SoftwareDeployments, and the > Callback URL must be accessible from the VM, this would always need to > be something that could reasonably be called a "Public" endpoint. > > Conversely, heat itself needs to be able to talk to many other openstack > components, defined in the [clients_*] config sections. It is reasonable > to describe these interactions as being "Internal" - I may misunderstand > some of this though. > > So here lies the issue - appropriate entries in heat.conf to make > internal interactions between heat and horizon (one example) work in > real-world deployments results in the keystone internal URL being placed > in callbacks, and then SoftwareDemployments never complete as the > internal keystone URL is not usually accessible to a VM. I suspect that > there is not much coverage for this kind of network separation in gate > tests. > > I think the crux of issue is that we use the same keystone endpoint (auth_uri/ www_authenticate_uri from clients_keystone/keystone_authtoken sections in heat.conf) when creating the auth_plugins in the context[1], heat internal keystone objects[3] and the keystone auth_url[2] we pass on to the instances for signaling. We've auth_url middleware that sets X-Auth-Url header in the request by reading the above config options. There is probably some history[4] (i.e on why we fallback to use [keystone_authtoken]auth_uri/www_authenticate_uri) which is beyond my involvement with heat. 
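To make the interface distinction above concrete: the same service catalog entry can resolve to different URLs depending on the interface requested, which is why the endpoint heat's own clients use and the auth_url heat hands to a guest for signaling cannot always be the same value. The sketch below is standalone keystoneauth1 usage, not heat code, and the credentials and URL are placeholders.

from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(auth_url='https://keystone.example.com/v3',  # placeholder
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# Endpoint reachable from the control plane, typically not from tenant VMs.
internal = sess.get_endpoint(service_type='identity', interface='internal')

# Endpoint a SoftwareDeployment running inside a guest can usually reach.
public = sess.get_endpoint(service_type='identity', interface='public')

print(internal)
print(public)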
AFAICT, we're aware of this problem for sometime and no one ever tried to fix it. I think we can provide a separate config option for what auth_url we send to the instances as they may not be in the same network, as the ones like other services use. [1] https://github.com/openstack/heat/blob/master/heat/common/context.py#L255 [2] https://github.com/openstack/heat/blob/master/heat/engine/resources/signal_responder.py#L106 [3] https://github.com/openstack/heat/blob/master/heat/engine/clients/os/keystone/heat_keystoneclient.py [4] https://github.com/openstack/heat/commit/1e71566169d9ffb34ed4e1bdf3f264cbdbb567cb#diff-e3a36cd5713124c3901fb0e8c6016357 > > I don't think this was ever contemplated as a use case in developing > > Heat. For my part, I certainly always assumed that while the Keystone > > catalog could contain different Public/Internal/Admin endpoints for > > each service, there was only a single place to access the catalog > > (i.e. each cloud had a single unique auth_url). > > > I think that as far as heat itself interacting with other openstack > components is concerned there does not need to be more than one > auth_url. However it is very important to make a distinction between the > context in which the heat code runs and the context of a VM created by > heat - any callback URL created must be valid for the context of the VM, > not the heat code. > > > It's entirely possible this wasn't a valid assumption about the how > > clouds would/should be deployed in practice. If that's the case then > > we likely need some richer configuration options. The design of the > > Keystone catalog predates both the existence of Heat and the idea that > > cloud workloads might have reason to access the OpenStack APIs, and > > nobody is really an expert on both although we've gotten better at > > communicating. > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23heat/%23heat.2019-02-26.log.html#t2019-02-26T17:14:14 > > > Colleen makes some observations about the use of keystone config in heat > - and interestingly suggests a seperate config entry for cases where a > keystone URL should be handed on to another service. Mohammed and I have > already discussed additional config options being a potential solution > whilst trying to debug this. > > There are already examples of similar config options in heat.conf, such > as "heat_waitcondition_server_url" - would additonal config items such > as server_base_auth_url and signal_responder_auth_url be appropriate so > that we can be totally explicit about the endpoints handed on to created > VM? > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Feb 28 09:03:57 2019 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 28 Feb 2019 10:03:57 +0100 Subject: [Openstack-sigs] [publiccloud-wg] Late reminder weekly meeting Public Cloud WG Message-ID: <6dddf231-0955-6849-8142-e37fdbfc9adf@citynetwork.eu> Hi everyone, Time for a new meeting for PCWG - today (28th) 1400 UTC in #openstack-publiccloud! Agenda found below and at https://etherpad.openstack.org/p/publiccloud-wg Agenda 1. Train goal: Deletion of resources 2. Summit Denver forum topics 3. Followup goals Sorry for the late reminder! Talk to you later today! 
Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From zhengzhenyulixi at gmail.com Thu Feb 28 09:35:59 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 28 Feb 2019 17:35:59 +0800 Subject: [nova] Updates about Detaching/Attaching root volumes In-Reply-To: References: <19d57159-69b3-0b4b-cec8-2018fb672d41@gmail.com> Message-ID: Checking the bug report you mentioned and seems the best solution will be rely on that BP you mentioned in the report. I sugguest one thing we can do is that we mention that we will not reset the tag but it might not working. And when we can support assign tag when attach volume to shelved_offloaded instances, we then perform the reset and update action. On Thu, Feb 28, 2019 at 10:10 AM Matt Riedemann wrote: > On 2/27/2019 7:41 PM, Zhenyu Zheng wrote: > > There is something I don't quite understand, what will be different for > > the volumes that are newly attached and > > the existing volumes in case you mentioned? I mean, the existing volumes > > could also have tags, and when > > we unshelve, we still have to handle the tags in bdms, no matter it is > > existing bdms or newly atteched when the > > instance is in ``shelved_offloaded`` status. What is the difference? > > There isn't, it's a bug: > > https://bugs.launchpad.net/nova/+bug/1817927 > > Which is why I think we should probably lift the restriction in the API > so that users can attach volumes with tags to a shelved offloaded instance. > > I'm not really comfortable with adding root volume detach/attach support > if the user cannot specify a new tag when attaching a new root volume, > and to do that we have to remove that restriction on tags + shelved > offloaded servers in the API. > > -- > > Thanks, > > Matt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Feb 28 09:48:54 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 28 Feb 2019 09:48:54 +0000 (GMT) Subject: [nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova? In-Reply-To: <0da87145-8c10-7d4e-ec3c-70cbf6e31729@gmail.com> References: <492012d2-18b7-436c-9990-d11136f62b9a@gmail.com> <0da87145-8c10-7d4e-ec3c-70cbf6e31729@gmail.com> Message-ID: On Fri, 22 Feb 2019, Jay Pipes wrote: > On 02/22/2019 10:06 AM, Matt Riedemann wrote: >> On 2/22/2019 2:46 AM, Sanjay K wrote: >>> I will define/derive priority based on the which sub network the VM >>> belongs to - mostly Production or Development. From this, the Prod VMs >>> will have higher resource allocation criteria than other normal VMs and >>> these can be calculated at runtime when a VM is also rebooted like how >>> VMware resource pools and shares features work. >> >> It sounds like a weigher in scheduling isn't appropriate for your use case >> then, because weighers in scheduling are meant to weigh compute hosts once >> they have been filtered. It sounds like you're trying to prioritize which >> VMs will get built, which sounds more like a pre-emptible/spot instances >> use case [1][2]. >> >> As for VMware resource pools and shares features, I don't know anything >> about those since I'm not a vCenter user. Maybe someone more worldly, like >> Jay Pipes, can chime in here. > > I am neither worldly nor a vCenter user. 
Perhaps someone from VMWare, like > Chris Dent, can chime in here ;) As I understand the question the goal is to have a feature within nova itself which is similar to resource pools in vCenter (and the somewhat dynamic resource management they can do) but can be used with a group of kvm hypervisors. In which case Matt's answer is pretty much spot on: pre-emptibility, reservations, dynamic management of aggregates and the like. In which case there may be some value in seeing if Blazar can help (now or in the future). Another option would be dynamic management of the inventory and traits in placement by a custom third party agent that doesn't yet exist combined with clever flavor management. Sanjay, I suspect none of this really helps you all that much, sorry for that. If you feel like trying to describe what you want to accomplish in s slightly different way, that might help us come up with workarounds. (/me also not wordly, and only barely and sometimes a vCenter user, it's all magic in there) -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From florian.engelmann at everyware.ch Thu Feb 28 10:21:38 2019 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Thu, 28 Feb 2019 11:21:38 +0100 Subject: [ceilometer] radosgw pollster In-Reply-To: <1551296336596.95229@everyware.ch> References: <1551295990725.66562@everyware.ch> <1551296336596.95229@everyware.ch> Message-ID: Hi Christian, after adding requests-aws to the container the PollsterPermanentError is gone and the ceilometer polling log looks clean to me: 2019-02-28 10:29:46.163 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.containers.objects in the context of radosgw_300s_pollsters 2019-02-28 10:29:46.167 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.objects in the context of radosgw_300s_pollsters 2019-02-28 10:29:46.172 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.objects.size in the context of radosgw_300s_pollsters 2019-02-28 10:29:46.177 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.objects.containers in the context of radosgw_300s_pollsters 2019-02-28 10:29:46.182 24 INFO ceilometer.polling.manager [-] Polling pollster radosgw.usage in the context of radosgw_300s_pollsters I still do not get any data in gnocchi: openstack metric resource list -c type -f value | sort -u generic image instance instance_disk instance_network_interface network volume My RadosGW logs do show the poller requests: 2019-02-28 10:29:54.767266 7f67a550e700 20 HTTP_ACCEPT=*/* 2019-02-28 10:29:54.767272 7f67a550e700 20 HTTP_ACCEPT_ENCODING=gzip, deflate 2019-02-28 10:29:54.767274 7f67a550e700 20 HTTP_AUTHORIZATION=AWS 0APxxxxxxxxx:vxxxxxxxICq/qQXxxxxxxxxxuuc= [...] 2019-02-28 10:29:54.767294 7f67a550e700 20 REMOTE_ADDR=10.xx.xxx.xxx 2019-02-28 10:29:54.767295 7f67a550e700 20 REQUEST_METHOD=GET 2019-02-28 10:29:54.767296 7f67a550e700 20 REQUEST_URI=/admin/usage 2019-02-28 10:29:54.767297 7f67a550e700 20 SCRIPT_URI=/admin/usage [...] 
in_hosted_domain_s3website=0 s->info.domain=rgw.xxxxxx s->info.request_uri=/admin/usage 2019-02-28 10:29:54.767479 7f67a550e700 10 handler=16RGWHandler_Usage 2019-02-28 10:29:54.767491 7f67a550e700 2 req 215268:0.000183::GET /admin/usage::getting op 0 2019-02-28 10:29:54.767495 7f67a550e700 10 op=15RGWOp_Usage_Get 2019-02-28 10:29:54.767496 7f67a550e700 2 req 215268:0.000196::GET /admin/usage:get_usage:verifying requester 2019-02-28 10:29:54.767528 7f67a550e700 20 rgw::auth::StrategyRegistry::s3_main_strategy_t: trying rgw::auth::s3::AWSAuthStrategy 2019-02-28 10:29:54.767530 7f67a550e700 20 rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::S3AnonymousEngine 2019-02-28 10:29:54.767558 7f67a550e700 20 rgw::auth::s3::S3AnonymousEngine denied with reason=-1 2019-02-28 10:29:54.767561 7f67a550e700 20 rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::AWSv2ExternalAuthStrategy 2019-02-28 10:29:54.767573 7f67a550e700 20 rgw::auth::s3::AWSv2ExternalAuthStrategy: trying rgw::auth::keystone::EC2Engine 2019-02-28 10:29:54.767602 7f67a550e700 10 get_canon_resource(): dest=/admin/usage 2019-02-28 10:29:54.767605 7f67a550e700 10 string_to_sign: 2019-02-28 10:29:54.767630 7f67a550e700 20 sending request to https://keystone.xxxxxxxxx/v3/auth/tokens 2019-02-28 10:29:54.767678 7f67a550e700 20 ssl verification is set to off 2019-02-28 10:29:55.194088 7f67a550e700 20 sending request to https://keystone.xxxxxxxx/v3/s3tokens 2019-02-28 10:29:55.194115 7f67a550e700 20 ssl verification is set to off 2019-02-28 10:29:55.226841 7f67a550e700 20 rgw::auth::keystone::EC2Engine denied with reason=-2028 2019-02-28 10:29:55.226852 7f67a550e700 20 rgw::auth::s3::AWSv2ExternalAuthStrategy denied with reason=-2028 2019-02-28 10:29:55.226855 7f67a550e700 20 rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::LocalEngine 2019-02-28 10:29:55.226878 7f67a550e700 10 get_canon_resource(): dest=/admin/usage 2019-02-28 10:29:55.226881 7f67a550e700 10 string_to_sign: 2019-02-28 10:29:55.226943 7f67a550e700 15 string_to_sign=GET [...] 
2019-02-28 10:29:55.226972 7f67a550e700 15 compare=0 2019-02-28 10:29:55.227099 7f67a550e700 20 rgw::auth::s3::LocalEngine granted access 2019-02-28 10:29:55.227103 7f67a550e700 20 rgw::auth::s3::AWSAuthStrategy granted access 2019-02-28 10:29:55.227106 7f67a550e700 2 req 215268:0.459806::GET /admin/usage:get_usage:normalizing buckets and tenants 2019-02-28 10:29:55.227109 7f67a550e700 2 req 215268:0.459809::GET /admin/usage:get_usage:init permissions 2019-02-28 10:29:55.227136 7f67a550e700 2 req 215268:0.459826::GET /admin/usage:get_usage:recalculating target 2019-02-28 10:29:55.227140 7f67a550e700 2 req 215268:0.459841::GET /admin/usage:get_usage:reading permissions 2019-02-28 10:29:55.227142 7f67a550e700 2 req 215268:0.459842::GET /admin/usage:get_usage:init op 2019-02-28 10:29:55.227143 7f67a550e700 2 req 215268:0.459844::GET /admin/usage:get_usage:verifying op mask 2019-02-28 10:29:55.227145 7f67a550e700 20 required_mask= 0 user.op_mask=7 2019-02-28 10:29:55.227146 7f67a550e700 2 req 215268:0.459846::GET /admin/usage:get_usage:verifying op permissions 2019-02-28 10:29:55.227148 7f67a550e700 2 req 215268:0.459848::GET /admin/usage:get_usage:verifying op params 2019-02-28 10:29:55.227149 7f67a550e700 2 req 215268:0.459850::GET /admin/usage:get_usage:pre-executing 2019-02-28 10:29:55.227150 7f67a550e700 2 req 215268:0.459851::GET /admin/usage:get_usage:executing 2019-02-28 10:29:55.232830 7f67a550e700 2 req 215268:0.465530::GET /admin/usage:get_usage:completing 2019-02-28 10:29:55.232858 7f67a550e700 2 req 215268:0.465559::GET /admin/usage:get_usage:op status=0 2019-02-28 10:29:55.232863 7f67a550e700 2 req 215268:0.465564::GET /admin/usage:get_usage:http status=200 2019-02-28 10:29:55.232865 7f67a550e700 1 ====== req done req=0x7f67a55080a0 op status=0 http_status=200 ====== 2019-02-28 10:29:55.232894 7f67a550e700 1 civetweb: 0x55ced7bd2000: 10.0.81.59 - - [28/Feb/2019:10:29:54 +0100] "GET /admin/usage?uid=2534a3e876ee41f088098fxxxxxxxxx%242534a3e876ee41f088098f53xxxxxxx HTTP/1.1" 200 0 - python-requests/2.19.1 Using curl to do the same request gives me the correct usage results: {"entries":[{"user":"a772e4abxxxxxxxx4559d$a772e4ab888exxxxxxxf4559d","buckets":[{"bucket":"","time":"2019-02-26 12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab88xxxxxxx88e4f039b3430d688f4559d","categories":[{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0}]},{"bucket":"-","time":"2019-02-17 15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab88xxxxxxxa772e4ab888e4f039b3430d688f4559d","categories":[{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":132,"bytes_received":0,"ops":1,"successful_ops":0}]},{"bucket":"info","time":"2019-02-26 12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab888e4f039xxxxxx72e4ab888e4f039b3430d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3}]},{"bucket":"test","time":"2019-02-17 
15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab8xxxxxxd$a772e4ab888e4f039b3430d688f4559d","categories":[{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"list_bucket","bytes_sent":1368,"bytes_received":0,"ops":18,"successful_ops":18},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}]}]}],"summary":[{"user":"a772e4abxxxx4f0xxx8f4559d$a772e4ab88xxxxxx30d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3},{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":1500,"bytes_received":0,"ops":19,"successful_ops":18},{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}],"total":{"bytes_sent":153* Connection #0 to host 10.xxx.xxx.xxx left intact My custom archive policy (custom_gnocchi_resources.yaml) looks like: [...] - resource_type: ceph_account metrics: radosgw.objects: radosgw.objects.size: radosgw.objects.containers: radosgw.api.request: radosgw.containers.objects: radosgw.containers.objects.size: [...] and the pipeline: [...] sources: - name: meter_source meters: - "*" sinks: - meter_sink [...] sinks: - name: meter_sink transformers: publishers: - gnocchi://?resources_definition_file=%2Fetc%2Fceilometer%2Fcustom_gnocchi_resources.yaml [...] Anything wrong with my archive policy or the pipeline? All the best, Florian Am 2/27/19 um 8:38 PM schrieb Engelmann Florian: > Hi Christian, > > > looks like a hit: > > > https://github.com/openstack/ceilometer/commit/c9eb2d44df7cafde1294123d66445ebef4cfb76d > > > You made my day! > > > I will test tomorrow and report back! > > ​ > > All the best, > > Florian > > > ------------------------------------------------------------------------ > *From:* Engelmann Florian > *Sent:* Wednesday, February 27, 2019 8:33 PM > *To:* Christian Zunker > *Cc:* openstack-discuss at lists.openstack.org > *Subject:* Re: [ceilometer] radosgw pollster > > Hi Christian, > > > thank you for your feedback and help! Permissions are fine as I tried to > poll the Endpoint successfully with curl and the user (key + secret) we > created (and is configured in ceilometer.conf). > > I saw the requests-aws is used in OSA and it is indeed missing in the > kolla container (we use "source" not binary). > > > https://github.com/openstack/kolla/blob/master/docker/ceilometer/ceilometer-base/Dockerfile.j2 > > > I will build a new ceilometer container including requests-aws tomorrow > to see if this fixes the problem. > > > All the best, > > Florian > > > ------------------------------------------------------------------------ > *From:* Christian Zunker > *Sent:* Wednesday, February 27, 2019 9:09 AM > *To:* Engelmann Florian > *Cc:* openstack-discuss at lists.openstack.org > *Subject:* Re: [ceilometer] radosgw pollster > Hi Florian, > > have you tried different permissions for your ceilometer user in radosgw? 
> According to the docs you need an admin user: > https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage > Our user has these caps: > usage=read,write;metadata=read,write;users=read,write;buckets=read,write > > We also had to add the requests-aws pip package to query radosgw from > ceilometer: > https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html > > Christian > > > Am Di., 26. Feb. 2019 um 13:15 Uhr schrieb Florian Engelmann > >: > > Hi Christian, > > Am 2/26/19 um 11:00 AM schrieb Christian Zunker: > > Hi Florian, > > > > which version of OpenStack are you using? > > The radosgw metric names were different in some versions: > > https://bugs.launchpad.net/ceilometer/+bug/1726458 > > we do use Rocky and Ceilometer 11.0.1. I am still lost with that error. > As far as I am able to understand python it looks like the error is > happening in polling.manager line 222: > > https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 > > But I do not understand why. I tried to enable debug logging but the > error does not log any additional information. > The poller is not even trying to reach/poll our RadosGWs. Looks like > that manger is blocking those polls. > > All the best, > Florian > > > > > > Christian > > > > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann > > > >>: > > > >     Hi, > > > >     I failed to poll any usage data from our radosgw. I get > > > >     2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager > [-] Polling > >     pollster radosgw.containers.objects in the context of > >     radosgw_300s_pollsters > >     2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager > [-] Prevent > >     pollster radosgw.containers.objects from polling [ >     description=, > >     domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, > > > >     [...] > > > >     id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, > links={u'self': > >     u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on > source > >     radosgw_300s_pollsters anymore!: PollsterPermanentError > > > >     Configurations like: > >     cat polling.yaml > >     --- > >     sources: > >           - name: radosgw_300s_pollsters > >             interval: 300 > >             meters: > >               - radosgw.usage > >               - radosgw.objects > >               - radosgw.objects.size > >               - radosgw.objects.containers > >               - radosgw.containers.objects > >               - radosgw.containers.objects.size > > > > > >     Also tried radosgw.api.requests instead of radowsgw.usage. > > > >     ceilometer.conf > >     [...] 
> >     [service_types] > >     radosgw = object-store > > > >     [rgw_admin_credentials] > >     access_key = xxxxx0Z0xxxxxxxxxxxx > >     secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx > > > >     [rgw_client] > >     implicit_tenants = true > > > >     Endpoints: > >     | xxxxxxx | region | swift        | object-store    | True > | admin > >        | > http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | > >     | xxxxxxx | region | swift        | object-store    | True    | > >     internal > >        | > http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | > >     | xxxxxxx | region | swift        | object-store    | True > | public > >        | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s >    | > > > >     Ceilometer user: > >     { > >           "user_id": "ceilometer", > >           "display_name": "ceilometer", > >           "email": "", > >           "suspended": 0, > >           "max_buckets": 1000, > >           "auid": 0, > >           "subusers": [], > >           "keys": [ > >               { > >                   "user": "ceilometer", > >                   "access_key": "xxxxxxxxxxxxxxxxxx", > >                   "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" > >               } > >           ], > >           "swift_keys": [], > >           "caps": [ > >               { > >                   "type": "buckets", > >                   "perm": "read" > >               }, > >               { > >                   "type": "metadata", > >                   "perm": "read" > >               }, > >               { > >                   "type": "usage", > >                   "perm": "read" > >               }, > >               { > >                   "type": "users", > >                   "perm": "read" > >               } > >           ], > >           "op_mask": "read, write, delete", > >           "default_placement": "", > >           "placement_tags": [], > >           "bucket_quota": { > >               "enabled": false, > >               "check_on_raw": false, > >               "max_size": -1, > >               "max_size_kb": 0, > >               "max_objects": -1 > >           }, > >           "user_quota": { > >               "enabled": false, > >               "check_on_raw": false, > >               "max_size": -1, > >               "max_size_kb": 0, > >               "max_objects": -1 > >           }, > >           "temp_url_keys": [], > >           "type": "rgw" > >     } > > > > > >     radosgw config: > >     [client.rgw.xxxxxxxxxxx] > >     host = somehost > >     rgw frontends = "civetweb port=7480 num_threads=512" > >     rgw num rados handles = 8 > >     rgw thread pool size = 512 > >     rgw cache enabled = true > >     rgw dns name = s3.xxxxxx.xxx > >     rgw enable usage log = true > >     rgw usage log tick interval = 30 > >     rgw realm = public > >     rgw zonegroup = xxx > >     rgw zone = xxxxx > >     rgw resolve cname = False > >     rgw usage log flush threshold = 1024 > >     rgw usage max user shards = 1 > >     rgw usage max shards = 32 > >     rgw_keystone_url = https://keystone.xxxxxxxxxxxxx > >     rgw_keystone_admin_domain = default > >     rgw_keystone_admin_project = service > >     rgw_keystone_admin_user = swift > >     rgw_keystone_admin_password = > >     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx > >     rgw_keystone_accepted_roles = member,_member_,admin > >     rgw_keystone_accepted_admin_roles = admin > >     rgw_keystone_api_version = 3 > >     rgw_keystone_verify_ssl = false > >     
rgw_keystone_implicit_tenants = true > >     rgw_keystone_admin_tenant = default > >     rgw_keystone_revocation_interval = 0 > >     rgw_keystone_token_cache_size = 0 > >     rgw_s3_auth_use_keystone = true > >     rgw_max_attr_size = 1024 > >     rgw_max_attrs_num_in_req = 32 > >     rgw_max_attr_name_len = 64 > >     rgw_swift_account_in_url = true > >     rgw_swift_versioning_enabled = true > >     rgw_enable_apis = s3,swift,swift_auth,admin > >     rgw_swift_enforce_content_length = true > > > > > > > > > >     Any idea whats going on? > > > >     All the best, > >     Florian > > > > > > > > -- > > EveryWare AG > Florian Engelmann > Senior UNIX Systems Engineer > Zurlindenstrasse 52a > CH-8003 Zürich > > tel: +41 44 466 60 00 > fax: +41 44 466 60 10 > mail: mailto:florian.engelmann at everyware.ch > > web: http://www.everyware.ch > > > -- EveryWare AG Florian Engelmann Senior UNIX Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From florian.engelmann at everyware.ch Thu Feb 28 10:58:29 2019 From: florian.engelmann at everyware.ch (Florian Engelmann) Date: Thu, 28 Feb 2019 11:58:29 +0100 Subject: [ceilometer] radosgw pollster In-Reply-To: References: <1551295990725.66562@everyware.ch> <1551296336596.95229@everyware.ch> Message-ID: <50fdbfa4-4562-59fe-cad3-334d395769a6@everyware.ch> Hi Christian, all good! I was not getting any result with "openstack resource list" as we got to many resources. openstack metric resource list --type ceph_account -c id -f value | wc -l gives results!!!! Looks like everything is working as expected! Thank you so much for your help! All the best, Florian Am 2/28/19 um 11:21 AM schrieb Florian Engelmann: > Hi Christian, > > after adding requests-aws to the container the PollsterPermanentError is > gone and the ceilometer polling log looks clean to me: > > 2019-02-28 10:29:46.163 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.containers.objects in the context of > radosgw_300s_pollsters > 2019-02-28 10:29:46.167 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.objects in the context of radosgw_300s_pollsters > 2019-02-28 10:29:46.172 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.objects.size in the context of radosgw_300s_pollsters > 2019-02-28 10:29:46.177 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.objects.containers in the context of > radosgw_300s_pollsters > 2019-02-28 10:29:46.182 24 INFO ceilometer.polling.manager [-] Polling > pollster radosgw.usage in the context of radosgw_300s_pollsters > > > I still do not get any data in gnocchi: > > openstack metric resource list -c type -f value | sort -u > generic > image > instance > instance_disk > instance_network_interface > network > volume > > My RadosGW logs do show the poller requests: > 2019-02-28 10:29:54.767266 7f67a550e700 20 HTTP_ACCEPT=*/* > 2019-02-28 10:29:54.767272 7f67a550e700 20 HTTP_ACCEPT_ENCODING=gzip, > deflate > 2019-02-28 10:29:54.767274 7f67a550e700 20 HTTP_AUTHORIZATION=AWS > 0APxxxxxxxxx:vxxxxxxxICq/qQXxxxxxxxxxuuc= > [...] 
> 2019-02-28 10:29:54.767294 7f67a550e700 20 REMOTE_ADDR=10.xx.xxx.xxx > 2019-02-28 10:29:54.767295 7f67a550e700 20 REQUEST_METHOD=GET > 2019-02-28 10:29:54.767296 7f67a550e700 20 REQUEST_URI=/admin/usage > 2019-02-28 10:29:54.767297 7f67a550e700 20 SCRIPT_URI=/admin/usage > [...] > in_hosted_domain_s3website=0 s->info.domain=rgw.xxxxxx > s->info.request_uri=/admin/usage > 2019-02-28 10:29:54.767479 7f67a550e700 10 handler=16RGWHandler_Usage > 2019-02-28 10:29:54.767491 7f67a550e700  2 req 215268:0.000183::GET > /admin/usage::getting op 0 > 2019-02-28 10:29:54.767495 7f67a550e700 10 op=15RGWOp_Usage_Get > 2019-02-28 10:29:54.767496 7f67a550e700  2 req 215268:0.000196::GET > /admin/usage:get_usage:verifying requester > 2019-02-28 10:29:54.767528 7f67a550e700 20 > rgw::auth::StrategyRegistry::s3_main_strategy_t: trying > rgw::auth::s3::AWSAuthStrategy > 2019-02-28 10:29:54.767530 7f67a550e700 20 > rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::S3AnonymousEngine > 2019-02-28 10:29:54.767558 7f67a550e700 20 > rgw::auth::s3::S3AnonymousEngine denied with reason=-1 > 2019-02-28 10:29:54.767561 7f67a550e700 20 > rgw::auth::s3::AWSAuthStrategy: trying > rgw::auth::s3::AWSv2ExternalAuthStrategy > 2019-02-28 10:29:54.767573 7f67a550e700 20 > rgw::auth::s3::AWSv2ExternalAuthStrategy: trying > rgw::auth::keystone::EC2Engine > 2019-02-28 10:29:54.767602 7f67a550e700 10 get_canon_resource(): > dest=/admin/usage > 2019-02-28 10:29:54.767605 7f67a550e700 10 string_to_sign: > 2019-02-28 10:29:54.767630 7f67a550e700 20 sending request to > https://keystone.xxxxxxxxx/v3/auth/tokens > 2019-02-28 10:29:54.767678 7f67a550e700 20 ssl verification is set to off > 2019-02-28 10:29:55.194088 7f67a550e700 20 sending request to > https://keystone.xxxxxxxx/v3/s3tokens > 2019-02-28 10:29:55.194115 7f67a550e700 20 ssl verification is set to off > 2019-02-28 10:29:55.226841 7f67a550e700 20 > rgw::auth::keystone::EC2Engine denied with reason=-2028 > 2019-02-28 10:29:55.226852 7f67a550e700 20 > rgw::auth::s3::AWSv2ExternalAuthStrategy denied with reason=-2028 > 2019-02-28 10:29:55.226855 7f67a550e700 20 > rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::LocalEngine > 2019-02-28 10:29:55.226878 7f67a550e700 10 get_canon_resource(): > dest=/admin/usage > 2019-02-28 10:29:55.226881 7f67a550e700 10 string_to_sign: > 2019-02-28 10:29:55.226943 7f67a550e700 15 string_to_sign=GET > [...] 
> 2019-02-28 10:29:55.226972 7f67a550e700 15 compare=0 > 2019-02-28 10:29:55.227099 7f67a550e700 20 rgw::auth::s3::LocalEngine > granted access > 2019-02-28 10:29:55.227103 7f67a550e700 20 > rgw::auth::s3::AWSAuthStrategy granted access > 2019-02-28 10:29:55.227106 7f67a550e700  2 req 215268:0.459806::GET > /admin/usage:get_usage:normalizing buckets and tenants > 2019-02-28 10:29:55.227109 7f67a550e700  2 req 215268:0.459809::GET > /admin/usage:get_usage:init permissions > 2019-02-28 10:29:55.227136 7f67a550e700  2 req 215268:0.459826::GET > /admin/usage:get_usage:recalculating target > 2019-02-28 10:29:55.227140 7f67a550e700  2 req 215268:0.459841::GET > /admin/usage:get_usage:reading permissions > 2019-02-28 10:29:55.227142 7f67a550e700  2 req 215268:0.459842::GET > /admin/usage:get_usage:init op > 2019-02-28 10:29:55.227143 7f67a550e700  2 req 215268:0.459844::GET > /admin/usage:get_usage:verifying op mask > 2019-02-28 10:29:55.227145 7f67a550e700 20 required_mask= 0 user.op_mask=7 > 2019-02-28 10:29:55.227146 7f67a550e700  2 req 215268:0.459846::GET > /admin/usage:get_usage:verifying op permissions > 2019-02-28 10:29:55.227148 7f67a550e700  2 req 215268:0.459848::GET > /admin/usage:get_usage:verifying op params > 2019-02-28 10:29:55.227149 7f67a550e700  2 req 215268:0.459850::GET > /admin/usage:get_usage:pre-executing > 2019-02-28 10:29:55.227150 7f67a550e700  2 req 215268:0.459851::GET > /admin/usage:get_usage:executing > 2019-02-28 10:29:55.232830 7f67a550e700  2 req 215268:0.465530::GET > /admin/usage:get_usage:completing > 2019-02-28 10:29:55.232858 7f67a550e700  2 req 215268:0.465559::GET > /admin/usage:get_usage:op status=0 > 2019-02-28 10:29:55.232863 7f67a550e700  2 req 215268:0.465564::GET > /admin/usage:get_usage:http status=200 > 2019-02-28 10:29:55.232865 7f67a550e700  1 ====== req done > req=0x7f67a55080a0 op status=0 http_status=200 ====== > 2019-02-28 10:29:55.232894 7f67a550e700  1 civetweb: 0x55ced7bd2000: > 10.0.81.59 - - [28/Feb/2019:10:29:54 +0100] "GET > /admin/usage?uid=2534a3e876ee41f088098fxxxxxxxxx%242534a3e876ee41f088098f53xxxxxxx > HTTP/1.1" 200 0 - python-requests/2.19.1 > > > Using curl to do the same request gives me the correct usage results: > {"entries":[{"user":"a772e4abxxxxxxxx4559d$a772e4ab888exxxxxxxf4559d","buckets":[{"bucket":"","time":"2019-02-26 > 12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab88xxxxxxx88e4f039b3430d688f4559d","categories":[{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0}]},{"bucket":"-","time":"2019-02-17 > 15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab88xxxxxxxa772e4ab888e4f039b3430d688f4559d","categories":[{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":132,"bytes_received":0,"ops":1,"successful_ops":0}]},{"bucket":"info","time":"2019-02-26 > 12:00:00.000000Z","epoch":1551182400,"owner":"a772e4ab888e4f039xxxxxx72e4ab888e4f039b3430d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3}]},{"bucket":"test","time":"2019-02-17 > 
15:00:00.000000Z","epoch":1550415600,"owner":"a772e4ab8xxxxxxd$a772e4ab888e4f039b3430d688f4559d","categories":[{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"list_bucket","bytes_sent":1368,"bytes_received":0,"ops":18,"successful_ops":18},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}]}]}],"summary":[{"user":"a772e4abxxxx4f0xxx8f4559d$a772e4ab88xxxxxx30d688f4559d","categories":[{"category":"RGWMovedPermanently","bytes_sent":0,"bytes_received":0,"ops":3,"successful_ops":3},{"category":"create_bucket","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":1},{"category":"get_obj","bytes_sent":0,"bytes_received":0,"ops":1,"successful_ops":0},{"category":"list_bucket","bytes_sent":1500,"bytes_received":0,"ops":19,"successful_ops":18},{"category":"list_buckets","bytes_sent":36,"bytes_received":0,"ops":3,"successful_ops":0},{"category":"put_obj","bytes_sent":0,"bytes_received":10,"ops":1,"successful_ops":1}],"total":{"bytes_sent":153* > Connection #0 to host 10.xxx.xxx.xxx left intact > > > My custom archive policy (custom_gnocchi_resources.yaml) looks like: > [...] >   - resource_type: ceph_account >     metrics: >       radosgw.objects: >       radosgw.objects.size: >       radosgw.objects.containers: >       radosgw.api.request: >       radosgw.containers.objects: >       radosgw.containers.objects.size: > [...] > > and the pipeline: > [...] > sources: >     - name: meter_source >       meters: >           - "*" >       sinks: >           - meter_sink > [...] > sinks: >     - name: meter_sink >       transformers: >       publishers: >           - > gnocchi://?resources_definition_file=%2Fetc%2Fceilometer%2Fcustom_gnocchi_resources.yaml > > [...] > > Anything wrong with my archive policy or the pipeline? > > All the best, > Florian > > Am 2/27/19 um 8:38 PM schrieb Engelmann Florian: >> Hi Christian, >> >> >> looks like a hit: >> >> >> https://github.com/openstack/ceilometer/commit/c9eb2d44df7cafde1294123d66445ebef4cfb76d >> >> >> >> You made my day! >> >> >> I will test tomorrow and report back! >> >> ​ >> >> All the best, >> >> Florian >> >> >> ------------------------------------------------------------------------ >> *From:* Engelmann Florian >> *Sent:* Wednesday, February 27, 2019 8:33 PM >> *To:* Christian Zunker >> *Cc:* openstack-discuss at lists.openstack.org >> *Subject:* Re: [ceilometer] radosgw pollster >> >> Hi Christian, >> >> >> thank you for your feedback and help! Permissions are fine as I tried >> to poll the Endpoint successfully with curl and the user (key + >> secret) we created (and is configured in ceilometer.conf). >> >> I saw the requests-aws is used in OSA and it is indeed missing in the >> kolla container (we use "source" not binary). >> >> >> https://github.com/openstack/kolla/blob/master/docker/ceilometer/ceilometer-base/Dockerfile.j2 >> >> >> >> I will build a new ceilometer container including requests-aws >> tomorrow to see if this fixes the problem. >> >> >> All the best, >> >> Florian >> >> >> ------------------------------------------------------------------------ >> *From:* Christian Zunker >> *Sent:* Wednesday, February 27, 2019 9:09 AM >> *To:* Engelmann Florian >> *Cc:* openstack-discuss at lists.openstack.org >> *Subject:* Re: [ceilometer] radosgw pollster >> Hi Florian, >> >> have you tried different permissions for your ceilometer user in radosgw? 
>> According to the docs you need an admin user: >> https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html#ceph-object-storage >> >> Our user has these caps: >> usage=read,write;metadata=read,write;users=read,write;buckets=read,write >> >> We also had to add the requests-aws pip package to query radosgw from >> ceilometer: >> https://docs.openstack.org/openstack-ansible/latest/user/ceph/ceilometer.html >> >> >> Christian >> >> >> Am Di., 26. Feb. 2019 um 13:15 Uhr schrieb Florian Engelmann >> >: >> >>     Hi Christian, >> >>     Am 2/26/19 um 11:00 AM schrieb Christian Zunker: >>      > Hi Florian, >>      > >>      > which version of OpenStack are you using? >>      > The radosgw metric names were different in some versions: >>      > https://bugs.launchpad.net/ceilometer/+bug/1726458 >> >>     we do use Rocky and Ceilometer 11.0.1. I am still lost with that >> error. >>     As far as I am able to understand python it looks like the error is >>     happening in polling.manager line 222: >> >> >> https://github.com/openstack/ceilometer/blob/11.0.1/ceilometer/polling/manager.py#L222 >> >> >>     But I do not understand why. I tried to enable debug logging but the >>     error does not log any additional information. >>     The poller is not even trying to reach/poll our RadosGWs. Looks like >>     that manger is blocking those polls. >> >>     All the best, >>     Florian >> >> >>      > >>      > Christian >>      > >>      > Am Fr., 22. Feb. 2019 um 17:40 Uhr schrieb Florian Engelmann >>      > >     >>     >     >>: >>      > >>      >     Hi, >>      > >>      >     I failed to poll any usage data from our radosgw. I get >>      > >>      >     2019-02-22 17:23:57.461 24 INFO ceilometer.polling.manager >>     [-] Polling >>      >     pollster radosgw.containers.objects in the context of >>      >     radosgw_300s_pollsters >>      >     2019-02-22 17:23:57.462 24 ERROR ceilometer.polling.manager >>     [-] Prevent >>      >     pollster radosgw.containers.objects from polling [>      >     description=, >>      >     domain_id=xx9d9975088a4d93922e1d73c7217b3b, enabled=True, >>      > >>      >     [...] >>      > >>      >     id=xx90a9b1d4be4d75b4bd08ab8107e4ff, is_domain=False, >>     links={u'self': >>      >     u'http://keystone-admin.service.xxxxxxx:35357/v3/projects on >>     source >>      >     radosgw_300s_pollsters anymore!: PollsterPermanentError >>      > >>      >     Configurations like: >>      >     cat polling.yaml >>      >     --- >>      >     sources: >>      >           - name: radosgw_300s_pollsters >>      >             interval: 300 >>      >             meters: >>      >               - radosgw.usage >>      >               - radosgw.objects >>      >               - radosgw.objects.size >>      >               - radosgw.objects.containers >>      >               - radosgw.containers.objects >>      >               - radosgw.containers.objects.size >>      > >>      > >>      >     Also tried radosgw.api.requests instead of radowsgw.usage. >>      > >>      >     ceilometer.conf >>      >     [...] 
>>      >     [service_types] >>      >     radosgw = object-store >>      > >>      >     [rgw_admin_credentials] >>      >     access_key = xxxxx0Z0xxxxxxxxxxxx >>      >     secret_key = xxxxxxxxxxxxlRExxcPxxxxxxoNxxxxxxOxxxx >>      > >>      >     [rgw_client] >>      >     implicit_tenants = true >>      > >>      >     Endpoints: >>      >     | xxxxxxx | region | swift        | object-store    | True >>     | admin >>      >        | >>     http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | >>      >     | xxxxxxx | region | swift        | object-store    | True >>   | >>      >     internal >>      >        | >>     http://rgw.service.internalxxx/swift/v1/AUTH_%(tenant_id)s  | >>      >     | xxxxxxx | region | swift        | object-store    | True >>     | public >>      >        | https://s3.somedomain.com/swift/v1/AUTH_%(tenant_id)s >>         | >>      > >>      >     Ceilometer user: >>      >     { >>      >           "user_id": "ceilometer", >>      >           "display_name": "ceilometer", >>      >           "email": "", >>      >           "suspended": 0, >>      >           "max_buckets": 1000, >>      >           "auid": 0, >>      >           "subusers": [], >>      >           "keys": [ >>      >               { >>      >                   "user": "ceilometer", >>      >                   "access_key": "xxxxxxxxxxxxxxxxxx", >>      >                   "secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxx" >>      >               } >>      >           ], >>      >           "swift_keys": [], >>      >           "caps": [ >>      >               { >>      >                   "type": "buckets", >>      >                   "perm": "read" >>      >               }, >>      >               { >>      >                   "type": "metadata", >>      >                   "perm": "read" >>      >               }, >>      >               { >>      >                   "type": "usage", >>      >                   "perm": "read" >>      >               }, >>      >               { >>      >                   "type": "users", >>      >                   "perm": "read" >>      >               } >>      >           ], >>      >           "op_mask": "read, write, delete", >>      >           "default_placement": "", >>      >           "placement_tags": [], >>      >           "bucket_quota": { >>      >               "enabled": false, >>      >               "check_on_raw": false, >>      >               "max_size": -1, >>      >               "max_size_kb": 0, >>      >               "max_objects": -1 >>      >           }, >>      >           "user_quota": { >>      >               "enabled": false, >>      >               "check_on_raw": false, >>      >               "max_size": -1, >>      >               "max_size_kb": 0, >>      >               "max_objects": -1 >>      >           }, >>      >           "temp_url_keys": [], >>      >           "type": "rgw" >>      >     } >>      > >>      > >>      >     radosgw config: >>      >     [client.rgw.xxxxxxxxxxx] >>      >     host = somehost >>      >     rgw frontends = "civetweb port=7480 num_threads=512" >>      >     rgw num rados handles = 8 >>      >     rgw thread pool size = 512 >>      >     rgw cache enabled = true >>      >     rgw dns name = s3.xxxxxx.xxx >>      >     rgw enable usage log = true >>      >     rgw usage log tick interval = 30 >>      >     rgw realm = public >>      >     rgw zonegroup = xxx >>      >     rgw zone = xxxxx >>      >     rgw resolve cname = False >>      >     rgw usage log flush 
threshold = 1024 >>      >     rgw usage max user shards = 1 >>      >     rgw usage max shards = 32 >>      >     rgw_keystone_url = https://keystone.xxxxxxxxxxxxx >>      >     rgw_keystone_admin_domain = default >>      >     rgw_keystone_admin_project = service >>      >     rgw_keystone_admin_user = swift >>      >     rgw_keystone_admin_password = >>      >     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx >>      >     rgw_keystone_accepted_roles = member,_member_,admin >>      >     rgw_keystone_accepted_admin_roles = admin >>      >     rgw_keystone_api_version = 3 >>      >     rgw_keystone_verify_ssl = false >>      >     rgw_keystone_implicit_tenants = true >>      >     rgw_keystone_admin_tenant = default >>      >     rgw_keystone_revocation_interval = 0 >>      >     rgw_keystone_token_cache_size = 0 >>      >     rgw_s3_auth_use_keystone = true >>      >     rgw_max_attr_size = 1024 >>      >     rgw_max_attrs_num_in_req = 32 >>      >     rgw_max_attr_name_len = 64 >>      >     rgw_swift_account_in_url = true >>      >     rgw_swift_versioning_enabled = true >>      >     rgw_enable_apis = s3,swift,swift_auth,admin >>      >     rgw_swift_enforce_content_length = true >>      > >>      > >>      > >>      > >>      >     Any idea whats going on? >>      > >>      >     All the best, >>      >     Florian >>      > >>      > >>      > >> >>     -- >>     EveryWare AG >>     Florian Engelmann >>     Senior UNIX Systems Engineer >>     Zurlindenstrasse 52a >>     CH-8003 Zürich >> >>     tel: +41 44 466 60 00 >>     fax: +41 44 466 60 10 >>     mail: mailto:florian.engelmann at everyware.ch >>     >>     web: http://www.everyware.ch >> >> >> > -- EveryWare AG Florian Engelmann Senior UNIX Systems Engineer Zurlindenstrasse 52a CH-8003 Zürich tel: +41 44 466 60 00 fax: +41 44 466 60 10 mail: mailto:florian.engelmann at everyware.ch web: http://www.everyware.ch -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From bdobreli at redhat.com Thu Feb 28 11:30:01 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 28 Feb 2019 12:30:01 +0100 Subject: [placement][TripleO][infra] zuul job dependencies for greater good? In-Reply-To: <449d51d5-de35-3e45-cecf-1678a49f9a06@redhat.com> References: <449d51d5-de35-3e45-cecf-1678a49f9a06@redhat.com> Message-ID: <1dff3363-5824-be9b-d981-2432b09138a8@redhat.com> Here is example jobs [0],[1] to illustrate the "middle-ground proposal". As you can see, disregard of the failed tox, a few jobs will be executed: - tripleo-ci-centos-7-undercloud-containers (the base job for UC deployments checking) - tripleo-ci-centos-7-standalone (the base job that emulates overcloud deployments but on all-in-one standalone layout) - tripleo-ci-fedora-28-standalone (the same, but performed for f28 zuul subnodes) So that hopefully still gives a developer some insights and as well prevents the rest of the deemed to fail (or deprecated multinode) jobs from execution and still saves some CI pool resources. To illustrate the further ordering, see [2],[3] that is expected to have the standalone and UC jobs passing. That would in turn cause the update/upgrade and custom standalone scenarios jobs executed *after* that logical build-step, so let's wait and see for results. If you think we should limit the changes scope for dependencies of tox jobs only, let me know and I'll remove those additional inter-jobs dependencies odd the patches. 
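To make the proposed ordering concrete, here is a minimal sketch of the kind of Zuul project-pipeline layout being described. The job names are taken from this thread, the scenario job name is only an illustrative placeholder, and the authoritative definitions live in the linked reviews, so treat this as a sketch rather than the actual patch:

    - project:
        check:
          jobs:
            # "base level" jobs declare no dependencies, so they still run
            # (and report) even when the tox jobs fail
            - openstack-tox-pep8
            - tripleo-ci-centos-7-undercloud-containers
            - tripleo-ci-centos-7-standalone
            - tripleo-ci-fedora-28-standalone
            # deprecated multinode jobs only start after pep8 succeeds
            - tripleo-ci-centos-7-containers-multinode:
                dependencies:
                  - openstack-tox-pep8
            # heavier scenario jobs only start after the base standalone job succeeds
            - tripleo-ci-centos-7-scenario001-standalone:
                dependencies:
                  - tripleo-ci-centos-7-standalone
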
PS. I don't think that reworked layout abuses zuul dependencies feature in any way as we do have some logical state shared here across these consequently executed jobs. That is only the "succeeded-or-not" flag so far :-) Ideally, we'll need some real deployment artifacts shared, like the updated containers registry. [0] https://review.openstack.org/639615 [1] https://review.openstack.org/639721 [2] https://review.openstack.org/639725 [3] https://review.openstack.org/639604 On 27.02.2019 18:31, Bogdan Dobrelya wrote: > I think we can still consider the middle-ground, where only deprecated > multinode jobs, which tripleo infra team is in progress of migrating > into standalone jobs, could be made depending on unit and pep8 checks? > And some basic jobs will keep being depending on nothing. > > I expanded that idea in WIP topic [0]. Commit messages explain how the > ordering was reworked. > > PS. I'm sorry I missed the submitted stats for zuul projects posted > earlier in this topic, I'll take a look into that. > > [0] > https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) > > >> Bogdan Dobrelya writes: >>> On 26.02.2019 17:53, James E. Blair wrote: >>>> Bogdan Dobrelya writes: >>>> >>>>> I attempted [0] to do that for tripleo-ci, but zuul was (and still >>>>> does) complaining for some weird graphs building things :/ >>>>> >>>>> See also the related topic [1] from the past. >>>>> >>>>> [0] https://review.openstack.org/#/c/568543 >>>>> [1] >>>>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127869.html >>>>> >>>> >>>> Thank you for linking to [1].  It's worth re-reading.  Especially the >>>> part at the end. >>>> >>>> -Jim >>>> >>> >> >> Yes, the part at the end is the best indeed. >> I'd amend the time priorities graph though like that: >> >> CPU-time < a developer time < developers time >> >> That means burning some CPU and nodes in a pool for a waste might >> benefit a developer, but saving some CPU and nodes in a pool would >> benefit *developers* in many projects as they'd get the jobs results >> off the waiting check queues faster :) > > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Thu Feb 28 11:46:00 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 28 Feb 2019 12:46:00 +0100 Subject: [placement][TripleO][infra] zuul job dependencies for greater good? In-Reply-To: <1dff3363-5824-be9b-d981-2432b09138a8@redhat.com> References: <449d51d5-de35-3e45-cecf-1678a49f9a06@redhat.com> <1dff3363-5824-be9b-d981-2432b09138a8@redhat.com> Message-ID: <27e943d7-6a0b-076e-b48d-9840d05d5265@redhat.com> On 28.02.2019 12:30, Bogdan Dobrelya wrote: > Here is example jobs [0],[1] to illustrate the "middle-ground proposal". > > As you can see, disregard of the failed tox, a few jobs will be executed: > - tripleo-ci-centos-7-undercloud-containers (the base job for UC > deployments checking) > - tripleo-ci-centos-7-standalone (the base job that emulates overcloud > deployments but on all-in-one standalone layout) > - tripleo-ci-fedora-28-standalone (the same, but performed for f28 zuul > subnodes) what's interesting, it gives even more prominent feedback for the listed "base level" checks, than it normally does. See, a 1h after the last update of [1], I have already been gotten some results this early. And for the 2nd DNM test [0], that took even shorter - only a 30 min. 
> > So that hopefully still gives a developer some insights and as well > prevents the rest of the deemed to fail (or deprecated multinode) jobs > from execution and still saves some CI pool resources. > > To illustrate the further ordering, see [2],[3] that is expected to have > the standalone and UC jobs passing. That would in turn cause the > update/upgrade and custom standalone scenarios jobs executed *after* > that logical build-step, so let's wait and see for results. If you think > we should limit the changes scope for dependencies of tox jobs only, let > me know and I'll remove those additional inter-jobs dependencies odd the > patches. Note that [2],[3] takes longer comparing to [0],[1] as the former jobs are running after the "next level". So we can expect the total time of the full check pipeline will take longer by a 50 minutes or so. And if we moved the update/upgrade jobs to the base level, we'd have results for those jobs always listed disregard of the tox jobs. But for that case, the base later would take a ~2h instead, therefore the total check pipeline delay would be also extended by that value. > > PS. I don't think that reworked layout abuses zuul dependencies feature > in any way as we do have some logical state shared here across these > consequently executed jobs. That is only the "succeeded-or-not" flag so > far :-) Ideally, we'll need some real deployment artifacts shared, like > the updated containers registry. > > [0] https://review.openstack.org/639615 > [1] https://review.openstack.org/639721 > [2] https://review.openstack.org/639725 > [3] https://review.openstack.org/639604 > > > On 27.02.2019 18:31, Bogdan Dobrelya wrote: >> I think we can still consider the middle-ground, where only deprecated >> multinode jobs, which tripleo infra team is in progress of migrating >> into standalone jobs, could be made depending on unit and pep8 checks? >> And some basic jobs will keep being depending on nothing. >> >> I expanded that idea in WIP topic [0]. Commit messages explain how the >> ordering was reworked. >> >> PS. I'm sorry I missed the submitted stats for zuul projects posted >> earlier in this topic, I'll take a look into that. >> >> [0] >> https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) >> >> >>> Bogdan Dobrelya writes: >>>> On 26.02.2019 17:53, James E. Blair wrote: >>>>> Bogdan Dobrelya writes: >>>>> >>>>>> I attempted [0] to do that for tripleo-ci, but zuul was (and still >>>>>> does) complaining for some weird graphs building things :/ >>>>>> >>>>>> See also the related topic [1] from the past. >>>>>> >>>>>> [0] https://review.openstack.org/#/c/568543 >>>>>> [1] >>>>>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127869.html >>>>>> >>>>> >>>>> Thank you for linking to [1].  It's worth re-reading.  Especially the >>>>> part at the end. >>>>> >>>>> -Jim >>>>> >>>> >>> >>> Yes, the part at the end is the best indeed. 
>>> I'd amend the time priorities graph though like that: >>> >>> CPU-time < a developer time < developers time >>> >>> That means burning some CPU and nodes in a pool for a waste might >>> benefit a developer, but saving some CPU and nodes in a pool would >>> benefit *developers* in many projects as they'd get the jobs results >>> off the waiting check queues faster :) >> >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From thierry at openstack.org Thu Feb 28 12:02:45 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Feb 2019 13:02:45 +0100 Subject: [tc][election] campaign question: team approval criteria In-Reply-To: <20190225200327.wt6u37bhpb73r4be@yuggoth.org> References: <20190225200327.wt6u37bhpb73r4be@yuggoth.org> Message-ID: <625ae083-112f-c312-2531-878f3352be7f@openstack.org> Jeremy Stanley wrote: > On 2019-02-25 09:09:59 -0500 (-0500), Doug Hellmann wrote: > [...] >> One of the criteria that caught my eye as especially interesting was >> that a project must complete at least one release before being >> accepted. We've debated that rule in the past, and always come down on >> the side encouraging new projects by accepting them early. I wonder if >> it's time to reconsider that, and perhaps to start thinking hard about >> projects that don't release after they are approved. >> >> Thoughts? > > For me, the key difference is that OpenStack already has clear > release processes outlined which teams are expected to follow for > their deliverables. For confirming a new OIP it's seen as important > that they've worked out what their release process *is* and proven > that they can follow it (this is, perhaps, similar to why the OIP > confirmation criteria mentions other things we don't for new > OpenStack project team acceptance, like vulnerability management and > governance). Yes, that was the main idea behind that "one release" criteria, and I think our automated release processes take care of that for our openstack project teams. Furthermore, some OIPs are formed by merging source code or ideas from multiple parties (the canonical example being Kata containers being merged from Hyper and Intel) -- the task of making it a clear single project (rather than a WIP merge activity) should be long completed by confirmation time. -- Thierry Carrez (ttx) From smooney at redhat.com Thu Feb 28 13:28:15 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 28 Feb 2019 13:28:15 +0000 Subject: [nova] NUMA live migration - mostly how it's tested In-Reply-To: References: Message-ID: On Wed, 2019-02-27 at 21:33 -0500, Artom Lifshitz wrote: > > > On Wed, Feb 27, 2019, 21:27 Matt Riedemann, wrote: > > On 2/27/2019 7:25 PM, Artom Lifshitz wrote: > > > What I've been using for testing is this: [3]. It's a series of > > > patches to whitebox_tempest_plugin, a Tempest plugin used by a bunch > > > of us Nova Red Hatters to automate testing that's outside of Tempest's > > > scope. > > > > And where is that pulling in your nova series of changes and posting > > test results (like a 3rd party CI) so anyone can see it? Or do you mean > > here are tests, but you need to provide your own environment if you want > > to verify the code prior to merging it. > > Sorry, wasn't clear. It's the latter. The test code exists, and has run against my devstack environment with my > patches checked out, but there's no CI or public posting of test results. Getting CI coverage for these NUMA things > (like the old Intel one) is a whole other topic. 
On the CI front, I resolved the nested virt issue on the server I bought to set
up a personal CI for NUMA testing. That set me back a few weeks in setting up
that CI, but I hope to run Artom's whitebox tests, among others, in it at some
point. Vexxhost also provided nested virt on the gate VMs, so I'm going to see
if we can actually create a non-voting job using the ubuntu-bionic-vexxhost
nodeset. If OVH or one of the other providers of CI resources re-enables nested
virt, then we can maybe make that job voting and not need third-party CI
anymore.

> > Can we really not even have functional tests with the fake libvirt
> > driver and fake numa resources to ensure the flow doesn't blow up?
>
> That's something I have to look into. We have live migration functional
> tests, and we have NUMA functional tests, but I'm not sure how we can
> combine the two.

Just as an additional proof point, I am planning to do a bunch of migration and
live migration testing in the next 2-4 weeks. My current backlog, in no
particular order, is:

- SR-IOV migration
- NUMA migration
- vTPM migration
- cross-cell migration
- cross-neutron-backend migration (ovs <-> linuxbridge)
- cross-firewall migration (iptables <-> conntrack) (previously tested and
  worked at the end of Queens)

Narrowing in on the NUMA migration, the current set of test cases I plan to
manually verify is as follows. Note: assume all flavors have 256 MB of RAM and
4 cores unless otherwise stated.

Basic tests:
- pinned guest (hw:cpu_policy=dedicated)
- pinned-isolated guest (hw:cpu_policy=dedicated hw:cpu_thread_policy=isolate)
- pinned-prefer guest (hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer)
- unpinned-single-numa guest (hw:numa_nodes=1)
- unpinned-dual-numa guest (hw:numa_nodes=2)
- unpinned-dual-numa-unbalanced guest (hw:numa_nodes=2 hw:numa_cpu.0=1
  hw:numa_cpu.1=1-3 hw:numa_mem.0=64 hw:numa_mem.1=192)
- unpinned-hugepage-implicit-numa guest (hw:mem_page_size=large)
- unpinned-hugepage-multi-numa guest (hw:mem_page_size=large hw:numa_nodes=2)
- pinned-hugepage-multi-numa guest (hw:mem_page_size=large hw:numa_nodes=2
  hw:cpu_policy=dedicated)
- realtime guest (hw:cpu_policy=dedicated hw:cpu_realtime=yes
  hw:cpu_realtime_mask=^0-1)
- emulator-thread-isolated guest (hw:cpu_policy=dedicated
  hw:emulator_threads_policy=isolate)

Advanced tests (require extra nova.conf changes):
- emulator-thread-shared guest (hw:cpu_policy=dedicated
  hw:emulator_threads_policy=share) -- note: cpu_shared_set configured
- unpinned-single-numa-heterogeneous-host guest (hw:numa_nodes=1) -- note:
  vcpu_pin_set adjusted so that host 1 only has CPUs on NUMA node 1 and host 2
  only has CPUs on NUMA node 2
- super-optimised guest (hw:numa_nodes=2 hw:numa_cpu.0=1 hw:numa_cpu.1=1-3
  hw:numa_mem.0=64 hw:numa_mem.1=192 hw:cpu_realtime=yes
  hw:cpu_realtime_mask=^0-1 hw:emulator_threads_policy=isolate)
- super-optimised guest 2 (hw:numa_nodes=2 hw:numa_cpu.0=1 hw:numa_cpu.1=1-3
  hw:numa_mem.0=64 hw:numa_mem.1=192 hw:cpu_realtime=yes
  hw:cpu_realtime_mask=^0-1 hw:emulator_threads_policy=share)

For each of these tests I'll provide a test-command file with the commands I
used to run the test, and a results file with a summary at the top plus the
XMLs before and after the migration, showing that initially the resources would
conflict on migration, followed by the updated XMLs after the migration. I will
also provide the local.conf for the devstack deployment and some details about
the environment, like the distro/qemu/libvirt versions. Eventually I hope all
those test cases can be added to the whitebox plugin and verified in a CI. We
could also try to validate them in functional tests.
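As a rough illustration of how one of the flavors above could be prepared and exercised on a devstack deployment (the flavor and server names here are invented for the example, and the cirros image and private network are assumed to exist):

    # 4 vCPUs / 256 MB with CPU pinning enabled (the "pinned guest" case above)
    openstack flavor create numa-pinned --vcpus 4 --ram 256 --disk 1
    openstack flavor set numa-pinned --property hw:cpu_policy=dedicated

    # the "pinned-hugepage-multi-numa" case additionally needs hugepages on the hosts
    openstack flavor create numa-pinned-hp --vcpus 4 --ram 256 --disk 1
    openstack flavor set numa-pinned-hp \
      --property hw:cpu_policy=dedicated \
      --property hw:mem_page_size=large \
      --property hw:numa_nodes=2

    # boot a guest and live migrate it between the two hosts
    openstack server create --flavor numa-pinned --image cirros --network private test-1
    nova live-migration test-1
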
I have attached the XML for the pinned guest as an example of what to expect,
but I will be compiling this slowly as I go and will zip everything up in an
email to the list. This will take some time to complete, and honestly I had
planned to do most of this testing after feature freeze, when we can focus on
testing more.

regards
sean

-------------- next part --------------
Before the live migration, VM test-1 is spawned on numa-migration-2 and VM
test-2 is spawned on numa-migration-1; both VMs are pinned to the same cores on
different hosts. After the migration, the migrated VM was updated. (The quoted
XML snippets illustrating the before/after state lost their markup in the
archive; only the value 4096 survives.)

-------------------vm test-1 xml--------------------
[centos at numa-migration-2 devstack]$ sudo virsh dumpxml instance-00000005
[The full libvirt domain XML for instance-00000005 was attached here (UUID
f98cac48-f3a7-450b-b308-72470addf9c5, name test-1, created 2019-02-28 11:55:45,
131072 KiB memory, 4 vCPUs, Nova sysinfo "OpenStack Foundation / OpenStack Nova
18.1.0", machine type hvm, emulator /usr/libexec/qemu-kvm). The XML markup was
stripped by the archive and only the element values survive.]
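To reproduce the kind of before/after comparison described in the note above, a rough sketch follows; the libvirt domain name instance-00000005 and server name test-1 are taken from the example above, and paths and hosts will differ per deployment:

    # on the source compute node, before the migration
    sudo virsh dumpxml instance-00000005 | sed -n '/<cputune>/,/<\/cputune>/p' > cputune-before.xml

    # trigger the live migration from a client node
    nova live-migration test-1

    # on the destination compute node, after the migration
    sudo virsh dumpxml instance-00000005 | sed -n '/<cputune>/,/<\/cputune>/p' > cputune-after.xml

    # the <vcpupin> lines show which host cores each guest vCPU is pinned to
    diff -u cputune-before.xml cputune-after.xml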