From zhengzhenyulixi at gmail.com Fri Feb 1 01:40:57 2019 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Fri, 1 Feb 2019 09:40:57 +0800 Subject: [nova] Per-instance serial number implementation question In-Reply-To: References: <6ac4eac9-18ea-2cca-e2f7-76de8f80835b@gmail.com> Message-ID: > > - Add the 'unique' choice to the [libvirt]/sysinfo_serial config option > and make it the default for new deployments. > - Deprecate the sysinfo_serial config option in Stein and remove it in > Train. This would give at least some window of time for transition > and/or raising a stink if someone thinks we should leave the old > per-host behavior. > - Merge mnaser's patch to expose hostId in the metadata API and config > drive so users still have a way within the guest to determine that > affinity for servers in the same project on the same host. So can I assume this is the decision now? I will update the patch according to this and since I have 3 more days before Chinese new year, I will update again if something new happens On Thu, Jan 31, 2019 at 11:38 PM Mohammed Naser wrote: > On Thu, Jan 31, 2019 at 10:05 AM Matt Riedemann > wrote: > > > > I'm going to top post and try to summarize where we are on this thread > > since I have it on today's nova meeting agenda under the "stuck reviews" > > section. > > > > * The email started as a proposal to change the proposed image property > > and flavor extra spec from a confusing boolean to an enum. > > > > * The question was raised why even have any other option than a unique > > serial number for all instances based on the instance UUID. > > > > * Stephen asked Daniel Berrange (danpb) about the history of the > > [libvirt]/sysinfo_serial configuration option and it sounds like it was > > mostly added as a way to determine guests running on the same host, > > which can already be determined using the hostId parameter in the REST > > API (hostId is the hashed instance.host + instance.project_id so it's > > not exactly the same since it's unique per host and project, not just > > host). However, the hostId is not exposed to the guest in the metadata > > API / config drive - so that could be a regression for applications that > > used this somehow to calculate affinity within the guest based on the > > serial (note that mnaser has a patch to expose hostId in the metadata > > API / config drive [1]). > > > > * danpb said the system.serial we set today should really be > > chassis.serial but that's only available in libvirt >= 4.1.0 and our > > current minimum required version of libvirt is 1.3.1 so setting > > chassis.serial would have to be conditional on the running version of > > libvirt (this is common in that driver). > > > > * Applications that depend on the serial number within the guest were > > not guaranteed it would be unique or not change because migrating the > > guest to another host would change the serial number anyway (that's the > > point of the blueprint - to keep the serial unchanged for each guest), > > so if we just changed to always using unique serial numbers everywhere > > it should probably be OK (and tolerated/expected by guest applications). > > > > * Clearly we would have a release note if we change this behavior but > > keep in mind that end users are not reading release notes, and none of > > this is documented today anyway outside of the [libvirt]/sysinfo_serial > > config option. So a release note would really only help an operator or > > support personal if they get a ticket due to the change in behavior > > (which we probably wouldn't hear about upstream for 2+ years given how > > slow openstack deployments upgrade). > > > > So where are we? If we want the minimal amount of behavior change as > > possible then we just add the new image property / flavor extra spec / > > config option choice, but that arguably adds technical debt and > > virt-driver specific behavior to the API (again, that's not uncommon > > though). > > > > If we want to simplify, we don't add the image property / flavor extra > > spec. But what do we do about the existing config option? > > > > Do we add the 'unique' choice, make it the default, and then deprecate > > the option to at least signal the change is coming in Train? > > > > Or do we just deprecate the option in Stein and completely ignore it, > > always setting the unique serial number as the instance.uuid (and set > > the host serial in chassis.serial if libvirt>=4.1.0)? > > > > In addition, do we expose hostId in the metadata API / config drive via > > [1] so there is a true alternative *from within the guest* to determine > > guest affinity on the same host? I'm personally OK with [1] if there is > > some user documentation around it (as noted in the review). > > > > If we are not going to add the new image property / extra spec, my > > personal choice would be to: > > > > - Add the 'unique' choice to the [libvirt]/sysinfo_serial config option > > and make it the default for new deployments. > > - Deprecate the sysinfo_serial config option in Stein and remove it in > > Train. This would give at least some window of time for transition > > and/or raising a stink if someone thinks we should leave the old > > per-host behavior. > > - Merge mnaser's patch to expose hostId in the metadata API and config > > drive so users still have a way within the guest to determine that > > affinity for servers in the same project on the same host. > > I agree with this for a few reasons > > Assuming that a system serial means that it is colocated with another > machine seems just taking advantage of a bug in the first place. That > is not *documented* behaviour and serials should inherently be unique, > it also exposes information about the host which should not be necessary, > Matt has pointed me to an OSSN about this too: > > https://wiki.openstack.org/wiki/OSSN/OSSN-0028 > > I think we should indeed provide a unique serials (only, ideally) to avoid > having the user shooting themselves in the foot by exposing information > they didn't know they were exposing. > > The patch that I supplied was really meant to make that information > available > in a controllable way, it also provides a much more secure way of exposing > that information because hostId is actually hashed with the tenant ID which > means that one VM from one tenant can't know that it's hosted on the same > VM as another one by usnig the hostId (and with all of the recent processor > issues, this is a big plus in security). > > > > What do others think? > > > > [1] https://review.openstack.org/#/c/577933/ > > > > On 1/24/2019 9:09 AM, Matt Riedemann wrote: > > > The proposal from the spec for this feature was to add an image > property > > > (hw_unique_serial), flavor extra spec (hw:unique_serial) and new > > > "unique" choice to the [libvirt]/sysinfo_serial config option. The > image > > > property and extra spec would be booleans but really only True values > > > make sense and False would be more or less ignored. There were no plans > > > to enforce strict checking of a boolean value, e.g. if the image > > > property was True but the flavor extra spec was False, we would not > > > raise an exception for incompatible values, we would just use OR logic > > > and take the image property True value. > > > > > > The boolean usage proposed is a bit confusing, as can be seen from > > > comments in the spec [1] and the proposed code change [2]. > > > > > > After thinking about this a bit, I'm now thinking maybe we should just > > > use a single-value enum for the image property and flavor extra spec: > > > > > > image: hw_guest_serial=unique > > > flavor: hw:guest_serial=unique > > > > > > If either are set, then we use a unique serial number for the guest. If > > > neither are set, then the serial number is based on the host > > > configuration as it is today. > > > > > > I think that's more clear usage, do others agree? Alex does. I can't > > > think of any cases where users would want hw_unique_serial=False, so > > > this removes that ability and confusion over whether or not to enforce > > > mismatching booleans. > > > > > > [1] > > > > https://review.openstack.org/#/c/612531/2/specs/stein/approved/per-instance-libvirt-sysinfo-serial.rst at 43 > > > > > > [2] > > > > https://review.openstack.org/#/c/619953/7/nova/virt/libvirt/driver.py at 4894 > > > > > > -- > > > > Thanks, > > > > Matt > > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 1 04:33:49 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 1 Feb 2019 15:33:49 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox Message-ID: <20190201043349.GB6183@thor.bakeyournoodle.com> Hi All, During the Berlin forum the idea of running some kinda of bot on the sandbox [1] repo cam up as another way to onboard/encourage contributors. The general idea is that the bot would: 1. Leave a -1 review on 'qualifying'[2] changes along with a request for some small change 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) on the change Showing new contributors approximately what code review looks like[2], and also reduce the human requirements. The OpenStack Upstream Institute would make use of the bot and we'd also use it as an interactive tutorial from the contributors portal. I think this can be done as a 'normal' CI job with the following considerations: * Because we want this service to be reasonably robust we don't want to code or the job definitions to live in repo so I guess they'd need to live in project-config[4]. The bot itself doesn't need to be stateful as gerrit comments / meta-data would act as the store/state sync. * We'd need a gerrit account we can use to lodge these votes, as using 'proposal-bot' or tonyb would be a bad idea. My initial plan would be to develop the bot locally and then migrate it into the opendev infra once we've proven its utility. So thoughts on the design or considerations or should I just code something up and see what it looks like? Yours Tony. [1] http://git.openstack.org/cgit/openstack-dev/sandbox [2] The details of what counts as qualifying can be fleshed out later but there needs to be something so that contributors using the sandbox that don't want to be bothered by the bot wont be. [3] So it would a) be faster than typical and b) not all new changes are greeted with a -1 ;P [4] Another repo would be better as project-config is trusted we can't use Depends-On to test changes to the bot itself, but we need to consider the bots access to secrets -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cjeanner at redhat.com Fri Feb 1 06:44:34 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 1 Feb 2019 07:44:34 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> Message-ID: <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> On 1/30/19 6:26 PM, Fox, Kevin M wrote: > k8s's offical way of dealing with logs is to ensure use of the docker json logger, not the journald one. then all the k8s log shippers have a standard way to gather the logs. Docker supports log rotation and other options too. seems to work out pretty well in practice. sending directly to a file looks a good option indeed. Journald and (r)syslog have both some throttle issue, and it might create some issues in case of service restarting and the like. Pushing logs directly from the container engine (podman does actually support the same options) might be the way to go. As long as we have a common, easy way to output the logs, it's all for the best. The only concern I have with the "not-journald" path is the possible lack of "journalctl -f CONTAINER_NAME=foo". But, compared to the risks exposed in this thread about the possible crash if journald isn't available, and throttling, I think it's fine. Also, a small note regarding "log re-shipping": some people might want to push their logs to some elk/kelk/others - pushing the logs directly as json in plain files might help a log for that, as (r)syslog can then read them (and there, no bottleneck with throttle) and send it in the proper format to the remote logging infra. Soooo... yeah. imho the "direct writing as json" might be the way to go :). > > log shipping with other cri drivers such as containerd seems to work well too. Not tested yet, but at least podman has the option (as a work for this engine integration is done). Cheers, C. > > Thanks, > Kevin > ________________________________________ > From: Sean Mooney [smooney at redhat.com] > Sent: Wednesday, January 30, 2019 8:23 AM > To: Emilien Macchi; Juan Antonio Osorio Robles > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [TripleO] containers logging to stdout > > On Wed, 2019-01-30 at 07:37 -0500, Emilien Macchi wrote: >> >> >> On Wed, Jan 30, 2019 at 5:53 AM Juan Antonio Osorio Robles wrote: >>> Hello! >>> >>> >>> In Queens, the a spec to provide the option to make containers log to >>> standard output was proposed [1] [2]. Some work was done on that side, >>> but due to the lack of traction, it wasn't completed. With the Train >>> release coming, I think it would be a good idea to revive this effort, >>> but make logging to stdout the default in that release. >>> >>> This would allow several benefits: >>> >>> * All logging from the containers would en up in journald; this would >>> make it easier for us to forward the logs, instead of having to keep >>> track of the different directories in /var/log/containers >>> >>> * The journald driver would add metadata to the logs about the container >>> (we would automatically get what container ID issued the logs). >>> >>> * This wouldo also simplify the stacks (removing the Logging nested >>> stack which is present in several templates). >>> >>> * Finally... if at some point we move towards kubernetes (or something >>> in between), managing our containers, it would work with their logging >>> tooling as well >> >> Also, I would add that it'll be aligned with what we did for Paunch-managed containers (with Podman backend) where >> each ("long life") container has its own SystemD service (+ SystemD timer sometimes); so using journald makes total >> sense to me. > one thing to keep in mind is that journald apparently has rate limiting so if you contaiern are very verbose journald > will actully slowdown the execution of the contaienr application as it slows down the rate at wich it can log. > this came form a downstream conversation on irc were they were recommending that such applciation bypass journald and > log to a file for best performacne. >> -- >> Emilien Macchi > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ignaziocassano at gmail.com Fri Feb 1 06:28:00 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 Feb 2019 07:28:00 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: Message-ID: Thanks Goutham. If there are not mantainers for this driver I will switch on ceph and or netapp. I am already using netapp but I would like to export shares from an openstack installation to another. Since these 2 installations do non share any openstack component and have different openstack database, I would like to know it is possible . Regards Ignazio Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi ha scritto: > Hi Ignazio, > > On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > wrote: > > > > Hello All, > > I installed manila on my queens openstack based on centos 7. > > I configured two servers with glusterfs replocation and ganesha nfs. > > I configured my controllers octavia,conf but when I try to create a share > > the manila scheduler logs reports: > > > > Failed to schedule create_share: No valid host was found. Failed to find > a weighted host, the last executed filter was CapabilitiesFilter.: > NoValidHost: No valid host was found. Failed to find a weighted host, the > last executed filter was CapabilitiesFilter. > > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > [req-241d66b3-8004-410b-b000-c6d2d3536e4a 89f76bc5de5545f381da2c10c7df7f15 > 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > > > The scheduler failure points out that you have a mismatch in > expectations (backend capabilities vs share type extra-specs) and > there was no host to schedule your share to. So a few things to check > here: > > - What is the share type you're using? Can you list the share type > extra-specs and confirm that the backend (your GlusterFS storage) > capabilities are appropriate with whatever you've set up as > extra-specs ($ manila pool-list --detail)? > - Is your backend operating correctly? You can list the manila > services ($ manila service-list) and see if the backend is both > 'enabled' and 'up'. If it isn't, there's a good chance there was a > problem with the driver initialization, please enable debug logging, > and look at the log file for the manila-share service, you might see > why and be able to fix it. > > > Please be aware that we're on a look out for a maintainer for the > GlusterFS driver for the past few releases. We're open to bug fixes > and maintenance patches, but there is currently no active maintainer > for this driver. > > > > I did not understand if controllers node must be connected to the > network where shares must be exported for virtual machines, so my glusterfs > are connected on the management network where openstack controllers are > conencted and to the network where virtual machine are connected. > > > > My manila.conf section for glusterfs section is the following > > > > [gluster-manila565] > > driver_handles_share_servers = False > > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > > glusterfs_target = root at 10.102.184.229:/manila565 > > glusterfs_path_to_private_key = /etc/manila/id_rsa > > glusterfs_ganesha_server_username = root > > glusterfs_nfs_server_type = Ganesha > > glusterfs_ganesha_server_ip = 10.102.184.229 > > #glusterfs_servers = root at 10.102.185.19 > > ganesha_config_dir = /etc/ganesha > > > > > > PS > > 10.102.184.0/24 is the network where controlelrs expose endpoint > > > > 10.102.189.0/24 is the shared network inside openstack where virtual > machines are connected. > > > > The gluster servers are connected on both. > > > > > > Any help, please ? > > > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at redhat.com Fri Feb 1 06:58:19 2019 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Fri, 1 Feb 2019 08:58:19 +0200 Subject: [TripleO] containers logging to stdout In-Reply-To: <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <1A3C52DFCD06494D8528644858247BF01C28DCCB@EX10MBOX03.pnnl.gov> <0860f38e-0f10-df12-256a-12df18fe7d9e@redhat.com> Message-ID: On 2/1/19 8:44 AM, Cédric Jeanneret wrote: > On 1/30/19 6:26 PM, Fox, Kevin M wrote: >> k8s's offical way of dealing with logs is to ensure use of the docker json logger, not the journald one. then all the k8s log shippers have a standard way to gather the logs. Docker supports log rotation and other options too. seems to work out pretty well in practice. > sending directly to a file looks a good option indeed. Journald and > (r)syslog have both some throttle issue, and it might create some issues > in case of service restarting and the like. > > Pushing logs directly from the container engine (podman does actually > support the same options) might be the way to go. > > As long as we have a common, easy way to output the logs, it's all for > the best. > The only concern I have with the "not-journald" path is the possible > lack of "journalctl -f CONTAINER_NAME=foo". But, compared to the risks > exposed in this thread about the possible crash if journald isn't > available, and throttling, I think it's fine. > > Also, a small note regarding "log re-shipping": some people might want > to push their logs to some elk/kelk/others - pushing the logs directly > as json in plain files might help a log for that, as (r)syslog can then > read them (and there, no bottleneck with throttle) and send it in the > proper format to the remote logging infra. > > Soooo... yeah. imho the "direct writing as json" might be the way to go :). That is just fine IMO. the runtime engine usually allows you to configure the logging driver (docker in CentOS defaults... or used to default, to journald); but if we find out that file is a better choice; that's entirely fine. The whole point is to let the runtime engine do its job, and handle the logging with the driver. > >> log shipping with other cri drivers such as containerd seems to work well too. > Not tested yet, but at least podman has the option (as a work for this > engine integration is done). > > Cheers, > > C. > >> Thanks, >> Kevin >> ________________________________________ >> From: Sean Mooney [smooney at redhat.com] >> Sent: Wednesday, January 30, 2019 8:23 AM >> To: Emilien Macchi; Juan Antonio Osorio Robles >> Cc: openstack-discuss at lists.openstack.org >> Subject: Re: [TripleO] containers logging to stdout >> >> On Wed, 2019-01-30 at 07:37 -0500, Emilien Macchi wrote: >>> >>> On Wed, Jan 30, 2019 at 5:53 AM Juan Antonio Osorio Robles wrote: >>>> Hello! >>>> >>>> >>>> In Queens, the a spec to provide the option to make containers log to >>>> standard output was proposed [1] [2]. Some work was done on that side, >>>> but due to the lack of traction, it wasn't completed. With the Train >>>> release coming, I think it would be a good idea to revive this effort, >>>> but make logging to stdout the default in that release. >>>> >>>> This would allow several benefits: >>>> >>>> * All logging from the containers would en up in journald; this would >>>> make it easier for us to forward the logs, instead of having to keep >>>> track of the different directories in /var/log/containers >>>> >>>> * The journald driver would add metadata to the logs about the container >>>> (we would automatically get what container ID issued the logs). >>>> >>>> * This wouldo also simplify the stacks (removing the Logging nested >>>> stack which is present in several templates). >>>> >>>> * Finally... if at some point we move towards kubernetes (or something >>>> in between), managing our containers, it would work with their logging >>>> tooling as well >>> Also, I would add that it'll be aligned with what we did for Paunch-managed containers (with Podman backend) where >>> each ("long life") container has its own SystemD service (+ SystemD timer sometimes); so using journald makes total >>> sense to me. >> one thing to keep in mind is that journald apparently has rate limiting so if you contaiern are very verbose journald >> will actully slowdown the execution of the contaienr application as it slows down the rate at wich it can log. >> this came form a downstream conversation on irc were they were recommending that such applciation bypass journald and >> log to a file for best performacne. >>> -- >>> Emilien Macchi >> >> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From smooney at redhat.com Fri Feb 1 11:25:47 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 11:25:47 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201043349.GB6183@thor.bakeyournoodle.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> Message-ID: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> On Fri, 2019-02-01 at 15:33 +1100, Tony Breeds wrote: > Hi All, > During the Berlin forum the idea of running some kinda of bot on the > sandbox [1] repo cam up as another way to onboard/encourage > contributors. > > The general idea is that the bot would: > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > some small change > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > on the change > > Showing new contributors approximately what code review looks like[2], > and also reduce the human requirements. The OpenStack Upstream > Institute would make use of the bot and we'd also use it as an > interactive tutorial from the contributors portal. > > I think this can be done as a 'normal' CI job with the following > considerations: > > * Because we want this service to be reasonably robust we don't want to > code or the job definitions to live in repo so I guess they'd need to > live in project-config[4]. The bot itself doesn't need to be > stateful as gerrit comments / meta-data would act as the store/state > sync. > * We'd need a gerrit account we can use to lodge these votes, as using > 'proposal-bot' or tonyb would be a bad idea. do you need an actual bot why not just have a job defiend in the sandbox repo itself that runs say pep8 or some simple test like check the commit message for Close-Bug: or somting like that. i noticed that if you are modifying zuul jobs and have a syntax error we actully comment on the patch to say where it is. like this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 so you could just develop a custom job that ran in the a seperate pipline and set the sucess action to Code-Review: +2 an failure to Code-Review: -1 the authour could then add the second +2 and +w to complete the normal workflow. as far as i know the sandbox repo allowas all users to +2 +w correct? > > My initial plan would be to develop the bot locally and then migrate it > into the opendev infra once we've proven its utility. > > So thoughts on the design or considerations or should I just code > something up and see what it looks like? > > Yours Tony. > > [1] http://git.openstack.org/cgit/openstack-dev/sandbox > [2] The details of what counts as qualifying can be fleshed out later > but there needs to be something so that contributors using the sandbox > that don't want to be bothered by the bot wont be. > [3] So it would a) be faster than typical and b) not all new changes are > greeted with a -1 ;P > [4] Another repo would be better as project-config is trusted we can't > use Depends-On to test changes to the bot itself, but we need to > consider the bots access to secrets From thierry at openstack.org Fri Feb 1 11:49:19 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 1 Feb 2019 12:49:19 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> Message-ID: Lance Bragstad wrote: > [..] > Outside of having a formal name, do we expect the "pop-up" teams to > include processes that make what we went through easier? Ultimately, we > still had to self-organize and do a bunch of socializing to make progress. I think being listed as a pop-up team would definitely facilitate getting mentioned in TC reports, community newsletters or other high-vsibility community communications. It would help getting space to meet at PTGs, too. None of those things were impossible before... but they were certainly easier to achieve for people with name-recognition or the right connections. It was also easier for things to slip between the cracks. I agree that we should consider adding processes that would facilitate going through the steps you described... But I don't really want this to become a bureaucratic nightmare hindering volunteers stepping up to get things done. So it's a thin line to walk on :) -- Thierry Carrez (ttx) From alfredo.deluca at gmail.com Fri Feb 1 09:20:50 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Fri, 1 Feb 2019 10:20:50 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: thanks Feilong, clemens et all. I going to have a look later on today and see what I can do and see. Just a question: Does the kube master need internet access to download stuff or not? Cheers On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang wrote: > I'm echoing Von's comments. > > From the log of cloud-init-output.log, you should be able to see below > error: > > *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 > +0000. Up 76.51 seconds.* > *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running > /var/lib/cloud/instance/scripts/part-011 [1]* > *+ _prefix=docker.io/openstackmagnum/ * > *+ atomic install --storage ostree --system --system-package no --set > REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name > heat-container-agent > docker.io/openstackmagnum/heat-container-agent:queens-stable > * > *The docker daemon does not appear to be running.* > *+ systemctl start heat-container-agent* > *Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found.* > *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running > /var/lib/cloud/instance/scripts/part-013 [5]* > > Then please go to /var/lib/cloud/instances//scripts to find > the script 011 and 013 to run it manually to get the root cause. And > welcome to pop up into #openstack-containers irc channel. > > > > On 30/01/19 11:43 PM, Clemens Hardewig wrote: > > Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 > part of the config script finishes with error. Check why. > > Von meinem iPhone gesendet > > Am 30.01.2019 um 10:11 schrieb Alfredo De Luca : > > here are also the logs for the cloud init logs from the k8s master.... > > > > On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: > >> >> In the meantime this is my cluster >> template >> >> >> >> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca >> wrote: >> >>> hi Clemens and Ignazio. thanks for your support. >>> it must be network related but I don't do something special apparently >>> to create a simple k8s cluster. >>> I ll post later on configurations and logs as you Clemens suggested. >>> >>> >>> Cheers >>> >>> >>> >>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>> wrote: >>> >>>> … an more important: check the other log cloud-init.log for error >>>> messages (not only cloud-init-output.log) >>>> >>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the logs >>>> on the kube master keep saying the following >>>> >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> >>>> Not sure what to do. >>>> My configuration is ... >>>> eth0 - 10.1.8.113 >>>> >>>> But the openstack configration in terms of networkin is the default >>>> from ansible-openstack which is 172.29.236.100/22 >>>> >>>> Maybe that's the problem? >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello Alfredo, >>>>> your external network is using proxy ? >>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>> must setup no proxy for 127.0.0.1 >>>>> Ignazio >>>>> >>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>> clemens.hardewig at crandale.de> ha scritto: >>>>> >>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>> remember-Look into both >>>>>> >>>>>> Br c >>>>>> >>>>>> Von meinem iPhone gesendet >>>>>> >>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> thanks Clemens. >>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>> moment is doing the following.... >>>>>> >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Network ....could be but not sure where to look at >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> wrote: >>>>>> >>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>> So, log files are the first step you could dig into... >>>>>>> Br c >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Hi all. >>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>> >>>>>>> >>>>>>> kube_masters >>>>>>> >>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>> >>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>> seems to be up.....at least as VM. >>>>>>> >>>>>>> any idea? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> > > -- > *Alfredo* > > > > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 1 12:34:20 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Feb 2019 12:34:20 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> Message-ID: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > do you need an actual bot > why not just have a job defiend in the sandbox repo itself that > runs say pep8 or some simple test like check the commit message > for Close-Bug: or somting like that. I think that's basically what he was suggesting: a Zuul job which votes on (some) changes to the openstack/sandbox repository. Some challenges there... first, you'd probably want credentials set as Zuul secrets, but in-repository secrets can only be used by jobs in safe "post-review" pipelines (gate, promote, post, release...) to prevent leakage through speculative execution of changes to those job definitions. The workaround would be to place the secrets and any playbooks which use them into a trusted config repository such as openstack-infra/project-config so they can be safely used in "pre-review" pipelines like check. > i noticed that if you are modifying zuul jobs and have a syntax > error we actully comment on the patch to say where it is. like > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > so you could just develop a custom job that ran in the a seperate > pipline and set the sucess action to Code-Review: +2 an failure to > Code-Review: -1 [...] It would be a little weird to have those code review votes showing up for the Zuul account and might further confuse students. Also, what you describe would require a custom pipeline definition as those behaviors apply to pipelines, not to jobs. I think Tony's suggestion of doing this as a job with custom credentials to log into Gerrit and leave code review votes is probably the most workable and least confusing solution, but I also think a bulk of that job definition will end up having to live outside the sandbox repo for logistical reasons described above. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rfolco at redhat.com Fri Feb 1 12:38:40 2019 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 1 Feb 2019 10:38:40 -0200 Subject: [openstack-dev][tripleo] TripleO CI Summary: Sprint 25 Message-ID: Greetings, The TripleO CI team has just completed Sprint 25 / Unified Sprint 4 (Jan 10 thru Jan 30). The following is a summary of completed work during this sprint cycle: - Setup a Fedora-28 promotion pipeline based on the current CentOS-7 pipeline. The Fedora28 pipeline is expected not to work atm, updates from the DF are required and will be pulled in Unified Sprint 5 (Sprint 26). - Completed transition from multinode scenarios (1-4) to standalone across all TripleO projects. Standalone scenarios (1-4) have been fixed with missing services and are now voting jobs. - Continued work on our next-gen upstream TripleO CI job reproducer. Both cloud and libvirt based deployments are working but not fully merged. - Enabled CI on the new openstack-virtual-baremetal repo. The integration CI uses the same standard job as TripleO third party e.g. https://review.openstack.org/#/c/633681/. - Started moving RDO Phase 2 jobs to upstream tripleo by triggering master jobs on tripleo-ci-testing hash. The planned work for the next sprint [1] extends work on previous sprint, which includes: - Add a check job for containers build on Fedora 28 using the new tripleo-build-containers playbook. Update the promotion pipeline jobs to use the same workflow for building containers on CentOS 7 and Fedora 28. - Convert scenarios (9 and 12) from multinode to singlenode standalone. This work will enable upstream TLS CI and testing. - Improve usability of Zuul container reproducer with launcher and user documentation to merge a MVP. - Complete support of additional OVB node in TripleO jobs. - Implement a FreeIPA deployment via CI tooling (tripleo-quickstart / tripleo-quickstart-extras). The Ruck and Rover for this sprint are Felix Quique (quiquell) and Chandan Kumar (chkumar). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Notes are recorded on etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-5 [2] https://review.rdoproject.org/etherpad/p/ruckrover-sprint26 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Feb 1 12:54:32 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 12:54:32 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: On Fri, 2019-02-01 at 12:34 +0000, Jeremy Stanley wrote: > On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > > do you need an actual bot > > why not just have a job defiend in the sandbox repo itself that > > runs say pep8 or some simple test like check the commit message > > for Close-Bug: or somting like that. > > I think that's basically what he was suggesting: a Zuul job which > votes on (some) changes to the openstack/sandbox repository. > > Some challenges there... first, you'd probably want credentials set > as Zuul secrets, but in-repository secrets can only be used by jobs > in safe "post-review" pipelines (gate, promote, post, release...) to > prevent leakage through speculative execution of changes to those > job definitions. The workaround would be to place the secrets and > any playbooks which use them into a trusted config repository such > as openstack-infra/project-config so they can be safely used in > "pre-review" pipelines like check. > > > i noticed that if you are modifying zuul jobs and have a syntax > > error we actully comment on the patch to say where it is. like > > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > > > so you could just develop a custom job that ran in the a seperate > > pipline and set the sucess action to Code-Review: +2 an failure to > > Code-Review: -1 > > [...] > > It would be a little weird to have those code review votes showing > up for the Zuul account and might further confuse students. Also, > what you describe would require a custom pipeline definition as > those behaviors apply to pipelines, not to jobs. yes i was suggsting a custom pipeline. > > I think Tony's suggestion of doing this as a job with custom > credentials to log into Gerrit and leave code review votes is > probably the most workable and least confusing solution, but I also > think a bulk of that job definition will end up having to live > outside the sandbox repo for logistical reasons described above. no disagreement that that might be a better path. when i hear both i think some long lived thing like an irc bot that would presumably have to listen to the event queue. so i was just wondering if we could avoid having to wite an acutal "bot" application and just have zuul jobs do it instead. From fungi at yuggoth.org Fri Feb 1 13:11:49 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Feb 2019 13:11:49 +0000 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: <20190201131149.th4rqgej2tmwnicp@yuggoth.org> On 2019-02-01 12:54:32 +0000 (+0000), Sean Mooney wrote: [...] > when i hear bot i think some long lived thing like an irc bot that > would presumably have to listen to the event queue. so i was just > wondering if we could avoid having to wite an acutal "bot" > application and just have zuul jobs do it instead. Yes, we have a number of stateless/momentary processes like Zuul jobs and Gerrit hook scripts which get confusingly referred to as "bots," so I've learned to stop making such assumptions where that term is bandied about. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Fri Feb 1 13:13:43 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 01 Feb 2019 14:13:43 +0100 Subject: [dev][keystone] Keystone Team Update - Week of 28 January 2019 Message-ID: <1549026823.2932754.1648592648.39A7D7DD@webmail.messagingengine.com> # Keystone Team Update - Week of 28 January 2019 ## News ### JWS Key Rotation Since JSON Web Tokens are asymmetrically signed and not encrypted, we discussed whether we needed to implement the full rotation procedure that we have for fernet tokens and came to the conclusion that probably not[1][2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-01-30.log.html#t2019-01-30T14:29:39 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-01-30.log.html#t2019-01-30T22:40:50 ### Alembic Migration Vishakha reminded us that most projects are moving away from sqlalchemy-migrate to Alembic but that we hadn't done so yet[3]. In fact we already have a spec published for it[4] but we need someone to do the work. Now might be a good time to revive our rolling upgrade testing and revisit how we manage upgrades and migrations. [3] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-01-29-16.00.log.html#l-65 [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/backlog/alembic.html ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 17 changes this week. Among these were changes introducing the JWS token functionality. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 75 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 2 new bugs and closed 6. Bugs opened (2) Bug #1813926 (keystone:Undecided) opened by Shrey bhatnagar https://bugs.launchpad.net/keystone/+bug/1813926 Bug #1813739 (keystonemiddleware:Undecided) opened by Yang Youseok https://bugs.launchpad.net/keystonemiddleware/+bug/1813739 Bugs closed (2) Bug #1805817 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1805817 Bug #1813926 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1813926 Bugs fixed (4) Bug #1813085 (keystone:High) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1813085 Bug #1798184 (keystone:Medium) fixed by Corey Bryant https://bugs.launchpad.net/keystone/+bug/1798184 Bug #1804520 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804520 Bug #1798184 (ldappool:Undecided) fixed by no one https://bugs.launchpad.net/ldappool/+bug/1798184 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html This week is the feature proposal freeze, so code implementing specs should be available for review by now. ## Shout-outs Congratulations and thank you to our Outreachy intern Erus for getting CentOS supported in the keystone devstack plugin! Great work! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From openstack at fried.cc Fri Feb 1 14:25:03 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 1 Feb 2019 08:25:03 -0600 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201043349.GB6183@thor.bakeyournoodle.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> Message-ID: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Tony- Thanks for following up on this! > The general idea is that the bot would: > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > some small change As I mentioned in the room, to give a realistic experience the bot should wait two or three weeks before tendering its -1. I kid (in case that wasn't clear). > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > on the change If you're compiling a list of eventual features for the bot, another one that could be neat is, after the second patch set, the bot merges a change that creates a merge conflict on the student's patch, which they then have to go resolve. Also, cross-referencing [1], it might be nice to update that tutorial at some point to use the sandbox repo instead of nova. That could be done once we have bot action so said action could be incorporated into the tutorial flow. > [2] The details of what counts as qualifying can be fleshed out later > but there needs to be something so that contributors using the > sandbox that don't want to be bothered by the bot wont be. Yeah, I had been assuming it would be some tag in the commit message. If we ultimately enact different flows of varying complexity, the tag syntax could be enriched so students in different courses/grades could get different experiences. For example: Bot-Reviewer: or Bot-Reviewer: Level 2 or Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 The possibilities are endless :P -efried [1] https://review.openstack.org/#/c/634333/ From sean.mcginnis at gmx.com Fri Feb 1 14:55:53 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 Feb 2019 08:55:53 -0600 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> Message-ID: <20190201145553.GA5625@sm-workstation> On Fri, Feb 01, 2019 at 12:49:19PM +0100, Thierry Carrez wrote: > Lance Bragstad wrote: > > [..] > > Outside of having a formal name, do we expect the "pop-up" teams to > > include processes that make what we went through easier? Ultimately, we > > still had to self-organize and do a bunch of socializing to make progress. > > I think being listed as a pop-up team would definitely facilitate > getting mentioned in TC reports, community newsletters or other > high-vsibility community communications. It would help getting space to > meet at PTGs, too. > I guess this is the main value I see from this proposal. If it helps with visibility and communications around the effort then it does add some value to give them an official name. I don't think it changes much else. Those working in the group will still need to socialize the changes they would like to make, get buy-in from the project teams affected that the design approach is good, and find enough folks interested in the changes to drive it forward and propose the patches and do the other work needed to get things to happen. We can try looking at processes to help support that. But ultimately, as with most open source projects, I think it comes down to having enough people interested enough to get the work done. Sean From lars at redhat.com Fri Feb 1 15:20:35 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 10:20:35 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> Message-ID: <20190201152035.bfw2bbg27hswqhbd@redhat.com> On Thu, Jan 31, 2019 at 10:58:58AM +0000, Pierre Riteau wrote: > > This would require Ironic to support multi-tenancy first, right? > > Yes, assuming this would be available as per your initial message. > Although technically you could use the Blazar API as a wrapper to > provide the multi-tenancy, it would require duplicating a lot of the > Ironic API into Blazar, so I wouldn't recommend this approach. I think that it would be best to implement the multi-tenenacy at a lower level than Blazar. Our thought was to prototype this by putting multi-tenancy and the related access control logic into a proxy service that sits between Ironic and the end user, although that still suffers from the same problem of needing the shim service to be aware of the much of the ironic API. Ultimately it would be great to see Ironic develop native support multi-tenant operation. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From lars at redhat.com Fri Feb 1 15:26:52 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 10:26:52 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> Message-ID: <20190201152652.cnudbniuraiflybj@redhat.com> On Thu, Jan 31, 2019 at 12:09:07PM +0100, Dmitry Tantsur wrote: > Some first steps have been done: > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ownership-field.html. > We need someone to drive the futher design and implementation > though. That spec seems to be for a strictly informational field. Reading through it, I guess it's because doing something like this... openstack baremetal node set --property owner=lars ...leads to sub-optimal performance when trying to filter a large number of hosts. I see that it's merged already, so I guess this is commenting-after-the-fact, but that seems like the wrong path to follow: I can see properties like "the contract id under which this system was purchased" being as or more important than "owner" from a large business perspective, so making it easier to filter by property on the server side would seem to be a better solution. Or implement full multi-tenancy so that "owner" is more than simply informational, of course :). -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From eblock at nde.ag Fri Feb 1 15:27:01 2019 From: eblock at nde.ag (Eugen Block) Date: Fri, 01 Feb 2019 15:27:01 +0000 Subject: [Openstack] [Nova][Glance] Nova imports flat images from base file despite ceph backend In-Reply-To: References: <20180928115051.Horde.ZC_55UzSXeK4hiOjJt6tajA@webmail.nde.ag> <20180928125224.Horde.33aqtdk0B9Ncylg-zxjA5to@webmail.nde.ag> <9F3C86CE-862D-469A-AD79-3F334CD5DB41@enter.eu> <20181004124417.Horde.py2wEG4JmO1oFXbjX5u1uw3@webmail.nde.ag> <20181009080101.Horde.---iO9LIrKkWvTsNJwWk_Mj@webmail.nde.ag> <679352a8-c082-d851-d8a5-ea7b2348b7d3@gmail.com> <20181012215027.Horde.t5xm_KfkoEE4YEnrewHQZPG@webmail.nde.ag> <9df7167b-ea3b-51d6-9fad-7c9298caa7be@gmail.com> <72242CC2-621E-4037-A8F0-8AE56C4A6F36@italy1.com> Message-ID: <20190201152701.Horde.Qt9AVNDrBgrTv9KJLB6WOBX@webmail.nde.ag> Hi, I'd like to share that I found the solution to my problem in [1]. It was the config option "cache_images" in nova that is set to "all" per default. Despite changing the glance image properties of my images to raw it didn't prevent nova from downloading a local copy to /var/lib/nova/instances/_base. Setting "cache_images = none" disables the nova image cache, and after deleting all cache files in _base a new instance is not flat anymore but a copy-on-write clone like it's supposed to be. Sorry for the noise in this thread. :-) Have a nice weekend! Eugen [1] https://ask.openstack.org/en/question/79843/prefetched-and-cached-images-in-glance/ Zitat von melanie witt : > On Fri, 12 Oct 2018 20:06:04 -0700, Remo Mattei wrote: >> I do not have it handy now but you can verify that the image is >> indeed raw or qcow2 >> >> As soon as I get home I will dig the command and pass it on. I have >> seen where images have extensions thinking it is raw and it is not. > > You could try 'qemu-img info ' and get output like this, > notice "file format": > > $ qemu-img info test.vmdk > (VMDK) image open: flags=0x2 filename=test.vmdk > image: test.vmdk > file format: vmdk > virtual size: 20M (20971520 bytes) > disk size: 17M > > [1] https://en.wikibooks.org/wiki/QEMU/Images#Getting_information > > -melanie > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From lars at redhat.com Fri Feb 1 17:09:53 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 1 Feb 2019 12:09:53 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190130152604.ik7zi2w7hrpabahd@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: On Wed, Jan 30, 2019 at 10:26:04AM -0500, Lars Kellogg-Stedman wrote: > Howdy. > > I'm working with a group of people who are interested in enabling some > form of baremetal leasing/reservations using Ironic... Hey everyone, Thanks for the feedback! Based on the what I've heard so far, I'm beginning to think our best course of action is: 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a shim service that sits between Ironic and the client. 2. Implement a Blazar plugin that is able to talk to whichever service in (1) is appropriate. 3. Work with Blazar developers to implement any lease logic that we think is necessary. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From smooney at redhat.com Fri Feb 1 18:16:42 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Feb 2019 18:16:42 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: On Fri, 2019-02-01 at 12:09 -0500, Lars Kellogg-Stedman wrote: > On Wed, Jan 30, 2019 at 10:26:04AM -0500, Lars Kellogg-Stedman wrote: > > Howdy. > > > > I'm working with a group of people who are interested in enabling some > > form of baremetal leasing/reservations using Ironic... > > Hey everyone, > > Thanks for the feedback! Based on the what I've heard so far, I'm > beginning to think our best course of action is: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > shim service that sits between Ironic and the client. that shim service could be nova, which already has multi tenancy. > > 2. Implement a Blazar plugin that is able to talk to whichever service > in (1) is appropriate. and nova is supported by blazar > > 3. Work with Blazar developers to implement any lease logic that we > think is necessary. +1 by they im sure there is a reason why you dont want to have blazar drive nova and nova dirve ironic but it seam like all the fucntionality would already be there in that case. > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/m/ | > From ignaziocassano at gmail.com Fri Feb 1 12:21:56 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 1 Feb 2019 13:21:56 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Yes, it needs Internet access. Ignazio Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca ha scritto: > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? >>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... >>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ing.gloriapalmagonzalez at gmail.com Fri Feb 1 17:54:28 2019 From: ing.gloriapalmagonzalez at gmail.com (=?UTF-8?Q?Gloria_Palma_Gonz=C3=A1lez?=) Date: Fri, 1 Feb 2019 11:54:28 -0600 Subject: [openstack-community] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> Message-ID: Done! Thanks! El jue., 31 ene. 2019 a las 12:36, Ashlee Ferguson () escribió: > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is > open! > > You can VOTE HERE > , but > what does that mean? > > Now that the Call for Presentations has closed, all submissions are > available for community vote and input. After community voting closes, the > volunteer Programming Committee members will receive the presentations to > review and determine the final selections for Summit schedule. While > community votes are meant to help inform the decision, Programming > Committee members are expected to exercise judgment in their area of > expertise and help ensure diversity of sessions and speakers. View full > details of the session selection process here > > . > > In order to vote, you need an OSF community membership. If you do not have > an account, please create one by going to openstack.org/join. If you need > to reset your password, you can do that here > . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, > February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all > Summit-related information. > > REGISTER > Register for the Summit > before prices > increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information > > about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your > applications > by > 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org > . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -- Gloria Palma González GloriaPG | @GloriaPalmaGlez -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbingham at godaddy.com Fri Feb 1 18:16:10 2019 From: dbingham at godaddy.com (David G. Bingham) Date: Fri, 1 Feb 2019 18:16:10 +0000 Subject: [neutron] Multi-segment per host support for routed networks Message-ID: <7D8DEE81-6D5F-4424-9482-12C80A5C15DA@godaddy.com> Neutron land, Problem: Neutron currently only allows a single network segment per host. This becomes a problem when networking teams want to limit the number of IPs it supports on a segment. This means that at times the number of IPs available to the host is the limiting factor for the number of instances we can deploy on a host. Ref: https://bugs.launchpad.net/neutron/+bug/1764738 Ongoing Work: We are excited in our work add "multi-segment support for routed networks". We currently have a proof of concept here https://review.openstack.org/#/c/623115 that for routed networks effectively: * Removes validation preventing multiple segments. * Injects segment_id into fixed IP records. * Uses the segment_id when creating a bridge (rather than network_id). In effect, it gives each segment its own bridge. It works pretty well for new networks and deployments. For existing routed networks, however, it breaks networking. Please use *caution* if you decide to try it. TODOs: Things TODO before this before it is fully baked: * Need to add code to handle ensuring bridges are also updated/deleted using the segment_id (rather than network_id). * Need to add something (a feature flag?) that prevents this from breaking routed networks when a cloud admin updates to master and is configured for routed networks. * Need to create checker and upgrade migration code that will convert existing bridges from network_id based to segment_id based (ideally live or with little network traffic downtime). Once converted, the feature flag could enable the feature and start using the new code. Need: 1. How does one go about adding a migration tool? Maybe some examples? 2. Will nova need to be notified/upgraded to have bridge related files updated? 3. Is there a way to migrate without (or minimal) downtime? 4. How to repeatably test this migration code? Grenade? Looking for any ideas that can keep this moving :) Thanks a ton, David Bingham (wwriverrat on irc) Kris Lindgren (klindgren on irc) Cloud Engineers at GoDaddy From tony at bakeyournoodle.com Fri Feb 1 23:51:44 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 10:51:44 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> Message-ID: <20190201235143.GC6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 11:25:47AM +0000, Sean Mooney wrote: > On Fri, 2019-02-01 at 15:33 +1100, Tony Breeds wrote: > > Hi All, > > During the Berlin forum the idea of running some kinda of bot on the > > sandbox [1] repo cam up as another way to onboard/encourage > > contributors. > > > > The general idea is that the bot would: > > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > > some small change > > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > > on the change > > > > Showing new contributors approximately what code review looks like[2], > > and also reduce the human requirements. The OpenStack Upstream > > Institute would make use of the bot and we'd also use it as an > > interactive tutorial from the contributors portal. > > > > I think this can be done as a 'normal' CI job with the following > > considerations: > > > > * Because we want this service to be reasonably robust we don't want to > > code or the job definitions to live in repo so I guess they'd need to > > live in project-config[4]. The bot itself doesn't need to be > > stateful as gerrit comments / meta-data would act as the store/state > > sync. > > * We'd need a gerrit account we can use to lodge these votes, as using > > 'proposal-bot' or tonyb would be a bad idea. > do you need an actual bot > why not just have a job defiend in the sandbox repo itself that runs say > pep8 or some simple test like check the commit message for Close-Bug: or somting like > that. Yup sorry for using the overloaded term 'Bot' what you describe is what I was trying to suggest. > i noticed that if you are modifying zuul jobs and have a syntax error > we actully comment on the patch to say where it is. > like this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 Yup. > so you could just develop a custom job that ran in the a seperate pipline and > set the sucess action to Code-Review: +2 an failure to Code-Review: -1 > > the authour could then add the second +2 and +w to complete the normal workflow. > as far as i know the sandbox repo allowas all users to +2 +w correct? Correct. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Feb 1 23:55:20 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 10:55:20 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <3d0f0b2890ecdb480a10a812a6f07630d81f0668.camel@redhat.com> <20190201123420.sjhvwuwjxbyvru3x@yuggoth.org> Message-ID: <20190201235520.GD6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 12:34:20PM +0000, Jeremy Stanley wrote: > On 2019-02-01 11:25:47 +0000 (+0000), Sean Mooney wrote: > > do you need an actual bot > > why not just have a job defiend in the sandbox repo itself that > > runs say pep8 or some simple test like check the commit message > > for Close-Bug: or somting like that. > > I think that's basically what he was suggesting: a Zuul job which > votes on (some) changes to the openstack/sandbox repository. > > Some challenges there... first, you'd probably want credentials set > as Zuul secrets, but in-repository secrets can only be used by jobs > in safe "post-review" pipelines (gate, promote, post, release...) to > prevent leakage through speculative execution of changes to those > job definitions. The workaround would be to place the secrets and > any playbooks which use them into a trusted config repository such > as openstack-infra/project-config so they can be safely used in > "pre-review" pipelines like check. Yup that was my plan. It also means that new contributors can't accidentallt break the bot :) > > > i noticed that if you are modifying zuul jobs and have a syntax > > error we actully comment on the patch to say where it is. like > > this https://review.openstack.org/#/c/632484/2/.zuul.yaml at 31 > > > > so you could just develop a custom job that ran in the a seperate > > pipline and set the sucess action to Code-Review: +2 an failure to > > Code-Review: -1 > [...] > > It would be a little weird to have those code review votes showing > up for the Zuul account and might further confuse students. Also, > what you describe would require a custom pipeline definition as > those behaviors apply to pipelines, not to jobs. > > I think Tony's suggestion of doing this as a job with custom > credentials to log into Gerrit and leave code review votes is > probably the most workable and least confusing solution, but I also > think a bulk of that job definition will end up having to live > outside the sandbox repo for logistical reasons described above. Cool. There clearly isn't a rush on this but it would be really good to have it in place before the Denver summit. Can someone that knows how either create the gerrit user and zuul secrets or point me at how to do it. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Sat Feb 2 00:01:39 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 2 Feb 2019 11:01:39 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: <20190202000139.GE6183@thor.bakeyournoodle.com> On Fri, Feb 01, 2019 at 08:25:03AM -0600, Eric Fried wrote: > Yeah, I had been assuming it would be some tag in the commit message. If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 Something like that would work well. A nice thing about it is it begins the process of teaching about other tags we but in commit messages. > The possibilities are endless :P :) Of course it should be Auto-Bot[1] instead of Bot-Reviewer ;P Yours Tony. [1] The bike shed it pink! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From miguel at mlavalle.com Sat Feb 2 01:06:42 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 1 Feb 2019 19:06:42 -0600 Subject: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode In-Reply-To: References: Message-ID: Hi Igor, Please see my comments in-line below On Tue, Jan 29, 2019 at 1:26 AM Duarte Cardoso, Igor < igor.duarte.cardoso at intel.com> wrote: > Hi Neutron, > > > > I've been internally collaborating on the ``dvr_bridge`` L3 agent mode > [1][2][3] work (David Shaughnessy, Xubo Zhang), which allows the L3 agent > to make use of Open vSwitch / OpenFlow to implement ``distributed`` IPv4 > Routers thus bypassing kernel namespaces and iptables and opening the door > for higher performance by keeping packets in OVS for longer. > > > > I want to share a few questions in order to gather feedback from you. I > understand parts of these questions may have been answered in the past > before my involvement, but I believe it's still important to revisit and > clarify them. This can impact how long it's going to take to complete the > work and whether it can make it to stein-3. > > > > 1. Should OVS support also be added to the legacy router? > > And if so, would it make more sense to have a new variable (not > ``agent_mode``) to specify what backend to use (OVS or kernel) instead of > creating more combinations? > I would like to see the legacy router also implemented. And yes, we need to specify a new config option. As it has already been pointed out, we need to separate what the agent does in each host from the backend technology implementing the routers. > > > 2. What is expected in terms of CI for this? Regarding testing, what > should this first patch include apart from the unit tests? (since the > l3_agent.ini needs to be configured differently). > I agree with Slawek. We would like to see a scenario job. > > > 3. What problems can be anticipated by having the same agent managing both > kernel and OVS powered routers (depending on whether they were created as > ``distributed``)? > > We are experimenting with different ways of decoupling RouterInfo (mainly > as part of the L3 agent refactor patch) and haven't been able to find the > right balance yet. On one end we have an agent that is still coupled with > kernel-based RouterInfo, and on the other end we have an agent that either > only accepts OVS-based RouterInfos or only kernel-based RouterInfos > depending on the ``agent_mode``. > I also agree with Slawek here. It would a good idea if we can get the two efforts in synch so we can untangle RouterInfo from the agent code > > > We'd also appreciate reviews on the 2 patches [4][5]. The L3 refactor one > should be able to pass Zuul after a recheck. > > > > [1] Spec: > https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr > > [2] RFE: https://bugs.launchpad.net/neutron/+bug/1705536 > > [3] Gerrit topic: > https://review.openstack.org/#/q/topic:dvr_bridge+(status:open+OR+status:merged) > > [4] L3 agent refactor patch: https://review.openstack.org/#/c/528336/29 > > [5] dvr_bridge patch: https://review.openstack.org/#/c/472289/17 > > > > Thank you! > > > > Best regards, > > Igor D.C. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 2 08:37:49 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 2 Feb 2019 09:37:49 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Alfredo, if you configured your template for using floatingip you can connect to the master and check if it can connect to Internet. Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca ha scritto: > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? >>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... >>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 13:26:02 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 14:26:02 +0100 Subject: Fwd: [openstack-ansible][magnum] References: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: > Anfang der weitergeleiteten Nachricht: > > Von: Clemens > Betreff: Aw: [openstack-ansible][magnum] > Datum: 2. Februar 2019 um 14:20:37 MEZ > An: Alfredo De Luca > Kopie: Feilong Wang , openstack-discuss at lists.openstack.org > > Well - it seems that failure of part-013 has its root cause in failure of part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 16:36:12 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:36:12 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: > Am 02.02.2019 um 17:26 schrieb Clemens : > > Hi Alfredo, > > This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? > > BR C > >> Am 02.02.2019 um 17:18 schrieb Alfredo De Luca >: >> >> Hi Clemens. Yes...you are right but not sure why the IPs are not correct >> >> if [ -z "${KUBE_NODE_IP}" ]; then >> KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ) >> fi >> >> sans="IP:${KUBE_NODE_IP}" >> >> if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then >> KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4 ) >> >> I don't have that IP at all. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 16:39:52 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:39:52 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: <03D38DC1-D6BB-492D-96CE-05673E411C26@crandale.de> OK - and your floating ip 172.29.249.112 has access to the internet? > Am 02.02.2019 um 17:33 schrieb Alfredo De Luca : > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4 > 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4 > 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > 172.29.249.112 is the Floating IP... which I use to connect to the master > > > > > On Sat, Feb 2, 2019 at 5:26 PM Clemens > wrote: > Hi Alfredo, > > This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? > > BR C -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From maruthi.inukonda at gmail.com Sat Feb 2 16:43:44 2019 From: maruthi.inukonda at gmail.com (Maruthi Inukonda) Date: Sat, 2 Feb 2019 22:13:44 +0530 Subject: Suggestions on OpenStack installer Message-ID: hi All, We are planning to build a private cloud for our department at academic institution. We are a born-in public-cloud institution. Idea is to have a hybrid cloud. Few important requirements: * No vendor lock-in of hardware and software/distribution. * Preferably stable software from openstack.org. * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). * On standard rack servers with remote systems management for the nodes (IPMI based) * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), Bare metal (Ubuntu)]. * Workload will be Software Development/Test (on VMs) and Benchmarking (on baremetal/container). * Smooth upgrades. Could anyone suggest stable Openstack installer for our multi-node setup (initially 40 physical machines, later around 80)? Upgrades should be smooth. Any pointers to reference architecture will also be helpful. PS: I have tried devstack recently. It works. Kolla fails. I tried packstack few years back. Appreciate any help. cheers, Maruthi Inukonda. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 16:47:27 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:47:27 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> One after the other: First of all part-011 needs to run successfully: Did your certificates create successfully? What is in /etc/kubernetes/certs ? Or did you run part-011 already successfully? > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:55:36 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:55:36 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> Message-ID: part-011 run succesfully.... + set -o errexit + set -o nounset + set -o pipefail + '[' True == True ']' + exit 0 But what I think it's wrong is the floating IP . It's not the IP that goes on internet which is the eth0 on my machine that has 10.1.8.113... anyway here is the network image [image: image.png] On Sat, Feb 2, 2019 at 5:47 PM Clemens wrote: > One after the other: First of all part-011 needs to run successfully: Did > your certificates create successfully? What is in /etc/kubernetes/certs ? > Or did you run part-011 already successfully? > > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10192 bytes Desc: not available URL: From dabarren at gmail.com Sat Feb 2 17:19:41 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Sat, 2 Feb 2019 18:19:41 +0100 Subject: Suggestions on OpenStack installer In-Reply-To: References: Message-ID: Hi, could you describe what are the issues faced with kolla so we can fix them? Thanks On Sat, Feb 2, 2019, 5:46 PM Maruthi Inukonda wrote: > hi All, > > We are planning to build a private cloud for our department at academic > institution. We are a born-in public-cloud institution. Idea is to have a > hybrid cloud. > > Few important requirements: > * No vendor lock-in of hardware and software/distribution. > * Preferably stable software from openstack.org. > * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). > * On standard rack servers with remote systems management for the nodes > (IPMI based) > * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), > Bare metal (Ubuntu)]. > * Workload will be Software Development/Test (on VMs) and Benchmarking (on > baremetal/container). > * Smooth upgrades. > > Could anyone suggest stable Openstack installer for our multi-node setup > (initially 40 physical machines, later around 80)? Upgrades should be > smooth. > > Any pointers to reference architecture will also be helpful. > > PS: I have tried devstack recently. It works. Kolla fails. I tried > packstack few years back. > > Appreciate any help. > > cheers, > Maruthi Inukonda. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Sat Feb 2 17:32:37 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sat, 2 Feb 2019 12:32:37 -0500 Subject: Suggestions on OpenStack installer In-Reply-To: References: Message-ID: On Sat, Feb 2, 2019, 12:27 PM Eduardo Gonzalez Hi, could you describe what are the issues faced with kolla so we can fix > them? What he said. I run multiple clusters deployed with Kolla. There's a bit of a learning curve as with anything, but it works great for me. The openstack-kolla IRC channel on freenode is also a good resource if you're having issues. -Erik > Thanks > > On Sat, Feb 2, 2019, 5:46 PM Maruthi Inukonda > wrote: > >> hi All, >> >> We are planning to build a private cloud for our department at academic >> institution. We are a born-in public-cloud institution. Idea is to have a >> hybrid cloud. >> >> Few important requirements: >> * No vendor lock-in of hardware and software/distribution. >> * Preferably stable software from openstack.org. >> * Also need to support Accelerated instances (GPU,FPGA,Other PCIecard). >> * On standard rack servers with remote systems management for the nodes >> (IPMI based) >> * Need to support Instances [VMs (Qemu-KVM), Containers (Docker, LXC), >> Bare metal (Ubuntu)]. >> * Workload will be Software Development/Test (on VMs) and Benchmarking >> (on baremetal/container). >> * Smooth upgrades. >> >> Could anyone suggest stable Openstack installer for our multi-node setup >> (initially 40 physical machines, later around 80)? Upgrades should be >> smooth. >> >> Any pointers to reference architecture will also be helpful. >> >> PS: I have tried devstack recently. It works. Kolla fails. I tried >> packstack few years back. >> >> Appreciate any help. >> >> cheers, >> Maruthi Inukonda. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 18:45:28 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 19:45:28 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> <931331F0-2313-4CC0-8D86-77F506239F16@crandale.de> Message-ID: <289BE90C-210E-497E-BEAB-B1EDE380362B@crandale.de> Nope - this looks ok: When a cluster is created, then it creates a private network for you (in your case 10.0.0.0/24), connecting this network via a router to your public network. Floating ip is the assigned to your machine accordingly. So - if now your part-011 runs ok, do you have also now all the Etcd certificates/keys in your /etc/kubernetes/certs > Am 02.02.2019 um 17:55 schrieb Alfredo De Luca : > > part-011 run succesfully.... > + set -o errexit > + set -o nounset > + set -o pipefail > + '[' True == True ']' > + exit 0 > > But what I think it's wrong is the floating IP . It's not the IP that goes on internet which is the eth0 on my machine that has 10.1.8.113... > anyway here is the network image > > > > > On Sat, Feb 2, 2019 at 5:47 PM Clemens > wrote: > One after the other: First of all part-011 needs to run successfully: Did your certificates create successfully? What is in /etc/kubernetes/certs ? Or did you run part-011 already successfully? > >> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca >: >> >> Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From clemens.hardewig at crandale.de Sat Feb 2 18:53:11 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 19:53:11 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Now to the failure of your part-013: Are you sure that you used the glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your error message below suggests that your image does not contain ‚atomic‘ as part of the image … + _prefix=docker.io/openstackmagnum/ + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable ./part-013: line 8: atomic: command not found + systemctl start heat-container-agent Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From mriedemos at gmail.com Sat Feb 2 20:59:07 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 2 Feb 2019 14:59:07 -0600 Subject: [goals][upgrade-checkers] Week R-10 Update Message-ID: <1c440b08-efd5-ac8d-ebd7-9945b0302f6f@gmail.com> There are a few open changes: https://review.openstack.org/#/q/topic:upgrade-checkers+status:open Some of those are getting a bit dusty, specifically: * aodh: https://review.openstack.org/614401 * ceilometer: https://review.openstack.org/614400 * cloudkitty: https://review.openstack.org/613076 It looks like the horizon team is moving forward with adding an upgrade check script and discussing how to enable plugin support: https://review.openstack.org/#/c/631785/ As for mistral https://review.openstack.org/#/c/611513/ and swift https://review.openstack.org/#/c/611634/ those should probably just be abandoned since they don't fit with the project plans. There are no other projects that need the framework added: https://storyboard.openstack.org/#!/story/2003657 So once we complete those mentioned above we will have the basic framework in place for teams to add non-placeholder upgrade checks for Stein, and some projects are already leveraging it. This is also a good time for projects that have completed the framework to be thinking about adding specific checks as we get closer to feature freeze on March 7. -- Thanks, Matt From clemens.hardewig at crandale.de Sat Feb 2 13:20:37 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 14:20:37 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Well - it seems that failure of part-013 has its root cause in failure of part-011: in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > Am 01.02.2019 um 10:20 schrieb Alfredo De Luca : > > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > I'm echoing Von's comments. > > From the log of cloud-init-output.log, you should be able to see below error: > > Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 +0000. Up 76.51 seconds. > 2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [1] > + _prefix=docker.io/openstackmagnum/ > + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable > The docker daemon does not appear to be running. > + systemctl start heat-container-agent > Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. > 2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5] > > Then please go to /var/lib/cloud/instances//scripts to find the script 011 and 013 to run it manually to get the root cause. And welcome to pop up into #openstack-containers irc channel. > > > > > > On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 part of the config script finishes with error. Check why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >: >> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca > wrote: >>> hi Clemens and Ignazio. thanks for your support. >>> it must be network related but I don't do something special apparently to create a simple k8s cluster. >>> I ll post later on configurations and logs as you Clemens suggested. >>> >>> >>> Cheers >>> >>> >>> >>> On Tue, Jan 29, 2019 at 9:16 PM Clemens > wrote: >>> … an more important: check the other log cloud-init.log for error messages (not only cloud-init-output.log) >>> >>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca >: >>>> >>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the logs on the kube master keep saying the following >>>> >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '' ']' >>>> + sleep 5 >>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>> [+]poststarthook/extensions/third-party-resources ok >>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>> healthz check failed' ']' >>>> + sleep 5 >>>> >>>> Not sure what to do. >>>> My configuration is ... >>>> eth0 - 10.1.8.113 >>>> >>>> But the openstack configration in terms of networkin is the default from ansible-openstack which is 172.29.236.100/22 >>>> >>>> Maybe that's the problem? >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano > wrote: >>>> Hello Alfredo, >>>> your external network is using proxy ? >>>> If you using a proxy, and yuo configured it in cluster template, you must setup no proxy for 127.0.0.1 >>>> Ignazio >>>> >>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig > ha scritto: >>>> At least on fedora there is a second cloud Init log as far as I remember-Look into both >>>> >>>> Br c >>>> >>>> Von meinem iPhone gesendet >>>> >>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca >: >>>> >>>>> thanks Clemens. >>>>> I looked at the cloud-init-output.log on the master... and at the moment is doing the following.... >>>>> >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> >>>>> Network ....could be but not sure where to look at >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig > wrote: >>>>> Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue or you have selected for your minion nodes a flavor using swap perhaps ... >>>>> So, log files are the first step you could dig into... >>>>> Br c >>>>> Von meinem iPhone gesendet >>>>> >>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca >: >>>>> >>>>>> Hi all. >>>>>> I finally instaledl successufully openstack ansible (queens) but, after creating a cluster template I create k8s cluster, it stuck on >>>>>> >>>>>> >>>>>> kube_masters b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 OS::Heat::ResourceGroup 16 minutes Create In Progress state changed >>>>>> create in progress....and after around an hour it says...time out. k8s master seems to be up.....at least as VM. >>>>>> >>>>>> any idea? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Alfredo >>>>>> >>>>> >>>>> >>>>> -- >>>>> Alfredo >>>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> >>> -- >>> Alfredo >>> >>> >>> > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:16:02 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:16:02 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> Message-ID: Hi Ignazio. I ve already done that so that's why I can connect to the master. Then I can ping 8.8.8.8 any other IP on internet but not through domainnames..... such as google.com or yahoo.com. Doesn't resolve names. the server doesn't have either dig or nslookup and I can't install them cause the domainname. So I changed the domainname into IP but still the same issue... [root at freddo-5oyez3ot5pxi-master-0 ~]# yum repolist Fedora Modular 29 - x86_64 0.0 B/s | 0 B 00:20 Error: Failed to synchronize cache for repo 'fedora-modular' On Sat, Feb 2, 2019 at 9:38 AM Ignazio Cassano wrote: > Alfredo, if you configured your template for using floatingip you can > connect to the master and check if it can connect to Internet. > > Il giorno Ven 1 Feb 2019 13:20 Alfredo De Luca > ha scritto: > >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? >> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >> wrote: >> >>> I'm echoing Von's comments. >>> >>> From the log of cloud-init-output.log, you should be able to see below >>> error: >>> >>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>> 08:33:41 +0000. Up 76.51 seconds.* >>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-011 [1]* >>> *+ _prefix=docker.io/openstackmagnum/ >>> * >>> *+ atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> * >>> *The docker daemon does not appear to be running.* >>> *+ systemctl start heat-container-agent* >>> *Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found.* >>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-013 [5]* >>> >>> Then please go to /var/lib/cloud/instances//scripts to >>> find the script 011 and 013 to run it manually to get the root cause. And >>> welcome to pop up into #openstack-containers irc channel. >>> >>> >>> >>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> >>> Read the cloud-Init.log! There you can see that your >>> /var/lib/.../part-011 part of the config script finishes with error. Check >>> why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >> >: >>> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> hi Clemens and Ignazio. thanks for your support. >>>>> it must be network related but I don't do something special apparently >>>>> to create a simple k8s cluster. >>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>> >>>>> >>>>> Cheers >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>> wrote: >>>>> >>>>>> … an more important: check the other log cloud-init.log for error >>>>>> messages (not only cloud-init-output.log) >>>>>> >>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>>> logs on the kube master keep saying the following >>>>>> >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Not sure what to do. >>>>>> My configuration is ... >>>>>> eth0 - 10.1.8.113 >>>>>> >>>>>> But the openstack configration in terms of networkin is the default >>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>> >>>>>> Maybe that's the problem? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello Alfredo, >>>>>>> your external network is using proxy ? >>>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>>> must setup no proxy for 127.0.0.1 >>>>>>> Ignazio >>>>>>> >>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>> >>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>> remember-Look into both >>>>>>>> >>>>>>>> Br c >>>>>>>> >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> thanks Clemens. >>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>> moment is doing the following.... >>>>>>>> >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> Network ....could be but not sure where to look at >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>> Br c >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Hi all. >>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>> >>>>>>>>> >>>>>>>>> kube_masters >>>>>>>>> >>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>> >>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>> >>>>>>>>> any idea? >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >>> >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Sat Feb 2 16:18:29 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:18:29 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Clemens. Yes...you are right but not sure why the IPs are not correct if [ -z "${KUBE_NODE_IP}" ]; then KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) fi sans="IP:${KUBE_NODE_IP}" if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4) I don't have that IP at all. On Sat, Feb 2, 2019 at 2:20 PM Clemens wrote: > Well - it seems that failure of part-013 has its root cause in failure of > part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore > the certificates for the access to Etcd are created; this is prerequisite > for any kinda of access authorization maintained by Etcd. The ip address > config items require an appropriate definition as metadata. If there is no > definition of that, then internet access fails and it can also not install > docker in part-013 ... > > Am 01.02.2019 um 10:20 schrieb Alfredo De Luca : > > thanks Feilong, clemens et all. > > I going to have a look later on today and see what I can do and see. > > Just a question: > Does the kube master need internet access to download stuff or not? > > Cheers > > > On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: > >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below >> error: >> >> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 >> +0000. Up 76.51 seconds.* >> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-011 [1]* >> *+ _prefix=docker.io/openstackmagnum/ * >> *+ atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> * >> *The docker daemon does not appear to be running.* >> *+ systemctl start heat-container-agent* >> *Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found.* >> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >> /var/lib/cloud/instance/scripts/part-013 [5]* >> >> Then please go to /var/lib/cloud/instances//scripts to find >> the script 011 and 013 to run it manually to get the root cause. And >> welcome to pop up into #openstack-containers irc channel. >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >> >> Read the cloud-Init.log! There you can see that your >> /var/lib/.../part-011 part of the config script finishes with error. Check >> why. >> >> Von meinem iPhone gesendet >> >> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca > >: >> >> here are also the logs for the cloud init logs from the k8s master.... >> >> >> >> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca >> wrote: >> >>> >>> In the meantime this is my cluster >>> template >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently >>>> to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>> wrote: >>>> >>>>> … an more important: check the other log cloud-init.log for error >>>>> messages (not only cloud-init-output.log) >>>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>> logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default >>>>> from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello Alfredo, >>>>>> your external network is using proxy ? >>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>> must setup no proxy for 127.0.0.1 >>>>>> Ignazio >>>>>> >>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>> >>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>> remember-Look into both >>>>>>> >>>>>>> Br c >>>>>>> >>>>>>> Von meinem iPhone gesendet >>>>>>> >>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> thanks Clemens. >>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>> moment is doing the following.... >>>>>>> >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Network ....could be but not sure where to look at >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>> >>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>> So, log files are the first step you could dig into... >>>>>>>> Br c >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Hi all. >>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>> >>>>>>>> >>>>>>>> kube_masters >>>>>>>> >>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>> >>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state changed create >>>>>>>> in progress....and after around an hour it says...time out. k8s master >>>>>>>> seems to be up.....at least as VM. >>>>>>>> >>>>>>>> any idea? >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >> >> -- >> *Alfredo* >> >> >> >> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> > > -- > *Alfredo* > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemens.hardewig at crandale.de Sat Feb 2 16:26:43 2019 From: clemens.hardewig at crandale.de (Clemens) Date: Sat, 2 Feb 2019 17:26:43 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Alfredo, This is basics of Openstack: curl -s http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the metadata service with its special IP address 169.254.169.254 , to obtain the local ip address; the second one to get the public ip address It look like from remote that your network is not properly configured so that this information is not answered from metadata service successfully. What happens if you execute that command manually? BR C > Am 02.02.2019 um 17:18 schrieb Alfredo De Luca : > > Hi Clemens. Yes...you are right but not sure why the IPs are not correct > > if [ -z "${KUBE_NODE_IP}" ]; then > KUBE_NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ) > fi > > sans="IP:${KUBE_NODE_IP}" > > if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then > KUBE_NODE_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4 ) > > I don't have that IP at all. > > > On Sat, Feb 2, 2019 at 2:20 PM Clemens > wrote: > Well - it seems that failure of part-013 has its root cause in failure of part-011: > > in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore the certificates for the access to Etcd are created; this is prerequisite for any kinda of access authorization maintained by Etcd. The ip address config items require an appropriate definition as metadata. If there is no definition of that, then internet access fails and it can also not install docker in part-013 ... > >> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca >: >> >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? >> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang > wrote: >> I'm echoing Von's comments. >> >> From the log of cloud-init-output.log, you should be able to see below error: >> >> Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 08:33:41 +0000. Up 76.51 seconds. >> 2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-011 [1] >> + _prefix=docker.io/openstackmagnum/ >> + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable >> The docker daemon does not appear to be running. >> + systemctl start heat-container-agent >> Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. >> 2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5] >> >> Then please go to /var/lib/cloud/instances//scripts to find the script 011 and 013 to run it manually to get the root cause. And welcome to pop up into #openstack-containers irc channel. >> >> >> >> >> >> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> Read the cloud-Init.log! There you can see that your /var/lib/.../part-011 part of the config script finishes with error. Check why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >: >>> >>>> here are also the logs for the cloud init logs from the k8s master.... >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca > wrote: >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca > wrote: >>>> hi Clemens and Ignazio. thanks for your support. >>>> it must be network related but I don't do something special apparently to create a simple k8s cluster. >>>> I ll post later on configurations and logs as you Clemens suggested. >>>> >>>> >>>> Cheers >>>> >>>> >>>> >>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens > wrote: >>>> … an more important: check the other log cloud-init.log for error messages (not only cloud-init-output.log) >>>> >>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca >: >>>>> >>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the logs on the kube master keep saying the following >>>>> >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '' ']' >>>>> + sleep 5 >>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>> [+]poststarthook/extensions/third-party-resources ok >>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>> healthz check failed' ']' >>>>> + sleep 5 >>>>> >>>>> Not sure what to do. >>>>> My configuration is ... >>>>> eth0 - 10.1.8.113 >>>>> >>>>> But the openstack configration in terms of networkin is the default from ansible-openstack which is 172.29.236.100/22 >>>>> >>>>> Maybe that's the problem? >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano > wrote: >>>>> Hello Alfredo, >>>>> your external network is using proxy ? >>>>> If you using a proxy, and yuo configured it in cluster template, you must setup no proxy for 127.0.0.1 >>>>> Ignazio >>>>> >>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig > ha scritto: >>>>> At least on fedora there is a second cloud Init log as far as I remember-Look into both >>>>> >>>>> Br c >>>>> >>>>> Von meinem iPhone gesendet >>>>> >>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca >: >>>>> >>>>>> thanks Clemens. >>>>>> I looked at the cloud-init-output.log on the master... and at the moment is doing the following.... >>>>>> >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Network ....could be but not sure where to look at >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig > wrote: >>>>>> Yes, you should check the cloud-init logs of your master. Without having seen them, I would guess a network issue or you have selected for your minion nodes a flavor using swap perhaps ... >>>>>> So, log files are the first step you could dig into... >>>>>> Br c >>>>>> Von meinem iPhone gesendet >>>>>> >>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca >: >>>>>> >>>>>>> Hi all. >>>>>>> I finally instaledl successufully openstack ansible (queens) but, after creating a cluster template I create k8s cluster, it stuck on >>>>>>> >>>>>>> >>>>>>> kube_masters b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 OS::Heat::ResourceGroup 16 minutes Create In Progress state changed >>>>>>> create in progress....and after around an hour it says...time out. k8s master seems to be up.....at least as VM. >>>>>>> >>>>>>> any idea? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Alfredo >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Alfredo >>>>>> >>>>> >>>>> >>>>> -- >>>>> Alfredo >>>>> >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >>>> -- >>>> Alfredo >>>> >>>> >>>> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> >> -- >> Alfredo >> > > > > -- > Alfredo > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3783 bytes Desc: not available URL: From alfredo.deluca at gmail.com Sat Feb 2 16:33:36 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:33:36 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/local-ipv4 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s http://169.254.169.254/latest/meta-data/public-ipv4 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# 172.29.249.112 is the Floating IP... which I use to connect to the master On Sat, Feb 2, 2019 at 5:26 PM Clemens wrote: > Hi Alfredo, > > This is basics of Openstack: curl -s > http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the > metadata service with its special IP address 169.254.169.254 > , to obtain the local > ip address; the second one to get the public ip address > It look like from remote that your network is not properly configured so > that this information is not answered from metadata service successfully. > What happens if you execute that command manually? > > BR C > > Am 02.02.2019 um 17:18 schrieb Alfredo De Luca : > > Hi Clemens. Yes...you are right but not sure why the IPs are not correct > > if [ -z "${KUBE_NODE_IP}" ]; then > KUBE_NODE_IP=$(curl -s > http://169.254.169.254/latest/meta-data/local-ipv4) > fi > > sans="IP:${KUBE_NODE_IP}" > > if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then > KUBE_NODE_PUBLIC_IP=$(curl -s > http://169.254.169.254/latest/meta-data/public-ipv4) > > I don't have that IP at all. > > > On Sat, Feb 2, 2019 at 2:20 PM Clemens > wrote: > >> Well - it seems that failure of part-013 has its root cause in failure of >> part-011: >> >> in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. Furthermore >> the certificates for the access to Etcd are created; this is prerequisite >> for any kinda of access authorization maintained by Etcd. The ip address >> config items require an appropriate definition as metadata. If there is no >> definition of that, then internet access fails and it can also not install >> docker in part-013 ... >> >> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca > >: >> >> thanks Feilong, clemens et all. >> >> I going to have a look later on today and see what I can do and see. >> >> Just a question: >> Does the kube master need internet access to download stuff or not? >> >> Cheers >> >> >> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >> wrote: >> >>> I'm echoing Von's comments. >>> >>> From the log of cloud-init-output.log, you should be able to see below >>> error: >>> >>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>> 08:33:41 +0000. Up 76.51 seconds.* >>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-011 [1]* >>> *+ _prefix=docker.io/openstackmagnum/ >>> * >>> *+ atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> * >>> *The docker daemon does not appear to be running.* >>> *+ systemctl start heat-container-agent* >>> *Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found.* >>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>> /var/lib/cloud/instance/scripts/part-013 [5]* >>> >>> Then please go to /var/lib/cloud/instances//scripts to >>> find the script 011 and 013 to run it manually to get the root cause. And >>> welcome to pop up into #openstack-containers irc channel. >>> >>> >>> >>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>> >>> Read the cloud-Init.log! There you can see that your >>> /var/lib/.../part-011 part of the config script finishes with error. Check >>> why. >>> >>> Von meinem iPhone gesendet >>> >>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca >> >: >>> >>> here are also the logs for the cloud init logs from the k8s master.... >>> >>> >>> >>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>> alfredo.deluca at gmail.com> wrote: >>> >>>> >>>> In the meantime this is my cluster >>>> template >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> hi Clemens and Ignazio. thanks for your support. >>>>> it must be network related but I don't do something special apparently >>>>> to create a simple k8s cluster. >>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>> >>>>> >>>>> Cheers >>>>> >>>>> >>>>> >>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>> wrote: >>>>> >>>>>> … an more important: check the other log cloud-init.log for error >>>>>> messages (not only cloud-init-output.log) >>>>>> >>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>>> logs on the kube master keep saying the following >>>>>> >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '' ']' >>>>>> + sleep 5 >>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not finished >>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>> healthz check failed' ']' >>>>>> + sleep 5 >>>>>> >>>>>> Not sure what to do. >>>>>> My configuration is ... >>>>>> eth0 - 10.1.8.113 >>>>>> >>>>>> But the openstack configration in terms of networkin is the default >>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>> >>>>>> Maybe that's the problem? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello Alfredo, >>>>>>> your external network is using proxy ? >>>>>>> If you using a proxy, and yuo configured it in cluster template, you >>>>>>> must setup no proxy for 127.0.0.1 >>>>>>> Ignazio >>>>>>> >>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>> >>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>> remember-Look into both >>>>>>>> >>>>>>>> Br c >>>>>>>> >>>>>>>> Von meinem iPhone gesendet >>>>>>>> >>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> thanks Clemens. >>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>> moment is doing the following.... >>>>>>>> >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>> + '[' ok = '' ']' >>>>>>>> + sleep 5 >>>>>>>> >>>>>>>> Network ....could be but not sure where to look at >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>> Br c >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Hi all. >>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>> >>>>>>>>> >>>>>>>>> kube_masters >>>>>>>>> >>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>> >>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>> >>>>>>>>> any idea? >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >>> >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> >> >> -- >> *Alfredo* >> >> >> > > -- > *Alfredo* > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Sat Feb 2 16:36:37 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sat, 2 Feb 2019 17:36:37 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: so if I run part-013 I get the following oot at freddo-5oyez3ot5pxi-master-0 scripts]# ./part-013 + _prefix=docker.io/openstackmagnum/ + atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:queens-stable ./part-013: line 8: atomic: command not found + systemctl start heat-container-agent Failed to start heat-container-agent.service: Unit heat-container-agent.service not found. On Sat, Feb 2, 2019 at 5:33 PM Alfredo De Luca wrote: > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s > http://169.254.169.254/latest/meta-data/local-ipv4 > 10.0.0.5[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > [root at freddo-5oyez3ot5pxi-master-0 scripts]# curl -s > http://169.254.169.254/latest/meta-data/public-ipv4 > 172.29.249.112[root at freddo-5oyez3ot5pxi-master-0 scripts]# > > 172.29.249.112 is the Floating IP... which I use to connect to the master > > > > > On Sat, Feb 2, 2019 at 5:26 PM Clemens > wrote: > >> Hi Alfredo, >> >> This is basics of Openstack: curl -s >> http://169.254.169.254/latest/meta-data/local-ipv4 is a request to the >> metadata service with its special IP address 169.254.169.254 >> , to obtain the >> local ip address; the second one to get the public ip address >> It look like from remote that your network is not properly configured so >> that this information is not answered from metadata service successfully. >> What happens if you execute that command manually? >> >> BR C >> >> Am 02.02.2019 um 17:18 schrieb Alfredo De Luca > >: >> >> Hi Clemens. Yes...you are right but not sure why the IPs are not correct >> >> if [ -z "${KUBE_NODE_IP}" ]; then >> KUBE_NODE_IP=$(curl -s >> http://169.254.169.254/latest/meta-data/local-ipv4) >> fi >> >> sans="IP:${KUBE_NODE_IP}" >> >> if [ -z "${KUBE_NODE_PUBLIC_IP}" ]; then >> KUBE_NODE_PUBLIC_IP=$(curl -s >> http://169.254.169.254/latest/meta-data/public-ipv4) >> >> I don't have that IP at all. >> >> >> On Sat, Feb 2, 2019 at 2:20 PM Clemens >> wrote: >> >>> Well - it seems that failure of part-013 has its root cause in failure >>> of part-011: >>> >>> in part-011, KUBE_NODE_PUBLIC_IP and KUBE_NODE_IP are set. >>> Furthermore the certificates for the access to Etcd are created; this is >>> prerequisite for any kinda of access authorization maintained by Etcd. The >>> ip address config items require an appropriate definition as metadata. If >>> there is no definition of that, then internet access fails and it can also >>> not install docker in part-013 ... >>> >>> Am 01.02.2019 um 10:20 schrieb Alfredo De Luca >> >: >>> >>> thanks Feilong, clemens et all. >>> >>> I going to have a look later on today and see what I can do and see. >>> >>> Just a question: >>> Does the kube master need internet access to download stuff or not? >>> >>> Cheers >>> >>> >>> On Fri, Feb 1, 2019 at 3:28 AM Feilong Wang >>> wrote: >>> >>>> I'm echoing Von's comments. >>>> >>>> From the log of cloud-init-output.log, you should be able to see below >>>> error: >>>> >>>> *Cloud-init v. 0.7.9 running 'modules:final' at Wed, 30 Jan 2019 >>>> 08:33:41 +0000. Up 76.51 seconds.* >>>> *2019-01-30 08:37:49,209 - util.py[WARNING]: Failed running >>>> /var/lib/cloud/instance/scripts/part-011 [1]* >>>> *+ _prefix=docker.io/openstackmagnum/ >>>> * >>>> *+ atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> * >>>> *The docker daemon does not appear to be running.* >>>> *+ systemctl start heat-container-agent* >>>> *Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found.* >>>> *2019-01-30 08:38:10,250 - util.py[WARNING]: Failed running >>>> /var/lib/cloud/instance/scripts/part-013 [5]* >>>> >>>> Then please go to /var/lib/cloud/instances//scripts to >>>> find the script 011 and 013 to run it manually to get the root cause. And >>>> welcome to pop up into #openstack-containers irc channel. >>>> >>>> >>>> >>>> On 30/01/19 11:43 PM, Clemens Hardewig wrote: >>>> >>>> Read the cloud-Init.log! There you can see that your >>>> /var/lib/.../part-011 part of the config script finishes with error. Check >>>> why. >>>> >>>> Von meinem iPhone gesendet >>>> >>>> Am 30.01.2019 um 10:11 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> here are also the logs for the cloud init logs from the k8s master.... >>>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 9:30 AM Alfredo De Luca < >>>> alfredo.deluca at gmail.com> wrote: >>>> >>>>> >>>>> In the meantime this is my cluster >>>>> template >>>>> >>>>> >>>>> >>>>> On Wed, Jan 30, 2019 at 9:17 AM Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> wrote: >>>>> >>>>>> hi Clemens and Ignazio. thanks for your support. >>>>>> it must be network related but I don't do something special >>>>>> apparently to create a simple k8s cluster. >>>>>> I ll post later on configurations and logs as you Clemens suggested. >>>>>> >>>>>> >>>>>> Cheers >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 29, 2019 at 9:16 PM Clemens >>>>>> wrote: >>>>>> >>>>>>> … an more important: check the other log cloud-init.log for error >>>>>>> messages (not only cloud-init-output.log) >>>>>>> >>>>>>> Am 29.01.2019 um 16:07 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Hi Ignazio and Clemens. I haven\t configure the proxy and all the >>>>>>> logs on the kube master keep saying the following >>>>>>> >>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not >>>>>>> finished >>>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>>> healthz check failed' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '' ']' >>>>>>> + sleep 5 >>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>> + '[' ok = '[-]poststarthook/bootstrap-controller failed: not >>>>>>> finished >>>>>>> [+]poststarthook/extensions/third-party-resources ok >>>>>>> [-]poststarthook/rbac/bootstrap-roles failed: not finished >>>>>>> healthz check failed' ']' >>>>>>> + sleep 5 >>>>>>> >>>>>>> Not sure what to do. >>>>>>> My configuration is ... >>>>>>> eth0 - 10.1.8.113 >>>>>>> >>>>>>> But the openstack configration in terms of networkin is the default >>>>>>> from ansible-openstack which is 172.29.236.100/22 >>>>>>> >>>>>>> Maybe that's the problem? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 29, 2019 at 2:26 PM Ignazio Cassano < >>>>>>> ignaziocassano at gmail.com> wrote: >>>>>>> >>>>>>>> Hello Alfredo, >>>>>>>> your external network is using proxy ? >>>>>>>> If you using a proxy, and yuo configured it in cluster template, >>>>>>>> you must setup no proxy for 127.0.0.1 >>>>>>>> Ignazio >>>>>>>> >>>>>>>> Il giorno mar 29 gen 2019 alle ore 12:26 Clemens Hardewig < >>>>>>>> clemens.hardewig at crandale.de> ha scritto: >>>>>>>> >>>>>>>>> At least on fedora there is a second cloud Init log as far as I >>>>>>>>> remember-Look into both >>>>>>>>> >>>>>>>>> Br c >>>>>>>>> >>>>>>>>> Von meinem iPhone gesendet >>>>>>>>> >>>>>>>>> Am 29.01.2019 um 12:08 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> thanks Clemens. >>>>>>>>> I looked at the cloud-init-output.log on the master... and at the >>>>>>>>> moment is doing the following.... >>>>>>>>> >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz >>>>>>>>> + '[' ok = '' ']' >>>>>>>>> + sleep 5 >>>>>>>>> >>>>>>>>> Network ....could be but not sure where to look at >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Jan 29, 2019 at 11:34 AM Clemens Hardewig < >>>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>>> >>>>>>>>>> Yes, you should check the cloud-init logs of your master. Without >>>>>>>>>> having seen them, I would guess a network issue or you have selected for >>>>>>>>>> your minion nodes a flavor using swap perhaps ... >>>>>>>>>> So, log files are the first step you could dig into... >>>>>>>>>> Br c >>>>>>>>>> Von meinem iPhone gesendet >>>>>>>>>> >>>>>>>>>> Am 28.01.2019 um 15:34 schrieb Alfredo De Luca < >>>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>>> >>>>>>>>>> Hi all. >>>>>>>>>> I finally instaledl successufully openstack ansible (queens) but, >>>>>>>>>> after creating a cluster template I create k8s cluster, it stuck on >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> kube_masters >>>>>>>>>> >>>>>>>>>> b7204f0c-b9d8-4ef2-8f0b-afe4c077d039 >>>>>>>>>> >>>>>>>>>> OS::Heat::ResourceGroup 16 minutes Create In Progress state >>>>>>>>>> changed create in progress....and after around an hour it >>>>>>>>>> says...time out. k8s master seems to be up.....at least as VM. >>>>>>>>>> >>>>>>>>>> any idea? >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Alfredo* >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> Cheers & Best regards, >>>> Feilong Wang (王飞龙) >>>> -------------------------------------------------------------------------- >>>> Senior Cloud Software Engineer >>>> Tel: +64-48032246 >>>> Email: flwang at catalyst.net.nz >>>> Catalyst IT Limited >>>> Level 6, Catalyst House, 150 Willis Street, Wellington >>>> -------------------------------------------------------------------------- >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> >>> >> >> -- >> *Alfredo* >> >> >> > > -- > *Alfredo* > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Feb 3 02:30:07 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 3 Feb 2019 02:30:07 +0000 Subject: [all] Two months with openstack-discuss (another progress report) Message-ID: <20190203023007.ysbjvegzbp7rsjop@yuggoth.org> This is just a quick followup to see how things have progressed since we cut the old openstack, openstack-dev, openstack-operators and openstack-sigs mailing lists over to openstack-discuss two months ago, as compared to my previous report[*] from the one-month anniversary. We're still seeing a fair number of posts from non-subscribers landing in the moderation queue (around one or two a day, sometimes more, sometimes less) but most of them are newcomers and many subscribe immediately after receiving the moderation notice. We're now at 830 subscribers to openstack-discuss (up from 708 in the previous report). 75% of the addresses used to send 10 or more messages to the old lists in 2018 are now subscribed to the new one (it was 70% a month ago). While posting volume is up compared to December (unsurprising given the usual end-of-year holiday slump), we only had a total of 958 posts over the month of January; comparing to the 1196 from January 2018 that's a 20% drop which (considering that right at 10% of the messages on the old lists were duplicates from cross-posting), is still less of a drop than was typical on average across the old lists over the previous five Januaries. One change worth mentioning: we noticed a rash of bounce-disabled subscriptions triggered by messages occasionally containing invalid DKIM signatures (inconsistently for some posters, fairly consistently for a few others). We're unsure as of yet whether the messages are arriving with invalid signatures or whether Mailman is modifying them in unanticipated ways prior to forwarding, but have re-enabled all affected subscribers and temporarily turned off the automatic subscription disabling feature while investigation is underway. If you missed receiving some messages which are present in the list archive, that's quite possibly the cause. Apologies for the inconvenience! [*] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001386.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tpb at dyncloud.net Sun Feb 3 10:05:49 2019 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 3 Feb 2019 05:05:49 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: Message-ID: <20190203100549.urtnvf2iatmqm6oy@barron.net> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >Thanks Goutham. >If there are not mantainers for this driver I will switch on ceph and or >netapp. >I am already using netapp but I would like to export shares from an >openstack installation to another. >Since these 2 installations do non share any openstack component and have >different openstack database, I would like to know it is possible . >Regards >Ignazio Hi Ignazio, If by "export shares from an openstack installation to another" you mean removing them from management by manila in installation A and instead managing them by manila in installation B then you can do that while leaving them in place on your Net App back end using the manila "manage-unmanage" administrative commands. Here's some documentation [1] that should be helpful. If on the other hand by "export shares ... to another" you mean to leave the shares under management of manila in installation A but consume them from compute instances in installation B it's all about the networking. One can use manila to "allow-access" to consumers of shares anywhere but the consumers must be able to reach the "export locations" for those shares and mount them. Cheers, -- Tom Barron [1] https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi >ha scritto: > >> Hi Ignazio, >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> wrote: >> > >> > Hello All, >> > I installed manila on my queens openstack based on centos 7. >> > I configured two servers with glusterfs replocation and ganesha nfs. >> > I configured my controllers octavia,conf but when I try to create a share >> > the manila scheduler logs reports: >> > >> > Failed to schedule create_share: No valid host was found. Failed to find >> a weighted host, the last executed filter was CapabilitiesFilter.: >> NoValidHost: No valid host was found. Failed to find a weighted host, the >> last executed filter was CapabilitiesFilter. >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a 89f76bc5de5545f381da2c10c7df7f15 >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> The scheduler failure points out that you have a mismatch in >> expectations (backend capabilities vs share type extra-specs) and >> there was no host to schedule your share to. So a few things to check >> here: >> >> - What is the share type you're using? Can you list the share type >> extra-specs and confirm that the backend (your GlusterFS storage) >> capabilities are appropriate with whatever you've set up as >> extra-specs ($ manila pool-list --detail)? >> - Is your backend operating correctly? You can list the manila >> services ($ manila service-list) and see if the backend is both >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> problem with the driver initialization, please enable debug logging, >> and look at the log file for the manila-share service, you might see >> why and be able to fix it. >> >> >> Please be aware that we're on a look out for a maintainer for the >> GlusterFS driver for the past few releases. We're open to bug fixes >> and maintenance patches, but there is currently no active maintainer >> for this driver. >> >> >> > I did not understand if controllers node must be connected to the >> network where shares must be exported for virtual machines, so my glusterfs >> are connected on the management network where openstack controllers are >> conencted and to the network where virtual machine are connected. >> > >> > My manila.conf section for glusterfs section is the following >> > >> > [gluster-manila565] >> > driver_handles_share_servers = False >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> > glusterfs_target = root at 10.102.184.229:/manila565 >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> > glusterfs_ganesha_server_username = root >> > glusterfs_nfs_server_type = Ganesha >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> > #glusterfs_servers = root at 10.102.185.19 >> > ganesha_config_dir = /etc/ganesha >> > >> > >> > PS >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> > >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> machines are connected. >> > >> > The gluster servers are connected on both. >> > >> > >> > Any help, please ? >> > >> > Ignazio >> From ignaziocassano at gmail.com Sun Feb 3 11:45:02 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sun, 3 Feb 2019 12:45:02 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190203100549.urtnvf2iatmqm6oy@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: Many Thanks. I will check it [1]. Regards Ignazio Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >Thanks Goutham. > >If there are not mantainers for this driver I will switch on ceph and or > >netapp. > >I am already using netapp but I would like to export shares from an > >openstack installation to another. > >Since these 2 installations do non share any openstack component and have > >different openstack database, I would like to know it is possible . > >Regards > >Ignazio > > Hi Ignazio, > > If by "export shares from an openstack installation to another" you > mean removing them from management by manila in installation A and > instead managing them by manila in installation B then you can do that > while leaving them in place on your Net App back end using the manila > "manage-unmanage" administrative commands. Here's some documentation > [1] that should be helpful. > > If on the other hand by "export shares ... to another" you mean to > leave the shares under management of manila in installation A but > consume them from compute instances in installation B it's all about > the networking. One can use manila to "allow-access" to consumers of > shares anywhere but the consumers must be able to reach the "export > locations" for those shares and mount them. > > Cheers, > > -- Tom Barron > > [1] > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > gouthampravi at gmail.com> > >ha scritto: > > > >> Hi Ignazio, > >> > >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> wrote: > >> > > >> > Hello All, > >> > I installed manila on my queens openstack based on centos 7. > >> > I configured two servers with glusterfs replocation and ganesha nfs. > >> > I configured my controllers octavia,conf but when I try to create a > share > >> > the manila scheduler logs reports: > >> > > >> > Failed to schedule create_share: No valid host was found. Failed to > find > >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> NoValidHost: No valid host was found. Failed to find a weighted host, > the > >> last executed filter was CapabilitiesFilter. > >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > 89f76bc5de5545f381da2c10c7df7f15 > >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> > >> > >> The scheduler failure points out that you have a mismatch in > >> expectations (backend capabilities vs share type extra-specs) and > >> there was no host to schedule your share to. So a few things to check > >> here: > >> > >> - What is the share type you're using? Can you list the share type > >> extra-specs and confirm that the backend (your GlusterFS storage) > >> capabilities are appropriate with whatever you've set up as > >> extra-specs ($ manila pool-list --detail)? > >> - Is your backend operating correctly? You can list the manila > >> services ($ manila service-list) and see if the backend is both > >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> problem with the driver initialization, please enable debug logging, > >> and look at the log file for the manila-share service, you might see > >> why and be able to fix it. > >> > >> > >> Please be aware that we're on a look out for a maintainer for the > >> GlusterFS driver for the past few releases. We're open to bug fixes > >> and maintenance patches, but there is currently no active maintainer > >> for this driver. > >> > >> > >> > I did not understand if controllers node must be connected to the > >> network where shares must be exported for virtual machines, so my > glusterfs > >> are connected on the management network where openstack controllers are > >> conencted and to the network where virtual machine are connected. > >> > > >> > My manila.conf section for glusterfs section is the following > >> > > >> > [gluster-manila565] > >> > driver_handles_share_servers = False > >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> > glusterfs_ganesha_server_username = root > >> > glusterfs_nfs_server_type = Ganesha > >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> > #glusterfs_servers = root at 10.102.185.19 > >> > ganesha_config_dir = /etc/ganesha > >> > > >> > > >> > PS > >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> > > >> > 10.102.189.0/24 is the shared network inside openstack where virtual > >> machines are connected. > >> > > >> > The gluster servers are connected on both. > >> > > >> > > >> > Any help, please ? > >> > > >> > Ignazio > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at po.ntt-tx.co.jp Mon Feb 4 01:32:54 2019 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 4 Feb 2019 10:32:54 +0900 Subject: [infra][zuul]Run only my 3rd party CI on my environment In-Reply-To: References: <17a356c2-9911-a4e9-43f3-6df04bf18a59@po.ntt-tx.co.jp> Message-ID: <071fc91d-1e35-8838-3046-a237b681d59e@po.ntt-tx.co.jp> On 2019/01/31 21:34, Sean Mooney wrote: > On Thu, 2019-01-31 at 14:27 +0900, Rikimaru Honjo wrote: >> Hello, >> >> I have a question about Zuulv3. >> >> I'm preparing third party CI for openstack/masakari PJ. >> I'd like to run my CI by my Zuulv3 instance on my environment. >> >> In my understand, I should add my pipeline to the project of the following .zuul.yaml for my purpose. >> >> https://github.com/openstack/masakari/blob/master/.zuul.yaml >> >> But, as a result, my Zuulv3 instance also run existed pipelines(check & gate). >> I want to run only my pipeline on my environment. >> (And, existed piplines will be run on openstack-infra environment.) >> >> How can I make my Zuulv3 instance ignore other pipeline? > you have two options that i know of. > > first you can simply not define a pipeline called gate and check in your zuul config repo. > since you are already usign it that is not an option for you. > > second if you have your own ci config project that is hosted > seperatly from upstream gerrit you can define in you pipeline that > the gate and check piplines are only for that other souce. > > e.g. if you have two connections defiend in zuul you can use the pipline > triggers to define that the triggers for the gate an check pipeline only work with your > own gerrit instance and not openstacks > > i am similar seting up a personal thridparty ci at present. > i have chosen to create a seperate pipeline with a different name for running > against upstream changes using the git.openstack.org gerrit source > > i have not pushed the patch to trigger form upstream gerrit yet > https://review.seanmooney.info/plugins/gitiles/ci-config/+/master/zuul.d/pipelines.yaml > but you can see that my gate and check piplines only trigger form the gerrit source > which is my own gerrit instacne at review.seanmooney.info > > i will be adding a dedicated pipeline for upstream as unlike my personal gerrit i never > want my ci to submit/merge patches upstream. > > i hope that helps. > > the gerrit trigger docs can be found here > https://zuul-ci.org/docs/zuul/admin/drivers/gerrit.html#trigger-configuration Thanks a lot! My question has been solved completely with your advice. I would choose the second method. > regards > sean >> >> Best regards, > > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From mikal at stillhq.com Mon Feb 4 02:15:54 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 13:15:54 +1100 Subject: [kolla] Debugging with kolla-ansible Message-ID: Heya, I'm chasing a bug at the moment, and have been able to recreate it with a stock kolla-ansible install. The next step is to add more debugging to the OpenStack code to try and chase down what's happening. Before I go off and do something wildly bonkers, does anyone have a nice way of overriding locally the container image that kolla is using for a given container? The best I've come up with at the moment is something like: - copy the contents of the container out to a directory on the host node - delete the docker container - create a new container which mimics the previous container (docker inspect and some muttering) and have that container mount the copied out stuff as a volume I considered just snapshotting the image being used by the current container, but I want a faster edit cycle than edit, snapshot, start provides. Thoughts? Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Feb 4 02:36:33 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sun, 3 Feb 2019 21:36:33 -0500 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Sun, Feb 3, 2019, 9:17 PM Michael Still Heya, > > I'm chasing a bug at the moment, and have been able to recreate it with a > stock kolla-ansible install. The next step is to add more debugging to the > OpenStack code to try and chase down what's happening. > > Before I go off and do something wildly bonkers, does anyone have a nice > way of overriding locally the container image that kolla is using for a > given container? > > The best I've come up with at the moment is something like: > > - copy the contents of the container out to a directory on the host node > - delete the docker container > - create a new container which mimics the previous container (docker > inspect and some muttering) and have that container mount the copied out > stuff as a volume > > I considered just snapshotting the image being used by the current > container, but I want a faster edit cycle than edit, snapshot, start > provides. > > Thoughts? > Michael > Easiest way would be to deploy from a local registry. You can pull everything from docker hub and just use kolla-build to build and push the ones you're working on. Then just delete the image from wherever it's running, run a deploy with --tags of the project you're messing with, and it'll deploy the new image, or increment the docker tag when you push it and run upgrade. If I'm missing something and oversimplifying, let me know :). -Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Mon Feb 4 02:38:53 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 13:38:53 +1100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: That sounds interesting... So if I only wanted to redeploy say the ironic_neutron_agent container, how would I do that with a tag? Its not immediately obvious to me where the command line for docker comes from in the ansible. Is that just in ansible/roles/neutron/defaults/main.yml ? If so, I could tweak the container definition for the container I want to hack with the get its code from a volume, and then redeploy just that one container, yes? Thanks for your help! Michael On Mon, Feb 4, 2019 at 1:36 PM Erik McCormick wrote: > > > On Sun, Feb 3, 2019, 9:17 PM Michael Still >> Heya, >> >> I'm chasing a bug at the moment, and have been able to recreate it with a >> stock kolla-ansible install. The next step is to add more debugging to the >> OpenStack code to try and chase down what's happening. >> >> Before I go off and do something wildly bonkers, does anyone have a nice >> way of overriding locally the container image that kolla is using for a >> given container? >> >> The best I've come up with at the moment is something like: >> >> - copy the contents of the container out to a directory on the host node >> - delete the docker container >> - create a new container which mimics the previous container (docker >> inspect and some muttering) and have that container mount the copied out >> stuff as a volume >> >> I considered just snapshotting the image being used by the current >> container, but I want a faster edit cycle than edit, snapshot, start >> provides. >> >> Thoughts? >> Michael >> > > Easiest way would be to deploy from a local registry. You can pull > everything from docker hub and just use kolla-build to build and push the > ones you're working on. > > Then just delete the image from wherever it's running, run a deploy with > --tags of the project you're messing with, and it'll deploy the new image, > or increment the docker tag when you push it and run upgrade. > > If I'm missing something and oversimplifying, let me know :). > > -Erik > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Mon Feb 4 05:01:35 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 4 Feb 2019 00:01:35 -0500 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: Sorry for the delay. I didn't want to try and write this on my phone... On Sun, Feb 3, 2019, 9:39 PM Michael Still That sounds interesting... So if I only wanted to redeploy say the > ironic_neutron_agent container, how would I do that with a tag? > To roll it out, update it's config, or upgrade to a new ticker tag, you'd just do kolla-ansible --tags neutron deploy | reconfigure | upgrade I don't think its granular enough to do just the agent and I'm not sure you'd want to anyway. > Its not immediately obvious to me where the command line for docker comes > from in the ansible. Is that just > in ansible/roles/neutron/defaults/main.yml ? If so, I could tweak the > container definition for the container I want to hack with the get its code > from a volume, and then redeploy just that one container, yes? > I suppose that's one way to go about quick hacks. You could add a new volume in that main.yml and then modify things in it. I think that would get messy though. There aren't any volume definitions explicitly for that container, so you'd have to add a whole section in there for it and I don't know what other side effects that might have. The slow but safe way to do it would be to point Kolla at your feature branch and rebuild the image each time you want to test a new patch set. In kolla-build.conf do something like: [neutron-base-plugin-networking-baremetal] type = url location = https://github.com/openstack/networking-baremetal.git reference = tonys-hacks then something like kolla-build --config-file /etc/kolla/kolla-build.conf --base centos --type source --push --registry localhost:5000 --logs-dir /tmp ironic-neutron-agent The really dirty but useful way to test small changes would be to just push them into the container with 'docker cp' and restart the container. Note that this will not work for config changes as those files get clobbered at startup, but for hacking the actual python bits, it'll do. Hope that's what you're looking for. If you drop by #opensatck-kolla during US daylight hours you might get more suggestions from Eduardo or one of the actual project devs. They probably have fancier methods. Cheers, Erik > > Thanks for your help! > > Michael > > > On Mon, Feb 4, 2019 at 1:36 PM Erik McCormick > wrote: > >> >> >> On Sun, Feb 3, 2019, 9:17 PM Michael Still > >>> Heya, >>> >>> I'm chasing a bug at the moment, and have been able to recreate it with >>> a stock kolla-ansible install. The next step is to add more debugging to >>> the OpenStack code to try and chase down what's happening. >>> >>> Before I go off and do something wildly bonkers, does anyone have a nice >>> way of overriding locally the container image that kolla is using for a >>> given container? >>> >>> The best I've come up with at the moment is something like: >>> >>> - copy the contents of the container out to a directory on the host node >>> - delete the docker container >>> - create a new container which mimics the previous container (docker >>> inspect and some muttering) and have that container mount the copied out >>> stuff as a volume >>> >>> I considered just snapshotting the image being used by the current >>> container, but I want a faster edit cycle than edit, snapshot, start >>> provides. >>> >>> Thoughts? >>> Michael >>> >> >> Easiest way would be to deploy from a local registry. You can pull >> everything from docker hub and just use kolla-build to build and push the >> ones you're working on. >> >> Then just delete the image from wherever it's running, run a deploy with >> --tags of the project you're messing with, and it'll deploy the new image, >> or increment the docker tag when you push it and run upgrade. >> >> If I'm missing something and oversimplifying, let me know :). >> >> -Erik >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Mon Feb 4 06:12:10 2019 From: mikal at stillhq.com (Michael Still) Date: Mon, 4 Feb 2019 17:12:10 +1100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Mon, Feb 4, 2019 at 4:01 PM Erik McCormick wrote: [snip detailed helpful stuff] The really dirty but useful way to test small changes would be to just push > them into the container with 'docker cp' and restart the container. Note > that this will not work for config changes as those files get clobbered at > startup, but for hacking the actual python bits, it'll do. > This was news to me to be honest. I had assumed the container filesystem got reset on process restart, but you're right and that's not true. So, editing files in the container works for my current needs. Thanks heaps! Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Mon Feb 4 08:23:23 2019 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 4 Feb 2019 09:23:23 +0100 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: Hi Michael, You could use a custom image and change the image definition in ansible, ie for define a different image for neutron_server you would add a variable in globals.yml like: neutron_server_image_full: "registry/repo/image_name:mytag:" If what you are debugin is openstack code, you could use kolla dev mode, where you can change git code locally and mount the code into the python path https://docs.openstack.org/kolla-ansible/latest/contributor/kolla-for-openstack-development.html Regards El lun., 4 feb. 2019 a las 7:16, Michael Still () escribió: > On Mon, Feb 4, 2019 at 4:01 PM Erik McCormick > wrote: > > [snip detailed helpful stuff] > > The really dirty but useful way to test small changes would be to just >> push them into the container with 'docker cp' and restart the container. >> Note that this will not work for config changes as those files get >> clobbered at startup, but for hacking the actual python bits, it'll do. >> > > This was news to me to be honest. I had assumed the container filesystem > got reset on process restart, but you're right and that's not true. So, > editing files in the container works for my current needs. > > Thanks heaps! > > Michael > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 4 08:31:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Feb 2019 17:31:46 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> Message-ID: <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> ---- On Thu, 31 Jan 2019 19:45:25 +0900 Thierry Carrez wrote ---- > Hi everyone, > > The "Help most needed" list[1] was created by the Technical Committee to > clearly describe areas of the OpenStack open source project which were > in the most need of urgent help. This was done partly to facilitate > communications with corporate sponsors and engineering managers, and be > able to point them to an official statement of need from "the project". > > [1] https://governance.openstack.org/tc/reference/help-most-needed.html > > This list encounters two issues. First it's hard to limit entries: a lot > of projects teams, SIGs and other forms of working groups could use > extra help. But more importantly, this list has had a very limited > impact -- new contributors did not exactly magically show up in the > areas we designated as in most need of help. > > When we raised that topic (again) at a Board+TC meeting, a suggestion > was made that we should turn the list more into a "job description" > style that would make it more palatable to the corporate world. I fear > that would not really solve the underlying issue (which is that at our > stage of the hype curve, no organization really has spare contributors > to throw at random hard problems). > > So I wonder if we should not reframe the list and make it less "this > team needs help" and more "I offer peer-mentoring in this team". A list > of contributor internships offers, rather than a call for corporate help > in the dark. I feel like that would be more of a win-win offer, and more > likely to appeal to students, or OpenStack users trying to contribute back. > > Proper 1:1 mentoring takes a lot of time, and I'm not underestimating > that. Only people that are ready to dedicate mentoring time should show > up on this new "list"... which is why it should really list identified > individuals rather than anonymous teams. It should also probably be > one-off offers -- once taken, the offer should probably go off the list. > > Thoughts on that? Do you think reframing help-needed as > mentoring-offered could help? Do you have alternate suggestions? Reframing to "mentoring-offered " is a nice idea which is something can give the best result if there will be. Being mentor few times or as FC SIG member, I agree that it is very hard to get new contributors, especially for the long term. Many times, they disappear after few weeks. Having a peer mentor can attract few contributors if they technically hesitate to start working on that. Along with that we need this list as a live list and should be reiterated every cycle with the latest items, priority, peer-mentors mapping. For example, if any team adding any item as help-wanted do they provide peer-mentor or we ask the volunteer for peer-mentorship and based on that priority should go. If I recall it correctly from Board+TC meeting, TC is looking for a new home for this list ? Or we continue to maintain this in TC itself which should not be much effort I feel. One of the TC members can volunteer on this and keep it up to date every cycle by organizing a forum sessions discussion etc. Further, we ask other groups like Outreachy, FC SIG, OUI to publishing this list every time they get chance to interact with new contributors. -gmann > > -- > Thierry Carrez (ttx) > > From alfredo.deluca at gmail.com Mon Feb 4 08:36:02 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 09:36:02 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Clemens. So the image I downloaded is this https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 which is the latest I think. But you are right...and I noticed that too.... It doesn't have atomic binary the os-release is *NAME=Fedora* *VERSION="29 (Cloud Edition)"* *ID=fedora* *VERSION_ID=29* *PLATFORM_ID="platform:f29"* *PRETTY_NAME="Fedora 29 (Cloud Edition)"* *ANSI_COLOR="0;34"* *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* *HOME_URL="https://fedoraproject.org/ "* *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ "* *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help "* *BUG_REPORT_URL="https://bugzilla.redhat.com/ "* *REDHAT_BUGZILLA_PRODUCT="Fedora"* *REDHAT_BUGZILLA_PRODUCT_VERSION=29* *REDHAT_SUPPORT_PRODUCT="Fedora"* *REDHAT_SUPPORT_PRODUCT_VERSION=29* *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy "* *VARIANT="Cloud Edition"* *VARIANT_ID=cloud* so not sure why I don't have atomic tho On Sat, Feb 2, 2019 at 7:53 PM Clemens wrote: > Now to the failure of your part-013: Are you sure that you used the glance > image ‚fedora-atomic-latest‘ and not some other fedora image? Your error > message below suggests that your image does not contain ‚atomic‘ as part of > the image … > > + _prefix=docker.io/openstackmagnum/ > + atomic install --storage ostree --system --system-package no --set > REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name > heat-container-agent > docker.io/openstackmagnum/heat-container-agent:queens-stable > ./part-013: line 8: atomic: command not found > + systemctl start heat-container-agent > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > Am 02.02.2019 um 17:36 schrieb Alfredo De Luca : > > Failed to start heat-container-agent.service: Unit > heat-container-agent.service not found. > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Feb 4 09:48:05 2019 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 4 Feb 2019 09:48:05 +0000 Subject: [kolla] Debugging with kolla-ansible In-Reply-To: References: Message-ID: On Mon, 4 Feb 2019 at 08:25, Eduardo Gonzalez wrote: > Hi Michael, > > You could use a custom image and change the image definition in ansible, > ie for define a different image for neutron_server you would add a variable > in globals.yml like: > > > neutron_server_image_full: "registry/repo/image_name:mytag:" > > If what you are debugin is openstack code, you could use kolla dev mode, > where you can change git code locally and mount the code into the python > path > https://docs.openstack.org/kolla-ansible/latest/contributor/kolla-for-openstack-development.html > > Regards > Just a warning: I have recently had issues with dev mode because it does not do a pip install, but mounts the source code into the site-packages // directory, if there are new source files these will not be included in the package's file manifest. Also this won't affect any files outside of site-packages//. I just raised a bug [1] on this. What I often do when developing in a tight-ish loop on a single host is something like this: docker exec -it pip install -e git+https://# docker restart You have to be careful, since if the service doesn't start, the container will fail to start, and docker exec won't work. At that point you need to delete the container and redeploy. [1] https://bugs.launchpad.net/kolla-ansible/+bug/1814515 Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 10:45:14 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 11:45:14 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: I used fedora-magnum-27-4 and it works Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > Hi Clemens. > So the image I downloaded is this > https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 > which is the latest I think. > But you are right...and I noticed that too.... It doesn't have atomic > binary > the os-release is > > *NAME=Fedora* > *VERSION="29 (Cloud Edition)"* > *ID=fedora* > *VERSION_ID=29* > *PLATFORM_ID="platform:f29"* > *PRETTY_NAME="Fedora 29 (Cloud Edition)"* > *ANSI_COLOR="0;34"* > *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* > *HOME_URL="https://fedoraproject.org/ "* > *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ > "* > *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help > "* > *BUG_REPORT_URL="https://bugzilla.redhat.com/ > "* > *REDHAT_BUGZILLA_PRODUCT="Fedora"* > *REDHAT_BUGZILLA_PRODUCT_VERSION=29* > *REDHAT_SUPPORT_PRODUCT="Fedora"* > *REDHAT_SUPPORT_PRODUCT_VERSION=29* > *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy > "* > *VARIANT="Cloud Edition"* > *VARIANT_ID=cloud* > > > so not sure why I don't have atomic tho > > > On Sat, Feb 2, 2019 at 7:53 PM Clemens > wrote: > >> Now to the failure of your part-013: Are you sure that you used the >> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >> error message below suggests that your image does not contain ‚atomic‘ as >> part of the image … >> >> + _prefix=docker.io/openstackmagnum/ >> + atomic install --storage ostree --system --system-package no --set >> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >> heat-container-agent >> docker.io/openstackmagnum/heat-container-agent:queens-stable >> ./part-013: line 8: atomic: command not found >> + systemctl start heat-container-agent >> Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found. >> >> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca > >: >> >> Failed to start heat-container-agent.service: Unit >> heat-container-agent.service not found. >> >> >> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Mon Feb 4 11:39:25 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 12:39:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: thanks ignazio Where can I get it from? On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano wrote: > I used fedora-magnum-27-4 and it works > > Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < > alfredo.deluca at gmail.com> ha scritto: > >> Hi Clemens. >> So the image I downloaded is this >> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >> which is the latest I think. >> But you are right...and I noticed that too.... It doesn't have atomic >> binary >> the os-release is >> >> *NAME=Fedora* >> *VERSION="29 (Cloud Edition)"* >> *ID=fedora* >> *VERSION_ID=29* >> *PLATFORM_ID="platform:f29"* >> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >> *ANSI_COLOR="0;34"* >> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >> *HOME_URL="https://fedoraproject.org/ "* >> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >> "* >> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >> "* >> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >> "* >> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >> *REDHAT_SUPPORT_PRODUCT="Fedora"* >> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >> "* >> *VARIANT="Cloud Edition"* >> *VARIANT_ID=cloud* >> >> >> so not sure why I don't have atomic tho >> >> >> On Sat, Feb 2, 2019 at 7:53 PM Clemens >> wrote: >> >>> Now to the failure of your part-013: Are you sure that you used the >>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>> error message below suggests that your image does not contain ‚atomic‘ as >>> part of the image … >>> >>> + _prefix=docker.io/openstackmagnum/ >>> + atomic install --storage ostree --system --system-package no --set >>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>> heat-container-agent >>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>> ./part-013: line 8: atomic: command not found >>> + systemctl start heat-container-agent >>> Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found. >>> >>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca >> >: >>> >>> Failed to start heat-container-agent.service: Unit >>> heat-container-agent.service not found. >>> >>> >>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 11:55:41 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 12:55:41 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: wget https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180212.2/CloudImages/x86_64/images/Fedora-Atomic-27-20180212.2.x86_64.qcow2 Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 11:57:25 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 12:57:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Then upload it with: openstack image create \ --disk-format=qcow2 \ --container-format=bare \ --file=Fedora-Atomic-27-20180212.2.x86_64.qcow2\ --property os_distro='fedora-atomic' \ fedora-atomic-latest Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Feb 4 12:02:25 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 13:02:25 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: I also suggest to change dns in your external network used by magnum. Using openstack dashboard you can change it to 8.8.8.8 (If I remember fine you wrote that you can ping 8.8.8.8 from kuke baster) Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > thanks ignazio > Where can I get it from? > > > On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano > wrote: > >> I used fedora-magnum-27-4 and it works >> >> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> Hi Clemens. >>> So the image I downloaded is this >>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>> which is the latest I think. >>> But you are right...and I noticed that too.... It doesn't have atomic >>> binary >>> the os-release is >>> >>> *NAME=Fedora* >>> *VERSION="29 (Cloud Edition)"* >>> *ID=fedora* >>> *VERSION_ID=29* >>> *PLATFORM_ID="platform:f29"* >>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>> *ANSI_COLOR="0;34"* >>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>> *HOME_URL="https://fedoraproject.org/ "* >>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>> "* >>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>> "* >>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>> "* >>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>> "* >>> *VARIANT="Cloud Edition"* >>> *VARIANT_ID=cloud* >>> >>> >>> so not sure why I don't have atomic tho >>> >>> >>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>> wrote: >>> >>>> Now to the failure of your part-013: Are you sure that you used the >>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>> error message below suggests that your image does not contain ‚atomic‘ as >>>> part of the image … >>>> >>>> + _prefix=docker.io/openstackmagnum/ >>>> + atomic install --storage ostree --system --system-package no --set >>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>> heat-container-agent >>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>> ./part-013: line 8: atomic: command not found >>>> + systemctl start heat-container-agent >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>> alfredo.deluca at gmail.com>: >>>> >>>> Failed to start heat-container-agent.service: Unit >>>> heat-container-agent.service not found. >>>> >>>> >>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bansalnehal26 at gmail.com Mon Feb 4 05:16:14 2019 From: bansalnehal26 at gmail.com (Nehal Bansal) Date: Mon, 4 Feb 2019 10:46:14 +0530 Subject: Regarding supporting version of Nova-Docker Driver Message-ID: Hi, I have done a manual installation of OpenStack Queens version and wanted to run docker containers on it using Nova-Docker driver. But the git repository says it is no longer a maintained project. Could you tell me if it supports the Queens release. Thank you. Regards, Nehal Bansal -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Mon Feb 4 13:02:07 2019 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 4 Feb 2019 08:02:07 -0500 Subject: Regarding supporting version of Nova-Docker Driver In-Reply-To: References: Message-ID: Nehal, you found the right info. it is not maintained. please look at alternatives like Zun ( https://docs.openstack.org/zun/latest/ ) Thanks, Dims On Mon, Feb 4, 2019 at 7:45 AM Nehal Bansal wrote: > Hi, > > I have done a manual installation of OpenStack Queens version and wanted > to run docker containers on it using Nova-Docker driver. But the git > repository says it is no longer a maintained project. Could you tell me if > it supports the Queens release. > > Thank you. > > Regards, > Nehal Bansal > -- Davanum Srinivas :: https://twitter.com/dims -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Mon Feb 4 13:25:41 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 4 Feb 2019 14:25:41 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Ignazio. Thanks for the link...... so Now at least atomic is present on the system. Also I ve already had 8.8.8.8 on the system. So I can connect on the floating IP to the kube master....than I can ping 8.8.8.8 but for example doesn't resolve the names...so if I ping 8.8.8.8 *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* but if I ping google.com doesn't resolve. I can't either find on fedora dig or nslookup to check resolv.conf has *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* *nameserver 8.8.8.8* It\s all so weird. On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano wrote: > I also suggest to change dns in your external network used by magnum. > Using openstack dashboard you can change it to 8.8.8.8 (If I remember > fine you wrote that you can ping 8.8.8.8 from kuke baster) > > Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < > alfredo.deluca at gmail.com> ha scritto: > >> thanks ignazio >> Where can I get it from? >> >> >> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano >> wrote: >> >>> I used fedora-magnum-27-4 and it works >>> >>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>> alfredo.deluca at gmail.com> ha scritto: >>> >>>> Hi Clemens. >>>> So the image I downloaded is this >>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>> which is the latest I think. >>>> But you are right...and I noticed that too.... It doesn't have atomic >>>> binary >>>> the os-release is >>>> >>>> *NAME=Fedora* >>>> *VERSION="29 (Cloud Edition)"* >>>> *ID=fedora* >>>> *VERSION_ID=29* >>>> *PLATFORM_ID="platform:f29"* >>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>> *ANSI_COLOR="0;34"* >>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>> *HOME_URL="https://fedoraproject.org/ "* >>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>> "* >>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>> "* >>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>> "* >>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>> "* >>>> *VARIANT="Cloud Edition"* >>>> *VARIANT_ID=cloud* >>>> >>>> >>>> so not sure why I don't have atomic tho >>>> >>>> >>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>> wrote: >>>> >>>>> Now to the failure of your part-013: Are you sure that you used the >>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>> part of the image … >>>>> >>>>> + _prefix=docker.io/openstackmagnum/ >>>>> + atomic install --storage ostree --system --system-package no --set >>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>> heat-container-agent >>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>> ./part-013: line 8: atomic: command not found >>>>> + systemctl start heat-container-agent >>>>> Failed to start heat-container-agent.service: Unit >>>>> heat-container-agent.service not found. >>>>> >>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>> alfredo.deluca at gmail.com>: >>>>> >>>>> Failed to start heat-container-agent.service: Unit >>>>> heat-container-agent.service not found. >>>>> >>>>> >>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From shakhat at gmail.com Mon Feb 4 13:42:04 2019 From: shakhat at gmail.com (Ilya Shakhat) Date: Mon, 4 Feb 2019 14:42:04 +0100 Subject: OpenStack code and GPL libraries Message-ID: Hi, I am experimenting with automatic verification of code licenses of OpenStack projects and see that one of Rally dependencies has GPL3 license [1]. I'm not a big expert in licenses, but isn't it a violation of GPL? In particular what concerns me is: [2] - " If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under the GPL or a GPL-compatible license? (#IfLibraryIsGPL) Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole must be licensed under the GPL. " and [3] - " This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with ASF's requirement that all Apache software must be distributed under the Apache License 2.0. We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. " [1] http://paste.openstack.org/show/744483/ [2] https://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL [3] https://www.apache.org/licenses/GPL-compatibility.html Should this issue be fixed? If yes, should we have a gate job to block adding of such dependencies? Thanks, Ilya -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Feb 4 13:51:04 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 4 Feb 2019 05:51:04 -0800 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: On Thu, Jan 31, 2019 at 8:37 AM Michael Turek wrote: > [trim] > The job is able to clean the node during devstack, successfully deploy > to the node during the tempest run, and is successfully validated via > ssh. The node then moves to clean failed with a network error [1], and > the job subsequently fails. Sometime between the validation and > attempting to clean, the neutron port associated with the ironic port is > deleted and a new port comes into existence. Where I'm having trouble is > finding out what this port is. Based on it's MAC address It's a virtual > port, and its MAC is not the same as the ironic port. I think we landed code around then to address the issue of duplicate mac addresses where a port gets orphaned by external processes, so by default I seem to remember the logic now just resets the MAC if we no longer need the port. What are the network settings your operating the job with? It seems like 'flat' is at least the network_interface based on what your describing. > > We could add an IP to the job to fix it, but I'd rather not do that > needlessly. > From juliaashleykreger at gmail.com Mon Feb 4 14:03:30 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 4 Feb 2019 06:03:30 -0800 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190201152652.cnudbniuraiflybj@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <5354829D-31EA-4CB2-A054-239D105C7EC9@cern.ch> <20190130170501.hs2vsmm7iqdhmftc@redhat.com> <20190201152652.cnudbniuraiflybj@redhat.com> Message-ID: On Fri, Feb 1, 2019 at 7:34 AM Lars Kellogg-Stedman wrote: > > On Thu, Jan 31, 2019 at 12:09:07PM +0100, Dmitry Tantsur wrote: > > Some first steps have been done: > > http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ownership-field.html. > > We need someone to drive the futher design and implementation > > though. > > That spec seems to be for a strictly informational field. Reading > through it, I guess it's because doing something like this... > > openstack baremetal node set --property owner=lars > > ...leads to sub-optimal performance when trying to filter a large > number of hosts. I see that it's merged already, so I guess this is > commenting-after-the-fact, but that seems like the wrong path to > follow: I can see properties like "the contract id under which this > system was purchased" being as or more important than "owner" from a > large business perspective, so making it easier to filter by property > on the server side would seem to be a better solution. > > Or implement full multi-tenancy so that "owner" is more than simply > informational, of course :). > My original thought was more enable multi-purpose usage and should we ever get to a point where we want to offer filtered views by saying a baremetal_user can only see machines whose owner is set by their tenant. Sub-optimal for sure, but in order not to break baremetal_admin level usage we have to have a compromise. The alternative that comes to mind is build a new permission matrix model that delineates the two, but at some point someone is still the "owner" and is responsible for the hardware. The details we kind of want to keep out of storage and consideration in ironic are the more CMDB-ish details that would things like contracts and acquisition dates. The other things we should consider is "Give me a physical machine" versus "I have my machines, I need to use them" approaches and such a model. I suspect this is quickly becoming a Forum worthy session. > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > From km.giuseppesannino at gmail.com Mon Feb 4 14:25:22 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Mon, 4 Feb 2019 15:25:22 +0100 Subject: [kolla] Magnum K8s cluster creation time out due to "Failed to contact endpoint at https ... certificate verify failed" error in magnum-conductor Message-ID: Hi all, this is my first post on this mailing list especially for "kolla" related issues. Hope you can help and hope this is the right channel to reuqest support. I have a problem with Magnum during the creation of a K8S cluster. The request gets timed out. Looking at the magnum-conductor logs I can see: Failed to contact the endpoint at https://:5000 for discovery. Fallback to using that endpoint as the base url.: SSLError: SSL exception connecting to https:// :5000: HTTPSConnectionPool(host=' ', port=5000): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) I had a similar issue with Kuryr. the service is trying to contact keystone over the external IP address without certificates. In kuryr, the workaround was to set the "endpoint_type" for neutron to "internal". In magnum.conf that's already the situation. Any suggestion on how to address this issue ? Here you can find some details about the deployment: --------------------------- Host nodes: Baremetal OS: Queens kolla-ansible: 6.1.0 Deployment: multinode (1+1). Kolla installed on the controller host kolla_install_type: source kolla_base_distro: ubuntu External/internal interfaces: separated kolla_enable_tls_external: "yes" Services: enable_cinder: "yes" enable_cinder_backend_lvm: "yes" enable_etcd: "yes" enable_fluentd: "yes" enable_haproxy: "yes" enable_heat: "yes" enable_horizon: "yes" enable_horizon_magnum: "{{ enable_magnum | bool }}" enable_horizon_zun: "{{ enable_zun | bool }}" enable_kuryr: "yes" enable_magnum: "yes" enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}" enable_zun: "yes" glance_backend_file: "yes" nova_compute_virt_type: "qemu" --------------------------- BR and many thanks in advance /Giuseppe -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Feb 4 14:36:29 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 04 Feb 2019 14:36:29 +0000 Subject: OpenStack code and GPL libraries In-Reply-To: References: Message-ID: <6cc948eae81115321508d2d6fa8bcc236012d9d9.camel@redhat.com> On Mon, 2019-02-04 at 14:42 +0100, Ilya Shakhat wrote: > Hi, > > I am experimenting with automatic verification of code licenses of OpenStack projects and see that one of Rally > dependencies has GPL3 license [1]. > I'm not a big expert in licenses, but isn't it a violation of GPL? In particular what concerns me is: > > [2] - " > If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under > the GPL or a GPL-compatible license? (#IfLibraryIsGPL) > > Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. > The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole > must be licensed under the GPL. > " > > and > > [3] - " > This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 > software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with > ASF's requirement that all Apache software must be distributed under the Apache License 2.0. > > We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. > " > > [1] http://paste.openstack.org/show/744483/ > [2] https://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL > [3] https://www.apache.org/licenses/GPL-compatibility.html > > Should this issue be fixed? If yes, should we have a gate job to block adding of such dependencies? it looks like it was added as part of this change https://github.com/openstack/rally/commit/ee2f469d8f347fbf8e0dcd84cf3f52e41eb98090 I have not checked but if it is only used by the optional elasticSearch plugin then im not sure there is a licence conflict in the general case. rally can be used entirly without the elastic serach exporter plugin so at most the GPL contamination whould be confied to that plugin provided the combination fo the plugin and rally is not considerd a sincel combinded work. the clause of the GPL only take effect on distibution as such if you distibute rally without the elastic search plugin or you distibute in such a way as the elastitc search plugin is not loaded i think no conclict would exist. im not a legal expert so this is just my oppion but from reviewing https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins breifly it is arguable that loading the elastic search pluging would make rally and that plugin a single combined application which looking at https://www.gnu.org/licenses/gpl-faq.en.html#NFUseGPLPlugins would imply that the GPL would have to apply to the entire combination fo rally and the elastic search plugin. that would depend on how the plugin was loaded. if the exporter plugin is forked into a seperate python inteperater instance instaead of imported as a lib and invoked via a fuction call it would not form a single combined program but i have not looked at how rally uses the plugin. it would likely be good for legal and the rally core team to review. the simplest soltution if an issue is determinted to exist would be to move the elastic search plugin into its own repos so it si distibuted seperately from rally. failing that the code that depends on morph would have to be removed to resolve the conflict. as i said im not a leagl expert so this is just my personal opinion as such take it with a grain of salt. regard sean > > Thanks, > Ilya From fungi at yuggoth.org Mon Feb 4 15:05:15 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 15:05:15 +0000 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: References: Message-ID: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: > I am experimenting with automatic verification of code licenses of > OpenStack projects and see that one of Rally dependencies has GPL3 > license [...] To start off, it looks like the license for morph is already known to the Rally developers, based on the inline comment for it at https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 (so hopefully this is no real surprise). The source of truth for our licensing policies, as far as projects governed by the OpenStack Technical Committee are concerned (which openstack/rally is), can be found here: https://governance.openstack.org/tc/reference/licensing.html It has a carve out for "tools that are run with or on OpenStack projects only during validation or testing phases of development" which "may be licensed under any OSI-approved license" and since the README.rst for Rally states it's a "tool & framework that allows one to write simple plugins and combine them in complex tests scenarios that allows to perform all kinds of testing" it probably meets those criteria. As for concern that a Python application which imports another Python library at runtime inherits its license and so becomes derivative of that work, that has been the subject of much speculation. In particular, whether a Python import counts as "dynamic linking" in GPL 3.0 section 1 is debatable: https://bytes.com/topic/python/answers/41019-python-gpl https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking I'm most definitely not a lawyer, but from what I've been able to piece together it's the combination of rally+morph which potentially becomes GPLv3-licensed when distributed, not the openstack/rally source code itself. This is really more of a topic for the legal-discuss mailing list, however, so I am cross-posting my reply there for completeness. To readers only of the legal-discuss ML, the original post can be found archived here: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 4 15:22:31 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 15:22:31 +0000 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> Message-ID: <20190204152231.qgiryyjn7omu642z@yuggoth.org> On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: [...] > If I recall it correctly from Board+TC meeting, TC is looking for > a new home for this list ? Or we continue to maintain this in TC > itself which should not be much effort I feel. [...] It seems like you might be referring to the in-person TC meeting we held on the Sunday prior to the Stein PTG in Denver (Alan from the OSF BoD was also present). Doug's recap can be found in the old openstack-dev archive here: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html Quoting Doug, "...it wasn't clear that the TC was the best group to manage a list of 'roles' or other more detailed information. We discussed placing that information into team documentation or hosting it somewhere outside of the governance repository where more people could contribute." (If memory serves, this was in response to earlier OSF BoD suggestions that retooling the Help Wanted list to be a set of business-case-focused job descriptions might garner more uptake from the organizations they represent.) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Feb 4 15:40:22 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 10:40:22 -0500 Subject: [karbor][goals][python3] looking for karbor PTL Message-ID: I am trying to reach the Karbor PTL, Pengju Jiao, to ask some questions about the status of python 3 support. My email sent to the address on file in the governance repository has bounced. Does anyone have a current email address? -- Doug From doug at doughellmann.com Mon Feb 4 15:56:01 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 10:56:01 -0500 Subject: [goal][python3] week R-9 update Message-ID: This is the periodic update for the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). == Current Status == We still have a fairly large number of projects without a python 3 functional test job running at all: adjutant aodh ceilometer cloudkitty cyborg freezer horizon karbor magnum masakari mistral monasca-agent monasca-ui murano murano-agent neutron-vpnaas qinling rally searchlight storlets swift tricircle watcher zaqar networking-l2gw and several with the job listed as non-voting: designate neutron-fwaas sahara senlin tacker I have contacted the PTLs of all of the affected teams directly to ask for updates. == Ongoing and Completed Work == There are still a handful of open patches to update tox, documentation, and python 3.6 unit tests. +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ | adjutant | 1/ 1 | - | + | 0 | 1 | 2 | Doug Hellmann | | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | | heat | 1/ 8 | + | 1/ 7 | 0 | 0 | 21 | Doug Hellmann | | InteropWG | 2/ 3 | + | + | 0 | 0 | 9 | Doug Hellmann | | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | | magnum | 1/ 5 | + | + | 0 | 0 | 10 | | | masakari | 1/ 4 | + | - | 0 | 1 | 5 | Nguyen Hai | | monasca | 1/ 17 | + | + | 0 | 1 | 34 | Doug Hellmann | | neutron | 2/ 17 | + | + | 1 | 1 | 44 | Doug Hellmann | | OpenStack Charms | 8/ 73 | - | - | 7 | 2 | 73 | Doug Hellmann | | Quality Assurance | 2/ 10 | + | + | 0 | 1 | 31 | Doug Hellmann | | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | | sahara | 1/ 6 | + | + | 0 | 0 | 13 | Doug Hellmann | | swift | 2/ 3 | + | + | 1 | 1 | 6 | Nguyen Hai | | tacker | 2/ 4 | + | + | 1 | 0 | 9 | Nguyen Hai | | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | | tripleo | 1/ 54 | + | + | 0 | 1 | 92 | Doug Hellmann | | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | | | 43/ 61 | 56/ 57 | 54/ 55 | 12 | 14 | 1071 | | +-------------------+--------------+---------+----------+---------+------------+-------+---------------+ == Next Steps == We need to be wrapping up work on this goal by approving or abandoning the patches listed above (assuming they aren't needed) and adding the functional test jobs to the projects that don't have them. == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 -- Doug From jiaopengju at qq.com Mon Feb 4 16:06:55 2019 From: jiaopengju at qq.com (=?ISO-8859-1?B?amlhb3BlbmdqdQ==?=) Date: Tue, 5 Feb 2019 00:06:55 +0800 Subject: [karbor][goals][python3] looking for karbor PTL References: Message-ID: Hi Doug, This is the email which I am using for subscribing the email list. And my openstack account emails:jiaopengju at cmss.chinamobile.com and pj.jiao at 139.com are still in use. You can choose any one of them to contact me. I am on vacation now, but I will reply your email ASAP. Thanks, Pengju Jiao ------------------ Original ------------------ From: Doug Hellmann Date: Mon,Feb 4,2019 11:42 PM To: openstack-discuss Subject: Re: [karbor][goals][python3] looking for karbor PTL I am trying to reach the Karbor PTL, Pengju Jiao, to ask some questions about the status of python 3 support. My email sent to the address on file in the governance repository has bounced. Does anyone have a current email address? -- Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Feb 4 16:36:49 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 4 Feb 2019 10:36:49 -0600 Subject: [cinder] Proposed mid-cycle schedule available Message-ID: All, I have put together a proposed schedule for our mid-cycle that starts tomorrow.  You can see the schedule here: https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning I have tried to keep the topics that are of interest to people in Europe/Asia earlier in the day.  If anyone has concerns with the schedule, please add notes in the etherpad. Look forward to meeting with you all tomorrow. Jay From ignaziocassano at gmail.com Mon Feb 4 16:45:49 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 4 Feb 2019 17:45:49 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Alfredo, try to check security group linked to your kubemaster. Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca ha scritto: > Hi Ignazio. Thanks for the link...... so > > Now at least atomic is present on the system. > Also I ve already had 8.8.8.8 on the system. So I can connect on the > floating IP to the kube master....than I can ping 8.8.8.8 but for example > doesn't resolve the names...so if I ping 8.8.8.8 > *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* > *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* > *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* > *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* > > but if I ping google.com doesn't resolve. I can't either find on fedora > dig or nslookup to check > resolv.conf has > *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* > *nameserver 8.8.8.8* > > It\s all so weird. > > > > > On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano > wrote: > >> I also suggest to change dns in your external network used by magnum. >> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >> fine you wrote that you can ping 8.8.8.8 from kuke baster) >> >> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >> alfredo.deluca at gmail.com> ha scritto: >> >>> thanks ignazio >>> Where can I get it from? >>> >>> >>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>> ignaziocassano at gmail.com> wrote: >>> >>>> I used fedora-magnum-27-4 and it works >>>> >>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> Hi Clemens. >>>>> So the image I downloaded is this >>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>> which is the latest I think. >>>>> But you are right...and I noticed that too.... It doesn't have atomic >>>>> binary >>>>> the os-release is >>>>> >>>>> *NAME=Fedora* >>>>> *VERSION="29 (Cloud Edition)"* >>>>> *ID=fedora* >>>>> *VERSION_ID=29* >>>>> *PLATFORM_ID="platform:f29"* >>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>> *ANSI_COLOR="0;34"* >>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>> "* >>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>> "* >>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>> "* >>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>> "* >>>>> *VARIANT="Cloud Edition"* >>>>> *VARIANT_ID=cloud* >>>>> >>>>> >>>>> so not sure why I don't have atomic tho >>>>> >>>>> >>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>> wrote: >>>>> >>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>> part of the image … >>>>>> >>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>> + atomic install --storage ostree --system --system-package no --set >>>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>> heat-container-agent >>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>> ./part-013: line 8: atomic: command not found >>>>>> + systemctl start heat-container-agent >>>>>> Failed to start heat-container-agent.service: Unit >>>>>> heat-container-agent.service not found. >>>>>> >>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com>: >>>>>> >>>>>> Failed to start heat-container-agent.service: Unit >>>>>> heat-container-agent.service not found. >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Mon Feb 4 16:52:11 2019 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 4 Feb 2019 11:52:11 -0500 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: <8bc77794-ddc9-6b08-138a-a741729fcd48@linux.vnet.ibm.com> Hey Julia On 2/4/19 8:51 AM, Julia Kreger wrote: > On Thu, Jan 31, 2019 at 8:37 AM Michael Turek > wrote: > [trim] >> The job is able to clean the node during devstack, successfully deploy >> to the node during the tempest run, and is successfully validated via >> ssh. The node then moves to clean failed with a network error [1], and >> the job subsequently fails. Sometime between the validation and >> attempting to clean, the neutron port associated with the ironic port is >> deleted and a new port comes into existence. Where I'm having trouble is >> finding out what this port is. Based on it's MAC address It's a virtual >> port, and its MAC is not the same as the ironic port. > I think we landed code around then to address the issue of duplicate > mac addresses where a port gets orphaned by external processes, so by > default I seem to remember the logic now just resets the MAC if we no > longer need the port. Interesting! I'll look for the patch. If you have it handy please share. > What are the network settings your operating the job with? It seems > like 'flat' is at least the network_interface based on what your > describing. We are using a single  flat provider network with two available IPs (one for the DHCP server and one for the server itself) Here is a paste of a bunch of the network resources (censored here and there just in case). http://paste.openstack.org/show/744513/ >> We could add an IP to the job to fix it, but I'd rather not do that >> needlessly. >> From doug at doughellmann.com Mon Feb 4 17:25:36 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Feb 2019 12:25:36 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <20190204152231.qgiryyjn7omu642z@yuggoth.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Jeremy Stanley writes: > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > [...] >> If I recall it correctly from Board+TC meeting, TC is looking for >> a new home for this list ? Or we continue to maintain this in TC >> itself which should not be much effort I feel. > [...] > > It seems like you might be referring to the in-person TC meeting we > held on the Sunday prior to the Stein PTG in Denver (Alan from the > OSF BoD was also present). Doug's recap can be found in the old > openstack-dev archive here: > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > > Quoting Doug, "...it wasn't clear that the TC was the best group to > manage a list of 'roles' or other more detailed information. We > discussed placing that information into team documentation or > hosting it somewhere outside of the governance repository where more > people could contribute." (If memory serves, this was in response to > earlier OSF BoD suggestions that retooling the Help Wanted list to > be a set of business-case-focused job descriptions might garner more > uptake from the organizations they represent.) > -- > Jeremy Stanley Right, the feedback was basically that we might have more luck convincing companies to provide resources if we were more specific about how they would be used by describing the work in more detail. When we started thinking about how that change might be implemented, it seemed like managing the information a well-defined job in its own right, and our usual pattern is to establish a group of people interested in doing something and delegating responsibility to them. When we talked about it in the TC meeting in Denver we did not have any TC members volunteer to drive the implementation to the next step by starting to recruit a team. During the Train series goal discussion in Berlin we talked about having a goal of ensuring that each team had documentation for bringing new contributors onto the team. Offering specific mentoring resources seems to fit nicely with that goal, and doing it in each team's repository in a consistent way would let us build a central page on docs.openstack.org to link to all of the team contributor docs, like we link to the user and installation documentation, without requiring us to find a separate group of people to manage the information across the entire community. So, maybe the next step is to convince someone to champion a goal of improving our contributor documentation, and to have them describe what the documentation should include, covering the usual topics like how to actually submit patches as well as suggestions for how to describe areas where help is needed in a project and offers to mentor contributors. Does anyone want to volunteer to serve as the goal champion for that? -- Doug From andr.kurilin at gmail.com Mon Feb 4 17:57:11 2019 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 4 Feb 2019 19:57:11 +0200 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> References: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> Message-ID: Hi stackers! Thanks for raising this topic. I recently removed morph dependency ( https://review.openstack.org/#/c/634741 ) and I hope to release a new version of Rally as soon as possible. пн, 4 февр. 2019 г. в 17:14, Jeremy Stanley : > On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: > > I am experimenting with automatic verification of code licenses of > > OpenStack projects and see that one of Rally dependencies has GPL3 > > license > [...] > > To start off, it looks like the license for morph is already known > to the Rally developers, based on the inline comment for it at > > https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 > (so hopefully this is no real surprise). > > The source of truth for our licensing policies, as far as projects > governed by the OpenStack Technical Committee are concerned (which > openstack/rally is), can be found here: > > https://governance.openstack.org/tc/reference/licensing.html > > It has a carve out for "tools that are run with or on OpenStack > projects only during validation or testing phases of development" > which "may be licensed under any OSI-approved license" and since > the README.rst for Rally states it's a "tool & framework that allows > one to write simple plugins and combine them in complex tests > scenarios that allows to perform all kinds of testing" it probably > meets those criteria. > > As for concern that a Python application which imports another > Python library at runtime inherits its license and so becomes > derivative of that work, that has been the subject of much > speculation. In particular, whether a Python import counts as > "dynamic linking" in GPL 3.0 section 1 is debatable: > > https://bytes.com/topic/python/answers/41019-python-gpl > > https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi > > https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed > > https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking > > I'm most definitely not a lawyer, but from what I've been able to > piece together it's the combination of rally+morph which potentially > becomes GPLv3-licensed when distributed, not the openstack/rally > source code itself. This is really more of a topic for the > legal-discuss mailing list, however, so I am cross-posting my reply > there for completeness. > > To readers only of the legal-discuss ML, the original post can be > found archived here: > > > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html > > -- > Jeremy Stanley > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Mon Feb 4 18:26:57 2019 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 4 Feb 2019 12:26:57 -0600 Subject: [OpenStack Foundation] Open Infrastructure Summit Denver - Community Voting Open In-Reply-To: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> References: <6B02F9A1-28A7-4F43-85E1-66AD570ED37B@openstack.org> Message-ID: <5164AFCF-285F-43F0-8718-A8F9DDCAF48A@openstack.org> Hi everyone, Just under 12 hours left to vote for the sessions you’d like to see at the Denver Open Infrastructure Summit ! REGISTER Register for the Summit before prices increase in late February! VISA APPLICATION PROCESS Make sure to secure your Visa soon. More information about the Visa application process. TRAVEL SUPPORT PROGRAM February 27 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (February 28 at 7:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org > On Jan 31, 2019, at 12:29 PM, Ashlee Ferguson wrote: > > Hi everyone, > > Community voting for the Open Infrastructure Summit Denver sessions is open! > > You can VOTE HERE , but what does that mean? > > Now that the Call for Presentations has closed, all submissions are available for community vote and input. After community voting closes, the volunteer Programming Committee members will receive the presentations to review and determine the final selections for Summit schedule. While community votes are meant to help inform the decision, Programming Committee members are expected to exercise judgment in their area of expertise and help ensure diversity of sessions and speakers. View full details of the session selection process here . > > In order to vote, you need an OSF community membership. If you do not have an account, please create one by going to openstack.org/join . If you need to reset your password, you can do that here . > > Hurry, voting closes Monday, February 4 at 11:59pm Pacific Time (Tuesday, February 5 at 7:59 UTC). > > Continue to visit https://www.openstack.org/summit/denver-2019 for all Summit-related information. > > REGISTER > Register for the Summit before prices increase in late February! > > VISA APPLICATION PROCESS > Make sure to secure your Visa soon. More information about the Visa application process. > > TRAVEL SUPPORT PROGRAM > February 27 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (February 28 at 7:59am UTC). > > If you have any questions, please email summit at openstack.org . > > Cheers, > Ashlee > > > Ashlee Ferguson > OpenStack Foundation > ashlee at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From svasudevan at suse.com Mon Feb 4 19:36:03 2019 From: svasudevan at suse.com (Swaminathan Vasudevan) Date: Mon, 04 Feb 2019 12:36:03 -0700 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. Message-ID: <5C589423020000D7000400BA@prv-mh.provo.novell.com> Item Type: Note Date: Monday, 4 Feb 2019 Hi Neutrinos,Here is the summary of the neutron bugs that came in last week ( starting from Jan 29th - Feb 4th). https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing Thanks Swaminathan Vasudevan. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: journal.ics Type: text/calendar Size: 947 bytes Desc: not available URL: From fungi at yuggoth.org Mon Feb 4 19:57:06 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Feb 2019 19:57:06 +0000 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. In-Reply-To: <5C589423020000D7000400BA@prv-mh.provo.novell.com> References: <5C589423020000D7000400BA@prv-mh.provo.novell.com> Message-ID: <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> On 2019-02-04 12:36:03 -0700 (-0700), Swaminathan Vasudevan wrote: > Hi Neutrinos,Here is the summary of the neutron bugs that came in last week ( starting from Jan 29th - Feb 4th). > > https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing If it's just a collaboratively-edited spreadsheet application you need, don't forget we maintain https://ethercalc.openstack.org/ (hopefully soon also reachable as ethercalc.opendev.org) which runs entirely on free software and is usable from parts of the World where Google's services are not (for example, mainland China). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From e0ne at e0ne.info Mon Feb 4 20:28:30 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 4 Feb 2019 22:28:30 +0200 Subject: [horizon][plugins][vitrage][heat][ironic][manila] Integration tests on gates Message-ID: Hi team, A few weeks ago we enabled horizon-integration-tests job[1]. It's a set of selenium-based test cases to verify that Horizon works as expected from the user's perspective. Like any new job, it's added in a non-voting mode for now. During the PTG, I'd got several conversations with project teams that it would be good to have such tests in each plugin to verify that plugin works correctly with a current Horizon version. We've got about 30 plugins in the Plugin Registry [2]. Honestly, without any kind of testing in most of the plugins, we can't be sure that they work well with a current version of Horizon. That's why we decided to implement some kind of smoke tests for plugins based on Horizon integration tests framework. These tests should verify that a plugin is installed and pages could be opened in a browser. We will run these tests on the experimental queue and/or on some schedule on Horizon gates to verify that plugins are maintained and working properly. My idea is to have such a list of 'tested' plugins, so we can add 'Maintained' label to the Plugin Registry. Once these jobs become voting, we can add a label 'Verified'. I think such a schedule looks reasonable: * Stein-Train release cycles - add non-voting jobs for each maintained plugin and introduce "Maintained" label * Train-U release cycles - makes stable jobs voting and introduce "Verified" label in the Horizon Plugin registry I do understand that some teams don't have enough resources to maintain integration tests, so I'm stepping as a volunteer to introduce such tests and jobs for the project. I already published patches for Vitrage and Heat [3] plugins and will do the same for Ironic and Manila dashboards in a short time. Any help or feedback is welcome:). [1] https://review.openstack.org/#/c/580469/ [2] https://docs.openstack.org/horizon/latest/install/plugin-registry.html [3] https://review.openstack.org/#/q/topic:horizon-integration-tests+(status:open+OR+status:merged) Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Feb 4 21:01:51 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 4 Feb 2019 23:01:51 +0200 Subject: [neutron] CI meeting this week cancelled Message-ID: Hi, I’m traveling this week and I will not be able to run Neutron CI meeting on Tuesday, 5.02. As some other people usually involved in this meeting are also traveling, lets skip it this week. We will have next meeting as usual on Tuesday, 12.02.2019. — Slawek Kaplonski Senior software engineer Red Hat From tpb at dyncloud.net Mon Feb 4 21:38:34 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 4 Feb 2019 16:38:34 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: <20190204213834.reohoqqk6gsxel33@barron.net> On 03/02/19 12:45 +0100, Ignazio Cassano wrote: >Many Thanks. >I will check it [1]. >Regards >Ignazio > And Goutham just gave me a more current doc link: https://netapp-openstack-dev.github.io/openstack-docs/rocky/manila/examples/openstack_command_line/section_manila-cli.html#importing-and-exporting-manila-shares -- Tom >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha >scritto: > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >Thanks Goutham. >> >If there are not mantainers for this driver I will switch on ceph and or >> >netapp. >> >I am already using netapp but I would like to export shares from an >> >openstack installation to another. >> >Since these 2 installations do non share any openstack component and have >> >different openstack database, I would like to know it is possible . >> >Regards >> >Ignazio >> >> Hi Ignazio, >> >> If by "export shares from an openstack installation to another" you >> mean removing them from management by manila in installation A and >> instead managing them by manila in installation B then you can do that >> while leaving them in place on your Net App back end using the manila >> "manage-unmanage" administrative commands. Here's some documentation >> [1] that should be helpful. >> >> If on the other hand by "export shares ... to another" you mean to >> leave the shares under management of manila in installation A but >> consume them from compute instances in installation B it's all about >> the networking. One can use manila to "allow-access" to consumers of >> shares anywhere but the consumers must be able to reach the "export >> locations" for those shares and mount them. >> >> Cheers, >> >> -- Tom Barron >> >> [1] >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> gouthampravi at gmail.com> >> >ha scritto: >> > >> >> Hi Ignazio, >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> wrote: >> >> > >> >> > Hello All, >> >> > I installed manila on my queens openstack based on centos 7. >> >> > I configured two servers with glusterfs replocation and ganesha nfs. >> >> > I configured my controllers octavia,conf but when I try to create a >> share >> >> > the manila scheduler logs reports: >> >> > >> >> > Failed to schedule create_share: No valid host was found. Failed to >> find >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> the >> >> last executed filter was CapabilitiesFilter. >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> 89f76bc5de5545f381da2c10c7df7f15 >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> expectations (backend capabilities vs share type extra-specs) and >> >> there was no host to schedule your share to. So a few things to check >> >> here: >> >> >> >> - What is the share type you're using? Can you list the share type >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> capabilities are appropriate with whatever you've set up as >> >> extra-specs ($ manila pool-list --detail)? >> >> - Is your backend operating correctly? You can list the manila >> >> services ($ manila service-list) and see if the backend is both >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> problem with the driver initialization, please enable debug logging, >> >> and look at the log file for the manila-share service, you might see >> >> why and be able to fix it. >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> and maintenance patches, but there is currently no active maintainer >> >> for this driver. >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> network where shares must be exported for virtual machines, so my >> glusterfs >> >> are connected on the management network where openstack controllers are >> >> conencted and to the network where virtual machine are connected. >> >> > >> >> > My manila.conf section for glusterfs section is the following >> >> > >> >> > [gluster-manila565] >> >> > driver_handles_share_servers = False >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> > glusterfs_ganesha_server_username = root >> >> > glusterfs_nfs_server_type = Ganesha >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> > ganesha_config_dir = /etc/ganesha >> >> > >> >> > >> >> > PS >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> > >> >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> >> machines are connected. >> >> > >> >> > The gluster servers are connected on both. >> >> > >> >> > >> >> > Any help, please ? >> >> > >> >> > Ignazio >> >> >> From chris at openstack.org Mon Feb 4 22:45:07 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 4 Feb 2019 14:45:07 -0800 Subject: [baremetal-sig][ironic] Proposing Formation of Bare Metal SIG In-Reply-To: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> References: <4191B2EA-A6F0-4183-B0EF-C5C013E3A982@openstack.org> Message-ID: <098CC2A3-B207-47D5-A0F1-F227C33C2F01@openstack.org> Based on the number of folks signed up in the planning etherpad[1], we have a good initial showing and I've gone ahead and sent up a review[2] to formalize the creation of the SIG. One thing missing is additional leads to help guide the Bare-metal SIG. If you would like to be added as a co-lead, please respond here or on the review and I can make the necessary update. I'll start looking for UC and TC approval early next week on the patch. In the meantime, I'd like to use this thread to start talking about some of the initial items we can start collaborating on. A few things that I was thinking we could begin on are: * A bare metal white paper, similar to the containers white paper we published last year[3]. * A getting started with Ironic demo, run as a community webinar that would not only be a way to give an easy introduction to Ironic but also get larger feedback on the sort of things the community would like to see the SIG produce. What are some other items that we could get started with, and do we have volunteers to participate in any of the items listed above? [1] https://etherpad.openstack.org/p/bare-metal-sig [2] https://review.openstack.org/#/c/634824/1 [3] https://www.openstack.org/containers -Chris From mikal at stillhq.com Mon Feb 4 22:54:10 2019 From: mikal at stillhq.com (Michael Still) Date: Tue, 5 Feb 2019 09:54:10 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging Message-ID: Hi, I’ve been chasing a bug in ironic’s neutron agent for the last few days and I think its time to ask for some advice. Specifically, I was asked to debug why a set of controllers was using so much RAM, and the answer was that rabbitmq had a queue called ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. This notification queue is used by ironic’s neutron agent to calculate the hash ring. I have been able to duplicate this issue in a stock kolla-ansible install with ironic turned on but no bare metal nodes enrolled in ironic. About 0.6 messages are queued per second. I added some debugging code (hence the thread yesterday about mangling the code kolla deploys), and I can see that the messages in the queue are being read by the ironic neutron agent and acked correctly. However, they are not removed from the queue. You can see your queue size while using kolla with this command: docker exec rabbitmq rabbitmqctl list_queues messages name messages_ready consumers | sort -n | tail -1 My stock install that’s been running for about 12 hours currently has 8,244 messages in that queue. Where I’m a bit stumped is I had assumed that the messages weren’t being acked correctly, which is not the case. Is there something obvious about notification queues like them being persistent that I’ve missed in my general ignorance of the underlying implementation of notifications? Thanks, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 02:52:22 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 03:52:22 +0100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: Message-ID: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > Hi, > > I’ve been chasing a bug in ironic’s neutron agent for the last few > days and I think its time to ask for some advice. > I'm working on the same issue. (In fact there are two issues.) > Specifically, I was asked to debug why a set of controllers was using > so much RAM, and the answer was that rabbitmq had a queue called > ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. > This notification queue is used by ironic’s neutron agent to > calculate the hash ring. I have been able to duplicate this issue in > a stock kolla-ansible install with ironic turned on but no bare metal > nodes enrolled in ironic. About 0.6 messages are queued per second. > > I added some debugging code (hence the thread yesterday about > mangling the code kolla deploys), and I can see that the messages in > the queue are being read by the ironic neutron agent and acked > correctly. However, they are not removed from the queue. > > You can see your queue size while using kolla with this command: > > docker exec rabbitmq rabbitmqctl list_queues messages name > messages_ready consumers | sort -n | tail -1 > > My stock install that’s been running for about 12 hours currently has > 8,244 messages in that queue. > > Where I’m a bit stumped is I had assumed that the messages weren’t > being acked correctly, which is not the case. Is there something > obvious about notification queues like them being persistent that > I’ve missed in my general ignorance of the underlying implementation > of notifications? > I opened a oslo.messaging bug[1] yesterday. When using notifications and all consumers use one or more pools. The ironic-neutron-agent does use pools for all listeners in it's hash-ring member manager. And the result is that notifications are published to the 'ironic-neutron- agent-heartbeat.info' queue and they are never consumed. The second issue, each instance of the agent uses it's own pool to ensure all agents are notified about the existance of peer-agents. The pools use a uuid that is generated at startup (and re-generated on restart, stop/start etc). In the case where `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config these uuid queues are not automatically removed. So after a restart of the ironic-neutron-agent the queue with the old UUID is left in the message broker without no consumers, growing ... I intend to push patches to fix both issues. As a workaround (or the permanent solution) will create another listener consuming the notifications without a pool. This should fix the first issue. Second change will set amqp_auto_delete for these specific queues to 'true' no matter. What I'm currently stuck on here is that I need to change the control_exchange for the transport. According to oslo.messaging documentation it should be possible to override the control_exchange in the transport_url[3]. The idea is to set amqp_auto_delete and a ironic-neutron-agent specific exchange on the url when setting up the transport for notifications, but so far I belive the doc string on the control_exchange option is wrong. NOTE: The second issue can be worked around by stopping and starting rabbitmq as a dependency of the ironic-neutron-agent service. This ensure only queues for active agent uuid's are present, and those queues will be consumed. -- Harald Jensås [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 [2] https://storyboard.openstack.org/#!/story/2004933 [3] https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 From mikal at stillhq.com Tue Feb 5 02:56:38 2019 From: mikal at stillhq.com (Michael Still) Date: Tue, 5 Feb 2019 13:56:38 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: Cool thanks for the summary. You seem to have this under control so I might bravely run away. I definitely think these are issues that deserve a backport when the time comes. Michael On Tue, Feb 5, 2019 at 1:52 PM Harald Jensås wrote: > On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > > Hi, > > > > I’ve been chasing a bug in ironic’s neutron agent for the last few > > days and I think its time to ask for some advice. > > > > I'm working on the same issue. (In fact there are two issues.) > > > Specifically, I was asked to debug why a set of controllers was using > > so much RAM, and the answer was that rabbitmq had a queue called > > ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. > > This notification queue is used by ironic’s neutron agent to > > calculate the hash ring. I have been able to duplicate this issue in > > a stock kolla-ansible install with ironic turned on but no bare metal > > nodes enrolled in ironic. About 0.6 messages are queued per second. > > > > I added some debugging code (hence the thread yesterday about > > mangling the code kolla deploys), and I can see that the messages in > > the queue are being read by the ironic neutron agent and acked > > correctly. However, they are not removed from the queue. > > > > You can see your queue size while using kolla with this command: > > > > docker exec rabbitmq rabbitmqctl list_queues messages name > > messages_ready consumers | sort -n | tail -1 > > > > My stock install that’s been running for about 12 hours currently has > > 8,244 messages in that queue. > > > > Where I’m a bit stumped is I had assumed that the messages weren’t > > being acked correctly, which is not the case. Is there something > > obvious about notification queues like them being persistent that > > I’ve missed in my general ignorance of the underlying implementation > > of notifications? > > > > I opened a oslo.messaging bug[1] yesterday. When using notifications > and all consumers use one or more pools. The ironic-neutron-agent does > use pools for all listeners in it's hash-ring member manager. And the > result is that notifications are published to the 'ironic-neutron- > agent-heartbeat.info' queue and they are never consumed. > > The second issue, each instance of the agent uses it's own pool to > ensure all agents are notified about the existance of peer-agents. The > pools use a uuid that is generated at startup (and re-generated on > restart, stop/start etc). In the case where > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config > these uuid queues are not automatically removed. So after a restart of > the ironic-neutron-agent the queue with the old UUID is left in the > message broker without no consumers, growing ... > > > I intend to push patches to fix both issues. As a workaround (or the > permanent solution) will create another listener consuming the > notifications without a pool. This should fix the first issue. > > Second change will set amqp_auto_delete for these specific queues to > 'true' no matter. What I'm currently stuck on here is that I need to > change the control_exchange for the transport. According to > oslo.messaging documentation it should be possible to override the > control_exchange in the transport_url[3]. The idea is to set > amqp_auto_delete and a ironic-neutron-agent specific exchange on the > url when setting up the transport for notifications, but so far I > belive the doc string on the control_exchange option is wrong. > > > NOTE: The second issue can be worked around by stopping and starting > rabbitmq as a dependency of the ironic-neutron-agent service. This > ensure only queues for active agent uuid's are present, and those > queues will be consumed. > > > -- > Harald Jensås > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > [2] https://storyboard.openstack.org/#!/story/2004933 > [3] > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 04:43:55 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 05:43:55 +0100 Subject: [ironic] [thirdparty-ci] BaremetalBasicOps test In-Reply-To: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> References: <1bf8f3b4-ea39-6c17-3609-9289ceeeb7ed@linux.vnet.ibm.com> Message-ID: On Thu, 2019-01-31 at 11:30 -0500, Michael Turek wrote: > Hello all, > > Our ironic job has been broken and it seems to be due to a lack of > IPs. > We allocate two IPs to our job, one for the dhcp server, and one for > the > target node. This had been working for as long as the job has > existed > but recently (since about early December 2018), we've been broken. > > The job is able to clean the node during devstack, successfully > deploy > to the node during the tempest run, and is successfully validated > via > ssh. The node then moves to clean failed with a network error [1], > and > the job subsequently fails. Sometime between the validation and > attempting to clean, the neutron port associated with the ironic port > is > deleted and a new port comes into existence. Where I'm having trouble > is > finding out what this port is. Based on it's MAC address It's a > virtual > port, and its MAC is not the same as the ironic port. > > We could add an IP to the job to fix it, but I'd rather not do that > needlessly. > > Any insight or advice would be appreciated here! > While working on the neutron events I noticed a pattern I thought was a bit strange. (Note, this was with neutron networking.) Create nova baremetal instance: 1. The tenant VIF is created. 2. The provision port is created. 3. Provision port plugged (bound) 4. Provision port un-plugged (deleted) 5. Tenant port plugged (bound) On nova delete of barametal instance: 1. Tenant VIF is un-plugged (unbound) 2. Cleaning port created 3. Cleaning port plugged (bound) 4. Cleaning port un-plugged (deleted) 5. Tenant port deleted I think step 5, deleting the tenant port could happen after step 1. But it looks like it is'nt deleted before after cleaning is done. If this is the case with flat networks as well it could explain why you get the error on cleaning. The "tenant" port still exist, and there are no free IP's in the allocation pool to create a new port for cleaning. -- Harald From chkumar246 at gmail.com Tue Feb 5 05:26:15 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 5 Feb 2019 10:56:15 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update IX - Feb 05, 2019 Message-ID: Hello, Here is the 9 th update (Jan 29 to Feb 05, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: It was a great week, * we unblocked the os_tempest centos gate failure thanks to slaweq (neutron) and jrosser (OSA) to fixing the tempest container vlan issue. * TripleO is now using os_tempest for standalone job and os_tempest is also getted with the same job: -> http://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-standalone-os-tempest * Few other improvements: * generate stackviz irrespective of tempest tests failure * Port security is now enabled in tempest.conf * Cirros Image got updated from 3.5 to 3.6 * Use tempest run command with --test-list option Things got merged: os_tempest: * Update all plugin urls to use https rather than git - https://review.openstack.org/625670 * Add an ip address to eth12 in OSA test containers - https://review.openstack.org/633732 * Adds tempest run command with --test-list option - https://review.openstack.org/631351 * Enable port security - https://review.openstack.org/617719 * Use tempest_cloud_name in tempestconf - https://review.openstack.org/631708 * Always generate stackviz irrespective of tests pass or fail - https://review.openstack.org/631967 * Update cirros from 3.5 to 3.6 - https://review.openstack.org/633208 * Disable nova-lxd tempest plugin - https://review.openstack.org/633711 * Only init a workspace if doesn't exists - https://review.openstack.org/633549 * Add tripleo-ci-centos-7-standalone-os-tempest job - https://review.openstack.org/633931 Tripleo: * Enable standalone-full on validate-tempest role - https://review.openstack.org/634644 Things IN-Progress: os_tempest: * Ping router once it is created - https://review.openstack.org/633883 * Improve overview subpage - https://review.openstack.org/633934 * Added tempest.conf for heat_plugin - https://review.openstack.org/632021 * Add telemetry distro plugin install for aodh - https://review.openstack.org/632125 * Use the correct heat tests - https://review.openstack.org/630695 Tripleo: * Reuse the validate-tempest skip list in os_tempest - https://review.openstack.org/634380 Goal of this week: * Finish ongoing patches and reusing of skip list in TripleO from validate-tempest which will allow to move standalone scenario jobs to os_tempest Here is the 8th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002151.html Thanks, Chandan Kumar From cjeanner at redhat.com Tue Feb 5 10:11:22 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Feb 2019 11:11:22 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> Message-ID: <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> Hello there! small thoughts: - we might already push the stdout logging, in parallel of the current existing one - that would already point some weakness and issues, without making the whole thing crash, since there aren't that many logs in stdout for now - that would already allow to check what's the best way to do it, and what's the best format for re-usability (thinking: sending logs to some (k)elk and the like) This would also allow devs to actually test that for their services. And thus going forward on this topic. Any thoughts? Cheers, C. On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: > Hello! > > > In Queens, the a spec to provide the option to make containers log to > standard output was proposed [1] [2]. Some work was done on that side, > but due to the lack of traction, it wasn't completed. With the Train > release coming, I think it would be a good idea to revive this effort, > but make logging to stdout the default in that release. > > This would allow several benefits: > > * All logging from the containers would en up in journald; this would > make it easier for us to forward the logs, instead of having to keep > track of the different directories in /var/log/containers > > * The journald driver would add metadata to the logs about the container > (we would automatically get what container ID issued the logs). > > * This wouldo also simplify the stacks (removing the Logging nested > stack which is present in several templates). > > * Finally... if at some point we move towards kubernetes (or something > in between), managing our containers, it would work with their logging > tooling as well. > > > Any thoughts? > > > [1] > https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html > > [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From andr.kurilin at gmail.com Tue Feb 5 10:42:08 2019 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Tue, 5 Feb 2019 12:42:08 +0200 Subject: [tc] OpenStack code and GPL libraries In-Reply-To: References: <20190204150515.7zxgq2pj7pgnjaxk@yuggoth.org> Message-ID: a quick update: the latest release of Rally ( https://pypi.org/project/rally/1.4.0/ ) doesn't include morph dependency пн, 4 февр. 2019 г. в 19:57, Andrey Kurilin : > Hi stackers! > > Thanks for raising this topic. > I recently removed morph dependency ( > https://review.openstack.org/#/c/634741 ) and I hope to release a new > version of Rally as soon as possible. > > пн, 4 февр. 2019 г. в 17:14, Jeremy Stanley : > >> On 2019-02-04 14:42:04 +0100 (+0100), Ilya Shakhat wrote: >> > I am experimenting with automatic verification of code licenses of >> > OpenStack projects and see that one of Rally dependencies has GPL3 >> > license >> [...] >> >> To start off, it looks like the license for morph is already known >> to the Rally developers, based on the inline comment for it at >> >> https://git.openstack.org/cgit/openstack/rally/tree/requirements.txt?id=3625758#n10 >> (so hopefully this is no real surprise). >> >> The source of truth for our licensing policies, as far as projects >> governed by the OpenStack Technical Committee are concerned (which >> openstack/rally is), can be found here: >> >> https://governance.openstack.org/tc/reference/licensing.html >> >> It has a carve out for "tools that are run with or on OpenStack >> projects only during validation or testing phases of development" >> which "may be licensed under any OSI-approved license" and since >> the README.rst for Rally states it's a "tool & framework that allows >> one to write simple plugins and combine them in complex tests >> scenarios that allows to perform all kinds of testing" it probably >> meets those criteria. >> >> As for concern that a Python application which imports another >> Python library at runtime inherits its license and so becomes >> derivative of that work, that has been the subject of much >> speculation. In particular, whether a Python import counts as >> "dynamic linking" in GPL 3.0 section 1 is debatable: >> >> https://bytes.com/topic/python/answers/41019-python-gpl >> >> https://opensource.stackexchange.com/questions/1487/how-does-the-gpls-linking-restriction-apply-when-using-a-proprietary-library-wi >> >> https://softwareengineering.stackexchange.com/questions/87446/using-a-gplv3-python-module-will-my-entire-project-have-to-be-gplv3-licensed >> >> https://stackoverflow.com/questions/40492518/is-an-import-in-python-considered-to-be-dynamic-linking >> >> I'm most definitely not a lawyer, but from what I've been able to >> piece together it's the combination of rally+morph which potentially >> becomes GPLv3-licensed when distributed, not the openstack/rally >> source code itself. This is really more of a topic for the >> legal-discuss mailing list, however, so I am cross-posting my reply >> there for completeness. >> >> To readers only of the legal-discuss ML, the original post can be >> found archived here: >> >> >> http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002356.html >> >> -- >> Jeremy Stanley >> > > > -- > Best regards, > Andrey Kurilin. > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Tue Feb 5 12:11:22 2019 From: aspiers at suse.com (Adam Spiers) Date: Tue, 5 Feb 2019 12:11:22 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190201145553.GA5625@sm-workstation> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> Message-ID: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> Sean McGinnis wrote: >On Fri, Feb 01, 2019 at 12:49:19PM +0100, Thierry Carrez wrote: >>Lance Bragstad wrote: >>>[..] >>>Outside of having a formal name, do we expect the "pop-up" teams to >>>include processes that make what we went through easier? Ultimately, we >>>still had to self-organize and do a bunch of socializing to make progress. >> >>I think being listed as a pop-up team would definitely facilitate >>getting mentioned in TC reports, community newsletters or other >>high-vsibility community communications. It would help getting space to >>meet at PTGs, too. > >I guess this is the main value I see from this proposal. If it helps with >visibility and communications around the effort then it does add some value to >give them an official name. I agree - speaking from SIG experience, visibility and communications is one of the biggest challenges with small initiatives. >I don't think it changes much else. Those working in the group will still need >to socialize the changes they would like to make, get buy-in from the project >teams affected that the design approach is good, and find enough folks >interested in the changes to drive it forward and propose the patches and do >the other work needed to get things to happen. > >We can try looking at processes to help support that. But ultimately, as with >most open source projects, I think it comes down to having enough people >interested enough to get the work done. Sure. I particularly agree with your point about processes; I think the TC (or whoever else volunteers) could definitely help lower the barrier to starting up a pop-up team by creating a cookie-cutter kind of approach which would quickly set up any required infrastructure. For example it could be a simple form or CLI-based tool posing questions like the following, where the answers could facilitate the bootstrapping process: - What is the name of your pop-up team? - Please enter a brief description of the purpose of your pop-up team. - If you will use an IRC channel, please state it here. - Do you need regular IRC meetings? - Do you need a new git repository? [If so, ...] - Do you need a new StoryBoard project? [If so, ...] - Do you need a [badge] for use in Subject: headers on openstack-discuss? etc. The outcome of the form could be anything from pointers to specific bits of documentation on how to set up the various bits of infrastructure, all the way through to automation of as much of the setup as is possible. The slicker the process, the more agile the community could become in this respect. From lauren at openstack.org Tue Feb 5 13:23:27 2019 From: lauren at openstack.org (Lauren Sell) Date: Tue, 5 Feb 2019 07:23:27 -0600 Subject: Why COA exam is being retired? In-Reply-To: References: <25c27f7e-80ec-2eb5-6b88-5627bc9f1f01@admin.grnet.gr> <16640d78-1124-a21d-8658-b7d9b2d50509@gmail.com> <5077d9dc-c4af-8736-0db3-2e05cbc1e992@gmail.com> <20190125152713.dxbxgkzoevzw35f2@csail.mit.edu> <1688640cbe0.27a5.eb5fa01e01bf15c6e0d805bdb1ad935e@jbryce.com> Message-ID: <268F8E4B-0DBA-464A-B44C-A4023634EF94@openstack.org> Hi everyone, I had a few direct responses to my email, so I’m scheduling a community call for anyone who wants to discuss the COA and options going forward. Friday, February 15 @ 10:00 am CT / 15:00 UTC Zoom meeting: https://zoom.us/j/361542002 Find your local number: https://zoom.us/u/akLt1CD2H For those who cannot attend, we will take notes in an etherpad and share back with the list. Best, Lauren > On Jan 25, 2019, at 12:34 PM, Lauren Sell wrote: > > Thanks very much for the feedback. When we launched the COA, the commercial market for OpenStack was much more crowded (read: fragmented), and the availability of individuals with OpenStack experience was more scarce. That indicated a need for a vendor neutral certification to test baseline OpenStack proficiency, and to help provide a target for training curriculum being developed by companies in the ecosystem. > > Three years on, the commercial ecosystem has become easier to navigate, and there are a few thousand professionals who have taken the COA and had on-the-job experience. As those conditions have changed, we've been trying to evaluate the best ways to use the Foundation's resources and time to support the current needs for education and certification. The COA in its current form is pretty resource intensive, because it’s a hands-on exam that runs in a virtual OpenStack environment. To maintain the exam (including keeping it current to OpenStack releases) would require a pretty significant investment in terms of time and money this year. From the data and demand we’re seeing, the COA did not seem to be a top priority compared to our investments in programs that push knowledge and training into the ecosystem like Upstream Institute, supporting OpenStack training partners, mentoring, and sponsoring internship programs like Outreachy and Google Summer of Code. > > That said, we’ve honestly been surprised by the response from training partners and the community as plans have been trickling out these past few weeks, and are open to discussing it. If there are people and companies who are willing to invest time and resources into a neutral certification exam, we could investigate alternative paths. It's very helpful to hear which education activities you find most valuable, and if you'd like to have a deeper discussion or volunteer to help, let me know and we can schedule a community call next week. > > Regardless of the future of the COA exam, we will of course continue to maintain the training marketplace at openstack.org to promote commercial training partners and certifications. There are also some great books and resources developed by community members listed alongside the community training. > > >> From: Jay Bryant jungleboyj at gmail.com >> Date: January 25, 2019 07:42:55 >> Subject: Re: Why COA exam is being retired? >> To: openstack-discuss at lists.openstack.org >> >>> On 1/25/2019 9:27 AM, Jonathan Proulx wrote: >>>> On Fri, Jan 25, 2019 at 10:09:04AM -0500, Jay Pipes wrote: >>>> :On 01/25/2019 09:09 AM, Erik McCormick wrote: >>>> :> On Fri, Jan 25, 2019, 8:58 AM Jay Bryant >>> >>>> :> That's sad. I really appreciated having a non-vendory, ubiased, >>>> :> community-driven option. >>>> : >>>> :+10 >>>> : >>>> :> If a vendor folds or moves on from Openstack, your certification >>>> :> becomes worthless. Presumably, so long as there is Openstack, there >>>> :> will be the foundation at its core. I hope they might reconsider. >>>> : >>>> :+100 >>>> >>>> So to clarify is the COA certifiaction going away or is the Foundation >>>> just no longer administerign the exam? >>>> >>>> It would be a shame to loose a standard unbiased certification, but if >>>> this is a transition away from directly providing the training and >>>> only providing the exam specification that may be reasonable. >>>> >>>> -Jon >>> >>> When Allison e-mailed me last week they said they were having meetings >>> to figure out how to go forward with the COA. The foundations partners >>> were going to be offering the exam through September and they were >>> working on communicating the status of things to the community. >>> >>> So, probably best to not jump to conclusions and wait for the official >>> word from the community. >>> >>> - Jay >> >> >> > From mihalis68 at gmail.com Tue Feb 5 13:47:34 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 5 Feb 2019 08:47:34 -0500 Subject: [ops] ops meetups team meeting minutes 2019-1-29 Message-ID: minutes from last week's meeting: Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.html 10:32 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.txt 10:32 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-01-29-15.08.log.html The next meeting is due in about 1h 15 minutes on #openstack-operators We are trying to finalise the evenbrite for the upcoming ops meetup in berlin March 6th,7th and we're collecting session topics here: https://etherpad.openstack.org/p/BER-ops-meetup Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Feb 5 14:35:20 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 5 Feb 2019 08:35:20 -0600 Subject: [nova][qa][cinder] CI job changes Message-ID: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> I'd like to propose some changes primarily to the CI jobs that run on nova changes, but also impact cinder and tempest. 1. Drop the nova-multiattach job and move test coverage to other jobs This is actually an old thread [1] and I had started the work but got hung up on a bug that was teased out of one of the tests when running in the multi-node tempest-slow job [2]. For now I've added a conditional skip on that test if running in a multi-node job. The open changes are here [3]. 2. Only run compute.api and scenario tests in nova-next job and run under python3 only The nova-next job is a place to test new or advanced nova features like placement and cells v2 when those were still optional in Newton. It currently runs with a few changes from the normal tempest-full job: * configures service user tokens * configures nova console proxy to use TLS * disables the resource provider association refresh interval * it runs the post_test_hook which runs some commands like archive_delete_rows, purge, and looks for leaked resource allocations [4] Like tempest-full, it runs the non-slow tempest API tests concurrently and then the scenario tests serially. I'm proposing that we: a) change that job to only run tempest compute API tests and scenario tests to cut down on the number of tests to run; since the job is really only about testing nova features, we don't need to spend time running glance/keystone/cinder/neutron tests which don't touch nova. b) run it with python3 [5] which is the direction all jobs are moving anyway 3. Drop the integrated-gate (py2) template jobs (from nova) Nova currently runs with both the integrated-gate and integrated-gate-py3 templates, which adds a set of tempest-full and grenade jobs each to the check and gate pipelines. I don't think we need to be gating on both py2 and py3 at this point when it comes to tempest/grenade changes. Tempest changes are still gating on both so we have coverage there against breaking changes, but I think anything that's py2 specific would be caught in unit and functional tests (which we're running on both py27 and py3*). Who's with me? [1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135299.html [2] https://bugs.launchpad.net/tempest/+bug/1807723 [3] https://review.openstack.org/#/q/topic:drop-multiattach-job+(status:open+OR+status:merged) [4] https://github.com/openstack/nova/blob/5283b464b/gate/post_test_hook.sh [5] https://review.openstack.org/#/c/634739/ -- Thanks, Matt From mnaser at vexxhost.com Tue Feb 5 16:22:09 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 5 Feb 2019 11:22:09 -0500 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> Message-ID: Hi everyone, We've discussed this over the ML today and we've decided for it to be next Wednesday (13th of February). Due to the distributed nature of our teams, we'll be aiming to go throughout the day and we'll all be hanging out on #openstack-ansible with a few more high bandwidth way of discussion if that is needed Thanks! Mohammed On Thu, Jan 31, 2019 at 2:35 PM Mohammed Naser wrote: > > On Tue, Jan 29, 2019 at 2:26 PM Frank Kloeker wrote: > > > > Am 2019-01-29 17:09, schrieb Mohammed Naser: > > > Hi team, > > > > > > As you may have noticed, bug triage during our meetings has been > > > something that has kinda killed attendance (really, no one seems to > > > enjoy it, believe it or not!) > > > > > > I wanted to propose for us to take a day to go through as much bugs as > > > possible, triaging and fixing as much as we can. It'd be a fun day > > > and we can also hop on a more higher bandwidth way to talk about this > > > stuff while we grind through it all. > > > > > > Is this something that people are interested in, if so, is there any > > > times/days that work better in the week to organize? > > > > Interesting. Something in EU timezone would be nice. Or what about: Bug > > around the clock? > > So 24 hours of bug triage :) > > I'd be up for that too, we have a pretty distributed team so that > would be awesome, > I'm still wondering if there are enough resources or folks available > to be doing this, > as we haven't had a response yet on a timeline that might work or > availabilities yet. > > > kind regards > > > > Frank > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From kgiusti at gmail.com Tue Feb 5 16:43:09 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 5 Feb 2019 11:43:09 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: On 2/4/19, Harald Jensås wrote: > On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >> Hi, >> >> I’ve been chasing a bug in ironic’s neutron agent for the last few >> days and I think its time to ask for some advice. >> > > I'm working on the same issue. (In fact there are two issues.) > >> Specifically, I was asked to debug why a set of controllers was using >> so much RAM, and the answer was that rabbitmq had a queue called >> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >> This notification queue is used by ironic’s neutron agent to >> calculate the hash ring. I have been able to duplicate this issue in >> a stock kolla-ansible install with ironic turned on but no bare metal >> nodes enrolled in ironic. About 0.6 messages are queued per second. >> >> I added some debugging code (hence the thread yesterday about >> mangling the code kolla deploys), and I can see that the messages in >> the queue are being read by the ironic neutron agent and acked >> correctly. However, they are not removed from the queue. >> >> You can see your queue size while using kolla with this command: >> >> docker exec rabbitmq rabbitmqctl list_queues messages name >> messages_ready consumers | sort -n | tail -1 >> >> My stock install that’s been running for about 12 hours currently has >> 8,244 messages in that queue. >> >> Where I’m a bit stumped is I had assumed that the messages weren’t >> being acked correctly, which is not the case. Is there something >> obvious about notification queues like them being persistent that >> I’ve missed in my general ignorance of the underlying implementation >> of notifications? >> > > I opened a oslo.messaging bug[1] yesterday. When using notifications > and all consumers use one or more pools. The ironic-neutron-agent does > use pools for all listeners in it's hash-ring member manager. And the > result is that notifications are published to the 'ironic-neutron- > agent-heartbeat.info' queue and they are never consumed. > This is an issue with the design of the notification pool feature. The Notification service is designed so notification events can be sent even though there may currently be no consumers. It supports the ability for events to be queued until a consumer(s) is ready to process them. So when a notifier issues an event and there are no consumers subscribed, a queue must be provisioned to hold that event until consumers appear. For notification pools the pool identifier is supplied by the notification listener when it subscribes. The value of any pool id is not known beforehand by the notifier, which is important because pool ids can be dynamically created by the listeners. And in many cases pool ids are not even used. So notifications are always published to a non-pooled queue. If there are pooled subscriptions we rely on the broker to do the fanout. This means that the application should always have at least one non-pooled listener for the topic, since any events that may be published _before_ the listeners are established will be stored on a non-pooled queue. The documentation doesn't make that clear AFAIKT - that needs to be fixed. > The second issue, each instance of the agent uses it's own pool to > ensure all agents are notified about the existance of peer-agents. The > pools use a uuid that is generated at startup (and re-generated on > restart, stop/start etc). In the case where > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron config > these uuid queues are not automatically removed. So after a restart of > the ironic-neutron-agent the queue with the old UUID is left in the > message broker without no consumers, growing ... > > > I intend to push patches to fix both issues. As a workaround (or the > permanent solution) will create another listener consuming the > notifications without a pool. This should fix the first issue. > > Second change will set amqp_auto_delete for these specific queues to > 'true' no matter. What I'm currently stuck on here is that I need to > change the control_exchange for the transport. According to > oslo.messaging documentation it should be possible to override the > control_exchange in the transport_url[3]. The idea is to set > amqp_auto_delete and a ironic-neutron-agent specific exchange on the > url when setting up the transport for notifications, but so far I > belive the doc string on the control_exchange option is wrong. > Yes the doc string is wrong - you can override the default control_exchange via the Target's exchange field: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 At least that's the intent... ... however the Notifier API does not take a Target, it takes a list of topic _strings_: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 Which seems wrong, especially since the notification Listener subscribes to a list of Targets: https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 I've opened a bug for this and will provide a patch for review shortly: https://bugs.launchpad.net/oslo.messaging/+bug/1814797 > > NOTE: The second issue can be worked around by stopping and starting > rabbitmq as a dependency of the ironic-neutron-agent service. This > ensure only queues for active agent uuid's are present, and those > queues will be consumed. > > > -- > Harald Jensås > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > [2] https://storyboard.openstack.org/#!/story/2004933 > [3] > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > > -- Ken Giusti (kgiusti at gmail.com) From mriedemos at gmail.com Tue Feb 5 17:00:41 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 5 Feb 2019 11:00:41 -0600 Subject: [publiccloud] New Contributor Joining In-Reply-To: References: Message-ID: On 1/27/2019 4:34 PM, Sindisiwe Chuma wrote: > Hi All, > > I am Sindi, a new member. I am interested in participating in the Pubic > Cloud Operators Working Group. Are there current projects or initiatives > running and documentation available to familiarize myself with the work > done and currently being done? > > Could you please refer me to resources containing information. Welcome Sindi. Here is some information that can maybe get you started: * The wiki is here: https://wiki.openstack.org/wiki/PublicCloudWorkingGroup but I'm not sure how up to date it is. * The IRC channel is #openstack-publiccloud. * Meeting information can be found here: http://eavesdrop.openstack.org/#Public_Cloud_Working_Group * Public cloud requirements / RFEs are tracked in launchpad: https://bugs.launchpad.net/openstack-publiccloud-wg The IRC channel may not be very active given the different time zones that people are operating in, so the best time to try and discuss anything in IRC is during the meeting, otherwise feel free to post to the #openstack-discuss mailing list and tag your subject with "[ops]" so it is filtered properly. -- Thanks, Matt From eumel at arcor.de Tue Feb 5 18:04:56 2019 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 05 Feb 2019 19:04:56 +0100 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> Message-ID: <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> Hi Mohammed, will there be an extra invitation or an etherpad for logistic? many thanks Frank Am 2019-02-05 17:22, schrieb Mohammed Naser: > Hi everyone, > > We've discussed this over the ML today and we've decided for it to be > next Wednesday (13th of February). Due to the distributed nature of > our teams, we'll be aiming to go throughout the day and we'll all be > hanging out on #openstack-ansible with a few more high bandwidth way > of discussion if that is needed > > Thanks! > Mohammed > > On Thu, Jan 31, 2019 at 2:35 PM Mohammed Naser > wrote: >> >> On Tue, Jan 29, 2019 at 2:26 PM Frank Kloeker wrote: >> > >> > Am 2019-01-29 17:09, schrieb Mohammed Naser: >> > > Hi team, >> > > >> > > As you may have noticed, bug triage during our meetings has been >> > > something that has kinda killed attendance (really, no one seems to >> > > enjoy it, believe it or not!) >> > > >> > > I wanted to propose for us to take a day to go through as much bugs as >> > > possible, triaging and fixing as much as we can. It'd be a fun day >> > > and we can also hop on a more higher bandwidth way to talk about this >> > > stuff while we grind through it all. >> > > >> > > Is this something that people are interested in, if so, is there any >> > > times/days that work better in the week to organize? >> > >> > Interesting. Something in EU timezone would be nice. Or what about: Bug >> > around the clock? >> > So 24 hours of bug triage :) >> >> I'd be up for that too, we have a pretty distributed team so that >> would be awesome, >> I'm still wondering if there are enough resources or folks available >> to be doing this, >> as we haven't had a response yet on a timeline that might work or >> availabilities yet. >> >> > kind regards >> > >> > Frank >> >> >> >> -- >> Mohammed Naser — vexxhost >> ----------------------------------------------------- >> D. 514-316-8872 >> D. 800-910-1726 ext. 200 >> E. mnaser at vexxhost.com >> W. http://vexxhost.com From martin.chlumsky at gmail.com Tue Feb 5 18:16:07 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Tue, 5 Feb 2019 13:16:07 -0500 Subject: [Cinder][driver][ScaleIO] Message-ID: Hello, We are using EMC ScaleIO as our backend to cinder. When we delete VMs that have attached volumes and then try deleting said volumes, the volumes will sometimes end in state error_deleting. The state is reached because for some reason the volumes are still mapped (in the ScaleIO sense of the word) to the hypervisor despite the VM being deleted. We fixed the issue by setting the following option to True in cinder.conf: # Unmap volume before deletion. (boolean value) sio_unmap_volume_before_deletion=False What is the reasoning behind this option? Why would we ever set this to False and why is it False by default? It seems you would always want to unmap the volume from the hypervisor before deleting it. Thank you, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Feb 5 19:08:35 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Tue, 05 Feb 2019 20:08:35 +0100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> On Tue, 2019-02-05 at 11:43 -0500, Ken Giusti wrote: > On 2/4/19, Harald Jensås wrote: > > > > I opened a oslo.messaging bug[1] yesterday. When using > > notifications > > and all consumers use one or more pools. The ironic-neutron-agent > > does > > use pools for all listeners in it's hash-ring member manager. And > > the > > result is that notifications are published to the 'ironic-neutron- > > agent-heartbeat.info' queue and they are never consumed. > > > > This is an issue with the design of the notification pool feature. > > The Notification service is designed so notification events can be > sent even though there may currently be no consumers. It supports > the > ability for events to be queued until a consumer(s) is ready to > process them. So when a notifier issues an event and there are no > consumers subscribed, a queue must be provisioned to hold that event > until consumers appear. > > For notification pools the pool identifier is supplied by the > notification listener when it subscribes. The value of any pool id > is > not known beforehand by the notifier, which is important because pool > ids can be dynamically created by the listeners. And in many cases > pool ids are not even used. > > So notifications are always published to a non-pooled queue. If > there > are pooled subscriptions we rely on the broker to do the fanout. > This means that the application should always have at least one > non-pooled listener for the topic, since any events that may be > published _before_ the listeners are established will be stored on a > non-pooled queue. > >From what I observer any message published _before_ or _after_ pool listeners are established are stored on the non-pooled queue. > The documentation doesn't make that clear AFAIKT - that needs to be > fixed. > I agree with your conclusion here. This is not clear in the documentation. And it should be updated to reflect the requirement of at least one non-pool listener to consume the non-pooled queue. > > The second issue, each instance of the agent uses it's own pool to > > ensure all agents are notified about the existance of peer-agents. > > The > > pools use a uuid that is generated at startup (and re-generated on > > restart, stop/start etc). In the case where > > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron > > config > > these uuid queues are not automatically removed. So after a restart > > of > > the ironic-neutron-agent the queue with the old UUID is left in the > > message broker without no consumers, growing ... > > > > > > I intend to push patches to fix both issues. As a workaround (or > > the > > permanent solution) will create another listener consuming the > > notifications without a pool. This should fix the first issue. > > > > Second change will set amqp_auto_delete for these specific queues > > to > > 'true' no matter. What I'm currently stuck on here is that I need > > to > > change the control_exchange for the transport. According to > > oslo.messaging documentation it should be possible to override the > > control_exchange in the transport_url[3]. The idea is to set > > amqp_auto_delete and a ironic-neutron-agent specific exchange on > > the > > url when setting up the transport for notifications, but so far I > > belive the doc string on the control_exchange option is wrong. > > > > Yes the doc string is wrong - you can override the default > control_exchange via the Target's exchange field: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 > > At least that's the intent... > > ... however the Notifier API does not take a Target, it takes a list > of topic _strings_: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 > > Which seems wrong, especially since the notification Listener > subscribes to a list of Targets: > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 > > I've opened a bug for this and will provide a patch for review > shortly: > > https://bugs.launchpad.net/oslo.messaging/+bug/1814797 > > Thanks, this makes sense. One question, in target I can see that there is the 'fanout' parameter. https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n62 """ Clients may request that a copy of the message be delivered to all servers listening on a topic by setting fanout to ``True``, rather than just one of them. """ In my usecase I actually want exactly that. So once your patch lands I can drop the use of pools and just set fanout=true on the target instead? > > > > > > > NOTE: The second issue can be worked around by stopping and > > starting > > rabbitmq as a dependency of the ironic-neutron-agent service. This > > ensure only queues for active agent uuid's are present, and those > > queues will be consumed. > > > > > > -- > > Harald Jensås > > > > > > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 > > [2] https://storyboard.openstack.org/#!/story/2004933 > > [3] > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 > > > > > > > > From mvanwinkle at salesforce.com Tue Feb 5 19:18:24 2019 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Tue, 5 Feb 2019 13:18:24 -0600 Subject: User Committee Elections - call for candidates Message-ID: Hello all, It's that time again! The candidacy period for the upcoming UC election is open. Three seats are up for voting. If you are an AUC, and are interested in running for one of them, now is the time to announce it. Here are the important dates: February 04 - February 17, 05:59 UTC: Open candidacy for UC positions February 18 - February 24, 11:59 UTC: UC elections (voting) Special thanks to our election officials - Mohamed Elsakhawy and Jonathan Prolux! You can find al the info for the election here: https://governance.openstack.org/uc/reference/uc-election-feb2019.html Note: there are a couple of typos on the page that have an older date for the items above. That is being sorted in a patch today, but we wanted to go and get the notification out. The dates above and at the top of the linked page are correct. Thanks! VW -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce Mobile: 210-445-4183 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shokoofa.hosseini at gmail.com Tue Feb 5 11:35:05 2019 From: shokoofa.hosseini at gmail.com (shokoofa Hosseini) Date: Tue, 5 Feb 2019 15:05:05 +0330 Subject: Rally verify issue Message-ID: Dear Sir / Madam, I recently install Rally version: 1.3.0 with Installed Plugins: rally-openstack :1.3.0 by python 34. on centos 7 It work property. I benchmark my openstack environment correctly with scenarios of rally. But I have some issue with rally verification create, according to the link bellow: https://docs.openstack.org/developer/rally/quick_start/tutorial/step_10_verifying_cloud_via_tempest_verifier.html I run the command: " rally verify create-verifier --type tempest --name tempest-verifier " but I get the bellow error message: "TypeError: startswith first arg must be bytes or a tuple of bytes, not str" What should I do? I will appreciate if you could help me your sincerely shokoofa Attachments area -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rally-verify-ERROR.png Type: image/png Size: 190484 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Tue Feb 5 16:15:02 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 5 Feb 2019 16:15:02 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5C378410.6050603@openstack.org> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> Message-ID: <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> Team, With the new stackalytics how can I see current (Train release) data? Thanks, Arkady From: Jimmy McArthur Sent: Thursday, January 10, 2019 11:43 AM To: Kanevsky, Arkady Cc: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Absolutely. When we get there, I'll send an announcement to the MLs and ping you :) I don't currently have a timeline, but given the Stackalytics changes, this might speed it up a bit. Arkady.Kanevsky at dell.com January 10, 2019 at 11:38 AM Thanks Jimmy. Since I am responsible for updating marketplace per release I just need to know what mechanism to use and which file I need to patch. Thanks, Arkady From: Jimmy McArthur Sent: Thursday, January 10, 2019 11:35 AM To: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack Map [1]. Cheers, Jimmy [1] https://git.openstack.org/cgit/openstack/openstack-map/ Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Jimmy McArthur January 10, 2019 at 11:34 AM Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack Map [1]. Cheers, Jimmy [1] https://git.openstack.org/cgit/openstack/openstack-map/ Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Arkady.Kanevsky at dell.com January 9, 2019 at 9:20 AM Thanks Boris. Do we still use DriverLog for marketplace driver status updates? Thanks, Arkady From: Boris Renski Sent: Tuesday, January 8, 2019 11:11 AM To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg Subject: [openstack-dev] [stackalytics] Stackalytics Facelift [EXTERNAL EMAIL] Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris Boris Renski January 8, 2019 at 11:10 AM Folks, Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics openstack project). Brief summary of updates: * We have new look and feel at stackalytics.com * We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still available via direct links, but not in the men on the top * BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary Happy to hear comments or feedback or answer questions. -Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at gmail.com Tue Feb 5 19:57:27 2019 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 6 Feb 2019 08:57:27 +1300 Subject: [scientific-sig] IRC meeting today 2100 UTC (in one hour): Continued HPC container discussion, Open Infra Summit Lightning Talks Message-ID: Hi all, Probably just a quick meeting today. Keen to collect HPC container war stories and looking for interest from lightning talk presenters for the SIG BoF at the Summit... Cheers, b1airo -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Tue Feb 5 20:11:10 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 5 Feb 2019 14:11:10 -0600 Subject: [openstack-dev] [Neutron] Propose Liu Yulong for Neutron core In-Reply-To: References: Message-ID: Hi everybody, It has been a week since I sent out this nomination and I have only received positive feedback from the community. As a consequence, Liu Yulong has been added as a member of the Neutron core team. Congratulations and keep up all the great contributions! Best regards Miguel On Thu, Jan 31, 2019 at 2:45 AM Qin, Kailun wrote: > Big +1 J Congrats Yulong, well-deserved! > > > > BR, > > Kailun > > > > *From:* Miguel Lavalle [mailto:miguel at mlavalle.com] > *Sent:* Wednesday, January 30, 2019 7:19 AM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [openstack-dev] [Neutron] Propose Liu Yulong for Neutron core > > > > Hi Stackers, > > > > I want to nominate Liu Yulong (irc: liuyulong) as a member of the Neutron > core team. Liu started contributing to Neutron back in Mitaka, fixing bugs > in HA routers. Since then, he has specialized in L3 networking, developing > a deep knowledge of DVR. More recently, he single handedly implemented QoS > for floating IPs with this series of patches: > https://review.openstack.org/#/q/topic:bp/floating-ip-rate-limit+(status:open+OR+status:merged). > He has also been very busy helping to improve the implementation of port > forwardings and adding QoS to them. He also works for a large operator in > China, which allows him to bring an important operational perspective from > that part of the world to our project. The quality and number of his code > reviews during the Stein cycle is on par with the leading members of the > core team: https://www.stackalytics.com/?module=neutron-group. > > > > I will keep this nomination open for a week as customary. > > > > Best regards > > > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Tue Feb 5 20:18:33 2019 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 05 Feb 2019 21:18:33 +0100 Subject: [I18n] Meeting on Demand Message-ID: <38930a466b50140a2cbb05e2f2370b66@arcor.de> Hello Stackers, in the past we changed often the format of our team meeting to find out the right requirements and the highest comfort for all participants. We cover different time zones, tried Office Hours and joint the docs team meeting as well, so we have both meeting behind each other. At the end there are no participants and from I18n perspective also not so much topics to discuss outside the translation period. For that reason I want to change to a "Meeting on Demand" format. Feel free to add your topics on the wiki page [1] for the upcoming meeting slot (as usually Thursday [2]) or raise the topic on the mailing list with the proposal of a regular meeting. We will then arrange the next meeting. many thanks kind regards Frank [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting [2] http://eavesdrop.openstack.org/#I18N_Team_Meeting From smooney at redhat.com Tue Feb 5 20:18:43 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Feb 2019 20:18:43 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> Message-ID: <0ae39e2c1f285345f554f6205bdbc53d80db62eb.camel@redhat.com> On Tue, 2019-02-05 at 16:15 +0000, Arkady.Kanevsky at dell.com wrote: > Team, > With the new stackalytics how can I see current (Train release) data? the current devlopment cycle is stein and the current released version is Rocky Train is the name of the next developemnt version that will be starting later this year. > Thanks, > Arkady > > From: Jimmy McArthur > Sent: Thursday, January 10, 2019 11:43 AM > To: Kanevsky, Arkady > Cc: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > [EXTERNAL EMAIL] > Absolutely. When we get there, I'll send an announcement to the MLs and ping you :) I don't currently have a > timeline, but given the Stackalytics changes, this might speed it up a bit. > > > > Arkady.Kanevsky at dell.com > > January 10, 2019 at 11:38 AM > > Thanks Jimmy. > > Since I am responsible for updating marketplace per release I just need to know what mechanism to use and which file > > I need to patch. > > Thanks, > > Arkady > > > > From: Jimmy McArthur > > Sent: Thursday, January 10, 2019 11:35 AM > > To: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Jimmy McArthur > > January 10, 2019 at 11:34 AM > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Arkady.Kanevsky at dell.com > > January 9, 2019 at 9:20 AM > > Thanks Boris. > > Do we still use DriverLog for marketplace driver status updates? > > Thanks, > > Arkady > > > > From: Boris Renski > > Sent: Tuesday, January 8, 2019 11:11 AM > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris > > Boris Renski > > January 8, 2019 at 11:10 AM > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris > > From smooney at redhat.com Tue Feb 5 20:18:43 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Feb 2019 20:18:43 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <5b9d8dc2519b4f358e051bf9e6cb5c5f@AUSX13MPS304.AMER.DELL.COM> Message-ID: <0ae39e2c1f285345f554f6205bdbc53d80db62eb.camel@redhat.com> On Tue, 2019-02-05 at 16:15 +0000, Arkady.Kanevsky at dell.com wrote: > Team, > With the new stackalytics how can I see current (Train release) data? the current devlopment cycle is stein and the current released version is Rocky Train is the name of the next developemnt version that will be starting later this year. > Thanks, > Arkady > > From: Jimmy McArthur > Sent: Thursday, January 10, 2019 11:43 AM > To: Kanevsky, Arkady > Cc: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > [EXTERNAL EMAIL] > Absolutely. When we get there, I'll send an announcement to the MLs and ping you :) I don't currently have a > timeline, but given the Stackalytics changes, this might speed it up a bit. > > > > Arkady.Kanevsky at dell.com > > January 10, 2019 at 11:38 AM > > Thanks Jimmy. > > Since I am responsible for updating marketplace per release I just need to know what mechanism to use and which file > > I need to patch. > > Thanks, > > Arkady > > > > From: Jimmy McArthur > > Sent: Thursday, January 10, 2019 11:35 AM > > To: openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > > Subject: Re: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Jimmy McArthur > > January 10, 2019 at 11:34 AM > > > > > > > Arkady.Kanevsky at dell.com > > > January 9, 2019 at 9:20 AM > > > Thanks Boris. > > > Do we still use DriverLog for marketplace driver status updates? > > > > We do still use DriverLog for the Marketplace drivers listing. We have a cronjob set up to ingest nightly from > > Stackalytics. We also have the ability to CRUD the listings in the Foundation website CMS. > > > > That said, as Boris mentioned, the list is really not used much and I know there is a lot of out of date info > > there. We're planning to move the marketplace list to yaml in a public repo, similar to what we did for OpenStack > > Map [1]. > > > > Cheers, > > Jimmy > > > > [1] https://git.openstack.org/cgit/openstack/openstack-map/ > > > > > Thanks, > > > Arkady > > > > > > From: Boris Renski > > > Sent: Tuesday, January 8, 2019 11:11 AM > > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > > > [EXTERNAL EMAIL] > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > Boris Renski > > > January 8, 2019 at 11:10 AM > > > Folks, > > > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > > openstack project). Brief summary of updates: > > > We have new look and feel at stackalytics.com > > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > > available via direct links, but not in the men on the top > > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > > Happy to hear comments or feedback or answer questions. > > > > > > -Boris > > > > > > Arkady.Kanevsky at dell.com > > January 9, 2019 at 9:20 AM > > Thanks Boris. > > Do we still use DriverLog for marketplace driver status updates? > > Thanks, > > Arkady > > > > From: Boris Renski > > Sent: Tuesday, January 8, 2019 11:11 AM > > To: openstack-dev at lists.openstack.org; Ilya Shakhat; Herman Narkaytis; David Stoltenberg > > Subject: [openstack-dev] [stackalytics] Stackalytics Facelift > > > > [EXTERNAL EMAIL] > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris > > Boris Renski > > January 8, 2019 at 11:10 AM > > Folks, > > > > Happy New Year! We wanted to start the year by giving a facelift to stackalytics.com (based on stackalytics > > openstack project). Brief summary of updates: > > We have new look and feel at stackalytics.com > > We did away with DriverLog and Member Directory, which were not very actively used or maintained. Those are still > > available via direct links, but not in the men on the top > > BIGGEST CHANGE: You can now track some of the CNCF and Unaffiliated project commits via a separate subsection > > accessible at the top nav. Before this was all bunched up in Project Type -> Complimentary > > Happy to hear comments or feedback or answer questions. > > > > -Boris > > From igor.duarte.cardoso at intel.com Tue Feb 5 20:25:23 2019 From: igor.duarte.cardoso at intel.com (Duarte Cardoso, Igor) Date: Tue, 5 Feb 2019 20:25:23 +0000 Subject: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode In-Reply-To: References: Message-ID: Thank you Slawek, Seán, Ryan, Miguel. We’ll get to work on this new refactoring, legacy router implementation and the missing unit/functional tests. We’re setting lower priority to the scenario job but hopefully it can be done in stein-3 as well. Best regards, Igor D.C. From: Miguel Lavalle Sent: Friday, February 1, 2019 5:07 PM To: openstack-discuss at lists.openstack.org Subject: Re: [neutron] OVS OpenFlow L3 DVR / dvr_bridge agent_mode Hi Igor, Please see my comments in-line below On Tue, Jan 29, 2019 at 1:26 AM Duarte Cardoso, Igor > wrote: Hi Neutron, I've been internally collaborating on the ``dvr_bridge`` L3 agent mode [1][2][3] work (David Shaughnessy, Xubo Zhang), which allows the L3 agent to make use of Open vSwitch / OpenFlow to implement ``distributed`` IPv4 Routers thus bypassing kernel namespaces and iptables and opening the door for higher performance by keeping packets in OVS for longer. I want to share a few questions in order to gather feedback from you. I understand parts of these questions may have been answered in the past before my involvement, but I believe it's still important to revisit and clarify them. This can impact how long it's going to take to complete the work and whether it can make it to stein-3. 1. Should OVS support also be added to the legacy router? And if so, would it make more sense to have a new variable (not ``agent_mode``) to specify what backend to use (OVS or kernel) instead of creating more combinations? I would like to see the legacy router also implemented. And yes, we need to specify a new config option. As it has already been pointed out, we need to separate what the agent does in each host from the backend technology implementing the routers. 2. What is expected in terms of CI for this? Regarding testing, what should this first patch include apart from the unit tests? (since the l3_agent.ini needs to be configured differently). I agree with Slawek. We would like to see a scenario job. 3. What problems can be anticipated by having the same agent managing both kernel and OVS powered routers (depending on whether they were created as ``distributed``)? We are experimenting with different ways of decoupling RouterInfo (mainly as part of the L3 agent refactor patch) and haven't been able to find the right balance yet. On one end we have an agent that is still coupled with kernel-based RouterInfo, and on the other end we have an agent that either only accepts OVS-based RouterInfos or only kernel-based RouterInfos depending on the ``agent_mode``. I also agree with Slawek here. It would a good idea if we can get the two efforts in synch so we can untangle RouterInfo from the agent code We'd also appreciate reviews on the 2 patches [4][5]. The L3 refactor one should be able to pass Zuul after a recheck. [1] Spec: https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr [2] RFE: https://bugs.launchpad.net/neutron/+bug/1705536 [3] Gerrit topic: https://review.openstack.org/#/q/topic:dvr_bridge+(status:open+OR+status:merged) [4] L3 agent refactor patch: https://review.openstack.org/#/c/528336/29 [5] dvr_bridge patch: https://review.openstack.org/#/c/472289/17 Thank you! Best regards, Igor D.C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgrosso at redhat.com Tue Feb 5 20:37:29 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Tue, 5 Feb 2019 15:37:29 -0500 Subject: Manila Upstream Bugs Message-ID: Hello All, This is an email to the OpenStack manila upstream community but anyone can chime in would be great to get some input from other projects and how they organize their upstream defects and what tools they use... My goal here is to make the upstream manila bug process easier, cleaner, and more effective. My thoughts to accomplish this are by establishing a process that we can all agree upon. I have the following points/questions that I wanted to address to help create a more effective process: - Can we as a group go through some of the manila bugs so we can drive the visible bug count down? - How often as a group do you have bug scrubs? - Might be beneficial if we had bug scrubs every few months possibly? - It might be a good idea to go through the current upstream bugs and weed out one that can be closed or invalid. - When a new bug is logged how to we normally process this bug - How do we handle the importance? - When a manila bugs comes into launchpad I am assuming one of the people on this email will set the importance? - "Assigned" I will also assume it just picked by the person on this email list. - I am seeing some bugs "fixed committed" with no assignment. How do we know who was working on it? - What is the criteria for setting the importance. Do we have a standard understanding of what is CRITICAL or HIGH? - If there is a critical or high bug what is the response turn-around? Days or weeks? - I see some defect with HIGH that have not been assigned or looked at in a year? - I understand OpenStack has some long releases but how long do we normally keep defects around? - Do we have a way to archive bugs that are not looked at? I was told we can possibly set the status of a defect to “Invalid” or “Opinion” or “Won’t Fix” or “Expired" - Status needs to be something other than "NEW" after the first week - How can we have a defect over a year that is NEW? - Who is possible for see if there is enough information and if the bug is invalid or incomplete and if incomplete ask for relevant information. Do we randomly look at the list daily , weekly, or monthly to see if new info is needed? I started to create a google sheet [1] to see if it is easier to track some of the defect vs the manila-triage pad[2] . I have added both links here. I know a lot will not have access to this page I am working on transitioning to OpenStack ether cal. [1] https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 [2] https://etherpad.openstack.org/p/manila-bug-triage-pad *[3]* https://ethercalc.openstack.org/uc8b4567fpf4 I would also like to hear from all of you on what your issues are with the current process for upstream manila bugs using launchpad. I have not had the time to look at storyboard https://storyboard.openstack.org/ but I have heard that the OpenStack community is pushing toward using Storyboard, so I will be looking at that shortly. Any input would be greatly appreciated... Thanks All, Jason Grosso Senior Quality Engineer - Cloud Red Hat OpenStack Manila jgrosso at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Tue Feb 5 20:38:47 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 5 Feb 2019 15:38:47 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> <4c3eda3d27c7e8199d23f6739bdad4ffcc132137.camel@redhat.com> Message-ID: On 2/5/19, Harald Jensås wrote: > On Tue, 2019-02-05 at 11:43 -0500, Ken Giusti wrote: >> On 2/4/19, Harald Jensås wrote: >> > >> > I opened a oslo.messaging bug[1] yesterday. When using >> > notifications >> > and all consumers use one or more pools. The ironic-neutron-agent >> > does >> > use pools for all listeners in it's hash-ring member manager. And >> > the >> > result is that notifications are published to the 'ironic-neutron- >> > agent-heartbeat.info' queue and they are never consumed. >> > >> >> This is an issue with the design of the notification pool feature. >> >> The Notification service is designed so notification events can be >> sent even though there may currently be no consumers. It supports >> the >> ability for events to be queued until a consumer(s) is ready to >> process them. So when a notifier issues an event and there are no >> consumers subscribed, a queue must be provisioned to hold that event >> until consumers appear. >> >> For notification pools the pool identifier is supplied by the >> notification listener when it subscribes. The value of any pool id >> is >> not known beforehand by the notifier, which is important because pool >> ids can be dynamically created by the listeners. And in many cases >> pool ids are not even used. >> >> So notifications are always published to a non-pooled queue. If >> there >> are pooled subscriptions we rely on the broker to do the fanout. >> This means that the application should always have at least one >> non-pooled listener for the topic, since any events that may be >> published _before_ the listeners are established will be stored on a >> non-pooled queue. >> > > From what I observer any message published _before_ or _after_ pool > listeners are established are stored on the non-pooled queue. > True that. Even if listeners are established before a notification is issued the notifier still doesn't know that and blindly creates a non pooled queue just in case there aren't any listeners. Not intuitive I agree. >> The documentation doesn't make that clear AFAIKT - that needs to be >> fixed. >> > > I agree with your conclusion here. This is not clear in the > documentation. And it should be updated to reflect the requirement of > at least one non-pool listener to consume the non-pooled queue. > +1 I can do that. > >> > The second issue, each instance of the agent uses it's own pool to >> > ensure all agents are notified about the existance of peer-agents. >> > The >> > pools use a uuid that is generated at startup (and re-generated on >> > restart, stop/start etc). In the case where >> > `[oslo_messaging_rabbit]/amqp_auto_delete = false` in neutron >> > config >> > these uuid queues are not automatically removed. So after a restart >> > of >> > the ironic-neutron-agent the queue with the old UUID is left in the >> > message broker without no consumers, growing ... >> > >> > >> > I intend to push patches to fix both issues. As a workaround (or >> > the >> > permanent solution) will create another listener consuming the >> > notifications without a pool. This should fix the first issue. >> > >> > Second change will set amqp_auto_delete for these specific queues >> > to >> > 'true' no matter. What I'm currently stuck on here is that I need >> > to >> > change the control_exchange for the transport. According to >> > oslo.messaging documentation it should be possible to override the >> > control_exchange in the transport_url[3]. The idea is to set >> > amqp_auto_delete and a ironic-neutron-agent specific exchange on >> > the >> > url when setting up the transport for notifications, but so far I >> > belive the doc string on the control_exchange option is wrong. >> > >> >> Yes the doc string is wrong - you can override the default >> control_exchange via the Target's exchange field: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n40 >> >> At least that's the intent... >> >> ... however the Notifier API does not take a Target, it takes a list >> of topic _strings_: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/notifier.py#n239 >> >> Which seems wrong, especially since the notification Listener >> subscribes to a list of Targets: >> >> > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py#n227 >> >> I've opened a bug for this and will provide a patch for review >> shortly: >> >> https://bugs.launchpad.net/oslo.messaging/+bug/1814797 >> >> > > Thanks, this makes sense. > I've hacked in the ability to override the default exchange for notifiers, but I don't think it would help in your case. In rabbitmq exchange and queue names are scoped independently. This means that if you have an exchange named "openstack' and another named 'my-exchange' but use the same topic (say 'foo') you end up with a single instance of queue 'foo' bound to both exchanges. IOW declaring one listener on exchange=openstack and topic=foo, and another listener on exchange=my-exchange and topic=foo they will compete for messages because they are consuming from the same queue (foo). So if your intent is to partition notification traffic you'd still need unique topics as well. > > One question, in target I can see that there is the 'fanout' parameter. > > https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/target.py#n62 > > """ Clients may request that a copy of the message be delivered to all > servers listening on a topic by setting fanout to ``True``, rather than > just one of them. """ > > In my usecase I actually want exactly that. So once your patch lands I > can drop the use of pools and just set fanout=true on the target > instead? > The 'fanout' attribute is only used with RPC messaging, not Notifications. Can you use RPC fanout instead of Notifications? RPC fanout ('cast' as the API calls it) is different from 'normal' RPC in that no reply is returned to the caller. So it's a lot like Notifications in that regard. However RPC fanout is different from Notifications in two important ways: 1) RPC fanout messages are sent 'least effort', meaning they can be silently discarded, and 2) RPC fanout messages are not stored - they are only delivered to active subscribers (listeners). I've always felt that notification pools are an attempt to implement a Publish/Subscribe messaging pattern on top of an event queuing service. That's hard to do since event queuing has strict delivery guarantees (avoid dropping) which Pub/Sub doesn't (drop if no consumers). >> >> >> >> > >> > NOTE: The second issue can be worked around by stopping and >> > starting >> > rabbitmq as a dependency of the ironic-neutron-agent service. This >> > ensure only queues for active agent uuid's are present, and those >> > queues will be consumed. >> > >> > >> > -- >> > Harald Jensås >> > >> > >> > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1814544 >> > [2] https://storyboard.openstack.org/#!/story/2004933 >> > [3] >> > > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/transport.py#L58-L62 >> > >> > >> > >> >> > > -- Ken Giusti (kgiusti at gmail.com) From doug at doughellmann.com Tue Feb 5 21:35:04 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Feb 2019 16:35:04 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: Ken Giusti writes: > On 2/4/19, Harald Jensås wrote: >> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >>> Hi, >>> >>> I’ve been chasing a bug in ironic’s neutron agent for the last few >>> days and I think its time to ask for some advice. >>> >> >> I'm working on the same issue. (In fact there are two issues.) >> >>> Specifically, I was asked to debug why a set of controllers was using >>> so much RAM, and the answer was that rabbitmq had a queue called >>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >>> This notification queue is used by ironic’s neutron agent to >>> calculate the hash ring. I have been able to duplicate this issue in >>> a stock kolla-ansible install with ironic turned on but no bare metal >>> nodes enrolled in ironic. About 0.6 messages are queued per second. >>> >>> I added some debugging code (hence the thread yesterday about >>> mangling the code kolla deploys), and I can see that the messages in >>> the queue are being read by the ironic neutron agent and acked >>> correctly. However, they are not removed from the queue. >>> >>> You can see your queue size while using kolla with this command: >>> >>> docker exec rabbitmq rabbitmqctl list_queues messages name >>> messages_ready consumers | sort -n | tail -1 >>> >>> My stock install that’s been running for about 12 hours currently has >>> 8,244 messages in that queue. >>> >>> Where I’m a bit stumped is I had assumed that the messages weren’t >>> being acked correctly, which is not the case. Is there something >>> obvious about notification queues like them being persistent that >>> I’ve missed in my general ignorance of the underlying implementation >>> of notifications? >>> >> >> I opened a oslo.messaging bug[1] yesterday. When using notifications >> and all consumers use one or more pools. The ironic-neutron-agent does >> use pools for all listeners in it's hash-ring member manager. And the >> result is that notifications are published to the 'ironic-neutron- >> agent-heartbeat.info' queue and they are never consumed. >> > > This is an issue with the design of the notification pool feature. > > The Notification service is designed so notification events can be > sent even though there may currently be no consumers. It supports the > ability for events to be queued until a consumer(s) is ready to > process them. So when a notifier issues an event and there are no > consumers subscribed, a queue must be provisioned to hold that event > until consumers appear. This has come up several times over the last few years, and it's always a surprise to whoever it has bitten. I wonder if we should change the default behavior to not create the consumer queue in the publisher? -- Doug From mikal at stillhq.com Tue Feb 5 22:07:29 2019 From: mikal at stillhq.com (Michael Still) Date: Wed, 6 Feb 2019 09:07:29 +1100 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: I'm also interested in how we catch future instances of this. Is there something we can do in CI or in a runtime warning to let people know? I am sure there are plenty of ironic deployments out there consuming heaps more RAM than is required for this queue. Michael On Wed, Feb 6, 2019 at 8:41 AM Doug Hellmann wrote: > Ken Giusti writes: > > > On 2/4/19, Harald Jensås wrote: > >> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: > >>> Hi, > >>> > >>> I’ve been chasing a bug in ironic’s neutron agent for the last few > >>> days and I think its time to ask for some advice. > >>> > >> > >> I'm working on the same issue. (In fact there are two issues.) > >> > >>> Specifically, I was asked to debug why a set of controllers was using > >>> so much RAM, and the answer was that rabbitmq had a queue called > >>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. > >>> This notification queue is used by ironic’s neutron agent to > >>> calculate the hash ring. I have been able to duplicate this issue in > >>> a stock kolla-ansible install with ironic turned on but no bare metal > >>> nodes enrolled in ironic. About 0.6 messages are queued per second. > >>> > >>> I added some debugging code (hence the thread yesterday about > >>> mangling the code kolla deploys), and I can see that the messages in > >>> the queue are being read by the ironic neutron agent and acked > >>> correctly. However, they are not removed from the queue. > >>> > >>> You can see your queue size while using kolla with this command: > >>> > >>> docker exec rabbitmq rabbitmqctl list_queues messages name > >>> messages_ready consumers | sort -n | tail -1 > >>> > >>> My stock install that’s been running for about 12 hours currently has > >>> 8,244 messages in that queue. > >>> > >>> Where I’m a bit stumped is I had assumed that the messages weren’t > >>> being acked correctly, which is not the case. Is there something > >>> obvious about notification queues like them being persistent that > >>> I’ve missed in my general ignorance of the underlying implementation > >>> of notifications? > >>> > >> > >> I opened a oslo.messaging bug[1] yesterday. When using notifications > >> and all consumers use one or more pools. The ironic-neutron-agent does > >> use pools for all listeners in it's hash-ring member manager. And the > >> result is that notifications are published to the 'ironic-neutron- > >> agent-heartbeat.info' queue and they are never consumed. > >> > > > > This is an issue with the design of the notification pool feature. > > > > The Notification service is designed so notification events can be > > sent even though there may currently be no consumers. It supports the > > ability for events to be queued until a consumer(s) is ready to > > process them. So when a notifier issues an event and there are no > > consumers subscribed, a queue must be provisioned to hold that event > > until consumers appear. > > This has come up several times over the last few years, and it's always > a surprise to whoever it has bitten. I wonder if we should change the > default behavior to not create the consumer queue in the publisher? > > -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Tue Feb 5 22:45:27 2019 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 5 Feb 2019 22:45:27 +0000 Subject: virt-install error while trying to create a new image Message-ID: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community, I am trying to create a new image for Ironic. I followed the documentation but got an error with virt-install. Please note: The OS has been reinstalled The host is a physical machine BIOS has virtualization enabled I changed /etc/libvirt/qemu.conf group from root to kvm following some linux forum instructions about this error but the issue persists # virt-install --virt-type kvm --name centos --ram 1024 --disk /tmp/centos.qcow2,format=qcow2 --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=centos7.0 --location=/root/CentOS-7-x86_64-NetInstall-1810.iso Starting install... Retrieving file .treeinfo... | 0 B 00:00:00 Retrieving file content... | 0 B 00:00:00 Retrieving file vmlinuz... | 6.3 MB 00:00:00 Retrieving file initrd.img... | 50 MB 00:00:00 ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor Domain installation does not appear to have been successful. If it was, you can restart your domain by running: virsh --connect qemu:///system start centos otherwise, please restart your installation. Any thoughts? Thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Feb 5 23:29:41 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 5 Feb 2019 17:29:41 -0600 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: Message-ID: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Arkady.Kanevsky at dell.com Wed Feb 6 04:24:28 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 6 Feb 2019 04:24:28 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> Message-ID: <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] [EXTERNAL EMAIL] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Arkady.Kanevsky at dell.com Wed Feb 6 04:25:54 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 6 Feb 2019 04:25:54 +0000 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <5C378410.6050603@openstack.org> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> Message-ID: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> How does Stackalytics shows statistics for current Train release work? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Wed Feb 6 05:01:25 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 6 Feb 2019 13:01:25 +0800 Subject: [heat]No meeting today Message-ID: Hi all, since it's current Chinese New Year time for me, I will not be able to host the meeting today. Also, I believe Zane is not available for today's meeting too, so let's run our meeting next week. Here's something we need feedback on since heat-agents still broken, I still need feedback on [1] and [2]. Two features that we can use some reviews on [3], [4] and [5], so please help us if you can. [1] https://review.openstack.org/#/c/634383/ [2] https://review.openstack.org/#/c/634563/ [3] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/multiple-cloud-support [4] https://storyboard.openstack.org/#!/story/2003579 [5] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/heat-plugin-blazar -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Feb 6 05:09:47 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Feb 2019 14:09:47 +0900 Subject: [nova][qa][cinder] CI job changes In-Reply-To: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> References: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> Message-ID: <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> ---- On Tue, 05 Feb 2019 23:35:20 +0900 Matt Riedemann wrote ---- > I'd like to propose some changes primarily to the CI jobs that run on > nova changes, but also impact cinder and tempest. > > 1. Drop the nova-multiattach job and move test coverage to other jobs > > This is actually an old thread [1] and I had started the work but got > hung up on a bug that was teased out of one of the tests when running in > the multi-node tempest-slow job [2]. For now I've added a conditional > skip on that test if running in a multi-node job. The open changes are > here [3]. +1. The only question I commented on review - this test is skipped in all jobs now. If that all ok as of now then I am +2. > > 2. Only run compute.api and scenario tests in nova-next job and run > under python3 only > > The nova-next job is a place to test new or advanced nova features like > placement and cells v2 when those were still optional in Newton. It > currently runs with a few changes from the normal tempest-full job: > > * configures service user tokens > * configures nova console proxy to use TLS > * disables the resource provider association refresh interval > * it runs the post_test_hook which runs some commands like > archive_delete_rows, purge, and looks for leaked resource allocations [4] > > Like tempest-full, it runs the non-slow tempest API tests concurrently > and then the scenario tests serially. I'm proposing that we: > > a) change that job to only run tempest compute API tests and scenario > tests to cut down on the number of tests to run; since the job is really > only about testing nova features, we don't need to spend time running > glance/keystone/cinder/neutron tests which don't touch nova. > > b) run it with python3 [5] which is the direction all jobs are moving anyway +1. It make sense to run only compute test in this job. > > 3. Drop the integrated-gate (py2) template jobs (from nova) > > Nova currently runs with both the integrated-gate and > integrated-gate-py3 templates, which adds a set of tempest-full and > grenade jobs each to the check and gate pipelines. I don't think we need > to be gating on both py2 and py3 at this point when it comes to > tempest/grenade changes. Tempest changes are still gating on both so we > have coverage there against breaking changes, but I think anything > that's py2 specific would be caught in unit and functional tests (which > we're running on both py27 and py3*). > IMO, we should keep running integrated-gate py2 templates on the project gate also along with Tempest. Jobs in integrated-gate-* templates cover a large amount of code so running that for both versions make sure we keep our code running on py2 also. Rest other job like tempest-slow, nova-next etc are good to run only py3 on project side (Tempest gate keep running py2 version also). I am not sure if unit/functional jobs cover all code coverage and it is safe to ignore the py version consideration from integration CI. As per TC resolution, python2 can be dropped during begning of U cycle [1]. You have good point of having the integrated-gate py2 coverage on Tempest gate only is enough but it has risk of merging the py2 breaking code on project side which will block the Tempest gate. I agree that such chances are rare but still it can happen. Other point is that we need integrated-gate template running when Stein and Train become stable branch (means on stable/stein and stable/train gate). Otherwise there are chance when py2 broken code from U (because we will test only py3 in U) is backported to stable/Train or stable/stein. My opinion on this proposal is to wait till we officially drop py2 which is starting of U. [1] https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html -gmann > Who's with me? > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135299.html > [2] https://bugs.launchpad.net/tempest/+bug/1807723 > [3] > https://review.openstack.org/#/q/topic:drop-multiattach-job+(status:open+OR+status:merged) > [4] https://github.com/openstack/nova/blob/5283b464b/gate/post_test_hook.sh > [5] https://review.openstack.org/#/c/634739/ > > -- > > Thanks, > > Matt > > From alfredo.deluca at gmail.com Wed Feb 6 08:00:07 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Wed, 6 Feb 2019 09:00:07 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Ignazio. sorry for late reply. security group is fine. It\s not blocking the network traffic. Not sure why but, with this fedora release I can finally find atomic but there is no yum,nslookup,dig,host command..... why is so different from another version (latest) which had yum but not atomic. It's all weird Cheers On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano wrote: > Alfredo, try to check security group linked to your kubemaster. > > Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca > ha scritto: > >> Hi Ignazio. Thanks for the link...... so >> >> Now at least atomic is present on the system. >> Also I ve already had 8.8.8.8 on the system. So I can connect on the >> floating IP to the kube master....than I can ping 8.8.8.8 but for example >> doesn't resolve the names...so if I ping 8.8.8.8 >> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* >> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* >> >> but if I ping google.com doesn't resolve. I can't either find on fedora >> dig or nslookup to check >> resolv.conf has >> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >> *nameserver 8.8.8.8* >> >> It\s all so weird. >> >> >> >> >> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano >> wrote: >> >>> I also suggest to change dns in your external network used by magnum. >>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>> >>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>> alfredo.deluca at gmail.com> ha scritto: >>> >>>> thanks ignazio >>>> Where can I get it from? >>>> >>>> >>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> I used fedora-magnum-27-4 and it works >>>>> >>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> ha scritto: >>>>> >>>>>> Hi Clemens. >>>>>> So the image I downloaded is this >>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>> which is the latest I think. >>>>>> But you are right...and I noticed that too.... It doesn't have atomic >>>>>> binary >>>>>> the os-release is >>>>>> >>>>>> *NAME=Fedora* >>>>>> *VERSION="29 (Cloud Edition)"* >>>>>> *ID=fedora* >>>>>> *VERSION_ID=29* >>>>>> *PLATFORM_ID="platform:f29"* >>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>> *ANSI_COLOR="0;34"* >>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>> "* >>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>> "* >>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>> "* >>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>> "* >>>>>> *VARIANT="Cloud Edition"* >>>>>> *VARIANT_ID=cloud* >>>>>> >>>>>> >>>>>> so not sure why I don't have atomic tho >>>>>> >>>>>> >>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>>> wrote: >>>>>> >>>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>>> part of the image … >>>>>>> >>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>> + atomic install --storage ostree --system --system-package no --set >>>>>>> REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>> heat-container-agent >>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>> ./part-013: line 8: atomic: command not found >>>>>>> + systemctl start heat-container-agent >>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>> heat-container-agent.service not found. >>>>>>> >>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com>: >>>>>>> >>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>> heat-container-agent.service not found. >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Feb 6 08:18:35 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 6 Feb 2019 17:18:35 +0900 Subject: [TC][Searchlight] Project health evaluation Message-ID: Hi TC members and Searchlight team, As we discussed at the beginning of the Stein cycle, Searchlight would go through a propagation period to consider whether to let it continue to operate under the OpenStack foundation's umbrella [1]. For the last two milestones, we have achieved some results [2] [3] and designed a sustainable future for Searchlight with a vision [4]. As we're reaching the Stein-3 milestone [5] and preparing for the Denver summit. We, as a team, would like have a formal project health evaluation in several aspects such as active contributors / team, planning, bug fixes, features, etc. We would love to have some voice from the TC team and anyone from the community who follows our effort during the Stein cycle. We then would want to update the information at [6] and [7] to avoid any confusion that may stop potential contributors or users to come to Searchlight. [1] https://review.openstack.org/#/c/588644/ [2] https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html [3] https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html [4] https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision [5] https://releases.openstack.org/stein/schedule.html [6] https://governance.openstack.org/election/results/stein/ptl.html [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker Many thanks, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Feb 6 09:32:20 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:32:20 +0100 Subject: [openstack-ansible] bug squash day! In-Reply-To: <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> Message-ID: <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: > Hi Mohammed, > > will there be an extra invitation or an etherpad for logistic? > > many thanks > > Frank > > Am 2019-02-05 17:22, schrieb Mohammed Naser: > > Hi everyone, > > > > We've discussed this over the ML today and we've decided for it to > > be > > next Wednesday (13th of February). Due to the distributed nature > > of > > our teams, we'll be aiming to go throughout the day and we'll all > > be > > hanging out on #openstack-ansible with a few more high bandwidth > > way > > of discussion if that is needed > > > > Thanks! > > Mohammed What I did in the past was to prepare an etherpad of the most urgent ones, but wasn't the most successful bug squash we had. I also took the other approach, BYO bug, list it in the etherpad, so we can track the bug squashers. And in both cases, I brought belgian cookies/chocolates to the most successful bug squasher (please note you should ponderate with the task criticality level, else people might solve the simplest bugs to get the chocolates :p) This was my informal motivational, but I didn't have to do that. I justliked doing so :) Regards, JP. From jean-philippe at evrard.me Wed Feb 6 09:36:37 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:36:37 +0100 Subject: [Neutron] - Bug Report for the week of Jan 29th- Feb4th. In-Reply-To: <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> References: <5C589423020000D7000400BA@prv-mh.provo.novell.com> <20190204195705.v6to7bmqe2ib2nfd@yuggoth.org> Message-ID: On Mon, 2019-02-04 at 19:57 +0000, Jeremy Stanley wrote: > On 2019-02-04 12:36:03 -0700 (-0700), Swaminathan Vasudevan wrote: > > Hi Neutrinos,Here is the summary of the neutron bugs that came in > > last week ( starting from Jan 29th - Feb 4th). > > > > https://docs.google.com/spreadsheets/d/1MwoHgK_Ve_6JGYaM8tZxWha2HDaMeAYtq4qFdZ4TUAU/edit?usp=sharing > > If it's just a collaboratively-edited spreadsheet application you > need, don't forget we maintain https://ethercalc.openstack.org/ > (hopefully soon also reachable as ethercalc.opendev.org) which runs > entirely on free software and is usable from parts of the World > where Google's services are not (for example, mainland China). I agree with Jeremy here. Let's make use of infra as much as we can. From jean-philippe at evrard.me Wed Feb 6 09:45:03 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 10:45:03 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: > So, maybe the next step is to convince someone to champion a goal of > improving our contributor documentation, and to have them describe > what > the documentation should include, covering the usual topics like how > to > actually submit patches as well as suggestions for how to describe > areas > where help is needed in a project and offers to mentor contributors. > > Does anyone want to volunteer to serve as the goal champion for that? > This doesn't get visibility yet, as this thread is under [tc] only. Lance and I will raise this in our next update (which should be tomorrow) if we don't have a volunteer here. JP. From jean-philippe at evrard.me Wed Feb 6 10:00:08 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:00:08 +0100 Subject: [horizon] Horizon slowing down proportionally to the amount of instances (was: Horizon extremely slow with 400 instances) In-Reply-To: References: Message-ID: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> On Wed, 2019-01-30 at 21:10 -0500, Satish Patel wrote: > folks, > > we have mid size openstack cloud running 400 instances, and day by > day > its getting slower, i can understand it render every single machine > during loading instance page but it seems it's design issue, why not > it load page from MySQL instead of running bunch of API calls behind > then page? > > is this just me or someone else also having this issue? i am > surprised > why there is no good and robust Web GUI for very popular openstack? > > I am curious how people running openstack in large environment using > Horizon. > > I have tired all kind of setting and tuning like memcache etc.. > > ~S > Hello, I took the liberty to change the mailing list and topic name: FYI, the openstack-discuss ML will help you reach more people (developers/operators). When you prefix your mail with [horizon], it will even pass filters for some people:) Anyway... I would say horizon performance depends on many aspects of your deployment, including keystone and caching, it's hard to know what's going on with your environment with so little data. I hope you're figure it out :) Regards, JP From jean-philippe at evrard.me Wed Feb 6 10:11:42 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:11:42 +0100 Subject: [tc][all] Project deletion community goal for Train cycle In-Reply-To: <1689d71d0ef.ef1d5f8d185664.5395252099905607931@ghanshyammann.com> References: <8d25cbc43d4fc43f8a98de37992d5531c8662cdc.camel@evrard.me> <47F67A8C-8C89-4B0A-BCF3-7F3100D2A1B7@leafe.com> <86ed4afc-056e-602a-e30c-08a51c2a2080@catalyst.net.nz> <1689d71d0ef.ef1d5f8d185664.5395252099905607931@ghanshyammann.com> Message-ID: On Wed, 2019-01-30 at 15:28 +0900, Ghanshyam Mann wrote: > ---- On Wed, 23 Jan 2019 08:21:27 +0900 Adrian Turjak < > adriant at catalyst.net.nz> wrote ---- > > Thanks for the input! I'm willing to bet there are many people > excited > > about this goal, or will be when they realise it exists! > > > > The 'dirty' state I think would be solved with a report API in > each > > service (tell me everything a given project has resource wise). > Such an > > API would be useful without needing to query each resource list, > and > > potentially could be an easy thing to implement to help a purge > library > > figure out what to delete. I know right now our method for > checking if a > > project is 'dirty' is part of our quota checking scripts, and it > has to > > query a lot of APIs per service to build an idea of what a project > has. > > > > As for using existing code, OSPurge could well be a starting > point, but > > the major part of this goal has to be that each OpenStack service > (that > > creates resources owned by a project) takes ownership of their > own > > deletion logic. This is why a top level library for cross project > logic, > > with per service plugin libraries is possibly the best approach. > Each > > library would follow the same template and abstraction layers (as > > inherited from the top level library), but how each service > implements > > their own deletion is up to them. I would also push for them using > the > > SDK only as their point of interaction with the APIs (lets set > some hard > > requirements and standards!), because that is the python library > we > > should be using going forward. In addition such an approach could > mean > > that anyone can write a plugin for the top level library (e.g. > internal > > company only services) which will automatically get picked up if > installed. > > +100 for not making keystone as Actor. Leaving purge responsibility > to service > side is the best way without any doubt. > > Instead of accepting Purge APIs from each service, I am thinking > we should consider another approach also which can be the plugin-able > approach. > Ewe can expose the plugin interface from purge library/tool. Each > service implements > the interface with purge functionality(script or command etc). > On discovery of each service's purge plugin, purge library/tool will > start the deletion > in required order etc. > > This can give 2 simple benefits > 1. No need to detect the service availability before requesting them > to purge the resources. > I am not sure OSpurge check the availability of services or not. But > in plugin approach case, > that will not be required. For example, if Congress is not installed > in my env then, > congress's purge plugin will not be discovered so no need to check > Congress service availability. > > 2. purge all resources interface will not be exposed to anyone except > the Purge library/tool. > In case of API, we are exposing the interface to user(admin/system > scopped etc) which can > delete all the resources of that service which is little security > issue may be. This can be argued > with existing delete API but those are per resource not all. Other > side we can say those can be > taken care by RBAC but still IMO exposing anything to even > permissiable user(especially human) > which can destruct the env is not a good idea where only right usage > of that interface is something > else (Purge library/tool in this case). > > Plugin-able can also have its cons but Let's first discuss all those > possibilities. > > -gmann Wasn't it what was proposed in the etherpad? I am a little confused there. From jean-philippe at evrard.me Wed Feb 6 10:13:51 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 06 Feb 2019 11:13:51 +0100 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> Message-ID: <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: > How can I specify a helm override to configure nova PCI alias when > there are multiple aliases? > I haven't been able to come up with a YAML compliant specification > for this. > > Are there other alternatives to be able to specify this as an > override? I assume that a nova Chart change would be required to > support this custom one-alias-entry-per-line formatting. > > Any insights on how to achieve this in helm are welcomed. > > Background: > There is a limitation in the nova.conf specification of PCI alias in > that it does not allow multiple PCI aliases as a list. The code says > "Supports multiple aliases by repeating the option (not by specifying > a list value)". Basically nova currently only supports one-alias- > entry-per-line format. > > Ideally I would specify global pci alias in a format similar to what > can be achieved with PCI passthrough_whitelist, which can takes JSON > list of dictionaries. > > This is what I am trying to specify in nova.conf (i.e., for nova-api- > osapi and nova-compute): > [pci] > alias = {dict 1} > alias = {dict 2} > . . . > > The following nova configuration format is desired, but not as yet > supported by nova: > [pci] > alias = [{dict 1}, {dict 2}] > > The following snippet of YAML works for PCI passthrough_whitelist, > where the value encoded is a JSON string: > > conf: > nova: > overrides: > nova_compute: > hosts: > - conf: > nova: > pci: > passthrough_whitelist: '[{"class_id": "030000", > "address": "0000:00:02.0"}]' > > Jim Gauld Could the '?' symbol (for complex keys) help here? I don't know, but I would love to see an answer, and I can't verify that now. Regards, JP From eumel at arcor.de Wed Feb 6 11:10:20 2019 From: eumel at arcor.de (Frank Kloeker) Date: Wed, 06 Feb 2019 12:10:20 +0100 Subject: [openstack-ansible] bug squash day! In-Reply-To: <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> Message-ID: Am 2019-02-06 10:32, schrieb Jean-Philippe Evrard: > On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: >> Hi Mohammed, >> >> will there be an extra invitation or an etherpad for logistic? >> >> many thanks >> >> Frank >> >> Am 2019-02-05 17:22, schrieb Mohammed Naser: >> > Hi everyone, >> > >> > We've discussed this over the ML today and we've decided for it to >> > be >> > next Wednesday (13th of February). Due to the distributed nature >> > of >> > our teams, we'll be aiming to go throughout the day and we'll all >> > be >> > hanging out on #openstack-ansible with a few more high bandwidth >> > way >> > of discussion if that is needed >> > >> > Thanks! >> > Mohammed > > What I did in the past was to prepare an etherpad of the most urgent > ones, but wasn't the most successful bug squash we had. > > I also took the other approach, BYO bug, list it in the etherpad, so we > can track the bug squashers. > > And in both cases, I brought belgian cookies/chocolates to the most > successful bug squasher (please note you should ponderate with the task > criticality level, else people might solve the simplest bugs to get the > chocolates :p) > This was my informal motivational, but I didn't have to do that. I > justliked doing so :) Very generous, we appreciate that. Would it be possible to expand the list with Belgian beer? :) kind regards Frank From cdent+os at anticdent.org Wed Feb 6 12:14:12 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 6 Feb 2019 12:14:12 +0000 (GMT) Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today Message-ID: A reminder that as discussed at the last placement extraction checkin meeting [1] we've got another one today at 1700 UTC. Join the #openstack-placement IRC channel around then if you are interested, and a google hangout url will be provided. In the thread with the notes, there was a question that didn't get answered [2] in email and remains open as far as I know. There's an etherpad [3] with pending extraction related tasks. If you've done some of the work on there, please make sure it is up to date. From that, it appears that the main pending things are deployment (with upgrade) and the vgpu reshaper work (which is close). Note that the main question we're trying to answer here is "when can we delete the nova code?", which is closely tied to the unanswered question [2] mentioned above. We are already using the extracted code in the integrate gate and we are not testing the unextracted code anywhere. [1] notes from the last meeting are at http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001789.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001805.html [3] https://etherpad.openstack.org/p/placement-extract-stein-5 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Wed Feb 6 13:32:36 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Feb 2019 07:32:36 -0600 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> Message-ID: <20190206133235.GA28569@sm-workstation> On Wed, Feb 06, 2019 at 04:25:54AM +0000, Arkady.Kanevsky at dell.com wrote: > How does Stackalytics shows statistics for current Train release work? As mentioned yesterday, we are currently on the Stein release. So there is no Train work yet. From sean.mcginnis at gmx.com Wed Feb 6 13:36:48 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Feb 2019 07:36:48 -0600 Subject: [TC][Searchlight] Project health evaluation In-Reply-To: References: Message-ID: <20190206133648.GB28569@sm-workstation> > > As we're reaching the Stein-3 milestone [5] and preparing for the Denver > summit. We, as a team, would like have a formal project health evaluation > in several aspects such as active contributors / team, planning, bug fixes, > features, etc. We would love to have some voice from the TC team and anyone > from the community who follows our effort during the Stein cycle. We then > would want to update the information at [6] and [7] to avoid any confusion > that may stop potential contributors or users to come to Searchlight. > > [1] https://review.openstack.org/#/c/588644/ > [2] > https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html > [3] https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html > [4] > https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision > [5] https://releases.openstack.org/stein/schedule.html > [6] https://governance.openstack.org/election/results/stein/ptl.html > [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker > It really looks like great progress with Searchlight over this release. Nice work Trinh and all that have been involved in that. [6] is a historical record of what happened with the PTL election. What would you want to update there? The best path forward, in my opinion, is to make sure there is a clear PTL candidate for the Train release. [7] is a periodic update of notes between TC members and the projects. If you would like to get more information added there, I would recommend working with the two TC members assigned to Searchlight to get an update. That appears to be Chris Dent and Dims. Sean From kchamart at redhat.com Wed Feb 6 13:40:54 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 6 Feb 2019 14:40:54 +0100 Subject: virt-install error while trying to create a new image In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BB3C9AE@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <20190206134054.GV5349@paraplu.home> On Tue, Feb 05, 2019 at 10:45:27PM +0000, Manuel Sopena Ballesteros wrote: > Dear Openstack community, > > I am trying to create a new image for Ironic. I followed the > documentation but got an error with virt-install. [...] > Please note: > > The OS has been reinstalled The host is a physical machine BIOS has > virtualization enabled I changed /etc/libvirt/qemu.conf group from > root to kvm following some linux forum instructions about this error > but the issue persists That's fine. Please also post your host kernel, QEMU and libvirt versions. > # virt-install --virt-type kvm --name centos --ram 1024 --disk > /tmp/centos.qcow2,format=qcow2 --network network=default > --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux > --os-variant=centos7.0 > --location=/root/CentOS-7-x86_64-NetInstall-1810.iso > > > Starting install... > > Retrieving file .treeinfo... > | 0 B 00:00:00 Retrieving file content... > | 0 B 00:00:00 Retrieving file vmlinuz... > | 6.3 MB 00:00:00 Retrieving file initrd.img... > | 50 MB 00:00:00 ERROR unsupported configuration: CPU mode > 'custom' for x86_64 kvm domain on x86_64 host is not supported by > hypervisor Domain installation does not appear to have been > successful. That error means a low-level QEMU command (that queries for what vCPUs QEMU supports) has failed for "some reason". To debug this, we need /var/log/libvirt/libvirtd.log with log filters. (a) Remove this directory and its contents (this step is specific to this problem; it's not always required): $ rm /var/cache/libvirt/qemu/ (b) Set the following in your /etc/libvirt/libvirtd.conf: log_filters="1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util 1:cpu" log_outputs="1:file:/var/log/libvirt/libvirtd.log" (c) Restart libvirtd: `systemctl restart libvirtd` (d) Repeat the test; and post the /var/log/libvirt/libvirtd.log somewhere. [...] BTW, I would highly recommend the `virt-builder` approach to create disk images for various operating systems and importing it to libvirt. (1) Download a CentOS 7.6 template (with latest updates) 20G of disk: $ sudo dnf install libguestfs-tools-c $ virt-builder centos-7.6 --update -o centos-vm1.qcow2 \ --selinux-relabel --size 20G (2) Import the downloaded disk image into libvirt: $ virt-install \ --name centosvm1 --ram 2048 \ --disk path=centos.img,format=qcow2 \ --os-variant centos7.0 \ --import Note-1: Although the command is called `virt-install`, we aren't _installing_ anything in this case. Note-2: The '--os-variant' can be whatever the nearest possible variant that's available on your host. To find the list of variants for your current Fedora release, run: `osinfo-query os | grep centos`. (The `osinfo-query` tool comes with the 'libosinfo' package.) The `virt-builder` tool is also available in Debian and Ubuntu. [...] -- /kashyap From jaypipes at gmail.com Wed Feb 6 13:57:32 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 6 Feb 2019 08:57:32 -0500 Subject: [openstack-dev] [stackalytics] Stackalytics Facelift In-Reply-To: <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> References: <45e9c80f282d4d2a880b279b990a964c@AUSX13MPS308.AMER.DELL.COM> <5C378231.8010603@openstack.org> <4b8edd5beecd4915b06278524482431e@AUSX13MPS308.AMER.DELL.COM> <5C378410.6050603@openstack.org> <0a2078f2b8ec44b19252633da58e3610@AUSX13MPS304.AMER.DELL.COM> Message-ID: <2d401bcf-abda-222c-710a-8f5ee7162072@gmail.com> On 02/05/2019 11:25 PM, Arkady.Kanevsky at dell.com wrote: > How does Stackalytics shows statistics for current Train release work? The current release is Stein, not Train. https://releases.openstack.org/ Best, -jay From lyarwood at redhat.com Wed Feb 6 14:12:38 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 6 Feb 2019 14:12:38 +0000 Subject: [nova] [placement] [packaging] placement extraction check in meeting In-Reply-To: <5c80b99e-e7b3-bc65-9556-c80608de0347@gmail.com> References: <5c80b99e-e7b3-bc65-9556-c80608de0347@gmail.com> Message-ID: <20190206141238.aavcywhimevxnerd@lyarwood.usersys.redhat.com> On 17-01-19 09:09:24, Matt Riedemann wrote: > On 1/17/2019 6:07 AM, Chris Dent wrote: > > > Deployment tools: > > > > > > * Lee is working on TripleO support for extracted placement and > > > estimates 3 more weeks for just deploy (base install) support to be > > > done, and at least 3 more weeks for upgrade support after that. Read > > > Lee's status update for details [2]. > > > * If nova were to go ahead and drop placement code and require > > > extracted placement before TripleO is ready, they would have to pin > > > nova to a git SHA before that which would delay their Stein release. > > > * Having the extraction span release boundaries would ease the > > > upgrade pain for TripleO. > > > > Can you (or Dan?) clarify if spanning the release boundaries is > > usefully specifically for tooling that chooses to upgrade everything > > at once and thus is forced to run Stein nova with Stein placement? > > > > And if someone were able/willing to run Rocky nova with Stein > > placement (briefly) the challenges are less of a concern? > > > > I'm not asking because I disagree with the assertion, I just want to > > be sure I understand (and by proxy our adoring readers do as well) > > what "ease" really means in this context as the above bullet doesn't > > really explain it. > > I didn't go into details on that point because honestly I also could use > some written words explaining the differences for TripleO in doing the > upgrade and migration in-step with the Stein upgrade versus upgrading to > Stein and then upgrading to Train, and how the migration with that is any > less painful. AFAIK it wouldn't make the migration itself any less painful but having an overlap release would provide additional development and validation time. Time that is currently lacking given the very late breaking way upgrades are developed by TripleO, often only stabilising after the official upstream release is out. Anyway, I think this was Dan's point here but I'm happy to be corrected. > I know Dan talked about it on the call, but I can't say I followed it > all well enough to be able to summarize the pros/cons (which is why I > didn't in my summary email). This might already be something I know > about, but the lights just aren't turning on right now. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From ignaziocassano at gmail.com Wed Feb 6 14:34:08 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 15:34:08 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190203100549.urtnvf2iatmqm6oy@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: Hello Tom, I think cases you suggested do not meet my needs. I have an openstack installation A with a fas netapp A. I have another openstack installation B with fas netapp B. I would like to use manila replication dr. If I replicate manila volumes from A to B the manila db on B does not knows anything about the replicated volume but only the backends on netapp B. Can I discover replicated volumes on openstack B? Or I must modify the manila db on B? Regards Ignazio Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >Thanks Goutham. > >If there are not mantainers for this driver I will switch on ceph and or > >netapp. > >I am already using netapp but I would like to export shares from an > >openstack installation to another. > >Since these 2 installations do non share any openstack component and have > >different openstack database, I would like to know it is possible . > >Regards > >Ignazio > > Hi Ignazio, > > If by "export shares from an openstack installation to another" you > mean removing them from management by manila in installation A and > instead managing them by manila in installation B then you can do that > while leaving them in place on your Net App back end using the manila > "manage-unmanage" administrative commands. Here's some documentation > [1] that should be helpful. > > If on the other hand by "export shares ... to another" you mean to > leave the shares under management of manila in installation A but > consume them from compute instances in installation B it's all about > the networking. One can use manila to "allow-access" to consumers of > shares anywhere but the consumers must be able to reach the "export > locations" for those shares and mount them. > > Cheers, > > -- Tom Barron > > [1] > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > > >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > gouthampravi at gmail.com> > >ha scritto: > > > >> Hi Ignazio, > >> > >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> wrote: > >> > > >> > Hello All, > >> > I installed manila on my queens openstack based on centos 7. > >> > I configured two servers with glusterfs replocation and ganesha nfs. > >> > I configured my controllers octavia,conf but when I try to create a > share > >> > the manila scheduler logs reports: > >> > > >> > Failed to schedule create_share: No valid host was found. Failed to > find > >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> NoValidHost: No valid host was found. Failed to find a weighted host, > the > >> last executed filter was CapabilitiesFilter. > >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > 89f76bc5de5545f381da2c10c7df7f15 > >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> > >> > >> The scheduler failure points out that you have a mismatch in > >> expectations (backend capabilities vs share type extra-specs) and > >> there was no host to schedule your share to. So a few things to check > >> here: > >> > >> - What is the share type you're using? Can you list the share type > >> extra-specs and confirm that the backend (your GlusterFS storage) > >> capabilities are appropriate with whatever you've set up as > >> extra-specs ($ manila pool-list --detail)? > >> - Is your backend operating correctly? You can list the manila > >> services ($ manila service-list) and see if the backend is both > >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> problem with the driver initialization, please enable debug logging, > >> and look at the log file for the manila-share service, you might see > >> why and be able to fix it. > >> > >> > >> Please be aware that we're on a look out for a maintainer for the > >> GlusterFS driver for the past few releases. We're open to bug fixes > >> and maintenance patches, but there is currently no active maintainer > >> for this driver. > >> > >> > >> > I did not understand if controllers node must be connected to the > >> network where shares must be exported for virtual machines, so my > glusterfs > >> are connected on the management network where openstack controllers are > >> conencted and to the network where virtual machine are connected. > >> > > >> > My manila.conf section for glusterfs section is the following > >> > > >> > [gluster-manila565] > >> > driver_handles_share_servers = False > >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> > glusterfs_ganesha_server_username = root > >> > glusterfs_nfs_server_type = Ganesha > >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> > #glusterfs_servers = root at 10.102.185.19 > >> > ganesha_config_dir = /etc/ganesha > >> > > >> > > >> > PS > >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> > > >> > 10.102.189.0/24 is the shared network inside openstack where virtual > >> machines are connected. > >> > > >> > The gluster servers are connected on both. > >> > > >> > > >> > Any help, please ? > >> > > >> > Ignazio > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Feb 6 14:39:43 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 15:39:43 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve names. I think atomic command uses names for finishing master installation. Curl is installed on master.... Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca ha scritto: > Hi Ignazio. sorry for late reply. security group is fine. It\s not > blocking the network traffic. > > Not sure why but, with this fedora release I can finally find atomic but > there is no yum,nslookup,dig,host command..... why is so different from > another version (latest) which had yum but not atomic. > > It's all weird > > > Cheers > > > > > On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano > wrote: > >> Alfredo, try to check security group linked to your kubemaster. >> >> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca >> ha scritto: >> >>> Hi Ignazio. Thanks for the link...... so >>> >>> Now at least atomic is present on the system. >>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>> doesn't resolve the names...so if I ping 8.8.8.8 >>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 ms* >>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 ms* >>> >>> but if I ping google.com doesn't resolve. I can't either find on fedora >>> dig or nslookup to check >>> resolv.conf has >>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>> *nameserver 8.8.8.8* >>> >>> It\s all so weird. >>> >>> >>> >>> >>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano >>> wrote: >>> >>>> I also suggest to change dns in your external network used by magnum. >>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>> >>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> thanks ignazio >>>>> Where can I get it from? >>>>> >>>>> >>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> I used fedora-magnum-27-4 and it works >>>>>> >>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>> >>>>>>> Hi Clemens. >>>>>>> So the image I downloaded is this >>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>> which is the latest I think. >>>>>>> But you are right...and I noticed that too.... It doesn't have >>>>>>> atomic binary >>>>>>> the os-release is >>>>>>> >>>>>>> *NAME=Fedora* >>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>> *ID=fedora* >>>>>>> *VERSION_ID=29* >>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>> *ANSI_COLOR="0;34"* >>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>> "* >>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>> "* >>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>> "* >>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>> "* >>>>>>> *VARIANT="Cloud Edition"* >>>>>>> *VARIANT_ID=cloud* >>>>>>> >>>>>>> >>>>>>> so not sure why I don't have atomic tho >>>>>>> >>>>>>> >>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens >>>>>>> wrote: >>>>>>> >>>>>>>> Now to the failure of your part-013: Are you sure that you used the >>>>>>>> glance image ‚fedora-atomic-latest‘ and not some other fedora image? Your >>>>>>>> error message below suggests that your image does not contain ‚atomic‘ as >>>>>>>> part of the image … >>>>>>>> >>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>> heat-container-agent >>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>> + systemctl start heat-container-agent >>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>> heat-container-agent.service not found. >>>>>>>> >>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>> >>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>> heat-container-agent.service not found. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Wed Feb 6 15:00:08 2019 From: kgiusti at gmail.com (Ken Giusti) Date: Wed, 6 Feb 2019 10:00:08 -0500 Subject: [ironic] [oslo] ironic overloading notifications for internal messaging In-Reply-To: References: <837ba8e4b0d1807fcd09c294873030f9155e5846.camel@redhat.com> Message-ID: On 2/5/19, Doug Hellmann wrote: > Ken Giusti writes: > >> On 2/4/19, Harald Jensås wrote: >>> On Tue, 2019-02-05 at 09:54 +1100, Michael Still wrote: >>>> Hi, >>>> >>>> I’ve been chasing a bug in ironic’s neutron agent for the last few >>>> days and I think its time to ask for some advice. >>>> >>> >>> I'm working on the same issue. (In fact there are two issues.) >>> >>>> Specifically, I was asked to debug why a set of controllers was using >>>> so much RAM, and the answer was that rabbitmq had a queue called >>>> ironic-neutron-agent-heartbeat.info with 800,000 messages enqueued. >>>> This notification queue is used by ironic’s neutron agent to >>>> calculate the hash ring. I have been able to duplicate this issue in >>>> a stock kolla-ansible install with ironic turned on but no bare metal >>>> nodes enrolled in ironic. About 0.6 messages are queued per second. >>>> >>>> I added some debugging code (hence the thread yesterday about >>>> mangling the code kolla deploys), and I can see that the messages in >>>> the queue are being read by the ironic neutron agent and acked >>>> correctly. However, they are not removed from the queue. >>>> >>>> You can see your queue size while using kolla with this command: >>>> >>>> docker exec rabbitmq rabbitmqctl list_queues messages name >>>> messages_ready consumers | sort -n | tail -1 >>>> >>>> My stock install that’s been running for about 12 hours currently has >>>> 8,244 messages in that queue. >>>> >>>> Where I’m a bit stumped is I had assumed that the messages weren’t >>>> being acked correctly, which is not the case. Is there something >>>> obvious about notification queues like them being persistent that >>>> I’ve missed in my general ignorance of the underlying implementation >>>> of notifications? >>>> >>> >>> I opened a oslo.messaging bug[1] yesterday. When using notifications >>> and all consumers use one or more pools. The ironic-neutron-agent does >>> use pools for all listeners in it's hash-ring member manager. And the >>> result is that notifications are published to the 'ironic-neutron- >>> agent-heartbeat.info' queue and they are never consumed. >>> >> >> This is an issue with the design of the notification pool feature. >> >> The Notification service is designed so notification events can be >> sent even though there may currently be no consumers. It supports the >> ability for events to be queued until a consumer(s) is ready to >> process them. So when a notifier issues an event and there are no >> consumers subscribed, a queue must be provisioned to hold that event >> until consumers appear. > > This has come up several times over the last few years, and it's always > a surprise to whoever it has bitten. I wonder if we should change the > default behavior to not create the consumer queue in the publisher? > +1 One possibility is to provide options on the Notifier constructor allowing the app to control the queue creation behavior. Something like "create_queue=True/False". We can document this as a 'dead letter' queue feature for events published w/o active listeners. > -- > Doug > -- Ken Giusti (kgiusti at gmail.com) From mnaser at vexxhost.com Wed Feb 6 15:15:21 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 6 Feb 2019 10:15:21 -0500 Subject: [openstack-ansible] bug squash day! In-Reply-To: References: <717c065910a2365e8d9674f987227771@arcor.de> <5f88b97f42da5cd3015ec738d4d7a6f9@arcor.de> <2ddb206f78e4c79ed6bc45a0d027b656473f09e7.camel@evrard.me> Message-ID: Hi all: We're likely going to have an etherpad and we'll be coordinating in IRC. Bring your own bug is probably the best avenue! Thanks all! Regards, Mohammed On Wed, Feb 6, 2019 at 6:10 AM Frank Kloeker wrote: > > Am 2019-02-06 10:32, schrieb Jean-Philippe Evrard: > > On Tue, 2019-02-05 at 19:04 +0100, Frank Kloeker wrote: > >> Hi Mohammed, > >> > >> will there be an extra invitation or an etherpad for logistic? > >> > >> many thanks > >> > >> Frank > >> > >> Am 2019-02-05 17:22, schrieb Mohammed Naser: > >> > Hi everyone, > >> > > >> > We've discussed this over the ML today and we've decided for it to > >> > be > >> > next Wednesday (13th of February). Due to the distributed nature > >> > of > >> > our teams, we'll be aiming to go throughout the day and we'll all > >> > be > >> > hanging out on #openstack-ansible with a few more high bandwidth > >> > way > >> > of discussion if that is needed > >> > > >> > Thanks! > >> > Mohammed > > > > What I did in the past was to prepare an etherpad of the most urgent > > ones, but wasn't the most successful bug squash we had. > > > > I also took the other approach, BYO bug, list it in the etherpad, so we > > can track the bug squashers. > > > > And in both cases, I brought belgian cookies/chocolates to the most > > successful bug squasher (please note you should ponderate with the task > > criticality level, else people might solve the simplest bugs to get the > > chocolates :p) > > This was my informal motivational, but I didn't have to do that. I > > justliked doing so :) > > Very generous, we appreciate that. Would it be possible to expand the > list with Belgian beer? :) > > kind regards > > Frank -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From martin.chlumsky at gmail.com Wed Feb 6 15:19:53 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Wed, 6 Feb 2019 10:19:53 -0500 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Yury, Thank you for the clarification. So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? I will open a bug report in this case. Cheers, Martin On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury wrote: > Hi Martin, > > Martin wrote: > > It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > If you remove or shelve instance from hypervisor host, then nova will > trigger ScaleIO to unmap volume from that host. > No issues should happen during deletion at this point, because volume is > already unmapped(unmounted). > No need to change sio_unmap_volume_before_deletion default value here. > > Martin wrote: > > What is the reasoning behind this option? > Setting sio_unmap_volume_before_deletion option to True means that cinder > driver will force unmount volume from ALL ScaleIO client nodes (not only > Openstack nodes) during volume deletion. > Enabling this option can be useful if you periodically detect compute > nodes with unmanaged ScaleIO volume mappings(volume mappings that not > managed by Openstack) in your environment. You can get such unmanaged > mappings in some cases, for example if there was hypervisor node power > failure. If during that power failure instances with mapped volumes were > moved to another host, than unmanaged mappings may appear on failed node > after its recovery. > > Martin wrote: > >Why would we ever set this > > to False and why is it False by default? > Force unmounting volumes from ALL ScaleIO clients is additional overhead. > It doesn't required in most environments. > > > Best regards, > Yury > > -----Original Message----- > From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM > To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; > Walsh, Helen; Belogrudov, Vladislav > Subject: RE: [Cinder][driver][ScaleIO] > > Adding Vlad who is the right person for ScaleIO driver. > > -----Original Message----- > From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM > To: openstack-discuss at lists.openstack.org; Walsh, Helen > Subject: Re: [Cinder][driver][ScaleIO] > > Adding Helen Walsh to this as she may be able to provide insight. > > Jay > > On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > > Hello, > > > > We are using EMC ScaleIO as our backend to cinder. > > When we delete VMs that have attached volumes and then try deleting > > said volumes, the volumes will sometimes end in state error_deleting. > > The state is reached because for some reason the volumes are still > > mapped (in the ScaleIO sense of the word) to the hypervisor despite > > the VM being deleted. > > We fixed the issue by setting the following option to True in > cinder.conf: > > > > # Unmap volume before deletion. (boolean value) > > sio_unmap_volume_before_deletion=False > > > > > > What is the reasoning behind this option? Why would we ever set this > > to False and why is it False by default? It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > > > > Thank you, > > > > Martin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Wed Feb 6 15:32:19 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 6 Feb 2019 10:32:19 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> Message-ID: <20190206153219.yyir5m5tyw7bvrj7@barron.net> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >Hello Tom, I think cases you suggested do not meet my needs. >I have an openstack installation A with a fas netapp A. >I have another openstack installation B with fas netapp B. >I would like to use manila replication dr. >If I replicate manila volumes from A to B the manila db on B does not >knows anything about the replicated volume but only the backends on netapp >B. Can I discover replicated volumes on openstack B? >Or I must modify the manila db on B? >Regards >Ignazio I guess I don't understand your use case. Do Openstack installation A and Openstack installation B know *anything* about one another? For example, are their keystone and neutron databases somehow synced? Are they going to be operative for the same set of manila shares at the same time, or are you contemplating a migration of the shares from installation A to installation B? Probably it would be helpful to have a statement of the problem that you intend to solve before we consider the potential mechanisms for solving it. Cheers, -- Tom > > >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >Thanks Goutham. >> >If there are not mantainers for this driver I will switch on ceph and or >> >netapp. >> >I am already using netapp but I would like to export shares from an >> >openstack installation to another. >> >Since these 2 installations do non share any openstack component and have >> >different openstack database, I would like to know it is possible . >> >Regards >> >Ignazio >> >> Hi Ignazio, >> >> If by "export shares from an openstack installation to another" you >> mean removing them from management by manila in installation A and >> instead managing them by manila in installation B then you can do that >> while leaving them in place on your Net App back end using the manila >> "manage-unmanage" administrative commands. Here's some documentation >> [1] that should be helpful. >> >> If on the other hand by "export shares ... to another" you mean to >> leave the shares under management of manila in installation A but >> consume them from compute instances in installation B it's all about >> the networking. One can use manila to "allow-access" to consumers of >> shares anywhere but the consumers must be able to reach the "export >> locations" for those shares and mount them. >> >> Cheers, >> >> -- Tom Barron >> >> [1] >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> gouthampravi at gmail.com> >> >ha scritto: >> > >> >> Hi Ignazio, >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> wrote: >> >> > >> >> > Hello All, >> >> > I installed manila on my queens openstack based on centos 7. >> >> > I configured two servers with glusterfs replocation and ganesha nfs. >> >> > I configured my controllers octavia,conf but when I try to create a >> share >> >> > the manila scheduler logs reports: >> >> > >> >> > Failed to schedule create_share: No valid host was found. Failed to >> find >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> the >> >> last executed filter was CapabilitiesFilter. >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> 89f76bc5de5545f381da2c10c7df7f15 >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> expectations (backend capabilities vs share type extra-specs) and >> >> there was no host to schedule your share to. So a few things to check >> >> here: >> >> >> >> - What is the share type you're using? Can you list the share type >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> capabilities are appropriate with whatever you've set up as >> >> extra-specs ($ manila pool-list --detail)? >> >> - Is your backend operating correctly? You can list the manila >> >> services ($ manila service-list) and see if the backend is both >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> problem with the driver initialization, please enable debug logging, >> >> and look at the log file for the manila-share service, you might see >> >> why and be able to fix it. >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> and maintenance patches, but there is currently no active maintainer >> >> for this driver. >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> network where shares must be exported for virtual machines, so my >> glusterfs >> >> are connected on the management network where openstack controllers are >> >> conencted and to the network where virtual machine are connected. >> >> > >> >> > My manila.conf section for glusterfs section is the following >> >> > >> >> > [gluster-manila565] >> >> > driver_handles_share_servers = False >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> > glusterfs_ganesha_server_username = root >> >> > glusterfs_nfs_server_type = Ganesha >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> > ganesha_config_dir = /etc/ganesha >> >> > >> >> > >> >> > PS >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> > >> >> > 10.102.189.0/24 is the shared network inside openstack where virtual >> >> machines are connected. >> >> > >> >> > The gluster servers are connected on both. >> >> > >> >> > >> >> > Any help, please ? >> >> > >> >> > Ignazio >> >> >> From lars at redhat.com Wed Feb 6 15:41:38 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 6 Feb 2019 10:41:38 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> Message-ID: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > shim service that sits between Ironic and the client. > that shim service could be nova, which already has multi tenancy. > > > > 2. Implement a Blazar plugin that is able to talk to whichever service > > in (1) is appropriate. > and nova is supported by blazar > > > > 3. Work with Blazar developers to implement any lease logic that we > > think is necessary. > +1 > by they im sure there is a reason why you dont want to have blazar drive > nova and nova dirve ironic but it seam like all the fucntionality would > already be there in that case. Sean, Being able to use Nova is a really attractive idea. I'm a little fuzzy on some of the details, though, starting with how to handle node discovery. A key goal is being able to parametrically request systems ("I want a system with a GPU and >= 40GB of memory"). With Nova, would this require effectively creating a flavor for every unique hardware configuration? Conceptually, I want "... create server --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not clear to me if that's something we could do now or if that would require changes to the way Nova handles baremetal scheduling. Additionally, we also want the ability to acquire a node without provisioning it, so that a consumer can use their own provisioning tool. From Nova's perspective, I guess this would be like requesting a system without specifying an image. Is that possible right now? I'm sure I'll have other questions, but these are the first few that crop up. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From mihalis68 at gmail.com Wed Feb 6 15:42:19 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 6 Feb 2019 10:42:19 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th Message-ID: Dear All, The Evenbrite for the next ops meetup is now open, see https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. Chris on behalf of the ops meetups team -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Wed Feb 6 15:45:23 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 09:45:23 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> Message-ID: Folks- On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >> How can I specify a helm override to configure nova PCI alias when >> there are multiple aliases? >> I haven't been able to come up with a YAML compliant specification >> for this. >> >> Are there other alternatives to be able to specify this as an >> override? I assume that a nova Chart change would be required to >> support this custom one-alias-entry-per-line formatting. >> >> Any insights on how to achieve this in helm are welcomed. >> The following nova configuration format is desired, but not as yet >> supported by nova: >> [pci] >> alias = [{dict 1}, {dict 2}] >> >> The following snippet of YAML works for PCI passthrough_whitelist, >> where the value encoded is a JSON string: >> >> conf: >> nova: >> overrides: >> nova_compute: >> hosts: >> - conf: >> nova: >> pci: >> passthrough_whitelist: '[{"class_id": "030000", >> "address": "0000:00:02.0"}]' I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. Do you have a way to pull it down and try it? (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) -efried [1] https://review.openstack.org/#/c/635191/ From thierry at openstack.org Wed Feb 6 15:55:49 2019 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 6 Feb 2019 16:55:49 +0100 Subject: [tc][uc] Becoming an Open Source Initiative affiliate org Message-ID: I started a thread on the Foundation mailing-list about the OSF becoming an OSI affiliate org: http://lists.openstack.org/pipermail/foundation/2019-February/002680.html Please follow-up there is you have any concerns or questions. -- Thierry Carrez (ttx) From Tim.Bell at cern.ch Wed Feb 6 16:00:40 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 6 Feb 2019 16:00:40 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: A few years ago, there was a discussion in one of the summit forums where users wanted to be able to come along to a generic OpenStack cloud and say "give me the flavor that has at least X GB RAM and Y GB disk space". At the time, the thoughts were that this could be done by doing a flavour list and then finding the smallest one which matched the requirements. Would that be an option or would it require some more Nova internals? For reserving, you could install the machine with a simple image and then let the user rebuild with their choice? Not sure if these meet what you'd like but it may allow a proof-of-concept without needing too many code changes. Tim -----Original Message----- From: Lars Kellogg-Stedman Date: Wednesday, 6 February 2019 at 16:44 To: Sean Mooney Cc: "Ansari, Mohhamad Naved" , Julia Kreger , Ian Ballou , Kristi Nikolla , "openstack-discuss at lists.openstack.org" , Tzu-Mainn Chen Subject: Re: [ironic] Hardware leasing with Ironic On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > shim service that sits between Ironic and the client. > that shim service could be nova, which already has multi tenancy. > > > > 2. Implement a Blazar plugin that is able to talk to whichever service > > in (1) is appropriate. > and nova is supported by blazar > > > > 3. Work with Blazar developers to implement any lease logic that we > > think is necessary. > +1 > by they im sure there is a reason why you dont want to have blazar drive > nova and nova dirve ironic but it seam like all the fucntionality would > already be there in that case. Sean, Being able to use Nova is a really attractive idea. I'm a little fuzzy on some of the details, though, starting with how to handle node discovery. A key goal is being able to parametrically request systems ("I want a system with a GPU and >= 40GB of memory"). With Nova, would this require effectively creating a flavor for every unique hardware configuration? Conceptually, I want "... create server --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not clear to me if that's something we could do now or if that would require changes to the way Nova handles baremetal scheduling. Additionally, we also want the ability to acquire a node without provisioning it, so that a consumer can use their own provisioning tool. From Nova's perspective, I guess this would be like requesting a system without specifying an image. Is that possible right now? I'm sure I'll have other questions, but these are the first few that crop up. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From martin.chlumsky at gmail.com Wed Feb 6 16:24:16 2019 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Wed, 6 Feb 2019 11:24:16 -0500 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Thanks! Martin On Wed, Feb 6, 2019 at 11:20 AM Kulazhenkov, Yury wrote: > Hi Martin, > > > > Martin wrote: > > > So if we get volumes that are still mapped to hypervisors after deleting > the attached instances with sio_unmap_volume_before_deletion set to False, > there's a good chance it's a bug? > > Yes, volumes should be detached from host even without set > sio_unmap_volume_before_deletion = True. > > > > Yury > > > > *From:* Martin Chlumsky > *Sent:* Wednesday, February 6, 2019 6:20 PM > *To:* Kulazhenkov, Yury > *Cc:* Kanevsky, Arkady; jsbryant at electronicjungle.net; > openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav > *Subject:* Re: [Cinder][driver][ScaleIO] > > > > [EXTERNAL EMAIL] > > Hi Yury, > > Thank you for the clarification. > So if we get volumes that are still mapped to hypervisors after deleting > the attached instances with sio_unmap_volume_before_deletion set to False, > there's a good chance it's a bug? I will open a bug report in this case. > > Cheers, > > Martin > > > > On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury < > Yury.Kulazhenkov at dell.com> wrote: > > Hi Martin, > > Martin wrote: > > It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > If you remove or shelve instance from hypervisor host, then nova will > trigger ScaleIO to unmap volume from that host. > No issues should happen during deletion at this point, because volume is > already unmapped(unmounted). > No need to change sio_unmap_volume_before_deletion default value here. > > Martin wrote: > > What is the reasoning behind this option? > Setting sio_unmap_volume_before_deletion option to True means that cinder > driver will force unmount volume from ALL ScaleIO client nodes (not only > Openstack nodes) during volume deletion. > Enabling this option can be useful if you periodically detect compute > nodes with unmanaged ScaleIO volume mappings(volume mappings that not > managed by Openstack) in your environment. You can get such unmanaged > mappings in some cases, for example if there was hypervisor node power > failure. If during that power failure instances with mapped volumes were > moved to another host, than unmanaged mappings may appear on failed node > after its recovery. > > Martin wrote: > >Why would we ever set this > > to False and why is it False by default? > Force unmounting volumes from ALL ScaleIO clients is additional overhead. > It doesn't required in most environments. > > > Best regards, > Yury > > -----Original Message----- > From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM > To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; > Walsh, Helen; Belogrudov, Vladislav > Subject: RE: [Cinder][driver][ScaleIO] > > Adding Vlad who is the right person for ScaleIO driver. > > -----Original Message----- > From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM > To: openstack-discuss at lists.openstack.org; Walsh, Helen > Subject: Re: [Cinder][driver][ScaleIO] > > Adding Helen Walsh to this as she may be able to provide insight. > > Jay > > On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > > Hello, > > > > We are using EMC ScaleIO as our backend to cinder. > > When we delete VMs that have attached volumes and then try deleting > > said volumes, the volumes will sometimes end in state error_deleting. > > The state is reached because for some reason the volumes are still > > mapped (in the ScaleIO sense of the word) to the hypervisor despite > > the VM being deleted. > > We fixed the issue by setting the following option to True in > cinder.conf: > > > > # Unmap volume before deletion. (boolean value) > > sio_unmap_volume_before_deletion=False > > > > > > What is the reasoning behind this option? Why would we ever set this > > to False and why is it False by default? It seems you would always > > want to unmap the volume from the hypervisor before deleting it. > > > > Thank you, > > > > Martin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Feb 6 16:48:39 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 6 Feb 2019 17:48:39 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190206153219.yyir5m5tyw7bvrj7@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> Message-ID: The 2 openstack Installations do not share anything. The manila on each one works on different netapp storage, but the 2 netapp can be synchronized. Site A with an openstack instalkation and netapp A. Site B with an openstack with netapp B. Netapp A and netapp B can be synchronized via network. Ignazio Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > >Hello Tom, I think cases you suggested do not meet my needs. > >I have an openstack installation A with a fas netapp A. > >I have another openstack installation B with fas netapp B. > >I would like to use manila replication dr. > >If I replicate manila volumes from A to B the manila db on B does not > >knows anything about the replicated volume but only the backends on > netapp > >B. Can I discover replicated volumes on openstack B? > >Or I must modify the manila db on B? > >Regards > >Ignazio > > I guess I don't understand your use case. Do Openstack installation A > and Openstack installation B know *anything* about one another? For > example, are their keystone and neutron databases somehow synced? Are > they going to be operative for the same set of manila shares at the > same time, or are you contemplating a migration of the shares from > installation A to installation B? > > Probably it would be helpful to have a statement of the problem that > you intend to solve before we consider the potential mechanisms for > solving it. > > Cheers, > > -- Tom > > > > > > >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > > > >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >> >Thanks Goutham. > >> >If there are not mantainers for this driver I will switch on ceph and > or > >> >netapp. > >> >I am already using netapp but I would like to export shares from an > >> >openstack installation to another. > >> >Since these 2 installations do non share any openstack component and > have > >> >different openstack database, I would like to know it is possible . > >> >Regards > >> >Ignazio > >> > >> Hi Ignazio, > >> > >> If by "export shares from an openstack installation to another" you > >> mean removing them from management by manila in installation A and > >> instead managing them by manila in installation B then you can do that > >> while leaving them in place on your Net App back end using the manila > >> "manage-unmanage" administrative commands. Here's some documentation > >> [1] that should be helpful. > >> > >> If on the other hand by "export shares ... to another" you mean to > >> leave the shares under management of manila in installation A but > >> consume them from compute instances in installation B it's all about > >> the networking. One can use manila to "allow-access" to consumers of > >> shares anywhere but the consumers must be able to reach the "export > >> locations" for those shares and mount them. > >> > >> Cheers, > >> > >> -- Tom Barron > >> > >> [1] > >> > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >> > > >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > >> gouthampravi at gmail.com> > >> >ha scritto: > >> > > >> >> Hi Ignazio, > >> >> > >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> >> wrote: > >> >> > > >> >> > Hello All, > >> >> > I installed manila on my queens openstack based on centos 7. > >> >> > I configured two servers with glusterfs replocation and ganesha > nfs. > >> >> > I configured my controllers octavia,conf but when I try to create a > >> share > >> >> > the manila scheduler logs reports: > >> >> > > >> >> > Failed to schedule create_share: No valid host was found. Failed to > >> find > >> >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> >> NoValidHost: No valid host was found. Failed to find a weighted host, > >> the > >> >> last executed filter was CapabilitiesFilter. > >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> 89f76bc5de5545f381da2c10c7df7f15 > >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> > >> >> > >> >> The scheduler failure points out that you have a mismatch in > >> >> expectations (backend capabilities vs share type extra-specs) and > >> >> there was no host to schedule your share to. So a few things to check > >> >> here: > >> >> > >> >> - What is the share type you're using? Can you list the share type > >> >> extra-specs and confirm that the backend (your GlusterFS storage) > >> >> capabilities are appropriate with whatever you've set up as > >> >> extra-specs ($ manila pool-list --detail)? > >> >> - Is your backend operating correctly? You can list the manila > >> >> services ($ manila service-list) and see if the backend is both > >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> >> problem with the driver initialization, please enable debug logging, > >> >> and look at the log file for the manila-share service, you might see > >> >> why and be able to fix it. > >> >> > >> >> > >> >> Please be aware that we're on a look out for a maintainer for the > >> >> GlusterFS driver for the past few releases. We're open to bug fixes > >> >> and maintenance patches, but there is currently no active maintainer > >> >> for this driver. > >> >> > >> >> > >> >> > I did not understand if controllers node must be connected to the > >> >> network where shares must be exported for virtual machines, so my > >> glusterfs > >> >> are connected on the management network where openstack controllers > are > >> >> conencted and to the network where virtual machine are connected. > >> >> > > >> >> > My manila.conf section for glusterfs section is the following > >> >> > > >> >> > [gluster-manila565] > >> >> > driver_handles_share_servers = False > >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> >> > glusterfs_ganesha_server_username = root > >> >> > glusterfs_nfs_server_type = Ganesha > >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> >> > #glusterfs_servers = root at 10.102.185.19 > >> >> > ganesha_config_dir = /etc/ganesha > >> >> > > >> >> > > >> >> > PS > >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> >> > > >> >> > 10.102.189.0/24 is the shared network inside openstack where > virtual > >> >> machines are connected. > >> >> > > >> >> > The gluster servers are connected on both. > >> >> > > >> >> > > >> >> > Any help, please ? > >> >> > > >> >> > Ignazio > >> >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Feb 6 17:18:37 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 6 Feb 2019 12:18:37 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: I'm all signed up. See you in Berlin! On Wed, Feb 6, 2019, 10:43 AM Chris Morgan Dear All, > The Evenbrite for the next ops meetup is now open, see > > > https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > > Thanks for Allison Price from the foundation for making this for us. We'll > be sharing more details on the event soon. > > Chris > on behalf of the ops meetups team > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Wed Feb 6 06:12:08 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Wed, 6 Feb 2019 06:12:08 +0000 Subject: [Heat] Reg accessing variables of resource group heat api Message-ID: Hi , We are developing heat templates for our vnf deployment .It includes multiple resources .We want to repeat the resource and hence used the api RESOURCE GROUP . Attached are the templates which we used Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has the resource group api with count . We want to access the variables of resource in set1.yaml while repeating it with count .Eg . port name ,port fixed ip address we want to change in each set . Please let us know how we can have a variable with each repeated resource . Thanks, A.Nanthini -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: set1.yaml.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: setrepeat.yaml.txt URL: From linus.nilsson at it.uu.se Wed Feb 6 12:04:57 2019 From: linus.nilsson at it.uu.se (Linus Nilsson) Date: Wed, 6 Feb 2019 13:04:57 +0100 Subject: Rocky and older Ceph compatibility Message-ID: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Hi all, I'm working on upgrading our cloud, which consists of a block storage system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As part of the upgrade plan we are discussing first going to Rocky while keeping the block system at the "Kraken" release. It would be helpful to know if anyone has attempted to run the Rocky Cinder/Glance drivers with Ceph Kraken or older? References or documentation is welcomed. I fail to find much information online, but perhaps I'm looking in the wrong places or I'm asking a question with an obvious answer. Thanks! Best regards, Linus UPPMAX När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy From Yury.Kulazhenkov at dell.com Wed Feb 6 14:35:11 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Wed, 6 Feb 2019 14:35:11 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Martin, Martin wrote: > It seems you would always > want to unmap the volume from the hypervisor before deleting it. If you remove or shelve instance from hypervisor host, then nova will trigger ScaleIO to unmap volume from that host. No issues should happen during deletion at this point, because volume is already unmapped(unmounted). No need to change sio_unmap_volume_before_deletion default value here. Martin wrote: > What is the reasoning behind this option? Setting sio_unmap_volume_before_deletion option to True means that cinder driver will force unmount volume from ALL ScaleIO client nodes (not only Openstack nodes) during volume deletion. Enabling this option can be useful if you periodically detect compute nodes with unmanaged ScaleIO volume mappings(volume mappings that not managed by Openstack) in your environment. You can get such unmanaged mappings in some cases, for example if there was hypervisor node power failure. If during that power failure instances with mapped volumes were moved to another host, than unmanaged mappings may appear on failed node after its recovery. Martin wrote: >Why would we ever set this > to False and why is it False by default? Force unmounting volumes from ALL ScaleIO clients is additional overhead. It doesn't required in most environments. Best regards, Yury -----Original Message----- From: Arkady.Kanevsky at dell.com Sent: Wednesday, February 6, 2019 7:24 AM To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: RE: [Cinder][driver][ScaleIO] Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin From Yury.Kulazhenkov at dell.com Wed Feb 6 16:19:15 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Wed, 6 Feb 2019 16:19:15 +0000 Subject: [Cinder][driver][ScaleIO] In-Reply-To: References: <9d98a006-a062-0a9b-a9d3-68ed0ef4078f@gmail.com> <74b2c779ee644a64b5b1939537ddffd1@AUSX13MPS304.AMER.DELL.COM> Message-ID: Hi Martin, Martin wrote: > So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? Yes, volumes should be detached from host even without set sio_unmap_volume_before_deletion = True. Yury From: Martin Chlumsky Sent: Wednesday, February 6, 2019 6:20 PM To: Kulazhenkov, Yury Cc: Kanevsky, Arkady; jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: Re: [Cinder][driver][ScaleIO] [EXTERNAL EMAIL] Hi Yury, Thank you for the clarification. So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? I will open a bug report in this case. Cheers, Martin On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury > wrote: Hi Martin, Martin wrote: > It seems you would always > want to unmap the volume from the hypervisor before deleting it. If you remove or shelve instance from hypervisor host, then nova will trigger ScaleIO to unmap volume from that host. No issues should happen during deletion at this point, because volume is already unmapped(unmounted). No need to change sio_unmap_volume_before_deletion default value here. Martin wrote: > What is the reasoning behind this option? Setting sio_unmap_volume_before_deletion option to True means that cinder driver will force unmount volume from ALL ScaleIO client nodes (not only Openstack nodes) during volume deletion. Enabling this option can be useful if you periodically detect compute nodes with unmanaged ScaleIO volume mappings(volume mappings that not managed by Openstack) in your environment. You can get such unmanaged mappings in some cases, for example if there was hypervisor node power failure. If during that power failure instances with mapped volumes were moved to another host, than unmanaged mappings may appear on failed node after its recovery. Martin wrote: >Why would we ever set this > to False and why is it False by default? Force unmounting volumes from ALL ScaleIO clients is additional overhead. It doesn't required in most environments. Best regards, Yury -----Original Message----- From: Arkady.Kanevsky at dell.com > Sent: Wednesday, February 6, 2019 7:24 AM To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav Subject: RE: [Cinder][driver][ScaleIO] Adding Vlad who is the right person for ScaleIO driver. -----Original Message----- From: Jay Bryant > Sent: Tuesday, February 5, 2019 5:30 PM To: openstack-discuss at lists.openstack.org; Walsh, Helen Subject: Re: [Cinder][driver][ScaleIO] Adding Helen Walsh to this as she may be able to provide insight. Jay On 2/5/2019 12:16 PM, Martin Chlumsky wrote: > Hello, > > We are using EMC ScaleIO as our backend to cinder. > When we delete VMs that have attached volumes and then try deleting > said volumes, the volumes will sometimes end in state error_deleting. > The state is reached because for some reason the volumes are still > mapped (in the ScaleIO sense of the word) to the hypervisor despite > the VM being deleted. > We fixed the issue by setting the following option to True in cinder.conf: > > # Unmap volume before deletion. (boolean value) > sio_unmap_volume_before_deletion=False > > > What is the reasoning behind this option? Why would we ever set this > to False and why is it False by default? It seems you would always > want to unmap the volume from the hypervisor before deleting it. > > Thank you, > > Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed Feb 6 17:37:29 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 6 Feb 2019 12:37:29 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: See you there! On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > I'm all signed up. See you in Berlin! > > On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >> Dear All, >> The Evenbrite for the next ops meetup is now open, see >> >> >> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 >> >> Thanks for Allison Price from the foundation for making this for us. >> We'll be sharing more details on the event soon. >> >> Chris >> on behalf of the ops meetups team >> >> -- >> Chris Morgan >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Wed Feb 6 17:55:17 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 6 Feb 2019 12:55:17 -0500 Subject: Rocky and older Ceph compatibility In-Reply-To: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: > > Hi all, > > I'm working on upgrading our cloud, which consists of a block storage > system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA > Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As > part of the upgrade plan we are discussing first going to Rocky while > keeping the block system at the "Kraken" release. > For the most part it comes down to your client libraries. Personally, I would upgrade Ceph first, leaving Openstack running older client libraries. I did this with Jewel clients talking to a Luminous cluster, so you should be fine with K->M. Then, when you upgrade Openstack, your client libraries can get updated along with it. If you do Openstack first, you'll need to come back around and update your clients, and that will require you to restart everything a second time. . > It would be helpful to know if anyone has attempted to run the Rocky > Cinder/Glance drivers with Ceph Kraken or older? > I haven't done this specific combination, but I have mixed and matched Openstack and Ceph versions without any issues. I have MItaka, Queens, and Rocky all talking to Luminous without incident. -Erik > References or documentation is welcomed. I fail to find much information > online, but perhaps I'm looking in the wrong places or I'm asking a > question with an obvious answer. > > Thanks! > > Best regards, > Linus > UPPMAX > > > > > > > > > När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ > > E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy > From bharat at stackhpc.com Wed Feb 6 18:04:50 2019 From: bharat at stackhpc.com (Bharat Kunwar) Date: Wed, 6 Feb 2019 18:04:50 +0000 Subject: [magnum][kayobe][kolla-ansible] heat-container-agent reports that `publicURL endpoint for orchestration service in null region not found` and `Source [heat] Unavailable.` Message-ID: I have a Magnum deployment using stable/queens which appears to be successful in every way when you look at `/var/log/cloud-init.log` and `/var/log/cloud-init-output.log`. They look something like this: http://paste.openstack.org/show/744620/ http://paste.openstack.org/show/744621/ However, `heat-container-agent` log reports this on repeat: Feb 06 17:56:38 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: Source [heat] Unavailable. Feb 06 17:56:38 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: /var/lib/os-collect-config/local-data not found. Skipping Feb 06 17:56:54 tesom31-q7fhuprr64fp-master-0.novalocal runc[2040]: publicURL endpoint for orchestration service in null region not found These are the parts of the heat stack that stay in CREATE_IN_PROGRESS for a long time before eventually failing: http://paste.openstack.org/show/744638/ As a result, the workers never get created. Here is what my magnum.conf looks like: http://paste.openstack.org/show/744625 . And this is what the heat params looks like: http://paste.openstack.org/show/744639/ I have tried setting `send_cluster_metrics=False` and also tried adding `region_name` to [keystone*] inside `magnum.conf`. What else should I try? Best Bharat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 6 18:17:53 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 Feb 2019 19:17:53 +0100 Subject: [all] Denver Open Infrastructure Summit Community Contributor Awards! Message-ID: Hello Everyone! As we approach the Summit (still a ways away thankfully), its time to kick off the Community Contributor Award nominations[1]! For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in our communities that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. As always, participation is voluntary :) Nominations will close on April 14th at 7:00 UTC and recipients will be announced at the Open Infrastructure Summit in Denver[2]. Recipients will be selected by a panel of top-level OSF project representatives who wish to participate. Finally, congrats again to recipients in Berlin[3]! -Kendall Nelson (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/train_cca_nominations [2]https://www.openstack.org/summit/denver-2019/ [3] http://superuser.openstack.org/articles/openstack-community-contributor-awards-berlin-summit-edition/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Feb 6 18:52:10 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 6 Feb 2019 18:52:10 +0000 (GMT) Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: On Wed, 6 Feb 2019, Chris Dent wrote: > A reminder that as discussed at the last placement extraction > checkin meeting [1] we've got another one today at 1700 UTC. Join > the #openstack-placement IRC channel around then if you are > interested, and a google hangout url will be provided. We did this. What follows are some notes. TL;DR: We're going to keep the placement code in nova, but freeze it. The extracted code is unfrozen and open for API changes. We did some warning up in IRC throughout the day, which may be useful context for readers: * update on tripleo situation: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T14:19:59 * update on osa situation, especially testing: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T15:30:46 * leaving the placement code in nova: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-02-06.log.html#t2019-02-06T16:43:19 On the call we used the extraction etherpad to take notes and be an agenda: https://etherpad.openstack.org/p/placement-extract-stein-5 The main points to summarize are: * Progress is going well in TripleO with regard to being able to deploy (by lyarwood) extracted placement but upgrades, especially CI of those upgrades is going to be challenging if not impossible in the near term. This is a relatively new development, resulting from a change in procedure within tripleo. Not deleting the code from nova will help when the time comes, later, to test those upgrades. * In OSA there have been some resourcing gaps on the work, but mnaser is currently iterating on making things go. cdent is going to help add some placement-only live tests (to avoid deploying nova) to the code mnaser is working (by mnaser, cdent). As with tripleo, upgrade testing can be eased by leaving the placement code in nova. * The nested VGPU reshaper work is undergoing some light refactoring but confidence is high that it is ready (by bauzas). Functional testing is ready to go too. A manual test with real hardware was done some months ago, but not on the extracted code. It was decided that doing this again but we took it off the requirements because nobody has easy access to the right hardware[1]. * Based on the various issues above, and a general sense that it was the right thing to do, we're not going to delete the placement code from nova. This will allow upgrade testing Both OSA and TripleO are currently able to test with not-extracted placement, and will continue to do so. A patch will be made to nova to add a job using OSA (by mnaser). Other avenues are being explored to make sure the kept-in-nova placement code is tested. The previous functional and unit tests were already deleted and devstack and grenade use extracted. **The code still in nova is now considered frozen and nova's use of placement will be frozen (that is, it will assume microversion 1.30 or less) for the rest of Stein [2].** * The documentation changes which are currently stacked behind the change to delete the placement code from nova will be pulled out (by cdent). * The API presented by the extracted placement is now allowed to change. There are a few pending specs that we can make progress on if people would like to do so: https://blueprints.launchpad.net/nova/+spec/alloc-candidates-in-tree https://blueprints.launchpad.net/nova/+spec/any-traits-in-allocation-candidates-query https://blueprints.launchpad.net/nova/+spec/mixing-required-traits-with-any-traits https://blueprints.launchpad.net/nova/+spec/negative-aggregate-membership Nobody has yet made any commitment to do this stuff, and there's a general sense that people are busy, but if there is time and interest we should talk about it. We did not schedule a next check in meeting. When one needs to happen, which it will, we'll figure that out and make an announcement. Thanks for your attention. If I made any errors above, or left something out, please followup. If you have questions, please ask them. [1] Our employers should really be ashamed of themselves. This happens over and over and over again, across OpenStack, and is a huge drag on velocity. [2] While technically it would be possible to do version discovery or version-guarded changes, to remove ambiguity and because nova is already overbooked and moving slowly, easier to just say "no" and leave it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tpb at dyncloud.net Wed Feb 6 20:16:19 2019 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 6 Feb 2019 15:16:19 -0500 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> Message-ID: <20190206201619.o6turxaps6iv65p7@barron.net> On 06/02/19 17:48 +0100, Ignazio Cassano wrote: >The 2 openstack Installations do not share anything. The manila on each one >works on different netapp storage, but the 2 netapp can be synchronized. >Site A with an openstack instalkation and netapp A. >Site B with an openstack with netapp B. >Netapp A and netapp B can be synchronized via network. >Ignazio OK, thanks. You can likely get the share data and its netapp metadata to show up on B via replication and (gouthamr may explain details) but you will lose all the Openstack/manila information about the share unless Openstack database info (more than just manila tables) is imported. That may be OK foryour use case. -- Tom > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >> >Hello Tom, I think cases you suggested do not meet my needs. >> >I have an openstack installation A with a fas netapp A. >> >I have another openstack installation B with fas netapp B. >> >I would like to use manila replication dr. >> >If I replicate manila volumes from A to B the manila db on B does not >> >knows anything about the replicated volume but only the backends on >> netapp >> >B. Can I discover replicated volumes on openstack B? >> >Or I must modify the manila db on B? >> >Regards >> >Ignazio >> >> I guess I don't understand your use case. Do Openstack installation A >> and Openstack installation B know *anything* about one another? For >> example, are their keystone and neutron databases somehow synced? Are >> they going to be operative for the same set of manila shares at the >> same time, or are you contemplating a migration of the shares from >> installation A to installation B? >> >> Probably it would be helpful to have a statement of the problem that >> you intend to solve before we consider the potential mechanisms for >> solving it. >> >> Cheers, >> >> -- Tom >> >> > >> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: >> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> >> >Thanks Goutham. >> >> >If there are not mantainers for this driver I will switch on ceph and >> or >> >> >netapp. >> >> >I am already using netapp but I would like to export shares from an >> >> >openstack installation to another. >> >> >Since these 2 installations do non share any openstack component and >> have >> >> >different openstack database, I would like to know it is possible . >> >> >Regards >> >> >Ignazio >> >> >> >> Hi Ignazio, >> >> >> >> If by "export shares from an openstack installation to another" you >> >> mean removing them from management by manila in installation A and >> >> instead managing them by manila in installation B then you can do that >> >> while leaving them in place on your Net App back end using the manila >> >> "manage-unmanage" administrative commands. Here's some documentation >> >> [1] that should be helpful. >> >> >> >> If on the other hand by "export shares ... to another" you mean to >> >> leave the shares under management of manila in installation A but >> >> consume them from compute instances in installation B it's all about >> >> the networking. One can use manila to "allow-access" to consumers of >> >> shares anywhere but the consumers must be able to reach the "export >> >> locations" for those shares and mount them. >> >> >> >> Cheers, >> >> >> >> -- Tom Barron >> >> >> >> [1] >> >> >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> >> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> >> gouthampravi at gmail.com> >> >> >ha scritto: >> >> > >> >> >> Hi Ignazio, >> >> >> >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> >> >> wrote: >> >> >> > >> >> >> > Hello All, >> >> >> > I installed manila on my queens openstack based on centos 7. >> >> >> > I configured two servers with glusterfs replocation and ganesha >> nfs. >> >> >> > I configured my controllers octavia,conf but when I try to create a >> >> share >> >> >> > the manila scheduler logs reports: >> >> >> > >> >> >> > Failed to schedule create_share: No valid host was found. Failed to >> >> find >> >> >> a weighted host, the last executed filter was CapabilitiesFilter.: >> >> >> NoValidHost: No valid host was found. Failed to find a weighted host, >> >> the >> >> >> last executed filter was CapabilitiesFilter. >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> 89f76bc5de5545f381da2c10c7df7f15 >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> >> >> >> >> >> >> >> >> The scheduler failure points out that you have a mismatch in >> >> >> expectations (backend capabilities vs share type extra-specs) and >> >> >> there was no host to schedule your share to. So a few things to check >> >> >> here: >> >> >> >> >> >> - What is the share type you're using? Can you list the share type >> >> >> extra-specs and confirm that the backend (your GlusterFS storage) >> >> >> capabilities are appropriate with whatever you've set up as >> >> >> extra-specs ($ manila pool-list --detail)? >> >> >> - Is your backend operating correctly? You can list the manila >> >> >> services ($ manila service-list) and see if the backend is both >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a >> >> >> problem with the driver initialization, please enable debug logging, >> >> >> and look at the log file for the manila-share service, you might see >> >> >> why and be able to fix it. >> >> >> >> >> >> >> >> >> Please be aware that we're on a look out for a maintainer for the >> >> >> GlusterFS driver for the past few releases. We're open to bug fixes >> >> >> and maintenance patches, but there is currently no active maintainer >> >> >> for this driver. >> >> >> >> >> >> >> >> >> > I did not understand if controllers node must be connected to the >> >> >> network where shares must be exported for virtual machines, so my >> >> glusterfs >> >> >> are connected on the management network where openstack controllers >> are >> >> >> conencted and to the network where virtual machine are connected. >> >> >> > >> >> >> > My manila.conf section for glusterfs section is the following >> >> >> > >> >> >> > [gluster-manila565] >> >> >> > driver_handles_share_servers = False >> >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> >> >> > glusterfs_ganesha_server_username = root >> >> >> > glusterfs_nfs_server_type = Ganesha >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> >> >> > #glusterfs_servers = root at 10.102.185.19 >> >> >> > ganesha_config_dir = /etc/ganesha >> >> >> > >> >> >> > >> >> >> > PS >> >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint >> >> >> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where >> virtual >> >> >> machines are connected. >> >> >> > >> >> >> > The gluster servers are connected on both. >> >> >> > >> >> >> > >> >> >> > Any help, please ? >> >> >> > >> >> >> > Ignazio >> >> >> >> >> >> From gouthampravi at gmail.com Wed Feb 6 20:26:18 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 6 Feb 2019 12:26:18 -0800 Subject: [manila][glusterfs] on queens error In-Reply-To: <20190206201619.o6turxaps6iv65p7@barron.net> References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: > > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: > >The 2 openstack Installations do not share anything. The manila on each one > >works on different netapp storage, but the 2 netapp can be synchronized. > >Site A with an openstack instalkation and netapp A. > >Site B with an openstack with netapp B. > >Netapp A and netapp B can be synchronized via network. > >Ignazio > > OK, thanks. > > You can likely get the share data and its netapp metadata to show up > on B via replication and (gouthamr may explain details) but you will > lose all the Openstack/manila information about the share unless > Openstack database info (more than just manila tables) is imported. > That may be OK foryour use case. > > -- Tom Checking if I understand your request correctly, you have setup manila's "dr" replication in OpenStack A and now want to move your shares from OpenStack A to OpenStack B's manila. Is this correct? If yes, you must: * Promote your replicas - this will make the mirrored shares available. This action does not delete the old "primary" shares though, you need to clean them up yourself, because manila will attempt to reverse the replication relationships if the primary shares are still accessible * Note the export locations and Unmanage your shares from OpenStack A's manila * Manage your shares in OpenStack B's manila with the export locations you noted. > > > > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha scritto: > > > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > >> >Hello Tom, I think cases you suggested do not meet my needs. > >> >I have an openstack installation A with a fas netapp A. > >> >I have another openstack installation B with fas netapp B. > >> >I would like to use manila replication dr. > >> >If I replicate manila volumes from A to B the manila db on B does not > >> >knows anything about the replicated volume but only the backends on > >> netapp > >> >B. Can I discover replicated volumes on openstack B? > >> >Or I must modify the manila db on B? > >> >Regards > >> >Ignazio > >> > >> I guess I don't understand your use case. Do Openstack installation A > >> and Openstack installation B know *anything* about one another? For > >> example, are their keystone and neutron databases somehow synced? Are > >> they going to be operative for the same set of manila shares at the > >> same time, or are you contemplating a migration of the shares from > >> installation A to installation B? > >> > >> Probably it would be helpful to have a statement of the problem that > >> you intend to solve before we consider the potential mechanisms for > >> solving it. > >> > >> Cheers, > >> > >> -- Tom > >> > >> > > >> > > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha scritto: > >> > > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > >> >> >Thanks Goutham. > >> >> >If there are not mantainers for this driver I will switch on ceph and > >> or > >> >> >netapp. > >> >> >I am already using netapp but I would like to export shares from an > >> >> >openstack installation to another. > >> >> >Since these 2 installations do non share any openstack component and > >> have > >> >> >different openstack database, I would like to know it is possible . > >> >> >Regards > >> >> >Ignazio > >> >> > >> >> Hi Ignazio, > >> >> > >> >> If by "export shares from an openstack installation to another" you > >> >> mean removing them from management by manila in installation A and > >> >> instead managing them by manila in installation B then you can do that > >> >> while leaving them in place on your Net App back end using the manila > >> >> "manage-unmanage" administrative commands. Here's some documentation > >> >> [1] that should be helpful. > >> >> > >> >> If on the other hand by "export shares ... to another" you mean to > >> >> leave the shares under management of manila in installation A but > >> >> consume them from compute instances in installation B it's all about > >> >> the networking. One can use manila to "allow-access" to consumers of > >> >> shares anywhere but the consumers must be able to reach the "export > >> >> locations" for those shares and mount them. > >> >> > >> >> Cheers, > >> >> > >> >> -- Tom Barron > >> >> > >> >> [1] > >> >> > >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > >> >> > > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > >> >> gouthampravi at gmail.com> > >> >> >ha scritto: > >> >> > > >> >> >> Hi Ignazio, > >> >> >> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > >> >> >> wrote: > >> >> >> > > >> >> >> > Hello All, > >> >> >> > I installed manila on my queens openstack based on centos 7. > >> >> >> > I configured two servers with glusterfs replocation and ganesha > >> nfs. > >> >> >> > I configured my controllers octavia,conf but when I try to create a > >> >> share > >> >> >> > the manila scheduler logs reports: > >> >> >> > > >> >> >> > Failed to schedule create_share: No valid host was found. Failed to > >> >> find > >> >> >> a weighted host, the last executed filter was CapabilitiesFilter.: > >> >> >> NoValidHost: No valid host was found. Failed to find a weighted host, > >> >> the > >> >> >> last executed filter was CapabilitiesFilter. > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> 89f76bc5de5545f381da2c10c7df7f15 > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record for > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > >> >> >> > >> >> >> > >> >> >> The scheduler failure points out that you have a mismatch in > >> >> >> expectations (backend capabilities vs share type extra-specs) and > >> >> >> there was no host to schedule your share to. So a few things to check > >> >> >> here: > >> >> >> > >> >> >> - What is the share type you're using? Can you list the share type > >> >> >> extra-specs and confirm that the backend (your GlusterFS storage) > >> >> >> capabilities are appropriate with whatever you've set up as > >> >> >> extra-specs ($ manila pool-list --detail)? > >> >> >> - Is your backend operating correctly? You can list the manila > >> >> >> services ($ manila service-list) and see if the backend is both > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there was a > >> >> >> problem with the driver initialization, please enable debug logging, > >> >> >> and look at the log file for the manila-share service, you might see > >> >> >> why and be able to fix it. > >> >> >> > >> >> >> > >> >> >> Please be aware that we're on a look out for a maintainer for the > >> >> >> GlusterFS driver for the past few releases. We're open to bug fixes > >> >> >> and maintenance patches, but there is currently no active maintainer > >> >> >> for this driver. > >> >> >> > >> >> >> > >> >> >> > I did not understand if controllers node must be connected to the > >> >> >> network where shares must be exported for virtual machines, so my > >> >> glusterfs > >> >> >> are connected on the management network where openstack controllers > >> are > >> >> >> conencted and to the network where virtual machine are connected. > >> >> >> > > >> >> >> > My manila.conf section for glusterfs section is the following > >> >> >> > > >> >> >> > [gluster-manila565] > >> >> >> > driver_handles_share_servers = False > >> >> >> > share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > >> >> >> > glusterfs_ganesha_server_username = root > >> >> >> > glusterfs_nfs_server_type = Ganesha > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > >> >> >> > #glusterfs_servers = root at 10.102.185.19 > >> >> >> > ganesha_config_dir = /etc/ganesha > >> >> >> > > >> >> >> > > >> >> >> > PS > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose endpoint > >> >> >> > > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where > >> virtual > >> >> >> machines are connected. > >> >> >> > > >> >> >> > The gluster servers are connected on both. > >> >> >> > > >> >> >> > > >> >> >> > Any help, please ? > >> >> >> > > >> >> >> > Ignazio > >> >> >> > >> >> > >> From James.Gauld at windriver.com Wed Feb 6 21:24:39 2019 From: James.Gauld at windriver.com (Gauld, James) Date: Wed, 6 Feb 2019 21:24:39 +0000 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> Message-ID: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Eric, I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. The nova solution WIP you coded would work. It requires slight documentation change to remove the one-line-per-entry input limitation. The following helm multistring method works for OSLO.conf compatible with oslo_config.MultiStringOpt(). I get correct nova.conf output if I individually JSON encode each string in the list of values (eg, for PCI alias, PCI passthrough whitelist). Here is sample YAML for multistring : conf: nova: pci: alias: type: multistring values: - '{"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}' - '{"class_id": "030000", "name": "gpu"}' Here is the resultant nova.conf : [pci] alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"} alias = {"class_id": "030000", "name": "gpu"} This solution does not require a change on nova side, or a change to nova helm chart. IMO, I did not find the multistring example obvious when I was looking for documentation. -Jim Gauld -----Original Message----- From: Eric Fried [mailto:openstack at fried.cc] Sent: February-06-19 10:45 AM To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-helm] How to specify nova override for multiple pci alias Folks- On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >> How can I specify a helm override to configure nova PCI alias when >> there are multiple aliases? >> I haven't been able to come up with a YAML compliant specification >> for this. >> >> Are there other alternatives to be able to specify this as an >> override? I assume that a nova Chart change would be required to >> support this custom one-alias-entry-per-line formatting. >> >> Any insights on how to achieve this in helm are welcomed. >> The following nova configuration format is desired, but not as yet >> supported by nova: >> [pci] >> alias = [{dict 1}, {dict 2}] >> >> The following snippet of YAML works for PCI passthrough_whitelist, >> where the value encoded is a JSON string: >> >> conf: >> nova: >> overrides: >> nova_compute: >> hosts: >> - conf: >> nova: >> pci: >> passthrough_whitelist: '[{"class_id": "030000", >> "address": "0000:00:02.0"}]' I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. Do you have a way to pull it down and try it? (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) -efried [1] https://review.openstack.org/#/c/635191/ From lars at redhat.com Wed Feb 6 21:32:22 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 6 Feb 2019 16:32:22 -0500 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: <20190206213222.43nin24mkbqhsrw7@redhat.com> On Wed, Feb 06, 2019 at 04:00:40PM +0000, Tim Bell wrote: > > A few years ago, there was a discussion in one of the summit forums > where users wanted to be able to come along to a generic OpenStack > cloud and say "give me the flavor that has at least X GB RAM and Y > GB disk space". At the time, the thoughts were that this could be > done by doing a flavour list and then finding the smallest one which > matched the requirements. The problem is that "flavor list" part: that implies that every time someone adds a new hardware configuration to the environment (maybe they add a new group of machines, or maybe they simply upgrade RAM/disk/etc in some existing nodes), they need to manually create corresponding flavors. That also implies that you could quickly end up with an egregious number of flavors to represent different types of available hardware. Really, what we want is the ability to select hardware based on Ironic introspection data, without any manual steps in between. I'm still not clear on whether there's any way to make this work with existing tools, or if it makes sense to figure out to make Nova do this or if we need something else sitting in front of Ironic. > For reserving, you could install the machine with a simple image and then let the user rebuild with their choice? That's probably a fine workaround for now. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From openstack at fried.cc Wed Feb 6 21:34:50 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 15:34:50 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Message-ID: On 2/6/19 3:24 PM, Gauld, James wrote: > Eric, > I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. > This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. > > The nova solution WIP you coded would work. It requires slight documentation change to remove the one-line-per-entry input limitation. > > The following helm multistring method works for OSLO.conf compatible with oslo_config.MultiStringOpt(). > I get correct nova.conf output if I individually JSON encode each string in the list of values (eg, for PCI alias, PCI passthrough whitelist). > > Here is sample YAML for multistring : > conf: > nova: > pci: > alias: > type: multistring > values: > - '{"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}' > - '{"class_id": "030000", "name": "gpu"}' > > Here is the resultant nova.conf : > [pci] > alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"} > alias = {"class_id": "030000", "name": "gpu"} > > This solution does not require a change on nova side, or a change to nova helm chart. > IMO, I did not find the multistring example obvious when I was looking for documentation. > > -Jim Gauld > > -----Original Message----- > From: Eric Fried [mailto:openstack at fried.cc] > Sent: February-06-19 10:45 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-helm] How to specify nova override for multiple pci alias > > Folks- > > On 2/6/19 4:13 AM, Jean-Philippe Evrard wrote: >> On Wed, 2019-01-30 at 15:40 +0000, Gauld, James wrote: >>> How can I specify a helm override to configure nova PCI alias when >>> there are multiple aliases? >>> I haven't been able to come up with a YAML compliant specification >>> for this. >>> >>> Are there other alternatives to be able to specify this as an >>> override? I assume that a nova Chart change would be required to >>> support this custom one-alias-entry-per-line formatting. >>> >>> Any insights on how to achieve this in helm are welcomed. > > > >>> The following nova configuration format is desired, but not as yet >>> supported by nova: >>> [pci] >>> alias = [{dict 1}, {dict 2}] >>> >>> The following snippet of YAML works for PCI passthrough_whitelist, >>> where the value encoded is a JSON string: >>> >>> conf: >>> nova: >>> overrides: >>> nova_compute: >>> hosts: >>> - conf: >>> nova: >>> pci: >>> passthrough_whitelist: '[{"class_id": "030000", >>> "address": "0000:00:02.0"}]' > > > > I played around with the code as it stands, and I agree there doesn't seem to be a way around having to specify the alias key multiple times to get multiple aliases. Lacking some fancy way to make YAML understand a dict with repeated keys ((how) do you handle HTTP headers?), I've hacked up a solution on the nova side [1] which should allow you to do what you've described above. Do you have a way to pull it down and try it? > > (Caveat: I put this up as a proof of concept, but it (or anything that messes with the existing pci passthrough mechanisms) may not be a mergeable solution.) > > -efried > > [1] https://review.openstack.org/#/c/635191/ > From openstack at fried.cc Wed Feb 6 21:38:06 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 6 Feb 2019 15:38:06 -0600 Subject: [openstack-helm] How to specify nova override for multiple pci alias In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625BA1CD21F@ALA-MBD.corp.ad.wrs.com> <2f17068ba3452c230e3dbe1d581d940f85961a12.camel@evrard.me> <8E5740EC88EF3E4BA3196F2545DC8625BA1CF471@ALA-MBD.corp.ad.wrs.com> Message-ID: <42206c34-dd17-75ef-ba02-2b5f8f905e21@fried.cc> James- On 2/6/19 3:24 PM, Gauld, James wrote: > Eric, > I had assistance from portdirect in IRC who provided the 'multistring' solution to this problem. > This solution does not require a change on nova side, or a change to nova chart. I should have replied a day ago. Ah, I'm glad you got it figured out. I'll abandon my change. > IMO, I did not find the multistring example obvious when I was looking for documentation. I don't know if you're referring to the nova docs or something else, but I couldn't agree more. The syntax of both this and passthrough_whitelist is very confusing. We're working to come up with nicer ways to talk about device passthrough. It's been a multi-release effort, but we're getting closer all the time. Stay tuned. -efried From mriedemos at gmail.com Wed Feb 6 22:00:44 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 6 Feb 2019 16:00:44 -0600 Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: On 2/6/2019 12:52 PM, Chris Dent wrote: > Thanks for your attention. If I made any errors above, or left > something out, please followup. If you have questions, please ask > them. Thanks for the summary, it matches with my recollection and notes in the etherpad. -- Thanks, Matt From gouthampravi at gmail.com Wed Feb 6 22:42:37 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 6 Feb 2019 14:42:37 -0800 Subject: Manila Upstream Bugs In-Reply-To: References: Message-ID: First off, thank you so much for starting this effort Jason! Responses inline: On Tue, Feb 5, 2019 at 12:37 PM Jason Grosso wrote: > Hello All, > > > This is an email to the OpenStack manila upstream community but anyone can chime in would be great to get some input from other projects and how they organize their upstream defects and what tools they use... > > > > My goal here is to make the upstream manila bug process easier, cleaner, and more effective. > > > My thoughts to accomplish this are by establishing a process that we can all agree upon. > > > > I have the following points/questions that I wanted to address to help create a more effective process: > > > > Can we as a group go through some of the manila bugs so we can drive the visible bug count down? > > How often as a group do you have bug scrubs? > > Might be beneficial if we had bug scrubs every few months possibly? IMHO, we could start doing these biweekly with a synchronized meeting. I feel once we bring down the number of bugs to a manageable number, we can go back to using our IRC meeting slot to triage new bugs as they come and gather progress on existing bugs. > It might be a good idea to go through the current upstream bugs and weed out one that can be closed or invalid. > > > When a new bug is logged how to we normally process this bug > > How do we handle the importance? > When a manila bugs comes into launchpad I am assuming one of the people on this email will set the importance? > "Assigned" I will also assume it just picked by the person on this email list. > I am seeing some bugs "fixed committed" with no assignment. How do we know who was working on it? If a fix has been committed, an appropriate Gerrit review patch should be noted, unless our automation fails (which happens sometimes) > What is the criteria for setting the importance. Do we have a standard understanding of what is CRITICAL or HIGH? > If there is a critical or high bug what is the response turn-around? Days or weeks? > I see some defect with HIGH that have not been assigned or looked at in a year? This has been informal so far, and our bug supervisors group (https://launchpad.net/~manila-bug-supervisors) on Launchpad is a small subset of our contributors and maintainers; if a bug causes a security issue, or data loss it is marked CRITICAL and scheduled to be fixed right away. If a bug affects manila's API and core internals, it is marked between HIGH and LOW depending on whether we can live with a bug not being fixed right away. Typically vendor driver bugs are marked LOW unless the driver is badly broken. We usually reduce the bug importance if it is HIGH, but goes un-fixed for a long time. We don't have stats around turnaround time, gathering those would be an interesting exercise. > I understand OpenStack has some long releases but how long do we normally keep defects around? > Do we have a way to archive bugs that are not looked at? I was told we can possibly set the status of a defect to “Invalid” or “Opinion” or “Won’t Fix” or “Expired" > Status needs to be something other than "NEW" after the first week > How can we have a defect over a year that is NEW? Great point, it feels like we shouldn't. If we start triaging new bugs as they come, they should not be "NEW" for too long. > Who is possible for see if there is enough information and if the bug is invalid or incomplete and if incomplete ask for relevant information. Do we randomly look at the list daily , weekly, or monthly to see if new info is needed? > > > > > I started to create a google sheet [1] to see if it is easier to track some of the defect vs the manila-triage pad[2] . I have added both links here. I know a lot will not have access to this page I am working on transitioning to OpenStack ether cal. Great stuff :) The sheet [1] requires permissions, so your thought of using ethercalc.openstack.org [3] may be the way to go! We can start referring to this in our meetings. > [1] https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 > > [2] https://etherpad.openstack.org/p/manila-bug-triage-pad > > [3] https://ethercalc.openstack.org/uc8b4567fpf4 > > > > > I would also like to hear from all of you on what your issues are with the current process for upstream manila bugs using launchpad. I have not had the time to look at storyboard https://storyboard.openstack.org/ but I have heard that the OpenStack community is pushing toward using Storyboard, so I will be looking at that shortly. > > > Any input would be greatly appreciated... > > > Thanks All, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com From juliaashleykreger at gmail.com Wed Feb 6 23:04:06 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 6 Feb 2019 15:04:06 -0800 Subject: [ironic] cisco-ucs-managed and cisco-ucs-standalone drivers - Python3 and CI Message-ID: Greetings fellow OpenStack humans, AIs, and unknown entities! At present, ironic has two hardware types, or drivers, that are presently in the code base to support Cisco UCS hardware. These are "cisco-ucs-managed" and "cisco-ucs-standalone". At present they utilize an underlying library which is not python3 compatible and has been deprecated by the vendor. In their current state the drivers will need to be removed from ironic when python2 support is removed. While work was started[1][2] to convert these drivers, the patch author seems to have stopped working on updating these drivers. Repeated attempts to contact prior ironic contributors from Cisco and the aforementioned patch author have gone unanswered. To further complicate matters, it appears the last time Cisco CI [3] last voted was on January 30th [4] of this year and the the log server [5] appears to be unreachable. Ironic's requirement is that a vendor driver has to have third-party CI to remain in-tree. At present it appears ironic will have no choice but to deprecate the "cisco-ucs-managed" and "cisco-ucs-standalone" hardware types and remove them in the Train cycle. If nobody steps forward to maintain the drivers and CI does not return, the drivers will be marked deprecated during the Stein cycle, and ironic shall proceed to remove them during Train. Please let me know if there are any questions or concerns. Thanks, -Julia [1]: https://review.openstack.org/#/c/607732 [2]: https://review.openstack.org/#/c/598194 [3]: https://review.openstack.org/#/q/reviewedby:%22Cisco+CI+%253Cml2.ci%2540cisco.com%253E%22+project:openstack/ironic [4]: https://review.openstack.org/#/c/620376/ [5]: http://3ci-logs.ciscolabs.net/76/620376/4/check/dsvm-tempest-ironic-cimc-job/a984082/ From pierre at stackhpc.com Wed Feb 6 23:17:45 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Feb 2019 23:17:45 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206154138.qfhgh5cax3j2r4qh@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: On Wed, 6 Feb 2019 at 15:47, Lars Kellogg-Stedman wrote: > > On Fri, Feb 01, 2019 at 06:16:42PM +0000, Sean Mooney wrote: > > > 1. Implement multi-tenancy either (a) directly in Ironic or (b) in a > > > shim service that sits between Ironic and the client. > > that shim service could be nova, which already has multi tenancy. > > > > > > 2. Implement a Blazar plugin that is able to talk to whichever service > > > in (1) is appropriate. > > and nova is supported by blazar > > > > > > 3. Work with Blazar developers to implement any lease logic that we > > > think is necessary. > > +1 > > by they im sure there is a reason why you dont want to have blazar drive > > nova and nova dirve ironic but it seam like all the fucntionality would > > already be there in that case. > > Sean, > > Being able to use Nova is a really attractive idea. I'm a little > fuzzy on some of the details, though, starting with how to handle node > discovery. A key goal is being able to parametrically request systems > ("I want a system with a GPU and >= 40GB of memory"). With Nova, > would this require effectively creating a flavor for every unique > hardware configuration? Conceptually, I want "... create server > --flavor any --filter 'has_gpu and member_mb>40000' ...", but it's not > clear to me if that's something we could do now or if that would > require changes to the way Nova handles baremetal scheduling. Such node selection is something you can already do with Blazar using the parameters "hypervisor_properties" (which are hypervisor details automatically imported from Nova) and "resource_properties" (extra key/value pairs that can be tagged on the resource, which could be has_gpu=true) when creating reservations: https://developer.openstack.org/api-ref/reservation/v1/index.html?expanded=create-lease-detail#id3 I believe you can also do such filtering with the ComputeCapabilitiesFilter directly with Nova. It was supposed to be deprecated (https://review.openstack.org/#/c/603102/) but it looks like it's staying around for now. In either case, using Nova still requires a flavor to be selected, but you could have a single "baremetal" flavor associated with a single resource class for the whole baremetal cloud. From pierre at stackhpc.com Wed Feb 6 23:26:57 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Feb 2019 23:26:57 +0000 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> Message-ID: On Wed, 6 Feb 2019 at 23:17, Pierre Riteau wrote: > I believe you can also do such filtering with the > ComputeCapabilitiesFilter directly with Nova. It was supposed to be > deprecated (https://review.openstack.org/#/c/603102/) but it looks > like it's staying around for now. Sorry, I was actually thinking about JsonFilter rather than ComputeCapabilitiesFilter. The former allows users to pass a query via scheduler hints, while the latter filters based on flavors. From gmann at ghanshyammann.com Thu Feb 7 02:33:32 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Feb 2019 11:33:32 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: <168c5cd9aac.103a6ed0c31827.3131004736809589089@ghanshyammann.com> ---- On Wed, 06 Feb 2019 18:45:03 +0900 Jean-Philippe Evrard wrote ---- > > So, maybe the next step is to convince someone to champion a goal of > > improving our contributor documentation, and to have them describe > > what > > the documentation should include, covering the usual topics like how > > to > > actually submit patches as well as suggestions for how to describe > > areas > > where help is needed in a project and offers to mentor contributors. If I am not wrong, you are saying to have help-wanted-list owned by each project side which is nothing but a part of contributor documentation? As you mentioned that complete doc can be linked as a central page on docs.openstack.org. If so then, it looks perfect to me. Further, that list can be updated/maintained with mentor mapping by the project team on every cycle which is nothing but what projects present in onboarding sessions etc. > > > > Does anyone want to volunteer to serve as the goal champion for that? > > > > This doesn't get visibility yet, as this thread is under [tc] only. > > Lance and I will raise this in our next update (which should be > tomorrow) if we don't have a volunteer here. I was waiting in case anyone shows up for that but looks like no. I can take this goal. -gmann > > JP. > > > From ignaziocassano at gmail.com Thu Feb 7 05:11:47 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 06:11:47 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: Many thanks. I'll check today. Ignazio Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi ha scritto: > On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: > > > > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: > > >The 2 openstack Installations do not share anything. The manila on each > one > > >works on different netapp storage, but the 2 netapp can be > synchronized. > > >Site A with an openstack instalkation and netapp A. > > >Site B with an openstack with netapp B. > > >Netapp A and netapp B can be synchronized via network. > > >Ignazio > > > > OK, thanks. > > > > You can likely get the share data and its netapp metadata to show up > > on B via replication and (gouthamr may explain details) but you will > > lose all the Openstack/manila information about the share unless > > Openstack database info (more than just manila tables) is imported. > > That may be OK foryour use case. > > > > -- Tom > > > Checking if I understand your request correctly, you have setup > manila's "dr" replication in OpenStack A and now want to move your > shares from OpenStack A to OpenStack B's manila. Is this correct? > > If yes, you must: > * Promote your replicas > - this will make the mirrored shares available. This action does > not delete the old "primary" shares though, you need to clean them up > yourself, because manila will attempt to reverse the replication > relationships if the primary shares are still accessible > * Note the export locations and Unmanage your shares from OpenStack A's > manila > * Manage your shares in OpenStack B's manila with the export locations > you noted. > > > > > > > > > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha > scritto: > > > > > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: > > >> >Hello Tom, I think cases you suggested do not meet my needs. > > >> >I have an openstack installation A with a fas netapp A. > > >> >I have another openstack installation B with fas netapp B. > > >> >I would like to use manila replication dr. > > >> >If I replicate manila volumes from A to B the manila db on B does > not > > >> >knows anything about the replicated volume but only the backends on > > >> netapp > > >> >B. Can I discover replicated volumes on openstack B? > > >> >Or I must modify the manila db on B? > > >> >Regards > > >> >Ignazio > > >> > > >> I guess I don't understand your use case. Do Openstack installation A > > >> and Openstack installation B know *anything* about one another? For > > >> example, are their keystone and neutron databases somehow synced? Are > > >> they going to be operative for the same set of manila shares at the > > >> same time, or are you contemplating a migration of the shares from > > >> installation A to installation B? > > >> > > >> Probably it would be helpful to have a statement of the problem that > > >> you intend to solve before we consider the potential mechanisms for > > >> solving it. > > >> > > >> Cheers, > > >> > > >> -- Tom > > >> > > >> > > > >> > > > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha > scritto: > > >> > > > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: > > >> >> >Thanks Goutham. > > >> >> >If there are not mantainers for this driver I will switch on ceph > and > > >> or > > >> >> >netapp. > > >> >> >I am already using netapp but I would like to export shares from > an > > >> >> >openstack installation to another. > > >> >> >Since these 2 installations do non share any openstack component > and > > >> have > > >> >> >different openstack database, I would like to know it is possible > . > > >> >> >Regards > > >> >> >Ignazio > > >> >> > > >> >> Hi Ignazio, > > >> >> > > >> >> If by "export shares from an openstack installation to another" you > > >> >> mean removing them from management by manila in installation A and > > >> >> instead managing them by manila in installation B then you can do > that > > >> >> while leaving them in place on your Net App back end using the > manila > > >> >> "manage-unmanage" administrative commands. Here's some > documentation > > >> >> [1] that should be helpful. > > >> >> > > >> >> If on the other hand by "export shares ... to another" you mean to > > >> >> leave the shares under management of manila in installation A but > > >> >> consume them from compute instances in installation B it's all > about > > >> >> the networking. One can use manila to "allow-access" to consumers > of > > >> >> shares anywhere but the consumers must be able to reach the "export > > >> >> locations" for those shares and mount them. > > >> >> > > >> >> Cheers, > > >> >> > > >> >> -- Tom Barron > > >> >> > > >> >> [1] > > >> >> > > >> > https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 > > >> >> > > > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < > > >> >> gouthampravi at gmail.com> > > >> >> >ha scritto: > > >> >> > > > >> >> >> Hi Ignazio, > > >> >> >> > > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano > > >> >> >> wrote: > > >> >> >> > > > >> >> >> > Hello All, > > >> >> >> > I installed manila on my queens openstack based on centos 7. > > >> >> >> > I configured two servers with glusterfs replocation and > ganesha > > >> nfs. > > >> >> >> > I configured my controllers octavia,conf but when I try to > create a > > >> >> share > > >> >> >> > the manila scheduler logs reports: > > >> >> >> > > > >> >> >> > Failed to schedule create_share: No valid host was found. > Failed to > > >> >> find > > >> >> >> a weighted host, the last executed filter was > CapabilitiesFilter.: > > >> >> >> NoValidHost: No valid host was found. Failed to find a weighted > host, > > >> >> the > > >> >> >> last executed filter was CapabilitiesFilter. > > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api > > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a > > >> >> 89f76bc5de5545f381da2c10c7df7f15 > > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message record > for > > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a > > >> >> >> > > >> >> >> > > >> >> >> The scheduler failure points out that you have a mismatch in > > >> >> >> expectations (backend capabilities vs share type extra-specs) > and > > >> >> >> there was no host to schedule your share to. So a few things to > check > > >> >> >> here: > > >> >> >> > > >> >> >> - What is the share type you're using? Can you list the share > type > > >> >> >> extra-specs and confirm that the backend (your GlusterFS > storage) > > >> >> >> capabilities are appropriate with whatever you've set up as > > >> >> >> extra-specs ($ manila pool-list --detail)? > > >> >> >> - Is your backend operating correctly? You can list the manila > > >> >> >> services ($ manila service-list) and see if the backend is both > > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there > was a > > >> >> >> problem with the driver initialization, please enable debug > logging, > > >> >> >> and look at the log file for the manila-share service, you > might see > > >> >> >> why and be able to fix it. > > >> >> >> > > >> >> >> > > >> >> >> Please be aware that we're on a look out for a maintainer for > the > > >> >> >> GlusterFS driver for the past few releases. We're open to bug > fixes > > >> >> >> and maintenance patches, but there is currently no active > maintainer > > >> >> >> for this driver. > > >> >> >> > > >> >> >> > > >> >> >> > I did not understand if controllers node must be connected to > the > > >> >> >> network where shares must be exported for virtual machines, so > my > > >> >> glusterfs > > >> >> >> are connected on the management network where openstack > controllers > > >> are > > >> >> >> conencted and to the network where virtual machine are > connected. > > >> >> >> > > > >> >> >> > My manila.conf section for glusterfs section is the following > > >> >> >> > > > >> >> >> > [gluster-manila565] > > >> >> >> > driver_handles_share_servers = False > > >> >> >> > share_driver = > manila.share.drivers.glusterfs.GlusterfsShareDriver > > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 > > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa > > >> >> >> > glusterfs_ganesha_server_username = root > > >> >> >> > glusterfs_nfs_server_type = Ganesha > > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 > > >> >> >> > #glusterfs_servers = root at 10.102.185.19 > > >> >> >> > ganesha_config_dir = /etc/ganesha > > >> >> >> > > > >> >> >> > > > >> >> >> > PS > > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose > endpoint > > >> >> >> > > > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where > > >> virtual > > >> >> >> machines are connected. > > >> >> >> > > > >> >> >> > The gluster servers are connected on both. > > >> >> >> > > > >> >> >> > > > >> >> >> > Any help, please ? > > >> >> >> > > > >> >> >> > Ignazio > > >> >> >> > > >> >> > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Feb 7 06:11:23 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 7 Feb 2019 07:11:23 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> Message-ID: <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> Hello, I'm currently testing things, related to this LP: https://bugs.launchpad.net/tripleo/+bug/1814897 We might hit some issues: - With docker, json-file log driver doesn't support any "path" options, and it outputs the files inside the container namespace (/var/lib/docker/container/ID/ID-json.log) - With podman, we actually have a "path" option, and it works nice. But the json-file isn't a JSON at all. - Docker supports journald and some other outputs - Podman doesn't support anything else than json-file Apparently, Docker seems to support a failing "journald" backend. So we might end with two ways of logging, if we're to keep docker in place. Cheers, C. On 2/5/19 11:11 AM, Cédric Jeanneret wrote: > Hello there! > > small thoughts: > - we might already push the stdout logging, in parallel of the current > existing one > > - that would already point some weakness and issues, without making the > whole thing crash, since there aren't that many logs in stdout for now > > - that would already allow to check what's the best way to do it, and > what's the best format for re-usability (thinking: sending logs to some > (k)elk and the like) > > This would also allow devs to actually test that for their services. And > thus going forward on this topic. > > Any thoughts? > > Cheers, > > C. > > On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >> Hello! >> >> >> In Queens, the a spec to provide the option to make containers log to >> standard output was proposed [1] [2]. Some work was done on that side, >> but due to the lack of traction, it wasn't completed. With the Train >> release coming, I think it would be a good idea to revive this effort, >> but make logging to stdout the default in that release. >> >> This would allow several benefits: >> >> * All logging from the containers would en up in journald; this would >> make it easier for us to forward the logs, instead of having to keep >> track of the different directories in /var/log/containers >> >> * The journald driver would add metadata to the logs about the container >> (we would automatically get what container ID issued the logs). >> >> * This wouldo also simplify the stacks (removing the Logging nested >> stack which is present in several templates). >> >> * Finally... if at some point we move towards kubernetes (or something >> in between), managing our containers, it would work with their logging >> tooling as well. >> >> >> Any thoughts? >> >> >> [1] >> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >> >> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >> >> >> > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From iwienand at redhat.com Thu Feb 7 06:39:40 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 7 Feb 2019 17:39:40 +1100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues Message-ID: <20190207063940.GA1754@fedora19.localdomain> Hello, I'm trying to diagnose what has gone wrong with Fedora 29 in our gate devstack test; it seems there is a problem with the iscsi setup and consequently the volume based tempest tests all fail. AFAICS we end up with nova hitting parsing errors inside os_brick's iscsi querying routines; so it seems whatever error path we've hit is outside the usual as it's made it pretty far down the stack. I have a rather haphazard bug report going on at https://bugs.launchpad.net/os-brick/+bug/1814849 as I've tried to trace it down. At this point, it's exceeding the abilities of my cinder/nova/lvm/iscsi/how-this-all-hangs-together knowledge. The final comment there has a link the devstack logs and a few bits and pieces of gleaned off the host (which I have on hold and can examine) which is hopefully useful to someone skilled in the art. I'm hoping ultimately it's a rather simple case of a missing package or config option; I would greatly appreciate any input so we can get this test stable. Thanks, -i From kennelson11 at gmail.com Thu Feb 7 06:59:45 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 Feb 2019 22:59:45 -0800 Subject: Manila Upstream Bugs In-Reply-To: References: Message-ID: Hello :) Another thing to consider is what process might look like and how you want to organize things after migrating to StoryBoard. While there isn't a set date yet, it should be kept in mind :) If you have any questions, please let us (the storyboard team) know by pinging us in #storyboard or by using the [storyboard] tag to the openstack-discuss list. -Kendall (diablo_rojo) On Tue, Feb 5, 2019 at 12:38 PM Jason Grosso wrote: > Hello All, > > > This is an email to the OpenStack manila upstream community but anyone can > chime in would be great to get some input from other projects and how they > organize their upstream defects and what tools they use... > > > > My goal here is to make the upstream manila bug process easier, cleaner, > and more effective. > > My thoughts to accomplish this are by establishing a process that we can > all agree upon. > > > I have the following points/questions that I wanted to address to help > create a more effective process: > > > > - > > Can we as a group go through some of the manila bugs so we can drive > the visible bug count down? > > > - > > How often as a group do you have bug scrubs? > > > - > > Might be beneficial if we had bug scrubs every few months possibly? > - > > It might be a good idea to go through the current upstream bugs and > weed out one that can be closed or invalid. > > > > - > > When a new bug is logged how to we normally process this bug > > > - > > How do we handle the importance? > > > - > > When a manila bugs comes into launchpad I am assuming one of the > people on this email will set the importance? > > > - > > "Assigned" I will also assume it just picked by the person on this > email list. > > > - > > I am seeing some bugs "fixed committed" with no assignment. How do we > know who was working on it? > > > - > > What is the criteria for setting the importance. Do we have a standard > understanding of what is CRITICAL or HIGH? > > > - > > If there is a critical or high bug what is the response turn-around? > Days or weeks? > > > - > > I see some defect with HIGH that have not been assigned or looked at > in a year? > > > - > > I understand OpenStack has some long releases but how long do we > normally keep defects around? > > > - > > Do we have a way to archive bugs that are not looked at? I was told we > can possibly set the status of a defect to “Invalid” or “Opinion” or > “Won’t Fix” or “Expired" > > > - > > Status needs to be something other than "NEW" after the first week > > > - > > How can we have a defect over a year that is NEW? > > > - > > Who is possible for see if there is enough information and if the bug > is invalid or incomplete and if incomplete ask for relevant information. Do > we randomly look at the list daily , weekly, or monthly to see if new > info is needed? > > > > > I started to create a google sheet [1] to see if it is easier to track > some of the defect vs the manila-triage pad[2] . I have added both links > here. I know a lot will not have access to this page I am working on > transitioning to OpenStack ether cal. > > [1] > https://docs.google.com/spreadsheets/d/1oaXEgo_BEkY2KleISN3M58waqw9U5W7xTR_O1jQmQ74/edit#gid=758082340 > > [2] https://etherpad.openstack.org/p/manila-bug-triage-pad > > *[3]* https://ethercalc.openstack.org/uc8b4567fpf4 > > > > > I would also like to hear from all of you on what your issues are with the > current process for upstream manila bugs using launchpad. I have not had > the time to look at storyboard https://storyboard.openstack.org/ but I > have heard that the OpenStack community is pushing toward using Storyboard, > so I will be looking at that shortly. > > > Any input would be greatly appreciated... > > > Thanks All, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 07:29:28 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 08:29:28 +0100 Subject: [nova[metadata] queens issues Message-ID: Hello All, I am facing an issue with nova metadata or probably something is missed by design. If I create an instance from an image with os_require_quiesce='yes' and hw_qemu_guest_agent='yes' and the image contain the package quemu-guest-agent, when the instance boots the service quemu.gust-agent starts fine because all needed device are created in kvm for the instance. If I destory the instance and boot another instance from the volume used by the previous instance, metadata are missed and qemu-guest agent does not start, I think this is a problem from backupping and restoring instances. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Thu Feb 7 07:31:59 2019 From: lujinluo at gmail.com (Lujin Luo) Date: Wed, 6 Feb 2019 23:31:59 -0800 Subject: [neutron] [upgrade] No meeting on Feb. 7th Message-ID: Hi team, I will not be able to chair the meeting tomorrow. Let's skip it and resume next week! Sorry for any inconvenience caused. Best regards, Lujin From Yury.Kulazhenkov at dell.com Thu Feb 7 08:02:31 2019 From: Yury.Kulazhenkov at dell.com (Kulazhenkov, Yury) Date: Thu, 7 Feb 2019 08:02:31 +0000 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: Hi all, Some time ago Dell EMC software-defined storage ScaleIO was renamed to VxFlex OS. I am currently working on renaming ScaleIO to VxFlex OS in Openstack code to prevent confusion with storage documentation from vendor. This changes require patches at least for cinder, nova and os-brick repos. I already submitted patches for cinder(634397) and nova(634866), but for now code in these patches relies on os-brick initiator with name SCALEIO. Now I'm looking for right way to rename os-brick initiator. Renaming initiator in os-brick library and then make required changes in nova and cinder is quiet easy, but os-brick is library and those changes can break someone else code. Is some sort of policy for updates with breaking changes exist for os-brick? One possible solution is to rename initiator to new name and create alias with deprecation warning for old initiator name(should this alias be preserved more than one release?). What do you think about it? Thanks, Yury From alfredo.deluca at gmail.com Thu Feb 7 08:17:27 2019 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Thu, 7 Feb 2019 09:17:27 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: hi Ignazio. Unfortunately doesn't resolve either with ping or curl .... but what is strange also it doesn't have yum or dnf o any installer ....unless it use only atomic..... I think at the end it\s the issue with the network as I found out my all-in-one deployment doesn't have the br-ex which it supposed to be the external network interface. I installed OS with ansible-openstack Cheers On Wed, Feb 6, 2019 at 3:39 PM Ignazio Cassano wrote: > Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve > names. I think atomic command uses names for finishing master installation. > Curl is installed on master.... > > > Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca > ha scritto: > >> Hi Ignazio. sorry for late reply. security group is fine. It\s not >> blocking the network traffic. >> >> Not sure why but, with this fedora release I can finally find atomic but >> there is no yum,nslookup,dig,host command..... why is so different from >> another version (latest) which had yum but not atomic. >> >> It's all weird >> >> >> Cheers >> >> >> >> >> On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano >> wrote: >> >>> Alfredo, try to check security group linked to your kubemaster. >>> >>> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca >>> ha scritto: >>> >>>> Hi Ignazio. Thanks for the link...... so >>>> >>>> Now at least atomic is present on the system. >>>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>>> doesn't resolve the names...so if I ping 8.8.8.8 >>>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 >>>> ms* >>>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 >>>> ms* >>>> >>>> but if I ping google.com doesn't resolve. I can't either find on >>>> fedora dig or nslookup to check >>>> resolv.conf has >>>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>>> *nameserver 8.8.8.8* >>>> >>>> It\s all so weird. >>>> >>>> >>>> >>>> >>>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> I also suggest to change dns in your external network used by magnum. >>>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>>> >>>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>>> alfredo.deluca at gmail.com> ha scritto: >>>>> >>>>>> thanks ignazio >>>>>> Where can I get it from? >>>>>> >>>>>> >>>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> I used fedora-magnum-27-4 and it works >>>>>>> >>>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>>> >>>>>>>> Hi Clemens. >>>>>>>> So the image I downloaded is this >>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>>> which is the latest I think. >>>>>>>> But you are right...and I noticed that too.... It doesn't have >>>>>>>> atomic binary >>>>>>>> the os-release is >>>>>>>> >>>>>>>> *NAME=Fedora* >>>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>>> *ID=fedora* >>>>>>>> *VERSION_ID=29* >>>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>>> *ANSI_COLOR="0;34"* >>>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>>> *HOME_URL="https://fedoraproject.org/ "* >>>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>>> "* >>>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>>> "* >>>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>>> "* >>>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>>> "* >>>>>>>> *VARIANT="Cloud Edition"* >>>>>>>> *VARIANT_ID=cloud* >>>>>>>> >>>>>>>> >>>>>>>> so not sure why I don't have atomic tho >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens < >>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>> >>>>>>>>> Now to the failure of your part-013: Are you sure that you used >>>>>>>>> the glance image ‚fedora-atomic-latest‘ and not some other fedora image? >>>>>>>>> Your error message below suggests that your image does not contain ‚atomic‘ >>>>>>>>> as part of the image … >>>>>>>>> >>>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>>> heat-container-agent >>>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>>> + systemctl start heat-container-agent >>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>> heat-container-agent.service not found. >>>>>>>>> >>>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>> >>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>> heat-container-agent.service not found. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Alfredo* >>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> *Alfredo* >>>>>> >>>>>> >>>> >>>> -- >>>> *Alfredo* >>>> >>>> >> >> -- >> *Alfredo* >> >> -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 09:07:33 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 10:07:33 +0100 Subject: [openstack-ansible][magnum] In-Reply-To: References: <1F00FD58-4132-4C42-A9C2-41E3FF8A84C4@crandale.de> <6A3DDC0B-BDCB-4403-B17F-D2056ADC8E09@crandale.de> Message-ID: Hi Alfredo, I know some utilities are not installed on the fedora image but on my installation it is not a problem. As you wrote there are some issues on networking. I've never used openstack-ansible, so I cannot help you. I am sorry Ignazio Il giorno gio 7 feb 2019 alle ore 09:17 Alfredo De Luca < alfredo.deluca at gmail.com> ha scritto: > hi Ignazio. Unfortunately doesn't resolve either with ping or curl .... > but what is strange also it doesn't have yum or dnf o any installer > ....unless it use only atomic..... > > I think at the end it\s the issue with the network as I found out my > all-in-one deployment doesn't have the br-ex which it supposed to be the > external network interface. > > I installed OS with ansible-openstack > > > Cheers > > > On Wed, Feb 6, 2019 at 3:39 PM Ignazio Cassano > wrote: > >> Alfredo it is very strange you can ping 8.8.8.8 but you cannot resolve >> names. I think atomic command uses names for finishing master installation. >> Curl is installed on master.... >> >> >> Il giorno Mer 6 Feb 2019 09:00 Alfredo De Luca >> ha scritto: >> >>> Hi Ignazio. sorry for late reply. security group is fine. It\s not >>> blocking the network traffic. >>> >>> Not sure why but, with this fedora release I can finally find atomic but >>> there is no yum,nslookup,dig,host command..... why is so different from >>> another version (latest) which had yum but not atomic. >>> >>> It's all weird >>> >>> >>> Cheers >>> >>> >>> >>> >>> On Mon, Feb 4, 2019 at 5:46 PM Ignazio Cassano >>> wrote: >>> >>>> Alfredo, try to check security group linked to your kubemaster. >>>> >>>> Il giorno Lun 4 Feb 2019 14:25 Alfredo De Luca < >>>> alfredo.deluca at gmail.com> ha scritto: >>>> >>>>> Hi Ignazio. Thanks for the link...... so >>>>> >>>>> Now at least atomic is present on the system. >>>>> Also I ve already had 8.8.8.8 on the system. So I can connect on the >>>>> floating IP to the kube master....than I can ping 8.8.8.8 but for example >>>>> doesn't resolve the names...so if I ping 8.8.8.8 >>>>> *root at my-last-wdikr74tynij-master-0 log]# ping 8.8.8.8* >>>>> *PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.* >>>>> *64 bytes from 8.8.8.8 : icmp_seq=1 ttl=118 time=12.1 >>>>> ms* >>>>> *64 bytes from 8.8.8.8 : icmp_seq=2 ttl=118 time=12.2 >>>>> ms* >>>>> >>>>> but if I ping google.com doesn't resolve. I can't either find on >>>>> fedora dig or nslookup to check >>>>> resolv.conf has >>>>> *search openstacklocal my-last-wdikr74tynij-master-0.novalocal* >>>>> *nameserver 8.8.8.8* >>>>> >>>>> It\s all so weird. >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Feb 4, 2019 at 1:02 PM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> I also suggest to change dns in your external network used by magnum. >>>>>> Using openstack dashboard you can change it to 8.8.8.8 (If I remember >>>>>> fine you wrote that you can ping 8.8.8.8 from kuke baster) >>>>>> >>>>>> Il giorno lun 4 feb 2019 alle ore 12:39 Alfredo De Luca < >>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>> >>>>>>> thanks ignazio >>>>>>> Where can I get it from? >>>>>>> >>>>>>> >>>>>>> On Mon, Feb 4, 2019 at 11:45 AM Ignazio Cassano < >>>>>>> ignaziocassano at gmail.com> wrote: >>>>>>> >>>>>>>> I used fedora-magnum-27-4 and it works >>>>>>>> >>>>>>>> Il giorno lun 4 feb 2019 alle ore 09:42 Alfredo De Luca < >>>>>>>> alfredo.deluca at gmail.com> ha scritto: >>>>>>>> >>>>>>>>> Hi Clemens. >>>>>>>>> So the image I downloaded is this >>>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-29-updates-20190121.0/AtomicHost/x86_64/images/Fedora-AtomicHost-29-20190121.0.x86_64.qcow2 >>>>>>>>> which is the latest I think. >>>>>>>>> But you are right...and I noticed that too.... It doesn't have >>>>>>>>> atomic binary >>>>>>>>> the os-release is >>>>>>>>> >>>>>>>>> *NAME=Fedora* >>>>>>>>> *VERSION="29 (Cloud Edition)"* >>>>>>>>> *ID=fedora* >>>>>>>>> *VERSION_ID=29* >>>>>>>>> *PLATFORM_ID="platform:f29"* >>>>>>>>> *PRETTY_NAME="Fedora 29 (Cloud Edition)"* >>>>>>>>> *ANSI_COLOR="0;34"* >>>>>>>>> *CPE_NAME="cpe:/o:fedoraproject:fedora:29"* >>>>>>>>> *HOME_URL="https://fedoraproject.org/ >>>>>>>>> "* >>>>>>>>> *DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/ >>>>>>>>> "* >>>>>>>>> *SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help >>>>>>>>> "* >>>>>>>>> *BUG_REPORT_URL="https://bugzilla.redhat.com/ >>>>>>>>> "* >>>>>>>>> *REDHAT_BUGZILLA_PRODUCT="Fedora"* >>>>>>>>> *REDHAT_BUGZILLA_PRODUCT_VERSION=29* >>>>>>>>> *REDHAT_SUPPORT_PRODUCT="Fedora"* >>>>>>>>> *REDHAT_SUPPORT_PRODUCT_VERSION=29* >>>>>>>>> *PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy >>>>>>>>> "* >>>>>>>>> *VARIANT="Cloud Edition"* >>>>>>>>> *VARIANT_ID=cloud* >>>>>>>>> >>>>>>>>> >>>>>>>>> so not sure why I don't have atomic tho >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sat, Feb 2, 2019 at 7:53 PM Clemens < >>>>>>>>> clemens.hardewig at crandale.de> wrote: >>>>>>>>> >>>>>>>>>> Now to the failure of your part-013: Are you sure that you used >>>>>>>>>> the glance image ‚fedora-atomic-latest‘ and not some other fedora image? >>>>>>>>>> Your error message below suggests that your image does not contain ‚atomic‘ >>>>>>>>>> as part of the image … >>>>>>>>>> >>>>>>>>>> + _prefix=docker.io/openstackmagnum/ >>>>>>>>>> + atomic install --storage ostree --system --system-package no >>>>>>>>>> --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name >>>>>>>>>> heat-container-agent >>>>>>>>>> docker.io/openstackmagnum/heat-container-agent:queens-stable >>>>>>>>>> ./part-013: line 8: atomic: command not found >>>>>>>>>> + systemctl start heat-container-agent >>>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>>> heat-container-agent.service not found. >>>>>>>>>> >>>>>>>>>> Am 02.02.2019 um 17:36 schrieb Alfredo De Luca < >>>>>>>>>> alfredo.deluca at gmail.com>: >>>>>>>>>> >>>>>>>>>> Failed to start heat-container-agent.service: Unit >>>>>>>>>> heat-container-agent.service not found. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> *Alfredo* >>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Alfredo* >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> *Alfredo* >>>>> >>>>> >>> >>> -- >>> *Alfredo* >>> >>> > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Thu Feb 7 10:08:04 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 7 Feb 2019 10:08:04 +0000 Subject: Rocky and older Ceph compatibility In-Reply-To: References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: Linus, We've basically upgraded Ceph and OpenStack independently over the past years (now on Luminous/Rocky). One thing to keep in mind after upgrading Ceph is to not enable new Ceph tunables that older clients may not know about. FWIU, upgrading alone will not enable new tunables, though. HTH, Arne > On 6 Feb 2019, at 18:55, Erik McCormick wrote: > > On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: >> >> Hi all, >> >> I'm working on upgrading our cloud, which consists of a block storage >> system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA >> Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As >> part of the upgrade plan we are discussing first going to Rocky while >> keeping the block system at the "Kraken" release. >> > > For the most part it comes down to your client libraries. Personally, > I would upgrade Ceph first, leaving Openstack running older client > libraries. I did this with Jewel clients talking to a Luminous > cluster, so you should be fine with K->M. Then, when you upgrade > Openstack, your client libraries can get updated along with it. If you > do Openstack first, you'll need to come back around and update your > clients, and that will require you to restart everything a second > time. > . >> It would be helpful to know if anyone has attempted to run the Rocky >> Cinder/Glance drivers with Ceph Kraken or older? >> > I haven't done this specific combination, but I have mixed and matched > Openstack and Ceph versions without any issues. I have MItaka, Queens, > and Rocky all talking to Luminous without incident. > > -Erik >> References or documentation is welcomed. I fail to find much information >> online, but perhaps I'm looking in the wrong places or I'm asking a >> question with an obvious answer. >> >> Thanks! >> >> Best regards, >> Linus >> UPPMAX >> >> >> >> >> >> >> >> >> När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ >> >> E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy >> > -- Arne Wiebalck CERN IT From cdent+os at anticdent.org Thu Feb 7 10:34:12 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 7 Feb 2019 10:34:12 +0000 (GMT) Subject: [ironic] Hardware leasing with Ironic In-Reply-To: <20190206213222.43nin24mkbqhsrw7@redhat.com> References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> <20190206213222.43nin24mkbqhsrw7@redhat.com> Message-ID: On Wed, 6 Feb 2019, Lars Kellogg-Stedman wrote: > I'm still not clear on whether there's any way to make this work with > existing tools, or if it makes sense to figure out to make Nova do > this or if we need something else sitting in front of Ironic. If I recall the early conversations correctly, one of the thoughts/frustrations that brought placement into existence was the way in which there needed to be a pile of flavors, constantly managed to reflect the variety of resources in the "cloud"; wouldn't it be nice to simply reflect those resources, ask for the things you wanted, not need to translate that into a flavor, and not need to create a new flavor every time some new thing came along? It wouldn't be super complicated for Ironic to interact directly with placement to report hardware inventory at regular intervals and to get a list of machines that meet the "at least X GB RAM and Y GB disk space" requirements when somebody wants to boot (or otherwise select, perhaps for later use) a machine, circumventing nova and concepts like flavors. As noted elsewhere in the thread you lose concepts of tenancy, affinity and other orchestration concepts that nova provides. But if those don't matter, or if the shape of those things doesn't fit, it might (might!) be a simple matter of programming... I seem to recall there have been several efforts in this direction over the years, but not any that take advantage of placement. One thing to keep in mind is the reasons behind the creation of custom resource classes like CUSTOM_BAREMETAL_GOLD for reporting ironic inventory (instead of the actual available hardware): A job on baremetal consumes all of it. If Ironic is reporting granular inventory, when it claims a big machine if the initial request was for a smaller machine, the claim would either need to be for all the stuff (to not leave inventory something else might like to claim) or some other kind of inventory manipulation (such as adjusting reserved) might be required. One option might be to have all inventoried machines to have classes of resource for hardware and then something like a PHYSICAL_MACHINE class with a value of 1. When a request is made (including the PHSYICAL_MACHINE=1), the returned resources are sorted by "best fit" and an allocation is made. PHYSICAL_MACHINE goes to 0, taking that resource provider out of service, but leaving the usage an accurate representation of reality. I think it might be worth exploring, and so it's clear I'm not talking from my armchair here, I've been doing some experiments/hacks with launching VMs with just placement, etcd and a bit of python that have proven quite elegant and may help to demonstrate how simple an initial POC that talked with ironic instead could be: https://github.com/cdent/etcd-compute -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Thu Feb 7 11:06:46 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 7 Feb 2019 12:06:46 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Doug Hellmann wrote: > [...] > During the Train series goal discussion in Berlin we talked about having > a goal of ensuring that each team had documentation for bringing new > contributors onto the team. Offering specific mentoring resources seems > to fit nicely with that goal, and doing it in each team's repository in > a consistent way would let us build a central page on docs.openstack.org > to link to all of the team contributor docs, like we link to the user > and installation documentation, without requiring us to find a separate > group of people to manage the information across the entire community. I'm a bit skeptical of that approach. Proper peer mentoring takes a lot of time, so I expect there will be a limited number of "I'll spend significant time helping you if you help us" offers. I don't envision potential contributors to browse dozens of project-specific "on-boarding doc" to find them. I would rather consolidate those offers on a single page. So.. either some magic consolidation job that takes input from all of those project-specific repos to build a nice rendered list... Or just a wiki page ? -- Thierry Carrez (ttx) From dangtrinhnt at gmail.com Thu Feb 7 11:20:46 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 7 Feb 2019 20:20:46 +0900 Subject: [TC][Searchlight] Project health evaluation In-Reply-To: <20190206133648.GB28569@sm-workstation> References: <20190206133648.GB28569@sm-workstation> Message-ID: Thank Sean for your comments. [6] I thought it would be the indication of the current PTLs. [7] So I will just communicate with the responding TC members to update. Thanks again, On Wed, Feb 6, 2019 at 10:36 PM Sean McGinnis wrote: > > > > As we're reaching the Stein-3 milestone [5] and preparing for the Denver > > summit. We, as a team, would like have a formal project health evaluation > > in several aspects such as active contributors / team, planning, bug > fixes, > > features, etc. We would love to have some voice from the TC team and > anyone > > from the community who follows our effort during the Stein cycle. We then > > would want to update the information at [6] and [7] to avoid any > confusion > > that may stop potential contributors or users to come to Searchlight. > > > > [1] https://review.openstack.org/#/c/588644/ > > [2] > > > https://www.dangtrinh.com/2018/10/searchlight-at-stein-1-weekly-report.html > > [3] > https://www.dangtrinh.com/2019/01/searchlight-at-stein-2-r-14-r-13.html > > [4] > > > https://docs.openstack.org/searchlight/latest/user/usecases.html#our-vision > > [5] https://releases.openstack.org/stein/schedule.html > > [6] https://governance.openstack.org/election/results/stein/ptl.html > > [7] https://wiki.openstack.org/wiki/OpenStack_health_tracker > > > > It really looks like great progress with Searchlight over this release. > Nice > work Trinh and all that have been involved in that. > > [6] is a historical record of what happened with the PTL election. What > would > you want to update there? The best path forward, in my opinion, is to make > sure > there is a clear PTL candidate for the Train release. > > [7] is a periodic update of notes between TC members and the projects. If > you > would like to get more information added there, I would recommend working > with > the two TC members assigned to Searchlight to get an update. That appears > to be > Chris Dent and Dims. > > Sean > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 7 11:22:26 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 12:22:26 +0100 Subject: [nova][queens] qeumu-guest-agent Message-ID: Hello, is it possible to force metadata for instances like the following ? hw_qemu_guest_agent='yes' os_require_quiesce='yes' I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes instances can start from volumes ( for example after a cnder backup). Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 7 11:23:54 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 7 Feb 2019 12:23:54 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> Message-ID: <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> Adam Spiers wrote: > [...] > Sure.  I particularly agree with your point about processes; I think the > TC (or whoever else volunteers) could definitely help lower the barrier > to starting up a pop-up team by creating a cookie-cutter kind of > approach which would quickly set up any required infrastructure. For > example it could be a simple form or CLI-based tool posing questions > like the following, where the answers could facilitate the bootstrapping > process: > - What is the name of your pop-up team? > - Please enter a brief description of the purpose of your pop-up team. > - If you will use an IRC channel, please state it here. > - Do you need regular IRC meetings? > - Do you need a new git repository?  [If so, ...] > - Do you need a new StoryBoard project?  [If so, ...] > - Do you need a [badge] for use in Subject: headers on openstack-discuss? > etc. > > The outcome of the form could be anything from pointers to specific bits > of documentation on how to set up the various bits of infrastructure, > all the way through to automation of as much of the setup as is > possible.  The slicker the process, the more agile the community could > become in this respect. That's a great idea -- if the pop-up team concept takes on we could definitely automate stuff. In the mean time I feel like the next step is to document what we mean by pop-up team, list them, and give pointers to the type of resources you can have access to (and how to ask for them). In terms of "blessing" do you think pop-up teams should be ultimately approved by the TC ? On one hand that adds bureaucracy / steps to the process, but on the other having some kind of official recognition can help them... So maybe some after-the-fact recognition would work ? Let pop-up teams freely form and be listed, then have the TC declaring some of them (if not all of them) to be of public interest ? -- Thierry Carrez (ttx) From kchamart at redhat.com Thu Feb 7 11:29:59 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 7 Feb 2019 12:29:59 +0100 Subject: [nova] Floppy drive support =?utf-8?B?4oCU?= =?utf-8?Q?_does?= anyone rely on it? Message-ID: <20190207112959.GF5349@paraplu.home> Question for operators: Do anyone rely on floppy disk support in Nova? Background ---------- The "VENOM" vulnerability (CVE-2015-3456)[1] was caused due to a Floppy Disk Controller (FDC) being initialized for all x86 guests, regardless of their configuration — so even if a guest does not explicitly have a virtual floppy disk configured and attached, this issue was exploitable. As a result of that, upstream QEMU has suppressed the FDC for modern machine types (e.g. 'q35') by default — commit ea96bc629cb; from QEMU v2.4.0 onwards. From the commit message: "It is Very annoying to carry forward an outdatEd coNtroller with a mOdern Machine type." QEMU users can still get floppy devices, but they have to ask for them explicitly on the command-line. * * * Given that, and the use of floppy drives is generally not recommended in 2019, any objection to go ahead and remove support for floppy drives? Currently Nova allows the use of the floppy drive via these two disk image metadata properties: - hw_floppy_bus=fd - hw_rescue_device=floppy Filed this blueprint[2] to track this. * * * [1] https://access.redhat.com/articles/1444903 [2] https://blueprints.launchpad.net/nova/+spec/remove-support-for-floppy-disks -- /kashyap From smooney at redhat.com Thu Feb 7 11:53:51 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Feb 2019 11:53:51 +0000 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > Hello, is it possible to force metadata for instances like the following ? > hw_qemu_guest_agent='yes' > os_require_quiesce='yes' > > > I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes > instances can start from volumes ( for example after a cnder backup). if you are using a recent version of cinder/nova the image metadata is copied to the volume. i don't know if that is in queens or not but the only other way to force it that i can think of would be via the flavor. unfortunately the guest agnet is not supported in the flavor https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > Regards > Ignazio From ignaziocassano at gmail.com Thu Feb 7 12:16:39 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 13:16:39 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: Thanks. In queens hw_qemu_guest_agent is not considerated for v olumes because it belongs to "libvirt driver option for images". It is a problem for starting instance from backupped volumes :-( Ignazio Il giorno Gio 7 Feb 2019 12:53 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > Hello, is it possible to force metadata for instances like the following > ? > > hw_qemu_guest_agent='yes' > > os_require_quiesce='yes' > > > > > > I know if an instance is created from an image with the above metadata > the quemu-guest-agent works, but sometimes > > instances can start from volumes ( for example after a cnder backup). > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > i don't know if that is in queens or not but the only other way to force > it that i can think of would be via the flavor. > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > Regards > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 7 12:42:53 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 07:42:53 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: Thierry Carrez writes: > Doug Hellmann wrote: >> [...] >> During the Train series goal discussion in Berlin we talked about having >> a goal of ensuring that each team had documentation for bringing new >> contributors onto the team. Offering specific mentoring resources seems >> to fit nicely with that goal, and doing it in each team's repository in >> a consistent way would let us build a central page on docs.openstack.org >> to link to all of the team contributor docs, like we link to the user >> and installation documentation, without requiring us to find a separate >> group of people to manage the information across the entire community. > > I'm a bit skeptical of that approach. > > Proper peer mentoring takes a lot of time, so I expect there will be a > limited number of "I'll spend significant time helping you if you help > us" offers. I don't envision potential contributors to browse dozens of > project-specific "on-boarding doc" to find them. I would rather > consolidate those offers on a single page. > > So.. either some magic consolidation job that takes input from all of > those project-specific repos to build a nice rendered list... Or just a > wiki page ? > > -- > Thierry Carrez (ttx) > A wiki page would be nicely lightweight, so that approach makes some sense. Maybe if the only maintenance is to review the page periodically, we can convince one of the existing mentorship groups or the first contact SIG to do that. -- Doug From ignaziocassano at gmail.com Thu Feb 7 13:16:21 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 14:16:21 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: Hello, I also tryed to unprotect "libvirt driver options for images" and "instance config data" metadata definitions Then I associated them to flavors. With the above configuration I can define . os_require_quiesce='yes' and hw_qemu_guest_agent='yes' in a flavor bu starting an instance from a volume using that flavor did not solved the issue: qemu-guest agent did not work. It works only when an instance is created from an image with the above metadata. Ignazio Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > Hello, is it possible to force metadata for instances like the following > ? > > hw_qemu_guest_agent='yes' > > os_require_quiesce='yes' > > > > > > I know if an instance is created from an image with the above metadata > the quemu-guest-agent works, but sometimes > > instances can start from volumes ( for example after a cnder backup). > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > i don't know if that is in queens or not but the only other way to force > it that i can think of would be via the flavor. > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > Regards > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Feb 7 13:47:31 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Feb 2019 13:47:31 +0000 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: References: Message-ID: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> On Thu, 2019-02-07 at 14:16 +0100, Ignazio Cassano wrote: > Hello, > I also tryed to unprotect "libvirt driver options for images" and "instance config data" metadata definitions Then I > associated them to flavors. > With the above configuration I can define . os_require_quiesce='yes' and hw_qemu_guest_agent='yes' in a flavor bu > starting an instance from a volume using that flavor did not solved the issue: you can add the key to the flavor extra_spces but nova does not support that this is only supported in the image metadata. if you boot form volume unless you have this set in the image_metatada section of the volume nova will not use it. it may not use it even in that case. > qemu-guest agent did not work. > It works only when an instance is created from an image with the above metadata. > > Ignazio > > Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney ha scritto: > > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > > Hello, is it possible to force metadata for instances like the following ? > > > hw_qemu_guest_agent='yes' > > > os_require_quiesce='yes' > > > > > > > > > I know if an instance is created from an image with the above metadata the quemu-guest-agent works, but sometimes > > > instances can start from volumes ( for example after a cnder backup). > > if you are using a recent version of cinder/nova the image metadata is copied to the volume. > > i don't know if that is in queens or not but the only other way to force it that i can think of would be via the > > flavor. > > unfortunately the guest agnet is not supported in the flavor > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > > > > Regards > > > Ignazio > > From ignaziocassano at gmail.com Thu Feb 7 13:55:23 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 7 Feb 2019 14:55:23 +0100 Subject: [nova][queens] qeumu-guest-agent In-Reply-To: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> References: <6e3c9a74b0921c5f2ffa898367ae6f40131f88bf.camel@redhat.com> Message-ID: Hello, do you mean there is not any workaround? Ignazio Il giorno Gio 7 Feb 2019 14:47 Sean Mooney ha scritto: > On Thu, 2019-02-07 at 14:16 +0100, Ignazio Cassano wrote: > > Hello, > > I also tryed to unprotect "libvirt driver options for images" and > "instance config data" metadata definitions Then I > > associated them to flavors. > > With the above configuration I can define . os_require_quiesce='yes' and > hw_qemu_guest_agent='yes' in a flavor bu > > starting an instance from a volume using that flavor did not solved the > issue: > you can add the key to the flavor extra_spces but nova does not support > that > this is only supported in the image metadata. > if you boot form volume unless you have this set in the image_metatada > section of the volume > nova will not use it. it may not use it even in that case. > > > qemu-guest agent did not work. > > It works only when an instance is created from an image with the above > metadata. > > > > Ignazio > > > > Il giorno gio 7 feb 2019 alle ore 12:53 Sean Mooney > ha scritto: > > > On Thu, 2019-02-07 at 12:22 +0100, Ignazio Cassano wrote: > > > > Hello, is it possible to force metadata for instances like the > following ? > > > > hw_qemu_guest_agent='yes' > > > > os_require_quiesce='yes' > > > > > > > > > > > > I know if an instance is created from an image with the above > metadata the quemu-guest-agent works, but sometimes > > > > instances can start from volumes ( for example after a cnder backup). > > > if you are using a recent version of cinder/nova the image metadata is > copied to the volume. > > > i don't know if that is in queens or not but the only other way to > force it that i can think of would be via the > > > flavor. > > > unfortunately the guest agnet is not supported in the flavor > > > > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L7-L11 > > > > > > > > > > > Regards > > > > Ignazio > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Feb 7 14:01:40 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Feb 2019 23:01:40 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > Thierry Carrez writes: > > > Doug Hellmann wrote: > >> [...] > >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > I'm a bit skeptical of that approach. > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > limited number of "I'll spend significant time helping you if you help > > us" offers. I don't envision potential contributors to browse dozens of > > project-specific "on-boarding doc" to find them. I would rather > > consolidate those offers on a single page. > > > > So.. either some magic consolidation job that takes input from all of > > those project-specific repos to build a nice rendered list... Or just a > > wiki page ? > > > > -- > > Thierry Carrez (ttx) > > > > A wiki page would be nicely lightweight, so that approach makes some > sense. Maybe if the only maintenance is to review the page periodically, > we can convince one of the existing mentorship groups or the first > contact SIG to do that. Same can be achieved If we have a single link on doc.openstack.org or contributor guide with top section "Help-wanted" with subsection of each project specific help-wanted. project help wanted subsection can be build from help wanted section from project contributor doc. That way it is easy for the project team to maintain their help wanted list. Wiki page can have the challenge of prioritizing and maintain the list. -gmann > > -- > Doug > > From frode.nordahl at canonical.com Thu Feb 7 14:04:54 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Thu, 7 Feb 2019 15:04:54 +0100 Subject: [charms][zuul] State of external GitHub dependencies Message-ID: Hello all, The Charms projects are increasingly heavy users of external GitHub dependencies, and we are facing intermittent issues with this at the gate. Does anyone have ideas as to how we should handle this from the point of view of Charm teams? Anyone from Zuul have ideas/pointers on how we could help improve the external GitHub dependency support? As many of you know the OpenStack Charms project is in the process of replacing the framework for performing functional deployment testing of Charms with ``Zaza`` [0]. Two of the key features of the Zaza framework is reusability of tests simply by referencing already written tests with a Python module path in a test definition in a YAML file, and general applicability across other Charms, not just OpenStack specific ones. Because of this the Zaza project, which also contains the individual functional test modules, is hosted on GitHub and not on the OpenStack Infrastructure. Whenever a change is proposed to a charm that require new or changes to existing functional tests, we need a effective way for the individual contributor to have their Charm change (which is proposed on OpenStack Infrastructure) tested with code from their Zaza change (which is proposed as a PR on GitHub). We have had some success with adding ``Depends-On:`` and the full URL to the GitHub PR in the commit message. There is experimental support for using that as a gate check in Zuul, and Canonical's third party Charm CI is configured to pull the correct version of Zaza based on Depends-On referencing GitHub PRs. However, we often have to go through extra hoops to land things as the gate code appears to not always successfully handle GitHub PR references in Depends-On. For reference, after discussion in #openstack-infra I got a log excerpt [1] and a reference to a WIP PR [2] that might be relevant. 0: https://zaza.readthedocs.io/en/latest/ 1: http://paste.openstack.org/show/744664/ 2: https://review.openstack.org/#/c/613143/ -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Thu Feb 7 14:08:42 2019 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Thu, 7 Feb 2019 15:08:42 +0100 Subject: [charms][zuul] State of external GitHub dependencies In-Reply-To: References: Message-ID: Hello all, The Charms projects are increasingly heavy users of external GitHub dependencies, and we are facing intermittent issues with this at the gate. Does anyone have ideas as to how we should handle this from the point of view of Charm teams? Anyone from Zuul have ideas/pointers on how we could help improve the external GitHub dependency support? As many of you know the OpenStack Charms project is in the process of replacing the framework for performing functional deployment testing of Charms with ``Zaza`` [0]. Two of the key features of the Zaza framework is reusability of tests simply by referencing already written tests with a Python module path in a test definition in a YAML file, and general applicability across other Charms, not just OpenStack specific ones. Because of this the Zaza project, which also contains the individual functional test modules, is hosted on GitHub and not on the OpenStack Infrastructure. Whenever a change is proposed to a charm that require new or changes to existing functional tests, we need a effective way for the individual contributor to have their Charm change (which is proposed on OpenStack Infrastructure) tested with code from their Zaza change (which is proposed as a PR on GitHub). We have had some success with adding ``Depends-On:`` and the full URL to the GitHub PR in the commit message. There is experimental support for using that as a gate check in Zuul, and Canonical's third party Charm CI is configured to pull the correct version of Zaza based on Depends-On referencing GitHub PRs. However, we often have to go through extra hoops to land things as the gate code appears to not always successfully handle GitHub PR references in Depends-On. For reference, after discussion in #openstack-infra I got a log excerpt [1] and a reference to a WIP PR [2] that might be relevant. 0: https://zaza.readthedocs.io/en/latest/ 1: http://paste.openstack.org/show/744664/ 2: https://review.openstack.org/#/c/613143/ -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Feb 7 14:16:46 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Feb 2019 09:16:46 -0500 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: <9bec296b-3bcf-1f06-1927-62f71395e03d@gmail.com> On 02/07/2019 03:02 AM, Kulazhenkov, Yury wrote: > One possible solution is to rename initiator to new name and create alias with deprecation warning for > old initiator name(should this alias be preserved more than one release?). > What do you think about it? That's exactly what I would suggest. Best, -jay From lyarwood at redhat.com Thu Feb 7 14:32:31 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 7 Feb 2019 14:32:31 +0000 Subject: [nova] [placement] extraction checkin meeting at 1700 UTC today In-Reply-To: References: Message-ID: <20190207143231.gtxpdounr3neleig@lyarwood.usersys.redhat.com> On 06-02-19 18:52:10, Chris Dent wrote: > We did not schedule a next check in meeting. When one needs to happen, > which it will, we'll figure that out and make an announcement. I had assumed this would be at the PTG so we could agree on a date early in T for the deletion of the code from Nova. I'm not sure that we need to meet anytime before that. > Thanks for your attention. If I made any errors above, or left > something out, please followup. If you have questions, please ask > them. This all looks present and correct, thanks for writing this up Chris! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From jaypipes at gmail.com Thu Feb 7 14:41:19 2019 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Feb 2019 09:41:19 -0500 Subject: =?UTF-8?Q?Re:_[nova]_Floppy_drive_support_=e2=80=94_does_anyone_rel?= =?UTF-8?Q?y_on_it=3f?= In-Reply-To: <20190207112959.GF5349@paraplu.home> References: <20190207112959.GF5349@paraplu.home> Message-ID: On 02/07/2019 06:29 AM, Kashyap Chamarthy wrote: > Given that, and the use of floppy drives is generally not recommended in > 2019, any objection to go ahead and remove support for floppy drives? No objections from me. -jay From aspiers at suse.com Thu Feb 7 14:42:27 2019 From: aspiers at suse.com (Adam Spiers) Date: Thu, 7 Feb 2019 14:42:27 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> Message-ID: <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Thierry Carrez wrote: >Adam Spiers wrote: >>[...] >>Sure.  I particularly agree with your point about processes; I think >>the TC (or whoever else volunteers) could definitely help lower the >>barrier to starting up a pop-up team by creating a cookie-cutter >>kind of approach which would quickly set up any required >>infrastructure. For example it could be a simple form or CLI-based >>tool posing questions like the following, where the answers could >>facilitate the bootstrapping process: >>- What is the name of your pop-up team? >>- Please enter a brief description of the purpose of your pop-up team. >>- If you will use an IRC channel, please state it here. >>- Do you need regular IRC meetings? >>- Do you need a new git repository?  [If so, ...] >>- Do you need a new StoryBoard project?  [If so, ...] >>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>etc. >> >>The outcome of the form could be anything from pointers to specific >>bits of documentation on how to set up the various bits of >>infrastructure, all the way through to automation of as much of the >>setup as is possible.  The slicker the process, the more agile the >>community could become in this respect. > >That's a great idea -- if the pop-up team concept takes on we could >definitely automate stuff. In the mean time I feel like the next step >is to document what we mean by pop-up team, list them, and give >pointers to the type of resources you can have access to (and how to >ask for them). Agreed - a quickstart document would be a great first step. >In terms of "blessing" do you think pop-up teams should be ultimately >approved by the TC ? On one hand that adds bureaucracy / steps to the >process, but on the other having some kind of official recognition can >help them... > >So maybe some after-the-fact recognition would work ? Let pop-up teams >freely form and be listed, then have the TC declaring some of them (if >not all of them) to be of public interest ? Yeah, good questions. The official recognition is definitely beneficial; OTOH I agree that requiring steps up-front might deter some teams from materialising. Automating these as much as possible would reduce the risk of that. One challenge I see facing an after-the-fact approach is that any requests for infrastructure (IRC channel / meetings / git repo / Storyboard project etc.) would still need to be approved in advance, and presumably a coordinated approach to approval might be more effective than one where some of these requests could be approved and others denied. I'm not sure what the best approach is - sorry ;-) From doug at doughellmann.com Thu Feb 7 15:04:14 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 10:04:14 -0500 Subject: [tc] agenda for upcoming TC meeting on 7 Feb In-Reply-To: References: Message-ID: Doug Hellmann writes: > TC Members, > > Our next meeting will be on Thursday, 7 Feb at 1400 UTC in > #openstack-tc. This email contains the agenda for the meeting. > > If you will not be able to attend, please include your name in the > "Apologies for Absence" section of the wiki page [0]. > > * corrections to TC member election section of bylaws are completed > (fungi, dhellmann) > > * status update for project team evaluations based on technical vision > (cdent, TheJulia) > > * defining the role of the TC (cdent, ttx) > > * keeping up with python 3 releases (dhellmann, gmann) > > * status update of Train cycle goals selection update (lbragstad, > evrardjp) > > * TC governance resolution voting procedures (dhellmann) > > * upcoming TC election (dhellmann) > > * review proposed OIP acceptance criteria (dhellmann, wendar) > > * TC goals for Stein (dhellmann) > > [0] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee > > -- > Doug > The minutes and logs for the meeting are available on the eavesdrop server: Minutes: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html Log: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.log.html -- Doug From mriedemos at gmail.com Thu Feb 7 15:11:10 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Feb 2019 09:11:10 -0600 Subject: [nova][qa][cinder] CI job changes In-Reply-To: <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> References: <666ffefd-7344-1853-7fd5-a2a32ea8d222@gmail.com> <168c1364bfb.b6bfd9ad351371.5730819222747190801@ghanshyammann.com> Message-ID: On 2/5/2019 11:09 PM, Ghanshyam Mann wrote: > > 3. Drop the integrated-gate (py2) template jobs (from nova) > > > > Nova currently runs with both the integrated-gate and > > integrated-gate-py3 templates, which adds a set of tempest-full and > > grenade jobs each to the check and gate pipelines. I don't think we need > > to be gating on both py2 and py3 at this point when it comes to > > tempest/grenade changes. Tempest changes are still gating on both so we > > have coverage there against breaking changes, but I think anything > > that's py2 specific would be caught in unit and functional tests (which > > we're running on both py27 and py3*). > > > > IMO, we should keep running integrated-gate py2 templates on the project gate also > along with Tempest. Jobs in integrated-gate-* templates cover a large amount of code so > running that for both versions make sure we keep our code running on py2 also. Rest other > job like tempest-slow, nova-next etc are good to run only py3 on project side (Tempest gate > keep running py2 version also). > > I am not sure if unit/functional jobs cover all code coverage and it is safe to ignore the py version > consideration from integration CI. As per TC resolution, python2 can be dropped during begning of > U cycle [1]. > > You have good point of having the integrated-gate py2 coverage on Tempest gate only is enough > but it has risk of merging the py2 breaking code on project side which will block the Tempest gate. > I agree that such chances are rare but still it can happen. > > Other point is that we need integrated-gate template running when Stein and Train become > stable branch (means on stable/stein and stable/train gate). Otherwise there are chance when > py2 broken code from U (because we will test only py3 in U) is backported to stable/Train or > stable/stein. > > My opinion on this proposal is to wait till we officially drop py2 which is starting of U. > > [1]https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html > > -gmann We talked about this during the nova meeting today [1]. My main concern right now is efficiency and avoid running redundant test coverage when there is otherwise not much of a difference in the configured environment, which is what we have between the py2 and py3 integrated-gate templates. This is also driving my push to drop the nova-multiattach job and fold those tests into the integrated gate and slim down the number of tests we run in the nova-next job. I understand the concern of dropping the integrated-gate template from nova is a risk to break something in those jobs unknowingly. However, I assume that most py2-specific issues in nova will be caught in unit and functional test jobs which we continue to run. Also, nova is also running a few integration jobs that run on py27 (devstack-plugin-ceph-tempest and neutron-grenade-multinode), so we still have py2 test coverage. We're not dropping py27 support and we're still testing it, but it's a lower priority with everything moving to python3 and I think our test coverage should reflect that. I think we should try this [2] and if it does become a major issue we can revisit adding the integrated-gate py2 template jobs in nova until the U release. [1] http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-02-07-14.00.log.html#l-113 [2] https://review.openstack.org/#/c/634949/ -- Thanks, Matt From doug at doughellmann.com Thu Feb 7 15:58:58 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 10:58:58 -0500 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Message-ID: Adam Spiers writes: > Thierry Carrez wrote: >>Adam Spiers wrote: >>>[...] >>>Sure.  I particularly agree with your point about processes; I think >>>the TC (or whoever else volunteers) could definitely help lower the >>>barrier to starting up a pop-up team by creating a cookie-cutter >>>kind of approach which would quickly set up any required >>>infrastructure. For example it could be a simple form or CLI-based >>>tool posing questions like the following, where the answers could >>>facilitate the bootstrapping process: >>>- What is the name of your pop-up team? >>>- Please enter a brief description of the purpose of your pop-up team. >>>- If you will use an IRC channel, please state it here. >>>- Do you need regular IRC meetings? >>>- Do you need a new git repository?  [If so, ...] >>>- Do you need a new StoryBoard project?  [If so, ...] >>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>etc. >>> >>>The outcome of the form could be anything from pointers to specific >>>bits of documentation on how to set up the various bits of >>>infrastructure, all the way through to automation of as much of the >>>setup as is possible.  The slicker the process, the more agile the >>>community could become in this respect. >> >>That's a great idea -- if the pop-up team concept takes on we could >>definitely automate stuff. In the mean time I feel like the next step >>is to document what we mean by pop-up team, list them, and give >>pointers to the type of resources you can have access to (and how to >>ask for them). > > Agreed - a quickstart document would be a great first step. > >>In terms of "blessing" do you think pop-up teams should be ultimately >>approved by the TC ? On one hand that adds bureaucracy / steps to the >>process, but on the other having some kind of official recognition can >>help them... >> >>So maybe some after-the-fact recognition would work ? Let pop-up teams >>freely form and be listed, then have the TC declaring some of them (if >>not all of them) to be of public interest ? > > Yeah, good questions. The official recognition is definitely > beneficial; OTOH I agree that requiring steps up-front might deter > some teams from materialising. Automating these as much as possible > would reduce the risk of that. What benefit do you perceive to having official recognition? > > One challenge I see facing an after-the-fact approach is that any > requests for infrastructure (IRC channel / meetings / git repo / > Storyboard project etc.) would still need to be approved in advance, > and presumably a coordinated approach to approval might be more > effective than one where some of these requests could be approved and > others denied. Isn't the point of these teams that they would be coordinating work within other existing projects? So I wouldn't expect them to need git repositories or new IRC channels. Meeting times, yes. > > I'm not sure what the best approach is - sorry ;-) > -- Doug From doug at doughellmann.com Thu Feb 7 16:07:33 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 11:07:33 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> Message-ID: Ghanshyam Mann writes: > ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > > Thierry Carrez writes: > > > > > Doug Hellmann wrote: > > >> [...] > > >> During the Train series goal discussion in Berlin we talked about having > > >> a goal of ensuring that each team had documentation for bringing new > > >> contributors onto the team. Offering specific mentoring resources seems > > >> to fit nicely with that goal, and doing it in each team's repository in > > >> a consistent way would let us build a central page on docs.openstack.org > > >> to link to all of the team contributor docs, like we link to the user > > >> and installation documentation, without requiring us to find a separate > > >> group of people to manage the information across the entire community. > > > > > > I'm a bit skeptical of that approach. > > > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > > limited number of "I'll spend significant time helping you if you help > > > us" offers. I don't envision potential contributors to browse dozens of > > > project-specific "on-boarding doc" to find them. I would rather > > > consolidate those offers on a single page. > > > > > > So.. either some magic consolidation job that takes input from all of > > > those project-specific repos to build a nice rendered list... Or just a > > > wiki page ? > > > > > > -- > > > Thierry Carrez (ttx) > > > > > > > A wiki page would be nicely lightweight, so that approach makes some > > sense. Maybe if the only maintenance is to review the page periodically, > > we can convince one of the existing mentorship groups or the first > > contact SIG to do that. > > Same can be achieved If we have a single link on doc.openstack.org or contributor guide with > top section "Help-wanted" with subsection of each project specific help-wanted. project help > wanted subsection can be build from help wanted section from project contributor doc. > > That way it is easy for the project team to maintain their help wanted list. Wiki page can > have the challenge of prioritizing and maintain the list. > > -gmann > > > > > -- > > Doug Another benefit of using the wiki is that SIGs and pop-up teams can add their own items. We don't have a good way for those groups to be integrated with docs.openstack.org right now. -- Doug From Kevin.Fox at pnnl.gov Thu Feb 7 16:19:03 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 16:19:03 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C293652@EX10MBOX03.pnnl.gov> Currently cross project work is very hard due to contributors not having enough political capital (review capital) in each project to get attention/priority. By the TC putting its weight behind a popupgroup, the projects can know, that this is important, even though I haven't seen that contributor much before. They may not need git repo's but new IRC channels do make sense I think. Sometimes you need to coordinate work between projects and trying to do that in one of the project channels might not facilitate that. Thanks, Kevin ________________________________________ From: Doug Hellmann [doug at doughellmann.com] Sent: Thursday, February 07, 2019 7:58 AM To: Adam Spiers; Thierry Carrez Cc: Sean McGinnis; openstack-discuss at lists.openstack.org Subject: Re: [all][tc] Formalizing cross-project pop-up teams Adam Spiers writes: > Thierry Carrez wrote: >>Adam Spiers wrote: >>>[...] >>>Sure. I particularly agree with your point about processes; I think >>>the TC (or whoever else volunteers) could definitely help lower the >>>barrier to starting up a pop-up team by creating a cookie-cutter >>>kind of approach which would quickly set up any required >>>infrastructure. For example it could be a simple form or CLI-based >>>tool posing questions like the following, where the answers could >>>facilitate the bootstrapping process: >>>- What is the name of your pop-up team? >>>- Please enter a brief description of the purpose of your pop-up team. >>>- If you will use an IRC channel, please state it here. >>>- Do you need regular IRC meetings? >>>- Do you need a new git repository? [If so, ...] >>>- Do you need a new StoryBoard project? [If so, ...] >>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>etc. >>> >>>The outcome of the form could be anything from pointers to specific >>>bits of documentation on how to set up the various bits of >>>infrastructure, all the way through to automation of as much of the >>>setup as is possible. The slicker the process, the more agile the >>>community could become in this respect. >> >>That's a great idea -- if the pop-up team concept takes on we could >>definitely automate stuff. In the mean time I feel like the next step >>is to document what we mean by pop-up team, list them, and give >>pointers to the type of resources you can have access to (and how to >>ask for them). > > Agreed - a quickstart document would be a great first step. > >>In terms of "blessing" do you think pop-up teams should be ultimately >>approved by the TC ? On one hand that adds bureaucracy / steps to the >>process, but on the other having some kind of official recognition can >>help them... >> >>So maybe some after-the-fact recognition would work ? Let pop-up teams >>freely form and be listed, then have the TC declaring some of them (if >>not all of them) to be of public interest ? > > Yeah, good questions. The official recognition is definitely > beneficial; OTOH I agree that requiring steps up-front might deter > some teams from materialising. Automating these as much as possible > would reduce the risk of that. What benefit do you perceive to having official recognition? > > One challenge I see facing an after-the-fact approach is that any > requests for infrastructure (IRC channel / meetings / git repo / > Storyboard project etc.) would still need to be approved in advance, > and presumably a coordinated approach to approval might be more > effective than one where some of these requests could be approved and > others denied. Isn't the point of these teams that they would be coordinating work within other existing projects? So I wouldn't expect them to need git repositories or new IRC channels. Meeting times, yes. > > I'm not sure what the best approach is - sorry ;-) > -- Doug From juliaashleykreger at gmail.com Thu Feb 7 16:21:45 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 7 Feb 2019 08:21:45 -0800 Subject: [ironic] Hardware leasing with Ironic In-Reply-To: References: <20190130152604.ik7zi2w7hrpabahd@redhat.com> <20190206154138.qfhgh5cax3j2r4qh@redhat.com> <20190206213222.43nin24mkbqhsrw7@redhat.com> Message-ID: An awesome email Chris, thanks! Various thoughts below. On Thu, Feb 7, 2019 at 2:40 AM Chris Dent wrote: > > On Wed, 6 Feb 2019, Lars Kellogg-Stedman wrote: > > > I'm still not clear on whether there's any way to make this work with > > existing tools, or if it makes sense to figure out to make Nova do > > this or if we need something else sitting in front of Ironic. The community is not going to disagree with supporting a different model for access. For some time we've had a consensus that there is a need, it is just getting there and understanding the full of extent of the needs that is the conundrum. Today, a user doesn't need nova to deploy a baremetal machine, they just need baremetal_admin access rights and to have chosen which machine they want. I kind of feel like if there are specific access patterns and usage rights, then it would be good to write those down because the ironic api has always been geared for admin usage or usage via nova. While not perfect, each API endpoint is ultimately represent a pool of hardware resources to be managed. Different patterns do have different needs, and some of that may be filtering the view of hardware from a user, or only showing a user what they have rights to access. For example, with some of the discussion, there would conceivably be a need to expose or point to bmc credentials for machines checked out. That seems like a huge conundrum and would require access rights and an entire workflow, that is outside of a fully trusted or single tenant admin trusted environment. Ultimately I think some of this is going to require discussion in a specification document to hammer out exactly what is needed from ironic. > > If I recall the early conversations correctly, one of the > thoughts/frustrations that brought placement into existence was the > way in which there needed to be a pile of flavors, constantly > managed to reflect the variety of resources in the "cloud"; wouldn't > it be nice to simply reflect those resources, ask for the things you > wanted, not need to translate that into a flavor, and not need to > create a new flavor every time some new thing came along? > I feel like this is also why we started heading in the direction of traits and why we now have the capability to have traits described about a specific node. Granted, traits doesn't solve it all, and operators kind of agreed (In the Sydney Forum) that they couldn't really agree on common trait names for additional baremetal traits. > It wouldn't be super complicated for Ironic to interact directly > with placement to report hardware inventory at regular intervals > and to get a list of machines that meet the "at least X > GB RAM and Y GB disk space" requirements when somebody wants to boot > (or otherwise select, perhaps for later use) a machine, circumventing > nova and concepts like flavors. As noted elsewhere in the thread you > lose concepts of tenancy, affinity and other orchestration concepts > that nova provides. But if those don't matter, or if the shape of > those things doesn't fit, it might (might!) be a simple matter of > programming... I seem to recall there have been several efforts in > this direction over the years, but not any that take advantage of > placement. > I know myself and others in the ironic community would be interested to see a proof of concept and to support this behavior. Admittedly I don't know enough about placement and I suspect the bulk of our primary contributors are in a similar boat as myself with multiple commitments that would really prevent spending time on an experiment such as this. > One thing to keep in mind is the reasons behind the creation of > custom resource classes like CUSTOM_BAREMETAL_GOLD for reporting > ironic inventory (instead of the actual available hardware): A job > on baremetal consumes all of it. If Ironic is reporting granular > inventory, when it claims a big machine if the initial request was > for a smaller machine, the claim would either need to be for all the > stuff (to not leave inventory something else might like to claim) or > some other kind of inventory manipulation (such as adjusting > reserved) might be required. I think some of this logic and some of the conundrums we've hit with nova interaction in the past is also one of the items that might seem as too much to take on, then again I guess it should end up being kind of simpler... I think. > > One option might be to have all inventoried machines to have classes > of resource for hardware and then something like a PHYSICAL_MACHINE > class with a value of 1. When a request is made (including the > PHSYICAL_MACHINE=1), the returned resources are sorted by "best fit" > and an allocation is made. PHYSICAL_MACHINE goes to 0, taking that > resource provider out of service, but leaving the usage an accurate > representation of reality. > I feel like this was kind of already the next discussion direction, but I suspect I'm going to need to see a data model to picture it in my head. :( > I think it might be worth exploring, and so it's clear I'm not > talking from my armchair here, I've been doing some > experiments/hacks with launching VMs with just placement, etcd and a > bit of python that have proven quite elegant and may help to > demonstrate how simple an initial POC that talked with ironic > instead could be: > > https://github.com/cdent/etcd-compute Awesome, I'll add it to my list of things to check out! > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent From Kevin.Fox at pnnl.gov Thu Feb 7 16:32:23 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 16:32:23 +0000 Subject: [TripleO] containers logging to stdout In-Reply-To: <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com>, <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> k8s only supports the json driver too. So if its the end goal, sticking to that might be easier. Thanks, Kevin ________________________________________ From: Cédric Jeanneret [cjeanner at redhat.com] Sent: Wednesday, February 06, 2019 10:11 PM To: openstack-discuss at lists.openstack.org Subject: Re: [TripleO] containers logging to stdout Hello, I'm currently testing things, related to this LP: https://bugs.launchpad.net/tripleo/+bug/1814897 We might hit some issues: - With docker, json-file log driver doesn't support any "path" options, and it outputs the files inside the container namespace (/var/lib/docker/container/ID/ID-json.log) - With podman, we actually have a "path" option, and it works nice. But the json-file isn't a JSON at all. - Docker supports journald and some other outputs - Podman doesn't support anything else than json-file Apparently, Docker seems to support a failing "journald" backend. So we might end with two ways of logging, if we're to keep docker in place. Cheers, C. On 2/5/19 11:11 AM, Cédric Jeanneret wrote: > Hello there! > > small thoughts: > - we might already push the stdout logging, in parallel of the current > existing one > > - that would already point some weakness and issues, without making the > whole thing crash, since there aren't that many logs in stdout for now > > - that would already allow to check what's the best way to do it, and > what's the best format for re-usability (thinking: sending logs to some > (k)elk and the like) > > This would also allow devs to actually test that for their services. And > thus going forward on this topic. > > Any thoughts? > > Cheers, > > C. > > On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >> Hello! >> >> >> In Queens, the a spec to provide the option to make containers log to >> standard output was proposed [1] [2]. Some work was done on that side, >> but due to the lack of traction, it wasn't completed. With the Train >> release coming, I think it would be a good idea to revive this effort, >> but make logging to stdout the default in that release. >> >> This would allow several benefits: >> >> * All logging from the containers would en up in journald; this would >> make it easier for us to forward the logs, instead of having to keep >> track of the different directories in /var/log/containers >> >> * The journald driver would add metadata to the logs about the container >> (we would automatically get what container ID issued the logs). >> >> * This wouldo also simplify the stacks (removing the Logging nested >> stack which is present in several templates). >> >> * Finally... if at some point we move towards kubernetes (or something >> in between), managing our containers, it would work with their logging >> tooling as well. >> >> >> Any thoughts? >> >> >> [1] >> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >> >> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >> >> >> > -- Cédric Jeanneret Software Engineer DFG:DF From allison at openstack.org Thu Feb 7 17:31:52 2019 From: allison at openstack.org (Allison Price) Date: Thu, 7 Feb 2019 11:31:52 -0600 Subject: OpenStack Foundation 2018 Annual Report Message-ID: <74148057-916E-4953-9E17-2193B269333A@openstack.org> Hi everyone, Today, we have published the OpenStack Foundation 2018 Annual Report [1], a yearly report highlighting the incredible work and advancements being achieved by the community. Thank you to all of the community contributors who helped pull the report together. Read the latest on: The Foundation’s latest initiatives to support Open Infrastructure Project updates from the OpenStack, Airship, Kata Containers, StarlingX, and Zuul communities Highlights from OpenStack Workings Groups and SIGs Community programs including OpenStack Upstream Institute, the Travel Support Program, Outreachy Internship Programs, and Contributor recognition OpenStack Foundation events including PTGs, Forums, OpenStack / OpenInfra Days, and the OpenStack Summit With almost 100,000 individual members, our community accomplished a lot last year. If you would like to continue to stay updated in the latest Foundation and project news, subscribe to the bi-weekly Open Infrastructure newsletter [2]. We look forward to another successful year in 2019! Cheers, Allison [1] https://www.openstack.org/foundation/2018-openstack-foundation-annual-report [2] https://www.openstack.org/community/email-signup Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Feb 7 17:58:50 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 7 Feb 2019 09:58:50 -0800 Subject: [nova][dev] project self-evaluation against TC technical vision Message-ID: <7176c3c4-52a3-50e9-2d6c-c4f546428c4b@gmail.com> Howdy everyone, About a month ago, the TC sent out a mail [1] asking projects to complete a self-evaluation exercise against the technical vision for OpenStack clouds, published by the TC [2]. The self-evaluation is to be added to our in-tree docs as a living document to be updated over time as things change. To paraphrase from [1], the intent of the exercise is to help projects identify areas they can work on to improve alignment with the rest of OpenStack. The doc should be a concise, easily consumable list of things that interested contributors can work on. Here are examples of vision reflection documents: * openstack/ironic: https://review.openstack.org/629060 * openstack/placement: https://review.openstack.org/630216 I have created an etherpad for us to use to fill in ideas for our vision reflection document: https://etherpad.openstack.org/p/nova-tc-vision-self-eval I'd like to invite everyone in the nova community including operators, users, and developers to join the etherpad and share their thoughts on how nova can improve its alignment to the technical vision for OpenStack clouds. Feel free to add or modify sections as you like. And once we've collected ideas for the doc, I (or anyone) can propose a doc patch to openstack/nova, to be included with our in-tree documentation. Cheers, -melanie [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] https://governance.openstack.org/tc/reference/technical-vision.html From sean.mcginnis at gmx.com Thu Feb 7 18:20:34 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 7 Feb 2019 12:20:34 -0600 Subject: [release] Release countdown for week R-8, February 11-15 Message-ID: <20190207182034.GA4139@sm-workstation> Your long awaited countdown email... Development Focus ----------------- It's probably a good time for teams to take stock of their library and client work that needs to be completed yet. The non-client library freeze is coming up, followed closely by the client lib freeze. Please plan accordingly so avoid any last minute rushes to get key functionality in. General Information ------------------- We have a few deadlines coming up as we get closer to the end of the cycle: * Non-client libraries (generally, any library that is not python-${PROJECT}client) must have a final release by February 28. Only critical bugfixes will be allowed past this point. Please make sure any important feature works has required library changes by this time. * Client libraries must have a final release by March 7. We will be proposing a few patches to switch some cycle-with-intermediary deliverables over to cycle-with-rc if they are not actually doing intermediary releases. PTLs and release liaisons, please watch for being added to any of those reviews. If the switch is not desired, this is a good time to do an intermediary release if you have been putting it off. More information can be found in our post to the mailing list back in December: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000465.html It is also a good time to start planning what highlights you want for your project team in the cycle highlights: Background on cycle-highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html Project Team Guide, Cycle-Highlights: https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights knelson [at] openstack.org/diablo_rojo on IRC is available if you need help selecting or writing your highlights Upcoming Deadlines & Dates -------------------------- Non-client library freeze: February 28 Stein-3 milestone: March 7 -- Sean McGinnis (smcginnis) From jgrosso at redhat.com Thu Feb 7 18:59:48 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Thu, 7 Feb 2019 13:59:48 -0500 Subject: [storyboard] sandbox to play with Message-ID: Hello Storyboard, Is there a sandbox where I can test some of the functionality compared to launchpad? Any help would be appreciated! Thanks, Jason Grosso Senior Quality Engineer - Cloud Red Hat OpenStack Manila jgrosso at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:27:00 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:27:00 -0800 Subject: [storyboard] sandbox to play with In-Reply-To: References: Message-ID: Yes there is! [1] Let us know if you have any other questions! -Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/ On Thu, Feb 7, 2019 at 11:01 AM Jason Grosso wrote: > Hello Storyboard, > > Is there a sandbox where I can test some of the functionality compared to > launchpad? > > Any help would be appreciated! > > Thanks, > > Jason Grosso > > Senior Quality Engineer - Cloud > > Red Hat OpenStack Manila > > jgrosso at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:45:21 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:45:21 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann wrote: > Jeremy Stanley writes: > > > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > > [...] > >> If I recall it correctly from Board+TC meeting, TC is looking for > >> a new home for this list ? Or we continue to maintain this in TC > >> itself which should not be much effort I feel. > > [...] > > > > It seems like you might be referring to the in-person TC meeting we > > held on the Sunday prior to the Stein PTG in Denver (Alan from the > > OSF BoD was also present). Doug's recap can be found in the old > > openstack-dev archive here: > > > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > > > > Quoting Doug, "...it wasn't clear that the TC was the best group to > > manage a list of 'roles' or other more detailed information. We > > discussed placing that information into team documentation or > > hosting it somewhere outside of the governance repository where more > > people could contribute." (If memory serves, this was in response to > > earlier OSF BoD suggestions that retooling the Help Wanted list to > > be a set of business-case-focused job descriptions might garner more > > uptake from the organizations they represent.) > > -- > > Jeremy Stanley > > Right, the feedback was basically that we might have more luck > convincing companies to provide resources if we were more specific about > how they would be used by describing the work in more detail. When we > started thinking about how that change might be implemented, it seemed > like managing the information a well-defined job in its own right, and > our usual pattern is to establish a group of people interested in doing > something and delegating responsibility to them. When we talked about it > in the TC meeting in Denver we did not have any TC members volunteer to > drive the implementation to the next step by starting to recruit a team. > > During the Train series goal discussion in Berlin we talked about having > a goal of ensuring that each team had documentation for bringing new > contributors onto the team. This was something I thought the docs team was working on pushing with all of the individual projects, but I am happy to help if they need extra hands. I think this is suuuuuper important. Each Upstream Institute we teach all the general info we can, but we always mention that there are project specific ways of handling things and project specific processes. If we want to lower the barrier for new contributors, good per project documentation is vital. > Offering specific mentoring resources seems > to fit nicely with that goal, and doing it in each team's repository in > a consistent way would let us build a central page on docs.openstack.org > to link to all of the team contributor docs, like we link to the user > and installation documentation, without requiring us to find a separate > group of people to manage the information across the entire community. I think maintaining the project liaison list[1] that the First Contact SIG has kind of does this? Between that list and the mentoring cohort program that lives under the D&I WG, I think we have things covered. Its more a matter of publicizing those than starting something new I think? > > So, maybe the next step is to convince someone to champion a goal of > improving our contributor documentation, and to have them describe what > the documentation should include, covering the usual topics like how to > actually submit patches as well as suggestions for how to describe areas > where help is needed in a project and offers to mentor contributors. Does anyone want to volunteer to serve as the goal champion for that? > > I can probably draft a rough outline of places where I see projects diverge and make a template, but where should we have that live? /me imagines a template similar to the infra spec template > -- > Doug > > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 19:52:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 11:52:15 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: On Thu, Feb 7, 2019 at 4:45 AM Doug Hellmann wrote: > Thierry Carrez writes: > > > Doug Hellmann wrote: > >> [...] > >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on > docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > I'm a bit skeptical of that approach. > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > limited number of "I'll spend significant time helping you if you help > > us" offers. I don't envision potential contributors to browse dozens of > > project-specific "on-boarding doc" to find them. I would rather > > consolidate those offers on a single page. > > > > So.. either some magic consolidation job that takes input from all of > > those project-specific repos to build a nice rendered list... Or just a > > wiki page ? > > > > -- > > Thierry Carrez (ttx) > > > > A wiki page would be nicely lightweight, so that approach makes some > sense. Maybe if the only maintenance is to review the page periodically, > we can convince one of the existing mentorship groups or the first > contact SIG to do that. > So I think that the First Contact SIG project liaison list kind of fits this. Its already maintained in a wiki and its already a list of people willing to be contacted for helping people get started. It probably just needs more attention and refreshing. When it was first set up we (the FC SIG) kind of went around begging for volunteers and then once we maxxed out on them, we said those projects without volunteers will have the role defaulted to the PTL unless they delegate (similar to how other liaison roles work). Long story short, I think we have the sort of mentoring things covered. And to back up an earlier email, project specific onboarding would be a good help too. In my mind I see the help most wanted list as being useful if we want to point people at specific projects that need more hands than others, but I think that the problem is that its hard to quanitfy/keep up to date and the TC was put in charge thinking that they had a better lay of the overall landscape? I think it could go away as documentation maintained by the TC. If we wanted to try to keep a like.. top 5 projects that need friends list... that could live in the FC SIG wiki as well I think. > -- > Doug > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 7 20:02:48 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Feb 2019 12:02:48 -0800 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: On Fri, Feb 1, 2019 at 6:26 AM Eric Fried wrote: > Tony- > > Thanks for following up on this! > > > The general idea is that the bot would: > > 1. Leave a -1 review on 'qualifying'[2] changes along with a request for > > some small change > > As I mentioned in the room, to give a realistic experience the bot > should wait two or three weeks before tendering its -1. > > I kid (in case that wasn't clear). > > > 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) > > on the change > > If you're compiling a list of eventual features for the bot, another one > that could be neat is, after the second patch set, the bot merges a > change that creates a merge conflict on the student's patch, which they > then have to go resolve. > Another, other eventual feature I talked about with Jimmy MacArthur a few weeks ago was if we could have the bot ask the new contributors how it was they got to this point in their contributions? Was it self driven? Was it a part of OUI, was it from other documentation? Would be interesting to see how our new contributors are making their way in so that we can better help them/fix where the system is falling down. Would also be really interesting data :) And who doesn't live data? > > Also, cross-referencing [1], it might be nice to update that tutorial at > some point to use the sandbox repo instead of nova. That could be done > once we have bot action so said action could be incorporated into the > tutorial flow. > > > [2] The details of what counts as qualifying can be fleshed out later > > but there needs to be something so that contributors using the > > sandbox that don't want to be bothered by the bot wont be. > > Yeah, I had been assuming it would be some tag in the commit message. If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 > > The possibilities are endless :P > > -efried > > [1] https://review.openstack.org/#/c/634333/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Thu Feb 7 20:21:03 2019 From: aspiers at suse.com (Adam Spiers) Date: Thu, 7 Feb 2019 20:21:03 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> Message-ID: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Doug Hellmann wrote: >Adam Spiers writes: >>Thierry Carrez wrote: >>>Adam Spiers wrote: >>>>[...] >>>>Sure.  I particularly agree with your point about processes; I think >>>>the TC (or whoever else volunteers) could definitely help lower the >>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>kind of approach which would quickly set up any required >>>>infrastructure. For example it could be a simple form or CLI-based >>>>tool posing questions like the following, where the answers could >>>>facilitate the bootstrapping process: >>>>- What is the name of your pop-up team? >>>>- Please enter a brief description of the purpose of your pop-up team. >>>>- If you will use an IRC channel, please state it here. >>>>- Do you need regular IRC meetings? >>>>- Do you need a new git repository?  [If so, ...] >>>>- Do you need a new StoryBoard project?  [If so, ...] >>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>etc. >>>> >>>>The outcome of the form could be anything from pointers to specific >>>>bits of documentation on how to set up the various bits of >>>>infrastructure, all the way through to automation of as much of the >>>>setup as is possible.  The slicker the process, the more agile the >>>>community could become in this respect. >>> >>>That's a great idea -- if the pop-up team concept takes on we could >>>definitely automate stuff. In the mean time I feel like the next step >>>is to document what we mean by pop-up team, list them, and give >>>pointers to the type of resources you can have access to (and how to >>>ask for them). >> >>Agreed - a quickstart document would be a great first step. >> >>>In terms of "blessing" do you think pop-up teams should be ultimately >>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>process, but on the other having some kind of official recognition can >>>help them... >>> >>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>freely form and be listed, then have the TC declaring some of them (if >>>not all of them) to be of public interest ? >> >>Yeah, good questions. The official recognition is definitely >>beneficial; OTOH I agree that requiring steps up-front might deter >>some teams from materialising. Automating these as much as possible >>would reduce the risk of that. > >What benefit do you perceive to having official recognition? Difficult to quantify a cultural impact ... Maybe it's not a big deal, but I'm pretty sure it makes a difference in that news of "official" things seems to propagate along the various grapevines better than skunkworks initiatives. One possibility is that the TC is the mother of all other grapevines ;-) So if the TC is aware of something then (perhaps naively) I expect that the ensuing discussion will accelerate spreading of awareness amongst rest of the community. And of course there are other official communication channels which could have a similar effect. >>One challenge I see facing an after-the-fact approach is that any >>requests for infrastructure (IRC channel / meetings / git repo / >>Storyboard project etc.) would still need to be approved in advance, >>and presumably a coordinated approach to approval might be more >>effective than one where some of these requests could be approved and >>others denied. > >Isn't the point of these teams that they would be coordinating work >within other existing projects? Yes. >So I wouldn't expect them to need git repositories or new IRC >channels. Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. From doug at doughellmann.com Thu Feb 7 20:27:39 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:27:39 -0500 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: Adam Spiers writes: > Doug Hellmann wrote: >>Adam Spiers writes: >>>Thierry Carrez wrote: >>>>Adam Spiers wrote: >>>>>[...] >>>>>Sure.  I particularly agree with your point about processes; I think >>>>>the TC (or whoever else volunteers) could definitely help lower the >>>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>>kind of approach which would quickly set up any required >>>>>infrastructure. For example it could be a simple form or CLI-based >>>>>tool posing questions like the following, where the answers could >>>>>facilitate the bootstrapping process: >>>>>- What is the name of your pop-up team? >>>>>- Please enter a brief description of the purpose of your pop-up team. >>>>>- If you will use an IRC channel, please state it here. >>>>>- Do you need regular IRC meetings? >>>>>- Do you need a new git repository?  [If so, ...] >>>>>- Do you need a new StoryBoard project?  [If so, ...] >>>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>>etc. >>>>> >>>>>The outcome of the form could be anything from pointers to specific >>>>>bits of documentation on how to set up the various bits of >>>>>infrastructure, all the way through to automation of as much of the >>>>>setup as is possible.  The slicker the process, the more agile the >>>>>community could become in this respect. >>>> >>>>That's a great idea -- if the pop-up team concept takes on we could >>>>definitely automate stuff. In the mean time I feel like the next step >>>>is to document what we mean by pop-up team, list them, and give >>>>pointers to the type of resources you can have access to (and how to >>>>ask for them). >>> >>>Agreed - a quickstart document would be a great first step. >>> >>>>In terms of "blessing" do you think pop-up teams should be ultimately >>>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>>process, but on the other having some kind of official recognition can >>>>help them... >>>> >>>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>>freely form and be listed, then have the TC declaring some of them (if >>>>not all of them) to be of public interest ? >>> >>>Yeah, good questions. The official recognition is definitely >>>beneficial; OTOH I agree that requiring steps up-front might deter >>>some teams from materialising. Automating these as much as possible >>>would reduce the risk of that. >> >>What benefit do you perceive to having official recognition? > > Difficult to quantify a cultural impact ... Maybe it's not a big > deal, but I'm pretty sure it makes a difference in that news of > "official" things seems to propagate along the various grapevines > better than skunkworks initiatives. One possibility is that the TC is > the mother of all other grapevines ;-) So if the TC is aware of > something then (perhaps naively) I expect that the ensuing discussion > will accelerate spreading of awareness amongst rest of the community. > And of course there are other official communication channels which > could have a similar effect. > >>>One challenge I see facing an after-the-fact approach is that any >>>requests for infrastructure (IRC channel / meetings / git repo / >>>Storyboard project etc.) would still need to be approved in advance, >>>and presumably a coordinated approach to approval might be more >>>effective than one where some of these requests could be approved and >>>others denied. >> >>Isn't the point of these teams that they would be coordinating work >>within other existing projects? > > Yes. > >>So I wouldn't expect them to need git repositories or new IRC >>channels. > > Never? Code and documentation doesn't always naturally belong in a > single project, especially when it relates to cross-project work. > Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel > in which to collaborate on a specific topic, it seems fairly clear > that none of #openstack-{monasca,vitrage,heat} are optimal choices. What's wrong with #openstack-dev? > The self-healing SIG has both a dedicated git repository (for docs, > code, and in order to be able to use StoryBoard) and a dedicated IRC > channel. We find both useful. > > Of course SIGs are more heavy-weight and long-lived so I'm not > suggesting that all or even necessarily the majority of popup teams > would need git/IRC. But I imagine it's possible in some cases, at > least. Right, SIGs are not designed to disappear after a task is done in the way that popup teams are. If a popup team is going to create code, it needs to end up in a repository that is owned and maintained by someone over the long term. If that requires a new repo, and one of the existing teams isn't a natural home, then I think a new regular team is likely a better fit for the task than a popup team. -- Doug From Kevin.Fox at pnnl.gov Thu Feb 7 20:29:04 2019 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 7 Feb 2019 20:29:04 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> , <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: <1A3C52DFCD06494D8528644858247BF01C29DE58@EX10MBOX03.pnnl.gov> yeah, I don't think k8s working groups have repos, just sigs. as working groups are short lived. Popup Groups should be similar to working groups I think. Thanks, Kevin ________________________________________ From: Adam Spiers [aspiers at suse.com] Sent: Thursday, February 07, 2019 12:21 PM To: Doug Hellmann Cc: Thierry Carrez; Sean McGinnis; openstack-discuss at lists.openstack.org Subject: Re: [all][tc] Formalizing cross-project pop-up teams Doug Hellmann wrote: >Adam Spiers writes: >>Thierry Carrez wrote: >>>Adam Spiers wrote: >>>>[...] >>>>Sure. I particularly agree with your point about processes; I think >>>>the TC (or whoever else volunteers) could definitely help lower the >>>>barrier to starting up a pop-up team by creating a cookie-cutter >>>>kind of approach which would quickly set up any required >>>>infrastructure. For example it could be a simple form or CLI-based >>>>tool posing questions like the following, where the answers could >>>>facilitate the bootstrapping process: >>>>- What is the name of your pop-up team? >>>>- Please enter a brief description of the purpose of your pop-up team. >>>>- If you will use an IRC channel, please state it here. >>>>- Do you need regular IRC meetings? >>>>- Do you need a new git repository? [If so, ...] >>>>- Do you need a new StoryBoard project? [If so, ...] >>>>- Do you need a [badge] for use in Subject: headers on openstack-discuss? >>>>etc. >>>> >>>>The outcome of the form could be anything from pointers to specific >>>>bits of documentation on how to set up the various bits of >>>>infrastructure, all the way through to automation of as much of the >>>>setup as is possible. The slicker the process, the more agile the >>>>community could become in this respect. >>> >>>That's a great idea -- if the pop-up team concept takes on we could >>>definitely automate stuff. In the mean time I feel like the next step >>>is to document what we mean by pop-up team, list them, and give >>>pointers to the type of resources you can have access to (and how to >>>ask for them). >> >>Agreed - a quickstart document would be a great first step. >> >>>In terms of "blessing" do you think pop-up teams should be ultimately >>>approved by the TC ? On one hand that adds bureaucracy / steps to the >>>process, but on the other having some kind of official recognition can >>>help them... >>> >>>So maybe some after-the-fact recognition would work ? Let pop-up teams >>>freely form and be listed, then have the TC declaring some of them (if >>>not all of them) to be of public interest ? >> >>Yeah, good questions. The official recognition is definitely >>beneficial; OTOH I agree that requiring steps up-front might deter >>some teams from materialising. Automating these as much as possible >>would reduce the risk of that. > >What benefit do you perceive to having official recognition? Difficult to quantify a cultural impact ... Maybe it's not a big deal, but I'm pretty sure it makes a difference in that news of "official" things seems to propagate along the various grapevines better than skunkworks initiatives. One possibility is that the TC is the mother of all other grapevines ;-) So if the TC is aware of something then (perhaps naively) I expect that the ensuing discussion will accelerate spreading of awareness amongst rest of the community. And of course there are other official communication channels which could have a similar effect. >>One challenge I see facing an after-the-fact approach is that any >>requests for infrastructure (IRC channel / meetings / git repo / >>Storyboard project etc.) would still need to be approved in advance, >>and presumably a coordinated approach to approval might be more >>effective than one where some of these requests could be approved and >>others denied. > >Isn't the point of these teams that they would be coordinating work >within other existing projects? Yes. >So I wouldn't expect them to need git repositories or new IRC >channels. Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. From doug at doughellmann.com Thu Feb 7 20:29:22 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:29:22 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Kendall Nelson writes: > On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann wrote: > >> Jeremy Stanley writes: >> >> > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: >> > [...] >> >> If I recall it correctly from Board+TC meeting, TC is looking for >> >> a new home for this list ? Or we continue to maintain this in TC >> >> itself which should not be much effort I feel. >> > [...] >> > >> > It seems like you might be referring to the in-person TC meeting we >> > held on the Sunday prior to the Stein PTG in Denver (Alan from the >> > OSF BoD was also present). Doug's recap can be found in the old >> > openstack-dev archive here: >> > >> > >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html >> > >> > Quoting Doug, "...it wasn't clear that the TC was the best group to >> > manage a list of 'roles' or other more detailed information. We >> > discussed placing that information into team documentation or >> > hosting it somewhere outside of the governance repository where more >> > people could contribute." (If memory serves, this was in response to >> > earlier OSF BoD suggestions that retooling the Help Wanted list to >> > be a set of business-case-focused job descriptions might garner more >> > uptake from the organizations they represent.) >> > -- >> > Jeremy Stanley >> >> Right, the feedback was basically that we might have more luck >> convincing companies to provide resources if we were more specific about >> how they would be used by describing the work in more detail. When we >> started thinking about how that change might be implemented, it seemed >> like managing the information a well-defined job in its own right, and >> our usual pattern is to establish a group of people interested in doing >> something and delegating responsibility to them. When we talked about it >> in the TC meeting in Denver we did not have any TC members volunteer to >> drive the implementation to the next step by starting to recruit a team. >> >> During the Train series goal discussion in Berlin we talked about having >> a goal of ensuring that each team had documentation for bringing new >> contributors onto the team. > > > This was something I thought the docs team was working on pushing with all > of the individual projects, but I am happy to help if they need extra > hands. I think this is suuuuuper important. Each Upstream Institute we > teach all the general info we can, but we always mention that there are > project specific ways of handling things and project specific processes. If > we want to lower the barrier for new contributors, good per project > documentation is vital. > > >> Offering specific mentoring resources seems >> to fit nicely with that goal, and doing it in each team's repository in >> a consistent way would let us build a central page on docs.openstack.org >> to link to all of the team contributor docs, like we link to the user >> and installation documentation, without requiring us to find a separate >> group of people to manage the information across the entire community. > > > I think maintaining the project liaison list[1] that the First Contact SIG > has kind of does this? Between that list and the mentoring cohort program > that lives under the D&I WG, I think we have things covered. Its more a > matter of publicizing those than starting something new I think? > > >> >> So, maybe the next step is to convince someone to champion a goal of >> improving our contributor documentation, and to have them describe what >> the documentation should include, covering the usual topics like how to >> actually submit patches as well as suggestions for how to describe areas >> where help is needed in a project and offers to mentor contributors. > >> Does anyone want to volunteer to serve as the goal champion for that? >> >> > I can probably draft a rough outline of places where I see projects diverge > and make a template, but where should we have that live? > > /me imagines a template similar to the infra spec template Could we put it in the project team guide? > > >> -- >> Doug >> >> > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -- Doug From doug at doughellmann.com Thu Feb 7 20:32:26 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Feb 2019 15:32:26 -0500 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: Kendall Nelson writes: > On Thu, Feb 7, 2019 at 4:45 AM Doug Hellmann wrote: > >> Thierry Carrez writes: >> >> > Doug Hellmann wrote: >> >> [...] >> >> During the Train series goal discussion in Berlin we talked about having >> >> a goal of ensuring that each team had documentation for bringing new >> >> contributors onto the team. Offering specific mentoring resources seems >> >> to fit nicely with that goal, and doing it in each team's repository in >> >> a consistent way would let us build a central page on >> docs.openstack.org >> >> to link to all of the team contributor docs, like we link to the user >> >> and installation documentation, without requiring us to find a separate >> >> group of people to manage the information across the entire community. >> > >> > I'm a bit skeptical of that approach. >> > >> > Proper peer mentoring takes a lot of time, so I expect there will be a >> > limited number of "I'll spend significant time helping you if you help >> > us" offers. I don't envision potential contributors to browse dozens of >> > project-specific "on-boarding doc" to find them. I would rather >> > consolidate those offers on a single page. >> > >> > So.. either some magic consolidation job that takes input from all of >> > those project-specific repos to build a nice rendered list... Or just a >> > wiki page ? >> > >> > -- >> > Thierry Carrez (ttx) >> > >> >> A wiki page would be nicely lightweight, so that approach makes some >> sense. Maybe if the only maintenance is to review the page periodically, >> we can convince one of the existing mentorship groups or the first >> contact SIG to do that. >> > > So I think that the First Contact SIG project liaison list kind of fits > this. Its already maintained in a wiki and its already a list of people > willing to be contacted for helping people get started. It probably just > needs more attention and refreshing. When it was first set up we (the FC > SIG) kind of went around begging for volunteers and then once we maxxed out > on them, we said those projects without volunteers will have the role > defaulted to the PTL unless they delegate (similar to how other liaison > roles work). > > Long story short, I think we have the sort of mentoring things covered. And > to back up an earlier email, project specific onboarding would be a good > help too. OK, that does sound pretty similar. I guess the piece that's missing is a description of the sort of help the team is interested in receiving. > In my mind I see the help most wanted list as being useful if we want to > point people at specific projects that need more hands than others, but I > think that the problem is that its hard to quanitfy/keep up to date and the > TC was put in charge thinking that they had a better lay of the overall > landscape? I think it could go away as documentation maintained by the TC. > If we wanted to try to keep a like.. top 5 projects that need friends > list... that could live in the FC SIG wiki as well I think. When we started the current list we had a pretty small set of very high priority gaps to fill. The list is growing, the priorities are changes, and the previous list wasn't especially effective. All of which is driving this desire to have a new list of some sort. -- Doug From jimmy at openstack.org Thu Feb 7 21:25:01 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 07 Feb 2019 15:25:01 -0600 Subject: [TC] [UC] Volunteers for Forum Selection Committee Message-ID: <5C5CA22D.8010202@openstack.org> Hello! We need 2 volunteers from the TC and 2 from the UC for the Denver Forum Selection Committee. For more information, please see: https://wiki.openstack.org/wiki/Forum Please reach out to myself or knelson at openstack.org if you're interested. Volunteers should respond before Feb 15, 2019. Note: volunteers are required to be currently serving on either the UC or the TC. Cheers, Jimmy From tony at bakeyournoodle.com Thu Feb 7 22:07:57 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 8 Feb 2019 09:07:57 +1100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: <20190207220756.GA12795@thor.bakeyournoodle.com> On Thu, Feb 07, 2019 at 12:02:48PM -0800, Kendall Nelson wrote: > Another, other eventual feature I talked about with Jimmy MacArthur a few > weeks ago was if we could have the bot ask the new contributors how it was > they got to this point in their contributions? Was it self driven? Was it a > part of OUI, was it from other documentation? Would be interesting to see > how our new contributors are making their way in so that we can better help > them/fix where the system is falling down. > > Would also be really interesting data :) And who doesn't live data? We could do that. Do you think it should block the 'approval' of the sandbox change or would it be a purely optional question/response? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jgrosso at redhat.com Thu Feb 7 22:13:23 2019 From: jgrosso at redhat.com (Jason Grosso) Date: Thu, 7 Feb 2019 17:13:23 -0500 Subject: [storyboard] sandbox to play with In-Reply-To: References: Message-ID: Awesome, thanks! On Thu, Feb 7, 2019 at 2:27 PM Kendall Nelson wrote: > Yes there is! [1] > > Let us know if you have any other questions! > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/ > > > On Thu, Feb 7, 2019 at 11:01 AM Jason Grosso wrote: > >> Hello Storyboard, >> >> Is there a sandbox where I can test some of the functionality compared to >> launchpad? >> >> Any help would be appreciated! >> >> Thanks, >> >> Jason Grosso >> >> Senior Quality Engineer - Cloud >> >> Red Hat OpenStack Manila >> >> jgrosso at redhat.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Feb 7 23:06:11 2019 From: melwittt at gmail.com (melanie witt) Date: Thu, 7 Feb 2019 15:06:11 -0800 Subject: [nova][dev] 4 weeks until feature freeze Message-ID: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> Hey all, We've 4 weeks left until feature freeze milestone s-3 on March 7. I've updated the blueprint status tracking etherpad: https://etherpad.openstack.org/p/nova-stein-blueprint-status For our Cycle Themes: Multi-cell operational enhancements: We have good progress going on handling of down cells and cross-cell resize. Counting quota usage from placement is still a WIP and I will be pushing updates this week. Compute nodes able to upgrade and exist with nested resource providers for multiple vGPU types: This effort has stalled during the cycle but the libvirt driver reshaper patch has updates coming soon. The xenapi driver reshaper patch has a -1 from Nov 28 and has not been updated yet in response. Help is needed here. The patches for multiple vGPU types (libvirt and xenapi) are stale since Rocky (as they depend on the reshapers). Volume-backed user experience and API improvement: The ability to specify volume type during server create is complete since 2018-10-16. However, the patches for being able to detach a boot volume and volume-backed server rebuild are in merge conflict/stale. Help is needed here. If you are the owner of an approved blueprint, please: * Add the blueprint if I've missed it * Update the status if it is not accurate * If your blueprint is in the "Wayward changes" section, please upload and update patches as soon as you can, to allow maximum time for review * If your patches are noted as Merge Conflict or WIP or needing an update, please update them and update the status on the etherpad * Add a note under your blueprint if you're no longer able to work on it this cycle Let us know if you have any questions or need assistance with your blueprint. Cheers, -melanie From miguel at mlavalle.com Thu Feb 7 23:47:58 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 7 Feb 2019 17:47:58 -0600 Subject: [openstack-dev] [neutron] Cancelling Drivers meeting on February 8th Message-ID: Hi Neutron Drivers, We don't have RFEs ready to be discussed during our weekly meeting. On top of that, some of you are traveling. So let's cancel this week's meeting. We will resume on the 15th. Best regards and safe travels for those of you returning home Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Fri Feb 8 00:20:37 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 8 Feb 2019 13:20:37 +1300 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Sorry for bringing this thread back to the top again. But I am wondering if there are people who have already deployed Trove in production? If yes, are you using service tenant model(create the database vm and related resources in the admin project) or using the flatten mode that the end user has access to the database vm and the control plane network as well? I am asking because we are going to deploy Trove in a private cloud, and we want to take more granular control of the resources created, e.g for every database vm, we will create the vm in the admin tenant, plug a port to the control plane(`CONF.default_neutron_networks`) and the other ports to the network given by the users, we also need to specify different security groups to different types of neutron ports for security reasons, etc. There are something missing in trove in order to achieve the above, I'm working on that, but I'd like to hear more suggestions. My irc name is lxkong in #openstack-trove, please ping me if you have something to share. Cheers, Lingxian Kong On Wed, Jan 23, 2019 at 7:35 PM Darek Król wrote: > On Wed, Jan 23, 2019 at 9:27 AM Fox, Kevin M > > wrote: > > > > I'd recommend at this point to maybe just run kubernetes across the > vms and push the guest agents/workload to them. > > > This sounds like an overkill to me. Currently, different projects in > openstack are solving this issue > in different ways, e.g. Octavia is using > two-way SSL authentication API between the controller service and > amphora(which is the vm running HTTP server inside), Magnum is using > heat-container-agent that is communicating with Heat via API, etc. However, > Trove chooses another option which has brought a lot of discussions over a > long time. > > > In the current situation, I don't think it's doable for each project > heading to one common solution, but Trove can learn from other projects to > solve its own problem. > > Cheers, > > Lingxian Kong > > The Octavia way of communication was discussed by Trove several times > in the context of secuirty. However, the security threat has been > eliminated by encryption. > I'm wondering if the Octavia way prevents DDOS attacks also ? > > Implementation of two-way SSL authentication API could be included in > the Trove priority list IMHO if it solves all issues with > security/DDOS attacks. This could also creates some share code between > both projects and help other services as well. > > Best, > Darek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Feb 8 01:26:36 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Feb 2019 19:26:36 -0600 Subject: [nova][dev] 4 weeks until feature freeze In-Reply-To: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> References: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> Message-ID: <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> On 2/7/2019 5:06 PM, melanie witt wrote: > The ability to specify volume type during server create is complete > since 2018-10-16. However, the patches for being able to detach a boot > volume and volume-backed server rebuild are in merge conflict/stale. > Help is needed here. It's Chinese New Year / Spring Festival this week so the developers that own these changes are on holiday. Kevin told me last week that once he's back he's going to complete the detach/attach root volume work. The spec was amended [1] and needs another spec core to approve (probably would be good to have Dan do that since was involved in the initial spec review). As for the volume-backed rebuild change, I asked Jie Li on the review if he needed someone to help push it forward and he said he did. It sounds like Kevin and/or Yikun might be able to help there. Yikun already has the Cinder side API changes all done and there is a patch for the python-cinderclient change, but the Cinder API change is blocked until we have an end-to-end working scenario in Tempest for the volume-backed rebuild flow in nova. I can help with the Tempest change when the time comes since that should be pretty straightforward. [1] https://review.openstack.org/#/c/619161/ -- Thanks, Matt From sam47priya at gmail.com Fri Feb 8 02:06:49 2019 From: sam47priya at gmail.com (Sam P) Date: Fri, 8 Feb 2019 11:06:49 +0900 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Chris, I need an invitation letter to get my German visa. Please let me know who to contact. --- Regards, Sampath On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > See you there! > > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: >> >> I'm all signed up. See you in Berlin! >> >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >> >>> Dear All, >>> The Evenbrite for the next ops meetup is now open, see >>> >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 >>> >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. >>> >>> Chris >>> on behalf of the ops meetups team >>> >>> -- >>> Chris Morgan > > > > -- > Chris Morgan From cjeanner at redhat.com Fri Feb 8 08:40:22 2019 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 8 Feb 2019 09:40:22 +0100 Subject: [TripleO] containers logging to stdout In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> References: <7cee5db5-f4cd-9e11-e0a3-7438154fb9af@redhat.com> <95dc4e6c-dc4a-7cc6-a34d-7999566725ba@redhat.com> <05cc6365-0502-0fa8-ce0d-741269b0c389@redhat.com> <1A3C52DFCD06494D8528644858247BF01C29CAC3@EX10MBOX03.pnnl.gov> Message-ID: On 2/7/19 5:32 PM, Fox, Kevin M wrote: > k8s only supports the json driver too. So if its the end goal, sticking to that might be easier. Cool then - the only big difference being the path, it shouldn't be that hard: docker outputs its json directly in a container-related path, while podman needs a parameter for it (in tripleo world, I've set it to /var/lib/containers/stdouts - we can change it if needed). Oh, not to mention the format - podman doesn't output a proper JSON, it's more a "kubernetes-like-ish" format iiuc[1]... A first patch has been merged by the way: https://review.openstack.org/635437 A second is waiting for reviews: https://review.openstack.org/635438 And a third will hit tripleo-heat-templates once we get the new paunch, in order to inject "--container-log-path /var/log/containers/stdouts". I suppose it would be best to push a parameter in heat (ContainerLogPath for example), I'll check how to do that and reflect its value in docker-puppet.py. Cheers, C. [1] https://github.com/containers/libpod/issues/2265#issuecomment-461060541 > > Thanks, > Kevin > ________________________________________ > From: Cédric Jeanneret [cjeanner at redhat.com] > Sent: Wednesday, February 06, 2019 10:11 PM > To: openstack-discuss at lists.openstack.org > Subject: Re: [TripleO] containers logging to stdout > > Hello, > > I'm currently testing things, related to this LP: > https://bugs.launchpad.net/tripleo/+bug/1814897 > > We might hit some issues: > - With docker, json-file log driver doesn't support any "path" options, > and it outputs the files inside the container namespace > (/var/lib/docker/container/ID/ID-json.log) > > - With podman, we actually have a "path" option, and it works nice. But > the json-file isn't a JSON at all. > > - Docker supports journald and some other outputs > > - Podman doesn't support anything else than json-file > > Apparently, Docker seems to support a failing "journald" backend. So we > might end with two ways of logging, if we're to keep docker in place. > > Cheers, > > C. > > On 2/5/19 11:11 AM, Cédric Jeanneret wrote: >> Hello there! >> >> small thoughts: >> - we might already push the stdout logging, in parallel of the current >> existing one >> >> - that would already point some weakness and issues, without making the >> whole thing crash, since there aren't that many logs in stdout for now >> >> - that would already allow to check what's the best way to do it, and >> what's the best format for re-usability (thinking: sending logs to some >> (k)elk and the like) >> >> This would also allow devs to actually test that for their services. And >> thus going forward on this topic. >> >> Any thoughts? >> >> Cheers, >> >> C. >> >> On 1/30/19 11:49 AM, Juan Antonio Osorio Robles wrote: >>> Hello! >>> >>> >>> In Queens, the a spec to provide the option to make containers log to >>> standard output was proposed [1] [2]. Some work was done on that side, >>> but due to the lack of traction, it wasn't completed. With the Train >>> release coming, I think it would be a good idea to revive this effort, >>> but make logging to stdout the default in that release. >>> >>> This would allow several benefits: >>> >>> * All logging from the containers would en up in journald; this would >>> make it easier for us to forward the logs, instead of having to keep >>> track of the different directories in /var/log/containers >>> >>> * The journald driver would add metadata to the logs about the container >>> (we would automatically get what container ID issued the logs). >>> >>> * This wouldo also simplify the stacks (removing the Logging nested >>> stack which is present in several templates). >>> >>> * Finally... if at some point we move towards kubernetes (or something >>> in between), managing our containers, it would work with their logging >>> tooling as well. >>> >>> >>> Any thoughts? >>> >>> >>> [1] >>> https://specs.openstack.org/openstack/tripleo-specs/specs/queens/logging-stdout.html >>> >>> [2] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog >>> >>> >>> >> > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From aspiers at suse.com Fri Feb 8 09:18:29 2019 From: aspiers at suse.com (Adam Spiers) Date: Fri, 8 Feb 2019 09:18:29 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> Message-ID: <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> Doug Hellmann wrote: >Adam Spiers writes: >>Doug Hellmann wrote: >>>Isn't the point of these teams that they would be coordinating work >>>within other existing projects? >> >>Yes. >> >>>So I wouldn't expect them to need git repositories or new IRC >>>channels. >> >>Never? Code and documentation doesn't always naturally belong in a >>single project, especially when it relates to cross-project work. >>Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel >>in which to collaborate on a specific topic, it seems fairly clear >>that none of #openstack-{monasca,vitrage,heat} are optimal choices. > >What's wrong with #openstack-dev? Maybe nothing, or maybe it's too noisy - I dunno ;-) Maybe the latter could be solved by setting up #openstack-breakout{1..10} for impromptu meetings where meetbot and channel logging are provided. >>The self-healing SIG has both a dedicated git repository (for docs, >>code, and in order to be able to use StoryBoard) and a dedicated IRC >>channel. We find both useful. >> >>Of course SIGs are more heavy-weight and long-lived so I'm not >>suggesting that all or even necessarily the majority of popup teams >>would need git/IRC. But I imagine it's possible in some cases, at >>least. > >Right, SIGs are not designed to disappear after a task is done in the >way that popup teams are. If a popup team is going to create code, it >needs to end up in a repository that is owned and maintained by someone >over the long term. If that requires a new repo, and one of the existing >teams isn't a natural home, then I think a new regular team is likely a >better fit for the task than a popup team. True. And for temporary docs / notes / brainstorming there's the wiki and etherpad. So yeah, in terms of infrastructure maybe IRC meetings in one of the communal meeting channels is the only thing needed. We'd still need to take care of ensuring that popups are easily discoverable by anyone, however. And this ties in with the "should we require official approval" debate - maybe a halfway house is the right balance between red tape and agility? For example, set up a table on a page like https://wiki.openstack.org/wiki/Popup_teams and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: - Team name - One-line summary of team purpose - Expected life span (optional) - Link to team wiki page or etherpad - Link to IRC meeting schedule (if any) - Other comments Or if that's too much of a free-for-all, it could be a slightly more formal process of submitting a review to add a row to a page: https://governance.openstack.org/popup-teams/ which would be similar in spirit to: https://governance.openstack.org/sigs/ Either this or a wiki page would ensure that anyone can easily discover what teams are currently in existence, or have been in the past (since historical information is often useful too). Just thinking out aloud ... From cdent+os at anticdent.org Fri Feb 8 12:34:18 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 8 Feb 2019 12:34:18 +0000 (GMT) Subject: [tc] cdent non-nomination for TC Message-ID: Next week sees the start of election season for the TC [1]. People often worry that incumbents always get re-elected so it is considered good form to announce if you are an incumbent and do not intend to run. I do not intend to run. I've done two years and that's enough. When I was first elected I had no intention of doing any more than one year but at the end of the first term I had not accomplished much of what I hoped, so stayed on. Now, at the end of the second term I still haven't accomplished much of what I hoped, so I think it is time to focus my energy in the places where I've been able to get some traction and give someone else—someone with a different approach—a chance. If you're interested in being on the TC, I encourage you to run. If you have questions about it, please feel free to ask me, but also ask others so you get plenty of opinions. And do your due diligence: Make sure you're clear with yourself about what the TC has been, is now, what you would like it to be, and what it can be. Elections are fairly far in advance of the end of term this time around. I'll continue in my TC responsibilities until the end of term, which is some time in April. I'm not leaving the community or anything like that, I'm simply narrowing my focus. Over the past several months I've been stripping things back so I can be sure that I'm not ineffectively over-committing myself to OpenStack but am instead focusing where I can be most useful and make the most progress. Stepping away from the TC is just one more part of that. Thanks very much for the experiences and for the past votes. [1] https://governance.openstack.org/election/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Fri Feb 8 13:15:42 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 8 Feb 2019 13:15:42 +0000 (GMT) Subject: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision Message-ID: Yesterday at the TC meeting [1] we decided that the in-progress task to make sure the technical vision document [2] has been fully evaluated by project teams needs a bit more time, so this message is being produced as a reminder. Back in January Julia produced a message [3] suggesting that each project consider producing a document where they compare their current state with an idealized state if they were in complete alignment with the vision. There were two hoped for outcomes: * A useful in-project document that could help guide future development. * Patches to the vision document to clarify or correct the vision where it is discovered to be not quite right. A few projects have started that process (see, for example, melwitt's recent message for some links [4]) resulting in some good plans as well as some improvements to the vision document [5]. In the future the TC would like to use the vision document to help evaluate projects applying to be "official" as well as determining if projects are "healthy". As such it is important that the document be constantly evolving toward whatever "correct" means. The process described in Julia's message [3] is a useful to make it so. Please check it out. Thanks. [1] http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html [2] https://governance.openstack.org/tc/reference/technical-vision.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002501.html [5] https://review.openstack.org/#/q/project:openstack/governance+file:reference/technical-vision.rst -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Fri Feb 8 13:52:32 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 07:52:32 -0600 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: <20190208135231.GA8848@sm-workstation> On Fri, Feb 08, 2019 at 12:34:18PM +0000, Chris Dent wrote: > > Next week sees the start of election season for the TC [1]. People > often worry that incumbents always get re-elected so it is > considered good form to announce if you are an incumbent and do > not intend to run. > Thanks for all you've done on the TC Chris! From sean.mcginnis at gmx.com Fri Feb 8 14:00:51 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 08:00:51 -0600 Subject: [tc] smcginnis non-nomination for TC Message-ID: <20190208140051.GB8848@sm-workstation> As Chris said, it is probably good for incumbents to make it known if they are not running. This is my second term on the TC. It's been great being part of this group and trying to contribute whatever I can. But I do feel it is important to make room for new folks to regularly join and help shape things. So with that in mind, along with the need to focus on some other areas for a bit, I do not plan to run in the upcoming TC election. I would highly encourage anyone interested to run for the TC. If you have any questions about it, feel free to ping me for any thoughts/advice/feedback. Thanks for the last two years. I think I've learned a lot since joining the TC, and hopefully I have been able to contribute some positive things over the years. I will still be around, so hopefully I will see folks in Denver. Sean From lbragstad at gmail.com Fri Feb 8 14:39:32 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 8 Feb 2019 08:39:32 -0600 Subject: [dev][tc][ptl] Continuing Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <4b835a4c-a2a7-d6ac-f8e9-ad61a591dd46@gmail.com> On 2/8/19 7:15 AM, Chris Dent wrote: > > Yesterday at the TC meeting [1] we decided that the in-progress task > to make sure the technical vision document [2] has been fully > evaluated by project teams needs a bit more time, so this message is > being produced as a reminder. > > Back in January Julia produced a message [3] suggesting that each > project consider producing a document where they compare their > current state with an idealized state if they were in complete > alignment with the vision. There were two hoped for outcomes: > > * A useful in-project document that could help guide future >   development. > * Patches to the vision document to clarify or correct the vision >   where it is discovered to be not quite right. > > A few projects have started that process (see, for example, > melwitt's recent message for some links [4]) resulting in some good > plans as well as some improvements to the vision document [5]. Is it worth knowing which projects have this underway? If so, do we want to track that somewhere? Colleen started generating notes for keystone [0] and there is a plan to get it proposed for review to our contributor guide sometime before the the Summit [1]. [0] https://etherpad.openstack.org/p/keystone-technical-vision-notes [1] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-02-05-16.01.log.html#l-13 > > In the future the TC would like to use the vision document to help > evaluate projects applying to be "official" as well as determining > if projects are "healthy". As such it is important that the document > be constantly evolving toward whatever "correct" means. The process > described in Julia's message [3] is a useful to make it so. Please > check it out. > > Thanks. > > [1] > http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-02-07-14.00.html > [2] https://governance.openstack.org/tc/reference/technical-vision.html > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html > [4] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002501.html > [5] > https://review.openstack.org/#/q/project:openstack/governance+file:reference/technical-vision.rst > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lars at redhat.com Fri Feb 8 15:38:17 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 8 Feb 2019 10:38:17 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" Message-ID: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Our "openstack tripleo deploy" is failing during "step 1" while trying to configure swift. It looks like the error comes from puppet apply. Looking at the ansible output, the command is: /usr/bin/puppet apply --summarize --detailed-exitcodes \ --color=false --logdest syslog --logdest console \ --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules \ --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server \ /etc/config.pp And the error is: "cannot load such file -- json" We're running recent delorean packages: so, python-tripleoclient @ 034edf0, and puppet-swift @ bc8dc51. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From emilien at redhat.com Fri Feb 8 16:51:46 2019 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 8 Feb 2019 11:51:46 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" In-Reply-To: <20190208153817.5kjcpu6ebrs35sop@redhat.com> References: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Message-ID: Hey Lars, I wish I could help but I suspect we'll need more infos. Please file a bug in Launchpad, explain how to reproduce, and provide more logs, like /var/log/messages maybe. Once the bug filed, we'll take a look and hopefully help you. Thank you, On Fri, Feb 8, 2019 at 10:44 AM Lars Kellogg-Stedman wrote: > Our "openstack tripleo deploy" is failing during "step 1" while trying > to configure swift. It looks like the error comes from puppet apply. > Looking at the ansible output, the command is: > > /usr/bin/puppet apply --summarize --detailed-exitcodes \ > --color=false --logdest syslog --logdest console \ > --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules \ > --tags > file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server > \ > /etc/config.pp > > And the error is: > > "cannot load such file -- json" > > We're running recent delorean packages: so, python-tripleoclient @ > 034edf0, and puppet-swift @ bc8dc51. > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Fri Feb 8 17:30:38 2019 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 8 Feb 2019 12:30:38 -0500 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Sam, On Thu, Feb 7, 2019 at 9:07 PM Sam P wrote: > > Hi Chris, > > I need an invitation letter to get my German visa. Please let me know > who to contact. > You can contact Ashlee at the foundation and she will be able to assist you. Her email is ashlee at openstack.org. See you in Berlin! > --- Regards, > Sampath > > > On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > > > See you there! > > > > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > >> > >> I'm all signed up. See you in Berlin! > >> > >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan >>> > >>> Dear All, > >>> The Evenbrite for the next ops meetup is now open, see > >>> > >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > >>> > >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. > >>> > >>> Chris > >>> on behalf of the ops meetups team > >>> > >>> -- > >>> Chris Morgan > > > > > > > > -- > > Chris Morgan From linus.nilsson at it.uu.se Thu Feb 7 09:44:34 2019 From: linus.nilsson at it.uu.se (Linus Nilsson) Date: Thu, 7 Feb 2019 10:44:34 +0100 Subject: Rocky and older Ceph compatibility In-Reply-To: References: <88212313-4fde-8e01-d804-27c6354b7046@it.uu.se> Message-ID: <4ff504fe-23dd-2763-aa08-bc98952db5be@it.uu.se> On 2/6/19 6:55 PM, Erik McCormick wrote: > On Wed, Feb 6, 2019 at 12:37 PM Linus Nilsson wrote: >> Hi all, >> >> I'm working on upgrading our cloud, which consists of a block storage >> system running Ceph 11.2.1 ("Kraken") and a controlplane running OSA >> Newton. We want to migrate to Ceph Mimic and OSA Rocky respectively. As >> part of the upgrade plan we are discussing first going to Rocky while >> keeping the block system at the "Kraken" release. >> > For the most part it comes down to your client libraries. Personally, > I would upgrade Ceph first, leaving Openstack running older client > libraries. I did this with Jewel clients talking to a Luminous > cluster, so you should be fine with K->M. Then, when you upgrade > Openstack, your client libraries can get updated along with it. If you > do Openstack first, you'll need to come back around and update your > clients, and that will require you to restart everything a second > time. > . Thanks. Upgrading first to Luminous is certainly an option. >> It would be helpful to know if anyone has attempted to run the Rocky >> Cinder/Glance drivers with Ceph Kraken or older? >> > I haven't done this specific combination, but I have mixed and matched > Openstack and Ceph versions without any issues. I have MItaka, Queens, > and Rocky all talking to Luminous without incident. > > -Erik OK, good to know. Perhaps the plan becomes upgrade to Luminous first and then Newton -> Ocata -> Pike -> Queens -> Rocky and finally go Luminous -> Mimic. Best regards, Linus UPPMAX >> References or documentation is welcomed. I fail to find much information >> online, but perhaps I'm looking in the wrong places or I'm asking a >> question with an obvious answer. >> >> Thanks! >> >> Best regards, >> Linus >> UPPMAX >> >> >> >> >> >> >> >> >> När du har kontakt med oss på Uppsala universitet med e-post så innebär det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/ >> >> E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy >> From rnoriega at redhat.com Thu Feb 7 17:45:45 2019 From: rnoriega at redhat.com (Ricardo Noriega De Soto) Date: Thu, 7 Feb 2019 18:45:45 +0100 Subject: [Neutron] Multi segment networks Message-ID: Hello guys, Quick question about multi-segment provider networks. Let's say I create a network and a subnet this way: neutron net-create multinet --segments type=dict list=true provider:physical_network='',provider:segmentation_id=1500,provider:network_type=vxlan provider:physical_network=physnet_sriov,provider:segmentation_id=2201,provider:network_typ e=vlan neutron subnet-create multinet --allocation-pool start=10.100.5.2,end=10.100.5.254 --name mn-subnet --dns-nameserver 8.8.8.8 10.100.5.0/24 Does it mean, that placing two VMs (with regular virtio interfaces), one in the vxlan segment and one on the vlan segment, would be able to ping each other without the need of a router? Or would it require an external router that belongs to the owner of the infrastructure? Thanks in advance! -- Ricardo Noriega Senior Software Engineer - NFV Partner Engineer | Office of Technology | Red Hat irc: rnoriega @freenode -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 8 19:25:51 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Feb 2019 19:25:51 +0000 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: [...] > I do not intend to run. I've done two years and that's enough. When > I was first elected I had no intention of doing any more than one > year but at the end of the first term I had not accomplished much of > what I hoped, so stayed on. Now, at the end of the second term I > still haven't accomplished much of what I hoped [...] You may not have accomplished what you set out to, but you certainly have made a difference. You've nudged lines of discussion into useful directions they might not otherwise have gone, provided a frequent reminder of the representative nature of our governance, and produced broadly useful summaries of our long-running conversations. I really appreciate what you brought to the TC, and am glad you'll still be around to hold the rest of us (and those who succeed you/us) accountable. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Feb 8 19:28:50 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Feb 2019 19:28:50 +0000 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <20190208192849.zx6equh4h5zibkqa@yuggoth.org> On 2019-02-08 08:00:51 -0600 (-0600), Sean McGinnis wrote: [...] > This is my second term on the TC. It's been great being part of > this group and trying to contribute whatever I can. But I do feel > it is important to make room for new folks to regularly join and > help shape things. So with that in mind, along with the need to > focus on some other areas for a bit, I do not plan to run in the > upcoming TC election. [...] Thanks for everything you've done these past couple of years, and I'm glad we'll have your experience as a contributor, PTL and TC member to help guide the OSF board of directors for the coming year! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Feb 8 21:15:40 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 8 Feb 2019 15:15:40 -0600 Subject: [telemetry][cloudkitty][magnum][solum][tacker][watcher][zun][release] Switching to cycle-with-rc Message-ID: <20190208211540.GA24049@sm-workstation> Following up from [1], I have proposed changes to a few cycle-with-intermediary service releases to by cycle-with-rc. We've already received some feedback from the affected projects, but just posting here to make sure there's an easy reference and to make sure others are aware of the changes. The patches proposed are: aodh - https://review.openstack.org/635656 ceilometer panko tricircle cloudkitty - https://review.openstack.org/635657 magnum - https://review.openstack.org/635658 solum - https://review.openstack.org/635659 tacker - https://review.openstack.org/635660 watcher - https://review.openstack.org/635662 zun - https://review.openstack.org/635663 If there are any questions, just let us know here or in the #openstack-release channel Thanks! Sean [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002502.html From mriedemos at gmail.com Fri Feb 8 22:36:28 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 8 Feb 2019 16:36:28 -0600 Subject: [cinder][qa] Cinder 3rd party CI jobs and multiattach tests Message-ID: <5ad5391b-9f3a-6d08-5eca-89e690dd9b03@gmail.com> With tempest change [1] the multiattach tests are enabled in tempest-full, tempest-full-py3 and tempest-slow job configurations. This was to allow dropping the nova-multiattach job and still retain test coverage in the upstream gate. There are 3rd party CI jobs that are basing their job configs on these tempest job configs, and as a result they will now fail if the storage backend driver does not support multiattach volumes and the job configuration does not override and set: ENABLE_VOLUME_MULTIATTACH: false in the tempest job config, like was done in the devstack-plugin-ceph-tempest jobs [2]. Let me know if there are any questions. [1] https://review.openstack.org/#/c/606978/ [2] https://review.openstack.org/#/c/634977/ -- Thanks, Matt From melwittt at gmail.com Fri Feb 8 23:59:28 2019 From: melwittt at gmail.com (melanie witt) Date: Fri, 8 Feb 2019 15:59:28 -0800 Subject: [nova][dev] 4 weeks until feature freeze In-Reply-To: <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> References: <0022f4bb-43c0-d35c-e3c3-d33269bdb843@gmail.com> <1ea6d402-61bf-0528-6de7-55c8fe920bc9@gmail.com> Message-ID: On Thu, 7 Feb 2019 19:26:36 -0600, Matt Riedemann wrote: > On 2/7/2019 5:06 PM, melanie witt wrote: >> The ability to specify volume type during server create is complete >> since 2018-10-16. However, the patches for being able to detach a boot >> volume and volume-backed server rebuild are in merge conflict/stale. >> Help is needed here. > > It's Chinese New Year / Spring Festival this week so the developers that > own these changes are on holiday. Kevin told me last week that once he's > back he's going to complete the detach/attach root volume work. The spec > was amended [1] and needs another spec core to approve (probably would > be good to have Dan do that since was involved in the initial spec review). > > As for the volume-backed rebuild change, I asked Jie Li on the review if > he needed someone to help push it forward and he said he did. It sounds > like Kevin and/or Yikun might be able to help there. Yikun already has > the Cinder side API changes all done and there is a patch for the > python-cinderclient change, but the Cinder API change is blocked until > we have an end-to-end working scenario in Tempest for the volume-backed > rebuild flow in nova. I can help with the Tempest change when the time > comes since that should be pretty straightforward. > > [1] https://review.openstack.org/#/c/619161/ That's all great news! Thanks for the summary and for volunteering to help with the Tempest change. We'll keep our eyes peeled for updates to those patch series coming soon. Cheers, -melanie From lars at redhat.com Sat Feb 9 00:21:08 2019 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 8 Feb 2019 19:21:08 -0500 Subject: [tripleo] puppet failing with "cannot load such file -- json" In-Reply-To: <20190208153817.5kjcpu6ebrs35sop@redhat.com> References: <20190208153817.5kjcpu6ebrs35sop@redhat.com> Message-ID: <20190209002108.6fulrhehg2ro62pi@redhat.com> On Fri, Feb 08, 2019 at 10:38:17AM -0500, Lars Kellogg-Stedman wrote: > And the error is: > > "cannot load such file -- json" > > We're running recent delorean packages: so, python-tripleoclient @ > 034edf0, and puppet-swift @ bc8dc51. False alarm, that was just failure to selinux relabel a filesystem after relocating /var/lib/docker. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From colleen at gazlene.net Sat Feb 9 14:27:32 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Sat, 09 Feb 2019 15:27:32 +0100 Subject: [dev][keystone] Keystone Team Update - Week of 4 February 2019 Message-ID: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> # Keystone Team Update - Week of 4 February 2019 ## News ### Performance of Loading Fernet/JWT Key Repositories Lance noticed that it seems that token signing/encryption keys are loaded from disk on every request and is therefore not very performant, and started investigating ways we could improve this[1][2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-07.log.html#t2019-02-07T17:55:34 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-08.log.html#t2019-02-08T17:09:24 ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 10 changes this week. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 73 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 2 new bugs and closed 3. Bugs opened (2) Bug #1814589 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814589 Bug #1814570 (keystone:Medium) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814570 Bugs fixed (3) Bug #1804483 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804483 Bug #1805406 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805406 Bug #1801095 (keystone:Wishlist) fixed by Artem Vasilyev https://bugs.launchpad.net/keystone/+bug/1801095 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html Feature freeze is in four weeks. Be mindful of the gate and try to submit and review things early. ## Shout-outs Congratulations and thank you to our Outreachy intern Islam for completing the first step in refactoring our unit tests to lean on our shiny new Flask framework! Great work! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From dkrol3 at gmail.com Sat Feb 9 18:04:24 2019 From: dkrol3 at gmail.com (=?UTF-8?Q?Darek_Kr=C3=B3l?=) Date: Sat, 9 Feb 2019 19:04:24 +0100 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Hello Lingxian, I’ve heard about a few tries of running Trove in production. Unfortunately, I didn’t have opportunity to get details about networking. At Samsung, we introducing Trove into our products for on-premise cloud platforms. However, I cannot share too many details about it, besides it is oriented towards performance and security is not a concern. Hence, the networking is very basic without any layers of abstractions if possible. Could you share more details about your topology and goals you want to achieve in Trove ? Maybe Trove team could help you in this ? Unfortunately, I’m not a network expert so I would need to get more details to understand your use case better. I would also like to get this opportunity to ask you for details about Octavia way of communication ? I'm wondering if the Octavia way prevents DDOS attacks also ? Best, Darek On Fri, 8 Feb 2019 at 01:20, Lingxian Kong wrote: > Sorry for bringing this thread back to the top again. > > But I am wondering if there are people who have already deployed Trove in > production? If yes, are you using service tenant model(create the database > vm and related resources in the admin project) or using the flatten mode > that the end user has access to the database vm and the control plane > network as well? > > I am asking because we are going to deploy Trove in a private cloud, and > we want to take more granular control of the resources created, e.g for > every database vm, we will create the vm in the admin tenant, plug a port > to the control plane(`CONF.default_neutron_networks`) and the other ports > to the network given by the users, we also need to specify different > security groups to different types of neutron ports for security reasons, > etc. > > There are something missing in trove in order to achieve the above, I'm > working on that, but I'd like to hear more suggestions. > > My irc name is lxkong in #openstack-trove, please ping me if you have > something to share. > > Cheers, > Lingxian Kong > > > On Wed, Jan 23, 2019 at 7:35 PM Darek Król wrote: > >> On Wed, Jan 23, 2019 at 9:27 AM Fox, Kevin M >> > wrote: >> >> > > I'd recommend at this point to maybe just run kubernetes across the >> vms and push the guest agents/workload to them. >> >> > This sounds like an overkill to me. Currently, different projects in >> openstack are solving this issue > in different ways, e.g. Octavia is using >> two-way SSL authentication API between the controller service and >> amphora(which is the vm running HTTP server inside), Magnum is using >> heat-container-agent that is communicating with Heat via API, etc. However, >> Trove chooses another option which has brought a lot of discussions over a >> long time. >> >> > In the current situation, I don't think it's doable for each project >> heading to one common solution, but Trove can learn from other projects to >> solve its own problem. >> > Cheers, >> > Lingxian Kong >> >> The Octavia way of communication was discussed by Trove several times >> in the context of secuirty. However, the security threat has been >> eliminated by encryption. >> I'm wondering if the Octavia way prevents DDOS attacks also ? >> >> Implementation of two-way SSL authentication API could be included in >> the Trove priority list IMHO if it solves all issues with >> security/DDOS attacks. This could also creates some share code between >> both projects and help other services as well. >> >> Best, >> Darek >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sat Feb 9 18:54:33 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 9 Feb 2019 12:54:33 -0600 Subject: [goals][upgrade-checkers] Week R-9 Update Message-ID: <78c401f2-e138-6491-219f-ee78c855548a@gmail.com> The only change since last week [1] is the swift patch was abandoned. The next closest patches to merge should be cloudkitty, ceilometer and aodh so if someone from those teams is reading this please check the open reviews [2]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002328.html [2] https://review.openstack.org/#/q/topic:upgrade-checkers+status:open -- Thanks, Matt From amy at demarco.com Sun Feb 10 16:04:45 2019 From: amy at demarco.com (Amy Marrich) Date: Sun, 10 Feb 2019 10:04:45 -0600 Subject: D&I WG Meeting Reminder Message-ID: The Diversity & Inclusion WG will hold it's next meeting Monday(2/11) at 17:00 UTC in the #openstack-diversity channel. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any other topics you wish to discuss at the meeting. Including the discuss list to invite potential new members! Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Sun Feb 10 17:28:26 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 10 Feb 2019 18:28:26 +0100 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> Message-ID: <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> First of all I like the idea of pop-up teams. > On 2019. Feb 8., at 10:18, Adam Spiers wrote: > > Doug Hellmann wrote: >> Adam Spiers writes: >>> Doug Hellmann wrote: >>>> Isn't the point of these teams that they would be coordinating work within other existing projects? >>> >>> Yes. >>> >>>> So I wouldn't expect them to need git repositories or new IRC channels. >>> >>> Never? Code and documentation doesn't always naturally belong in a single project, especially when it relates to cross-project work. Similarly, if (say) Monasca, Vitrage, and Heat all need an IRC channel in which to collaborate on a specific topic, it seems fairly clear that none of #openstack-{monasca,vitrage,heat} are optimal choices. >> >> What's wrong with #openstack-dev? > > Maybe nothing, or maybe it's too noisy - I dunno ;-) Maybe the latter could be solved by setting up #openstack-breakout{1..10} for impromptu meetings where meetbot and channel logging are provided. I think the project channels along with #opentack-dev should be enough to start with. As we are talking about activities concerning multiple projects many of the conversations will naturally land in one of the project channels depending on the stage of the design/development/testing work. Using the multi-attach work as an example we used the Cinder and Nova channels for daily communication which worked out well as we had all the stakeholders around without asking them to join yet-another-IRC-channel. Discussing more general items can happen on the regular meetings and details can be moved to the project channels where the details often hint which project team is the most affected. I would expect the pop-up team having representatives from all teams as well as all pop-up team members hanging out in all relevant project team channels. As a fall back for high level, all-project topics I believe #openstack-dev is a good choice expecting most of the people being in that channel already while also gaining further visibility to the topic there. >>> The self-healing SIG has both a dedicated git repository (for docs, code, and in order to be able to use StoryBoard) and a dedicated IRC channel. We find both useful. >>> Of course SIGs are more heavy-weight and long-lived so I'm not suggesting that all or even necessarily the majority of popup teams would need git/IRC. But I imagine it's possible in some cases, at least. >> >> Right, SIGs are not designed to disappear after a task is done in the way that popup teams are. If a popup team is going to create code, it needs to end up in a repository that is owned and maintained by someone over the long term. If that requires a new repo, and one of the existing teams isn't a natural home, then I think a new regular team is likely a better fit for the task than a popup team. > > True. And for temporary docs / notes / brainstorming there's the wiki and etherpad. So yeah, in terms of infrastructure maybe IRC meetings in one of the communal meeting channels is the only thing needed. > We'd still need to take care of ensuring that popups are easily discoverable by anyone, however. And this ties in with the "should we require official approval" debate - maybe a halfway house is the right balance between red tape and agility? For example, set up a table on a page like > https://wiki.openstack.org/wiki/Popup_teams > > and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: > - Team name > - One-line summary of team purpose > - Expected life span (optional) > - Link to team wiki page or etherpad > - Link to IRC meeting schedule (if any) > - Other comments > > Or if that's too much of a free-for-all, it could be a slightly more formal process of submitting a review to add a row to a page: > https://governance.openstack.org/popup-teams/ > > which would be similar in spirit to: > https://governance.openstack.org/sigs/ > > Either this or a wiki page would ensure that anyone can easily discover what teams are currently in existence, or have been in the past (since historical information is often useful too). > Just thinking out aloud … In my experience there are two crucial steps to make a cross-project team work successful. The first is making sure that the proposed new feature/enhancement is accepted by all teams. The second is to have supporters from every affected project team preferably also resulting in involvement during both design and review time maybe also during feature development and testing phase. When these two steps are done you can work on the design part and making sure you have the work items prioritized on each side in a way that you don’t end up with road blocks that would delay the work by multiple release cycles. To help with all this I would start the experiment with wiki pages and etherpads as these are all materials you can point to without too much formality to follow so the goals, drivers, supporters and progress are visible to everyone who’s interested and to the TC to follow-up on. Do we expect an approval process to help with or even drive either of the crucial steps I listed above? Thanks, Ildikó From ildiko.vancsa at gmail.com Sun Feb 10 17:43:38 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 10 Feb 2019 18:43:38 +0100 Subject: [infra][upstream-institute] Bot to vote on changes in the sandbox In-Reply-To: <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> References: <20190201043349.GB6183@thor.bakeyournoodle.com> <493e6ac1-a00b-3c83-cfc3-8ac3c96d7b51@fried.cc> Message-ID: @Tony: Thank you for working on this! > […] > > >> 2. Upon seeing a new patchset to the change vote +2 (and possibly +W?) >> on the change > > If you're compiling a list of eventual features for the bot, another one > that could be neat is, after the second patch set, the bot merges a > change that creates a merge conflict on the student's patch, which they > then have to go resolve. > > Also, cross-referencing [1], it might be nice to update that tutorial at > some point to use the sandbox repo instead of nova. That could be done > once we have bot action so said action could be incorporated into the > tutorial flow. > >> [2] The details of what counts as qualifying can be fleshed out later >> but there needs to be something so that contributors using the >> sandbox that don't want to be bothered by the bot wont be. > > Yeah, I had been assuming it would be some tag in the commit message. If > we ultimately enact different flows of varying complexity, the tag > syntax could be enriched so students in different courses/grades could > get different experiences. For example: > > Bot-Reviewer: > > or > > Bot-Reviewer: Level 2 > > or > > Bot-Reviewer: initial-downvote, merge-conflict, series-depth=3 > > The possibilities are endless :P By having tags we can turn off the bot for the in person trainings while we can also help people practice different things outside of trainings so I really like the approach! Once we have prototype working we can also think of putting some more pointers in the training slides to the Contributor Guide sections describing how to manage open reviews/changes to make sure people find it. Thanks, Ildikó > > -efried > > [1] https://review.openstack.org/#/c/634333/ > From cdent+os at anticdent.org Sun Feb 10 20:33:02 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Sun, 10 Feb 2019 20:33:02 +0000 (GMT) Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision Message-ID: This a "part 2" or "other half" of evaluating OpenStack projects in relation to the technical vision. See the other threads [1][2] for more information. In the conversations that led up to the creation of the vision document [3] one of the things we hoped was that the process could help identify ways in which existing projects could evolve to be better at what they do. This was couched in two ideas: * Helping to make sure that OpenStack continuously improves, in the right direction. * Helping to make sure that developers were working on projects that leaned more towards interesting and educational than frustrating and embarrassing, where choices about what to do and how to do it were straightforward, easy to share with others, so well-founded in agreed good practice that argument would be rare, and so few that it was easy to decide. Of course, to have a "right direction" you first have to have a direction, and thus the vision document and the idea of evaluating how aligned a project is with that. The other half, then, is looking at the projects from a development standpoint and thinking about what aspects of the project are: * Things (techniques, tools) the project contributors would encourage others to try. Stuff that has worked out well. * Things—given a clean slate, unlimited time and resources, the benefit of hindsight and without the weight of legacy—the project contributors would encourage others to not repeat. And documenting those things so they can be carried forward in time some place other than people's heads, and new projects or refactorings of existing projects can start on a good foot. A couple of examples: * Whatever we might say about the implementation (in itself and how it is used), the concept of a unified configuration file format, via oslo_config, is probably considered a good choice, and we should keep on doing that. * On the other hand, given hindsight and improvements in commonly available tools, using a homegrown WSGI (non-)framework (unless you are Swift) plus eventlet may not have been the way to go, yet because it is what's still there in nova, it often gets copied. It's not clear at this point whether these sorts of things should be documented in projects, or somewhere more central. So perhaps we can just talk about it here in email and figure something out. I'll followup with some I have for placement, since that's the project I've given the most attention. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002524.html [3] https://governance.openstack.org/tc/reference/technical-vision.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Sun Feb 10 21:08:29 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Sun, 10 Feb 2019 21:08:29 +0000 (GMT) Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: On Sun, 10 Feb 2019, Chris Dent wrote: > It's not clear at this point whether these sorts of things should be > documented in projects, or somewhere more central. So perhaps we can > just talk about it here in email and figure something out. I'll > followup with some I have for placement, since that's the project > I've given the most attention. Conversation on vision reflection for placement [1] is what reminded me that this part 2 is something we should be doing. I should disclaim that I'm the author of a lot of the architecture of placement so I'm hugely biased. Please call me out where my preferences are clouding reality. Other contributors to placement probably have other ideas. They would be great to hear. However, it's been at least two years since we started, so I think we can extract some useful lessons. Things have have worked out well (you can probably see a theme): * Placement is a single purpose service with, until very recently, only the WSGI service as the sole moving part. There are now placement-manage and placement-status commands, but they are rarely used (thankfully). This makes the system easier to reason about than something with multiple agents. Obviously some things need lots of agents. Placement isn't one of them. * Using gabbi [2] as the framework for functional tests of the API and using them to enable test-driven-development, via those functional tests, has worked out really well. It keeps the focus on that sole moving part: The API. * No RPC, no messaging, no notifications. * Very little configuration, reasonable defaults to that config. It's possible to run a working placement service with two config settings, if you are not using keystone. Keystone adds a few more, but not that much. * String adherence to WSGI norms (that is, any WSGI server can run a placement WSGI app) and avoidance of eventlet, but see below. The combination of this with small number of moving parts and little configuration make it super easy to deploy placement [3] in lots of different setups, from tiny to huge, scaling and robustifying those setups as required. * Declarative URL routing. There's a dict which maps HTTP method:URL pairs to python functions. Clear dispatch is a _huge_ help when debugging. Look one place, as a computer or human, to find where to go. * microversion-parse [4] has made microversion handling easy. Things that haven't gone so well (none of these are dire) and would have been nice to do differently had we but known: * Because of a combination of "we might need it later", "it's a handy tool and constraint" and "that's the way we do things" the interface between the placement URL handlers and the database is mediated through oslo versioned objects. Since there's no RPC, nor inter-version interaction, this is overkill. It also turns out that OVO getters and setters are a moderate factor in performance. Initially we were versioning the versioned objects, which created a lot of cognitive overhead when evolving the system, but we no longer do that, now that we've declared RPC isn't going to happen. * Despite the strict adherence to being a good WSGI citizen mentioned above, placement is using a custom (very limited) framework for the WSGI application. An initial proof of concept used flask but it was decided that introducing flask into the nova development environment would be introducing another thing to know when decoding nova. I suspect the expected outcome was that placement would reuse nova's framework, but the truth is I simply couldn't do it. Declarative URL dispatch was a critical feature that has proven worth it. The resulting code is relatively straightforward but it is unicorn where a boring pony would have been the right thing. Boring ponies are very often the right thing. I'm sure there are more here, but I've run out of brain. [1] https://review.openstack.org/#/c/630216/ [2] https://gabbi.readthedocs.io/ [3] https://anticdent.org/placement-from-pypi.html [4] https://pypi.org/project/microversion_parse/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From feilong at catalyst.net.nz Sun Feb 10 22:10:30 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Mon, 11 Feb 2019 11:10:30 +1300 Subject: [horizon] Horizon slowing down proportionally to the amount of instances (was: Horizon extremely slow with 400 instances) In-Reply-To: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> References: <33f1bdebb0efbb36dbb40af9564dde5daba62ffe.camel@evrard.me> Message-ID: Hi JP, We run into same problem before (and now I think). The root cause is because when Horizon loading the instances page, for each instance row, it has to decide if show an action, unfortunately, for each instance, there are more than 20+ actions to check, and more worse, some actions may involve an API call. And whenever you have 20+ instances (the default page size is 20), you will run into this issue. I have done some upstream before to mitigate this, but it definitely needs ajax to load those actions after loading the page. On 6/02/19 11:00 PM, Jean-Philippe Evrard wrote: > On Wed, 2019-01-30 at 21:10 -0500, Satish Patel wrote: >> folks, >> >> we have mid size openstack cloud running 400 instances, and day by >> day >> its getting slower, i can understand it render every single machine >> during loading instance page but it seems it's design issue, why not >> it load page from MySQL instead of running bunch of API calls behind >> then page? >> >> is this just me or someone else also having this issue? i am >> surprised >> why there is no good and robust Web GUI for very popular openstack? >> >> I am curious how people running openstack in large environment using >> Horizon. >> >> I have tired all kind of setting and tuning like memcache etc.. >> >> ~S >> > Hello, > > I took the liberty to change the mailing list and topic name: > FYI, the openstack-discuss ML will help you reach more people > (developers/operators). When you prefix your mail with [horizon], it > will even pass filters for some people:) > > Anyway... I would say horizon performance depends on many aspects of > your deployment, including keystone and caching, it's hard to know > what's going on with your environment with so little data. > > I hope you're figure it out :) > > Regards, > JP > > -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From anlin.kong at gmail.com Sun Feb 10 22:44:07 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 11 Feb 2019 11:44:07 +1300 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: On Sun, Feb 10, 2019 at 7:04 AM Darek Król wrote: > Hello Lingxian, > > I’ve heard about a few tries of running Trove in production. > Unfortunately, I didn’t have opportunity to get details about networking. > At Samsung, we introducing Trove into our products for on-premise cloud > platforms. However, I cannot share too many details about it, besides it is > oriented towards performance and security is not a concern. Hence, the > networking is very basic without any layers of abstractions if possible. > > Could you share more details about your topology and goals you want to > achieve in Trove ? Maybe Trove team could help you in this ? Unfortunately, > I’m not a network expert so I would need to get more details to understand > your use case better. > Yeah, I think trove team could definitely help. I've been working on a patch[1] to support different sgs for different type of neutron ports, the patch is for the use case that `CONF.default_neutron_networks` is configured as trove management network. Besides, I also have some patches[2][3] for trove need to be reviewed, not sure who are the right people I should ask for review now, but would appriciate if you could help. [1]: https://review.openstack.org/#/c/635705/ [2]: https://review.openstack.org/#/c/635099/ [3]: https://review.openstack.org/#/c/635138/ Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufar at onf-ambassador.org Mon Feb 11 02:33:15 2019 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Mon, 11 Feb 2019 09:33:15 +0700 Subject: [Neutron] Split Network node from controller Node Message-ID: Hi everyone, I Have existing OpenStack with 1 controller node (Network Node in controller node) and 2 compute node. I need to expand the architecture by splitting the network node from controller node (create 1 node for network). Do you have any recommended step or tutorial for doing this? Thanks Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Mon Feb 11 06:54:47 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 15:54:47 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD Message-ID: Hello, I recently ran a volume deletion test with deferred deletion enabled on the pike release. We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. If these test results are my fault, please let me know the correct test method. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Mon Feb 11 07:39:27 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 07:39:27 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: Message-ID: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT From skaplons at redhat.com Mon Feb 11 08:13:28 2019 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 11 Feb 2019 09:13:28 +0100 Subject: [Neutron] Split Network node from controller Node In-Reply-To: References: Message-ID: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> Hi, I don’t know if there is any tutorial for that but You can just deploy new node with agents which You need, then disable old DHCP/L3 agents with neutron API [1] and move existing networks/routers to agents in new host with neutron API. Docs for agents scheduler API is in [2] and [3]. Please keep in mind that when You will move routers to new agent You will have some downtime in data plane. [1] https://developer.openstack.org/api-ref/network/v2/#update-agent [2] https://developer.openstack.org/api-ref/network/v2/#l3-agent-scheduler [3] https://developer.openstack.org/api-ref/network/v2/#dhcp-agent-scheduler > Wiadomość napisana przez Zufar Dhiyaulhaq w dniu 11.02.2019, o godz. 03:33: > > Hi everyone, > > I Have existing OpenStack with 1 controller node (Network Node in controller node) and 2 compute node. I need to expand the architecture by splitting the network node from controller node (create 1 node for network). > > Do you have any recommended step or tutorial for doing this? > Thanks > > Best Regards, > Zufar Dhiyaulhaq — Slawek Kaplonski Senior software engineer Red Hat From hyangii at gmail.com Mon Feb 11 08:47:56 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 17:47:56 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > Hi Jae, > > You back ported the deferred deletion patch to Pike? > > Cheers, > Arne > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > Hello, > > > > I recently ran a volume deletion test with deferred deletion enabled on > the pike release. > > > > We experienced a cinder-volume hung when we were deleting a large amount > of the volume in which the data was actually written(I make 15GB file in > every volumes), and we thought deferred deletion would solve it. > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume > downed as before. In my opinion, the trash_move api does not seem to work > properly when removing multiple volumes, just like remove api. > > > > If these test results are my fault, please let me know the correct test > method. > > > > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Feb 11 09:00:36 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 10:00:36 +0100 Subject: [tc] cdent non-nomination for TC In-Reply-To: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> Message-ID: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Jeremy Stanley wrote: > On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: > [...] >> I do not intend to run. I've done two years and that's enough. When >> I was first elected I had no intention of doing any more than one >> year but at the end of the first term I had not accomplished much of >> what I hoped, so stayed on. Now, at the end of the second term I >> still haven't accomplished much of what I hoped > [...] > > You may not have accomplished what you set out to, but you certainly > have made a difference. You've nudged lines of discussion into > useful directions they might not otherwise have gone, provided a > frequent reminder of the representative nature of our governance, > and produced broadly useful summaries of our long-running > conversations. I really appreciate what you brought to the TC, and > am glad you'll still be around to hold the rest of us (and those who > succeed you/us) accountable. Thanks! Jeremy said it better than I could have ! While I really appreciated the perspective you brought to the TC, I understand the need to focus to have the most impact. It's also a good reminder that the role that the TC fills can be shared beyond the elected membership -- so if you care about a specific aspect of governance, OpenStack-wide technical leadership or community health, I encourage you to participate in the TC activities, whether you are elected or not. -- Thierry Carrez (ttx) From thierry at openstack.org Mon Feb 11 09:02:49 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 10:02:49 +0100 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <66a20d02-bd05-2ace-80dc-4880befabbd7@openstack.org> Sean McGinnis wrote: > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. Thanks Sean for all your help and insights during those two TC runs ! -- Thierry Carrez (ttx) From Arne.Wiebalck at cern.ch Mon Feb 11 09:13:42 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 09:13:42 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Jae, To make sure deferred deletion is properly working: when you delete individual large volumes with data in them, do you see that - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? - that the volume shows up in trash (with “rbd trash ls”)? - the periodic task reports it is deleting volumes from the trash? Another option to look at is “backend_native_threads_pool_size": this will increase the number of threads to work on deleting volumes. It is independent from deferred deletion, but can also help with situations where Cinder has more work to do than it can cope with at the moment. Cheers, Arne On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Feb 11 09:21:42 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:21:42 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> Message-ID: <20190211092142.pva6t6zol77fowsn@localhost> On 11/02, Jae Sang Lee wrote: > Yes, I added your code to pike release manually. > Hi, Did you enable the feature? If I remember correctly, 50 is the default value of the native thread pool size, so it seems that the 50 available threads are busy deleting the volumes. I would double check that the feature is actually enabled (enable_deferred_deletion = True in the backend section configuration and checking the logs to see if there are any messages indicating that a volume is being deleted from the trash), and increase the thread pool size. You can change it with environmental variable EVENTLET_THREADPOOL_SIZE. Cheers, Gorka. > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > > > Hi Jae, > > > > You back ported the deferred deletion patch to Pike? > > > > Cheers, > > Arne > > > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > > > Hello, > > > > > > I recently ran a volume deletion test with deferred deletion enabled on > > the pike release. > > > > > > We experienced a cinder-volume hung when we were deleting a large amount > > of the volume in which the data was actually written(I make 15GB file in > > every volumes), and we thought deferred deletion would solve it. > > > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume > > downed as before. In my opinion, the trash_move api does not seem to work > > properly when removing multiple volumes, just like remove api. > > > > > > If these test results are my fault, please let me know the correct test > > method. > > > > > > > -- > > Arne Wiebalck > > CERN IT > > > > From geguileo at redhat.com Mon Feb 11 09:23:26 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:23:26 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: <20190211092326.2qapcegmvpftzt6v@localhost> On 11/02, Arne Wiebalck wrote: > Jae, > > To make sure deferred deletion is properly working: when you delete individual large volumes > with data in them, do you see that > - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? > - that the volume shows up in trash (with “rbd trash ls”)? > - the periodic task reports it is deleting volumes from the trash? > > Another option to look at is “backend_native_threads_pool_size": this will increase the number > of threads to work on deleting volumes. It is independent from deferred deletion, but can also > help with situations where Cinder has more work to do than it can cope with at the moment. > > Cheers, > Arne Hi, That configuration option was added in Queens, so I recommend using the env variable to set it if running in Pike. Cheers, Gorka. > > > > On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: > > Yes, I added your code to pike release manually. > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: > Hi Jae, > > You back ported the deferred deletion patch to Pike? > > Cheers, > Arne > > > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > > > Hello, > > > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > > > If these test results are my fault, please let me know the correct test method. > > > > -- > Arne Wiebalck > CERN IT > > > -- > Arne Wiebalck > CERN IT > From geguileo at redhat.com Mon Feb 11 09:33:17 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 10:33:17 +0100 Subject: [cinder][nova][os-brick] os-brick initiator rename In-Reply-To: References: Message-ID: <20190211093317.uf4zofcbtuu6zb7o@localhost> On 07/02, Kulazhenkov, Yury wrote: > Hi all, > Some time ago Dell EMC software-defined storage ScaleIO was renamed to VxFlex OS. > I am currently working on renaming ScaleIO to VxFlex OS in Openstack code to prevent confusion > with storage documentation from vendor. > > This changes require patches at least for cinder, nova and os-brick repos. > I already submitted patches for cinder(634397) and nova(634866), but for now code in these > patches relies on os-brick initiator with name SCALEIO. > Now I'm looking for right way to rename os-brick initiator. > Renaming initiator in os-brick library and then make required changes in nova and cinder is quiet easy, > but os-brick is library and those changes can break someone else code. > > Is some sort of policy for updates with breaking changes exist for os-brick? > > One possible solution is to rename initiator to new name and create alias with deprecation warning for > old initiator name(should this alias be preserved more than one release?). > What do you think about it? > > Thanks, > Yury > Hi Yury, That sounds like a good plan. But don't forget that you'll need to add a new online data migration to Cinder as well, since you are renaming the SCALEIO connector identifier. Otherwise a deployment could have problems when you drop the SCALEIO alias if they've had a very long running VM or if you are doing a fast-forward upgrade. Cheers, Gorka. From geguileo at redhat.com Mon Feb 11 10:12:29 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 11 Feb 2019 11:12:29 +0100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues In-Reply-To: <20190207063940.GA1754@fedora19.localdomain> References: <20190207063940.GA1754@fedora19.localdomain> Message-ID: <20190211101229.j5aqii2os5z2p2cw@localhost> On 07/02, Ian Wienand wrote: > Hello, > > I'm trying to diagnose what has gone wrong with Fedora 29 in our gate > devstack test; it seems there is a problem with the iscsi setup and > consequently the volume based tempest tests all fail. AFAICS we end > up with nova hitting parsing errors inside os_brick's iscsi querying > routines; so it seems whatever error path we've hit is outside the > usual as it's made it pretty far down the stack. > > I have a rather haphazard bug report going on at > > https://bugs.launchpad.net/os-brick/+bug/1814849 > > as I've tried to trace it down. At this point, it's exceeding the > abilities of my cinder/nova/lvm/iscsi/how-this-all-hangs-together > knowledge. > > The final comment there has a link the devstack logs and a few bits > and pieces of gleaned off the host (which I have on hold and can > examine) which is hopefully useful to someone skilled in the art. > > I'm hoping ultimately it's a rather simple case of a missing package > or config option; I would greatly appreciate any input so we can get > this test stable. > > Thanks, > > -i > Hi Ian, Well, the system from the pastebin [1] doesn't look too good. DB and LIO are out of sync. You can see that the database says that there must be 3 exports and maps available, yet you only see 1 in LIO. It is werid that there are things missing from the logs: In method _get_connection_devices we have: LOG.debug('Getting connected devices for (ips,iqns,luns)=%s', 1 ips_iqns_luns) nodes = self._get_iscsi_nodes() And we can see the message in the logs [2], but then we don't see the call to iscsiadm that happens as the first instruction in _get_iscsi_nodes: out, err = self._execute('iscsiadm', '-m', 'node', run_as_root=True, root_helper=self._root_helper, check_exit_code=False) And we only see the error coming from parsing the output of that command that is not logged. I believe Matthew is right in his assessment, the problem is the output from "iscsiadm -m node", there is a missing space between the first 2 columns in the output [4]. This looks like an issue in Open iSCSI, not in OS-Brick, Cinder, or Nova. And checking their code, it looks like this is the patch that fixes it [5], so it needs to be added to F29 iscsi-initiator-utils package. Cheers, Gorka. [1]: http://paste.openstack.org/show/744723/ [2]: http://logs.openstack.org/59/619259/2/check/devstack-platform-fedora-latest/3eaee4d/controller/logs/screen-n-cpu.txt.gz?#_Feb_06_00_10_05_234149 [3]: https://bugs.launchpad.net/os-brick/+bug/1814849/comments/9 [4]: http://paste.openstack.org/show/744724/ [5]: https://github.com/open-iscsi/open-iscsi/commit/baa0cb45cfcf10a81283c191b0b236cd1a2f66ee From hyangii at gmail.com Mon Feb 11 10:39:15 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 19:39:15 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: Arne, I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports like "Deleted from trash for backend ''" The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large number of volumes. I will try to adjust the number of thread pools by adjusting the environment variables with your advices Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? Thanks. 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > Jae, > > To make sure deferred deletion is properly working: when you delete > individual large volumes > with data in them, do you see that > - the volume is fully “deleted" within a few seconds, ie. not staying in > ‘deleting’ for a long time? > - that the volume shows up in trash (with “rbd trash ls”)? > - the periodic task reports it is deleting volumes from the trash? > > Another option to look at is “backend_native_threads_pool_size": this will > increase the number > of threads to work on deleting volumes. It is independent from deferred > deletion, but can also > help with situations where Cinder has more work to do than it can cope > with at the moment. > > Cheers, > Arne > > > > On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > > Yes, I added your code to pike release manually. > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > >> Hi Jae, >> >> You back ported the deferred deletion patch to Pike? >> >> Cheers, >> Arne >> >> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >> > >> > Hello, >> > >> > I recently ran a volume deletion test with deferred deletion enabled on >> the pike release. >> > >> > We experienced a cinder-volume hung when we were deleting a large >> amount of the volume in which the data was actually written(I make 15GB >> file in every volumes), and we thought deferred deletion would solve it. >> > >> > However, while deleting 200 volumes, after 50 volumes, the >> cinder-volume downed as before. In my opinion, the trash_move api does not >> seem to work properly when removing multiple volumes, just like remove api. >> > >> > If these test results are my fault, please let me know the correct test >> method. >> > >> >> -- >> Arne Wiebalck >> CERN IT >> >> > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Mon Feb 11 10:41:05 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Feb 2019 19:41:05 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <20190211092142.pva6t6zol77fowsn@localhost> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <20190211092142.pva6t6zol77fowsn@localhost> Message-ID: Gorka, I found the default size of threadpool is 20 in source code. However, I will try to increase this size. Thanks a lot. 2019년 2월 11일 (월) 오후 6:21, Gorka Eguileor 님이 작성: > On 11/02, Jae Sang Lee wrote: > > Yes, I added your code to pike release manually. > > > > Hi, > > Did you enable the feature? > > If I remember correctly, 50 is the default value of the native thread > pool size, so it seems that the 50 available threads are busy deleting > the volumes. > > I would double check that the feature is actually enabled > (enable_deferred_deletion = True in the backend section configuration > and checking the logs to see if there are any messages indicating that a > volume is being deleted from the trash), and increase the thread pool > size. You can change it with environmental variable > EVENTLET_THREADPOOL_SIZE. > > Cheers, > Gorka. > > > > > > > 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > > > > > Hi Jae, > > > > > > You back ported the deferred deletion patch to Pike? > > > > > > Cheers, > > > Arne > > > > > > > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > > > > > > > > Hello, > > > > > > > > I recently ran a volume deletion test with deferred deletion enabled > on > > > the pike release. > > > > > > > > We experienced a cinder-volume hung when we were deleting a large > amount > > > of the volume in which the data was actually written(I make 15GB file > in > > > every volumes), and we thought deferred deletion would solve it. > > > > > > > > However, while deleting 200 volumes, after 50 volumes, the > cinder-volume > > > downed as before. In my opinion, the trash_move api does not seem to > work > > > properly when removing multiple volumes, just like remove api. > > > > > > > > If these test results are my fault, please let me know the correct > test > > > method. > > > > > > > > > > -- > > > Arne Wiebalck > > > CERN IT > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Mon Feb 11 11:58:16 2019 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Mon, 11 Feb 2019 12:58:16 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? Message-ID: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> Hi, Im currently trying to implement some way to do a SSO against our ActiveDirectory. I already tried SAMLv2 and OpenID Connect. Im able to sign in via Horizon, but im unable to find a working way on cli. Already tried v3adfspassword and v3oidcpassword, but im unable to get them working. Any hints / links / docs where to find more information? Anyone using this kind of setup and willing to share KnowHow? Thanks a lot, Fabian Zimmermann -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Mon Feb 11 12:40:05 2019 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 11 Feb 2019 12:40:05 +0000 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> Message-ID: <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Jae, On 11 Feb 2019, at 11:39, Jae Sang Lee > wrote: Arne, I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports like "Deleted from trash for backend ''" The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large number of volumes. Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” many more volumes before getting stuck. The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect c-vol, though. I will try to adjust the number of thread pools by adjusting the environment variables with your advices Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all threads is simply lower (Gorka may correct me here :-). Cheers, Arne Thanks. 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck >님이 작성: Jae, To make sure deferred deletion is properly working: when you delete individual large volumes with data in them, do you see that - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? - that the volume shows up in trash (with “rbd trash ls”)? - the periodic task reports it is deleting volumes from the trash? Another option to look at is “backend_native_threads_pool_size": this will increase the number of threads to work on deleting volumes. It is independent from deferred deletion, but can also help with situations where Cinder has more work to do than it can cope with at the moment. Cheers, Arne On 11 Feb 2019, at 09:47, Jae Sang Lee > wrote: Yes, I added your code to pike release manually. 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck >님이 작성: Hi Jae, You back ported the deferred deletion patch to Pike? Cheers, Arne > On 11 Feb 2019, at 07:54, Jae Sang Lee > wrote: > > Hello, > > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > > If these test results are my fault, please let me know the correct test method. > -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -- Arne Wiebalck CERN IT -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Mon Feb 11 14:02:20 2019 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 11 Feb 2019 15:02:20 +0100 Subject: [TripleO][Kolla] Reduce base layer of containers for security and size of images (maintenance) sakes: UPDATE Message-ID: Good news: so the %systemd_ordering macro works well for containers images to build it w/o systemd & deps pulled in, and the changes got accepted for RDO and some of the base packages for f29! Bad news: [0] is a show stopper for removing systemd off the base RHEL/Fedora containers in Kolla. To mitigate that issue for the remaining dnf and puppet, and as well for the less important* to have it fixed iscsi-initiator-utils and kuryr-kubernetes-distgit, we need to consider using microdnf instead of dnf for installing RPM packages in Kolla. Or alternatively somehow to achieve a trick with _tmpfiles to be split off the main spec files into sub-packages [1]: if the tmpfiles and such were split out into a subpackage that'd be required if and only if the kernel was installed or being installed, that might work. * it is only less important as those do not belong to the Kolla base/openstack-base images and impacts only its individual containers images. [0] https://bugs.launchpad.net/tripleo/+bug/1804822/comments/17 [1] https://github.com/rpm-software-management/dnf/pull/1315#issuecomment-462326283 > Here is an update. > The %{systemd_ordering} macro is proposed for lightening containers > images and removing the systemd dependency for containers. Please see & > try patches in the topic [0] for RDO, and [1][2][3][4][5] for generic > Fedora 29 rpms. I'd very appreciate if anyone building Kolla containers > for f29/(rhel8 yet?) could try these out as well. > > PS (somewhat internal facing but who cares): I wonder if we could see > those changes catched up automagically for rhel8 repos as well? > >> I'm tracking systemd changes here [0],[1],[2], btw (if accepted, >> it should be working as of fedora28(or 29) I hope) >> >> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1668688 > [4] https://bugzilla.redhat.com/show_bug.cgi?id=1668687 > [5] https://bugzilla.redhat.com/show_bug.cgi?id=1668678 -- Best regards, Bogdan Dobrelya, Irc #bogdando From hjensas at redhat.com Mon Feb 11 14:16:53 2019 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Mon, 11 Feb 2019 15:16:53 +0100 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: Message-ID: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat > Setrepeat.yaml -> has the resource group api with count . > > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we > want to change in each set . > Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt From colleen at gazlene.net Mon Feb 11 14:19:51 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 11 Feb 2019 15:19:51 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> Message-ID: <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> Hi Fabian, On Mon, Feb 11, 2019, at 12:58 PM, Fabian Zimmermann wrote: > Hi, > > Im currently trying to implement some way to do a SSO against our > ActiveDirectory. I already tried SAMLv2 and OpenID Connect. > > Im able to sign in via Horizon, but im unable to find a working way on cli. > > Already tried v3adfspassword and v3oidcpassword, but im unable to get > them working. > > Any hints / links / docs where to find more information? > > Anyone using this kind of setup and willing to share KnowHow? > > Thanks a lot, > > Fabian Zimmermann We have an example of authenticating with the CLI here: https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#authenticating That only covers the regular SAML2.0 ECP type of authentication, which I guess won't work with ADFS, and we seem to have zero ADFS-specific documentation. >From the keystoneauth plugin code, it looks like you need to set identity-provider-url, service-provider-endpoint, service-provider-entity-id, username, password, identity-provider, and protocol (I'm getting that from the loader classes[1][2]). Is that the information you're looking for, or can you give more details on what specifically isn't working? Colleen [1] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/identity.py#n104 [2] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/extras/_saml2/_loading.py#n45 From kchamart at redhat.com Mon Feb 11 14:38:45 2019 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 11 Feb 2019 15:38:45 +0100 Subject: [nova] Floppy drive =?utf-8?Q?support_?= =?utf-8?B?4oCU?= does anyone rely on it? In-Reply-To: References: <20190207112959.GF5349@paraplu.home> Message-ID: <20190211143845.GA26837@paraplu> On Thu, Feb 07, 2019 at 09:41:19AM -0500, Jay Pipes wrote: > On 02/07/2019 06:29 AM, Kashyap Chamarthy wrote: > > Given that, and the use of floppy drives is generally not recommended in > > 2019, any objection to go ahead and remove support for floppy drives? > > No objections from me. Thanks. Since I haven't heard much else objections, I'll add the blueprint (to remove floppy drive support) to my queue. [...] -- /kashyap From mriedemos at gmail.com Mon Feb 11 14:41:09 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Feb 2019 08:41:09 -0600 Subject: [nova] Can we drop the cells v1 docs now? Message-ID: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> I have kind of lost where we are on dropping cells v1 code at this point, but it's probably too late in Stein. And technically nova-network won't start unless cells v1 is configured, and we've left the nova-network code in place while CERN is migrating their deployment to neutron*. CERN is running cells v2 since Queens and I think they have just removed this [1] to still run nova-network without cells v1. There has been no work in Stein to remove nova-network [2] even though we still have a few API related things we can work on removing [3] but that is very low priority. To be clear, CERN only cares about the nova-network service, not the APIs which is why we started removing those in Rocky. As for cells v1, if we're not going to drop it in Stein, can we at least make incremental progress and drop the cells v1 related docs to further signal the eventual demise and to avoid confusion in the docs about what cells is (v1 vs v2) for newcomers? People can still get the cells v1 in-tree docs on the stable branches (which are being published [4]). [1] https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein [3] https://etherpad.openstack.org/p/nova-network-removal-rocky [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 *I think they said there are parts of their deployment that will probably never move off of nova-network, and they will just maintain a fork for that part of the deployment. -- Thanks, Matt From mnaser at vexxhost.com Mon Feb 11 14:51:59 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 11 Feb 2019 09:51:59 -0500 Subject: [nova] Can we drop the cells v1 docs now? In-Reply-To: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> References: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> Message-ID: On Mon, Feb 11, 2019 at 9:43 AM Matt Riedemann wrote: > > I have kind of lost where we are on dropping cells v1 code at this > point, but it's probably too late in Stein. And technically nova-network > won't start unless cells v1 is configured, and we've left the > nova-network code in place while CERN is migrating their deployment to > neutron*. CERN is running cells v2 since Queens and I think they have > just removed this [1] to still run nova-network without cells v1. > > There has been no work in Stein to remove nova-network [2] even though > we still have a few API related things we can work on removing [3] but > that is very low priority. To be clear, CERN only cares about the > nova-network service, not the APIs which is why we started removing > those in Rocky. > > As for cells v1, if we're not going to drop it in Stein, can we at least > make incremental progress and drop the cells v1 related docs to further > signal the eventual demise and to avoid confusion in the docs about what > cells is (v1 vs v2) for newcomers? People can still get the cells v1 > in-tree docs on the stable branches (which are being published [4]). I think from an operators perspective, the documentation should at least be ripped out (and any nova-manage commands, assuming there's any). I guess there should be any tooling to allow you to get a cells v1 deployment (imho). Cells V2 have been out for a while, extensively tested and work pretty well now. > [1] https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 > [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein > [3] https://etherpad.openstack.org/p/nova-network-removal-rocky > [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 > > *I think they said there are parts of their deployment that will > probably never move off of nova-network, and they will just maintain a > fork for that part of the deployment. > > -- > > Thanks, > > Matt > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From lbragstad at gmail.com Mon Feb 11 14:53:23 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 11 Feb 2019 08:53:23 -0600 Subject: [dev][keystone] Keystone Team Update - Week of 4 February 2019 In-Reply-To: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> References: <1549722452.3566947.1654366432.049A66E5@webmail.messagingengine.com> Message-ID: On 2/9/19 8:27 AM, Colleen Murphy wrote: > # Keystone Team Update - Week of 4 February 2019 > > ## News > > ### Performance of Loading Fernet/JWT Key Repositories > > Lance noticed that it seems that token signing/encryption keys are loaded from disk on every request and is therefore not very performant, and started investigating ways we could improve this[1][2]. I didn't come to a conclusion on if the performance hit was due to the actual reading of something from disk, if it was because we loop through each available key until we find one that works, or if it was because I completely disabled token caching. The obvious worst case in this scenario is trying the right key, last - O(n). This is the approach I was using to preemptively identify which public key needs to be used to validate a JWT [0]. Ultimately, I need some more information/constraints from wxy [1]. Possibly something we can start in anther thread. [0] https://pasted.tech/pastes/c10a774a9d17e1743f7a6543031b8c43d930906c.raw [1] https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-07.log.html#t2019-02-07T17:55:34 > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-02-08.log.html#t2019-02-08T17:09:24 > > ## Recently Merged Changes > > Search query: https://bit.ly/2pquOwT > > We merged 10 changes this week. > > ## Changes that need Attention > > Search query: https://bit.ly/2RLApdA > > There are 73 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. > > ## Bugs > > This week we opened 2 new bugs and closed 3. > > Bugs opened (2) > Bug #1814589 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814589 > Bug #1814570 (keystone:Medium) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1814570 > > Bugs fixed (3) > Bug #1804483 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1804483 > Bug #1805406 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1805406 > Bug #1801095 (keystone:Wishlist) fixed by Artem Vasilyev https://bugs.launchpad.net/keystone/+bug/1801095 > > ## Milestone Outlook > > https://releases.openstack.org/stein/schedule.html > > Feature freeze is in four weeks. Be mindful of the gate and try to submit and review things early. > > ## Shout-outs > > Congratulations and thank you to our Outreachy intern Islam for completing the first step in refactoring our unit tests to lean on our shiny new Flask framework! Great work! ++ > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From nanthini.a.a at ericsson.com Mon Feb 11 15:32:58 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Mon, 11 Feb 2019 15:32:58 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , I have tried the below .But getting error .Please let me know how I can proceed further . root at cic-1:~# cat try1.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: resource_name_map: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} root at cic-1:~# cat tryrepeat.yaml heat_template_version: 2013-05-23 resources: rg: type: OS::Heat::ResourceGroup properties: count: 2 resource_def: type: try1.yaml root at cic-1:~# root at cic-1:~# heat stack-create tests -f tryrepeat.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token found character '%' that cannot start any token in "", line 15, column 45: ... {get_param: [resource_name_map, %index%, network1]} Thanks in advance . Thanks, A.Nanthini -----Original Message----- From: Harald Jensås [mailto:hjensas at redhat.com] Sent: Monday, February 11, 2019 7:47 PM To: NANTHINI A A ; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > the resource group api with count . > > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we want > to change in each set . > Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt From thierry at openstack.org Mon Feb 11 15:50:57 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 16:50:57 +0100 Subject: Subject: Re: [Trove] State of the Trove service tenant deployment model In-Reply-To: References: Message-ID: Lingxian Kong wrote: > On Sun, Feb 10, 2019 at 7:04 AM Darek Król > wrote: > > Hello Lingxian, > > I’ve heard about a few tries of running Trove in production. > Unfortunately, I didn’t have opportunity to get details about > networking. At Samsung, we introducing Trove into our products for > on-premise cloud platforms. However, I cannot share too many details > about it, besides it is oriented towards performance and security is > not a concern. Hence, the networking is very basic without any > layers of abstractions if possible. > > Could you share more details about your topology and goals you want > to achieve in Trove ? Maybe Trove team could help you in this ? > Unfortunately, I’m not a network expert so I would need to get more > details to understand your use case better. > > > Yeah, I think trove team could definitely help. I've been working on a > patch[1] to support different sgs for different type of neutron ports, > the patch is for the use case that `CONF.default_neutron_networks` is > configured as trove management network. > > Besides, I also have some patches[2][3] for trove need to be reviewed, > not sure who are the right people I should ask for review now, but would > appriciate if you could help. I think OVH has been deploying Trove as well, or at least considering it... Ccing Jean-Daniel in case he can bring some insights on that. -- Thierry From thierry at openstack.org Mon Feb 11 16:01:40 2019 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Feb 2019 17:01:40 +0100 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> Message-ID: <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Doug Hellmann wrote: > Kendall Nelson writes: >> [...] >> So I think that the First Contact SIG project liaison list kind of fits >> this. Its already maintained in a wiki and its already a list of people >> willing to be contacted for helping people get started. It probably just >> needs more attention and refreshing. When it was first set up we (the FC >> SIG) kind of went around begging for volunteers and then once we maxxed out >> on them, we said those projects without volunteers will have the role >> defaulted to the PTL unless they delegate (similar to how other liaison >> roles work). >> >> Long story short, I think we have the sort of mentoring things covered. And >> to back up an earlier email, project specific onboarding would be a good >> help too. > > OK, that does sound pretty similar. I guess the piece that's missing is > a description of the sort of help the team is interested in receiving. I guess the key difference is that the first contact list is more a function of the team (who to contact for first contributions in this team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring to cover specific needs in a team. It's probably pretty close (and the same people would likely be involved), but I think an approach where specific people offer a significant amount of their time to one mentee interested in joining a team is a bit different. I don't think every team would have volunteers to do that. I would not expect a mentor volunteer to care for several mentees. In the end I think we would end up with a much shorter list than the FC list. Maybe the two efforts can converge into one, or they can be kept as two different things but coordinated by the same team ? -- Thierry Carrez (ttx) From ed at leafe.com Mon Feb 11 16:03:26 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 11 Feb 2019 10:03:26 -0600 Subject: Placement governance switch Message-ID: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> With PTL election season coming up soon, this seems like a good time to revisit the plans for the Placement effort to become a separate project with its own governance. We last discussed this back at the Denver PTG in September 2018, and settled on making Placement governance dependent on a number of items. [0] Most of the items in that list have been either completed, are very close to completion, or, in the case of the upgrade, is no longer expected. But in the time since that last discussion, much has changed. Placement is now a separate git repo, and is deployed and run independently of Nova. The integrated gate in CI is using the extracted Placement repo, and not Nova’s version. In a hangout last week [1], we agreed to several things: * Placement code would remain in the Nova repo for the Stein release to allow for an easier transition for deployments tools that were not prepared for this change * The Placement code in the Nova tree will remain frozen; all new Placement work will be in the Placement repo. * The Placement API is now unfrozen. Nova, however, will not develop code in Stein that will rely on any newer Placement microversion than the current 1.30. * The Placement code in the Nova repo will be deleted in the Train release. Given the change of context, now may be a good time to change to a separate governance. The concerns on the Nova side have been largely addressed, and switching governance now would allow us to participate in the next PTL election cycle. We’d like to get input from anyone else in the OpenStack community who feels that a governance change would impact them, so please reply in this thread if you have concerns. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002451.html -- Ed Leafe From colleen at gazlene.net Mon Feb 11 16:18:40 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 11 Feb 2019 17:18:40 +0100 Subject: [keystone] adfs SingleSignOn with CLI/API? In-Reply-To: References: <1B71BEE3-D72D-42E8-A61A-380CAA548722@gmail.com> <1549894791.2312833.1655509928.25450D18@webmail.messagingengine.com> Message-ID: <1549901920.3451697.1655621200.6F07535E@webmail.messagingengine.com> Forwarding back to list On Mon, Feb 11, 2019, at 5:11 PM, Blake Covarrubias wrote: > > On Feb 11, 2019, at 6:19 AM, Colleen Murphy wrote: > > > > Hi Fabian, > > > > On Mon, Feb 11, 2019, at 12:58 PM, Fabian Zimmermann wrote: > >> Hi, > >> > >> Im currently trying to implement some way to do a SSO against our > >> ActiveDirectory. I already tried SAMLv2 and OpenID Connect. > >> > >> Im able to sign in via Horizon, but im unable to find a working way on cli. > >> > >> Already tried v3adfspassword and v3oidcpassword, but im unable to get > >> them working. > >> > >> Any hints / links / docs where to find more information? > >> > >> Anyone using this kind of setup and willing to share KnowHow? > >> > >> Thanks a lot, > >> > >> Fabian Zimmermann > > > > We have an example of authenticating with the CLI here: > > > > https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html#authenticating > > > > That only covers the regular SAML2.0 ECP type of authentication, which I guess won't work with ADFS, and we seem to have zero ADFS-specific documentation. > > > > From the keystoneauth plugin code, it looks like you need to set identity-provider-url, service-provider-endpoint, service-provider-entity-id, username, password, identity-provider, and protocol (I'm getting that from the loader classes[1][2]). Is that the information you're looking for, or can you give more details on what specifically isn't working? > > > > Colleen > > > > [1] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/loading/identity.py#n104 > > [2] http://git.openstack.org/cgit/openstack/keystoneauth/tree/keystoneauth1/extras/_saml2/_loading.py#n45 > > > > Fabian, > > To add a bit more info, the AD FS plugin essentially uses IdP-initiated > sign-on. The identity provider URL is where the initial authentication > request to AD FS will be sent. An example of this would be > https://HOSTNAME/adfs/services/trust/13/usernamemixed > . The service > provider’s entity ID must also be sent in the request so that AD FS > knows which Relying Party Trust to associate with the request. > > AD FS will provide a SAML assertion upon successful authentication. The > service provider endpoint is the URL of the Assertion Consumer Service. > If you’re using Shibboleth on the SP, this would be > https://HOSTNAME/Shibboleth.sso/ADFS > . > > Note: The service-provider-entity-id can be omitted if it is the same > value as the service-provider-endpoint (or Assertion Consumer Service > URL). > > Hope this helps. > > — > Blake Covarrubias > From openstack at fried.cc Mon Feb 11 16:34:22 2019 From: openstack at fried.cc (Eric Fried) Date: Mon, 11 Feb 2019 10:34:22 -0600 Subject: [ptg][nova][placement] Etherpad & collector started Message-ID: <407f5508-b2ab-667e-d4f1-122e2906e324@fried.cc> I needed to brain-dump some topics to be discussed at the PTG in a couple of months. I asked if there was already an etherpad and the two people who happened to hear my question weren't aware of one, so I started one [1]. I also started the collector wiki page [2], templated on the Stein one [3]. Enjoy. -efried [1] https://etherpad.openstack.org/p/nova-ptg-train [2] https://wiki.openstack.org/wiki/PTG/Train/Etherpads [3] https://wiki.openstack.org/wiki/PTG/Stein/Etherpads From zufar at onf-ambassador.org Mon Feb 11 16:46:45 2019 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Mon, 11 Feb 2019 23:46:45 +0700 Subject: [Neutron] Split Network node from controller Node In-Reply-To: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> References: <3DC9635F-4B85-41D4-B615-E6E2A8234B38@redhat.com> Message-ID: Hi Thank you for your answer, I just install the network agent in a network node, with this following package - openstack-neutron.noarch - openstack-neutron-common.noarch - openstack-neutron-openvswitch.noarch - openstack-neutron-metering-agent.noarch and configuring and appear in the agent list [root at zu-controller1 ~(keystone_admin)]# openstack network agent list +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ | 025f8a15-03b5-421e-94ff-3e07fc1317b5 | Open vSwitch agent | zu-compute2 | None | :-) | UP | neutron-openvswitch-agent | | 04af3150-7673-4ac4-9670-fd1505737466 | Metadata agent | zu-network1 | None | :-) | UP | neutron-metadata-agent | | 11a9c764-e53d-4316-9801-fa2a931f0572 | Open vSwitch agent | zu-compute1 | None | :-) | UP | neutron-openvswitch-agent | | 1875a93f-09df-4c50-8660-1f4dc33b228d | L3 agent | zu-controller1 | nova | :-) | UP | neutron-l3-agent | | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | | 2fb2a714-9735-4f78-8019-935cb6422063 | Metering agent | zu-network1 | None | :-) | UP | neutron-metering-agent | | 3873fc10-1758-47e9-92b8-1e8605651c70 | Open vSwitch agent | zu-network1 | None | :-) | UP | neutron-openvswitch-agent | | 4b51bdd2-df13-4a35-9263-55e376b6e2ea | Metering agent | zu-controller1 | None | :-) | UP | neutron-metering-agent | | 54af229f-3dc1-49db-b32a-25f3fd62c010 | DHCP agent | zu-controller1 | nova | :-) | UP | neutron-dhcp-agent | | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | | a3c78231-027d-4ddd-8234-7afd1d67910e | Metadata agent | zu-controller1 | None | :-) | UP | neutron-metadata-agent | | aeb7537e-98af-49f0-914b-204e64cb4103 | Open vSwitch agent | zu-controller1 | None | :-) | UP | neutron-openvswitch-agent | +--------------------------------------+--------------------+----------------+-------------------+-------+-------+---------------------------+ I try to migrate the network (external & internal) and router into zu-network1 (my new network node). and success [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --router $ROUTER_ID +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ | 1b492ed7-fbc2-4b95-ba70-e045e255a63d | L3 agent | zu-network1 | nova | :-) | UP | neutron-l3-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+------------------+ [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_INTERNAL +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ [root at zu-controller1 ~(keystone_admin)]# openstack network agent list --network $NETWORK_EXTERNAL +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | 9337c72b-8703-4c80-911b-106abe51ffbd | DHCP agent | zu-network1 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ But, I cannot ping my instance after the migration. I don't know why. ii check my DHCP and router has already moved. [root at zu-controller1 ~(keystone_admin)]# ip netns [root at zu-controller1 ~(keystone_admin)]# [root at zu-network1 ~]# ip netns qdhcp-fddd647b-3601-43e4-8299-60b703405110 (id: 1) qrouter-dd8ae033-0db2-4153-a060-cbb7cd18bae7 (id: 0) [root at zu-network1 ~]# What step do I miss? Thanks Best Regards, Zufar Dhiyaulhaq On Mon, Feb 11, 2019 at 3:13 PM Slawomir Kaplonski wrote: > Hi, > > I don’t know if there is any tutorial for that but You can just deploy new > node with agents which You need, then disable old DHCP/L3 agents with > neutron API [1] and move existing networks/routers to agents in new host > with neutron API. Docs for agents scheduler API is in [2] and [3]. > Please keep in mind that when You will move routers to new agent You will > have some downtime in data plane. > > [1] https://developer.openstack.org/api-ref/network/v2/#update-agent > [2] https://developer.openstack.org/api-ref/network/v2/#l3-agent-scheduler > [3] > https://developer.openstack.org/api-ref/network/v2/#dhcp-agent-scheduler > > > Wiadomość napisana przez Zufar Dhiyaulhaq w > dniu 11.02.2019, o godz. 03:33: > > > > Hi everyone, > > > > I Have existing OpenStack with 1 controller node (Network Node in > controller node) and 2 compute node. I need to expand the architecture by > splitting the network node from controller node (create 1 node for > network). > > > > Do you have any recommended step or tutorial for doing this? > > Thanks > > > > Best Regards, > > Zufar Dhiyaulhaq > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 11 17:03:39 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 11:03:39 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> Message-ID: <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> cc aspiers, who sounded interested in leading this work, pending discussion with his employer[1]. 1: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001750.html On 1/31/19 9:59 AM, Lance Bragstad wrote: > *Healthcheck middleware* > > There is currently no volunteer to champion for this goal. The first > iteration of the work on the oslo.middleware was updated [3], and a gap > analysis was started on the mailing lists [4]. > If you want to get involved in this goal, don't hesitate to answer on > the ML thread there. > > [3] https://review.openstack.org/#/c/617924/2 > [4] https://ethercalc.openstack.org/di0mxkiepll8 From kennelson11 at gmail.com Mon Feb 11 17:14:56 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 09:14:56 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez wrote: > Doug Hellmann wrote: > > Kendall Nelson writes: > >> [...] > >> So I think that the First Contact SIG project liaison list kind of fits > >> this. Its already maintained in a wiki and its already a list of people > >> willing to be contacted for helping people get started. It probably just > >> needs more attention and refreshing. When it was first set up we (the FC > >> SIG) kind of went around begging for volunteers and then once we maxxed > out > >> on them, we said those projects without volunteers will have the role > >> defaulted to the PTL unless they delegate (similar to how other liaison > >> roles work). > >> > >> Long story short, I think we have the sort of mentoring things covered. > And > >> to back up an earlier email, project specific onboarding would be a good > >> help too. > > > > OK, that does sound pretty similar. I guess the piece that's missing is > > a description of the sort of help the team is interested in receiving. > > I guess the key difference is that the first contact list is more a > function of the team (who to contact for first contributions in this > team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring > to cover specific needs in a team. > > It's probably pretty close (and the same people would likely be > involved), but I think an approach where specific people offer a > significant amount of their time to one mentee interested in joining a > team is a bit different. I don't think every team would have volunteers > to do that. I would not expect a mentor volunteer to care for several > mentees. In the end I think we would end up with a much shorter list > than the FC list. > I think our original ask for people volunteering (before we completed the list with PTLs as stand ins) was for people willing to help get started in a project and look after their first few patches. So I think that was kinda the mentoring role originally but then it evolved? Maybe Matt Oliver or Ghanshyam remember better than I do? > > Maybe the two efforts can converge into one, or they can be kept as two > different things but coordinated by the same team ? > > I think we could go either way, but that they both would live with the FC SIG. Seems like the most logical place to me. I lean towards two lists, one being a list of volunteer mentors for projects that are actively looking for new contributors (the shorter list) and the other being a list of people just willing to keep an eye out for the welcome new contributor patches and being the entry point for people asking about getting started that don't know anyone in the project yet (kind of what our current view is, I think). > -- > Thierry Carrez (ttx) > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 11 17:52:09 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 11:52:09 -0600 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <1548944804.945378.1647818352.1EDB6215@webmail.messagingengine.com> References: <885eb5c9-55d7-2fea-ff83-b917b7d6c4d8@openstack.org> <1548944804.945378.1647818352.1EDB6215@webmail.messagingengine.com> Message-ID: <9133d5d8-8d0e-62de-aca9-4efbda6703fe@nemebean.com> On 1/31/19 8:26 AM, Colleen Murphy wrote: > > I like the idea. One question is, how would these groups be bootstrapped? At the moment, SIGs are formed by 1) people express an interest in a common idea 2) the SIG is proposed and approved by the TC and UC chairs 3) profit. With a more cross-project, deliverable-focused type of group, you would need to have buy-in from all project teams involved before bringing it up for approval by the TC - but getting that buy-in from many different groups can be difficult if you aren't already a blessed group. And if you didn't get buy-in first and the group became approved anyway, project teams may be resentful of having new objectives imposed on them when they may not even agree it's the right direction. As a concrete example of this, the image encryption feature[1] had multiple TC members pushing it along in Berlin, but then got some pushback from the project side[2] on the basis that they didn't prioritize it as highly as the group did. Matt suggested that the priority could be raised if a SIG essentially sponsored it as a top priority for them. Maybe SIG support would be an aspect of creating one of these teams? I don't know what you would do with something that doesn't fall under a SIG though. 1: https://review.openstack.org/#/c/618754/ 2: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000815.html From ignaziocassano at gmail.com Mon Feb 11 18:18:00 2019 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 11 Feb 2019 19:18:00 +0100 Subject: [manila][glusterfs] on queens error In-Reply-To: References: <20190203100549.urtnvf2iatmqm6oy@barron.net> <20190206153219.yyir5m5tyw7bvrj7@barron.net> <20190206201619.o6turxaps6iv65p7@barron.net> Message-ID: Hello, the manila replication dr works fine on netapp ontap following your suggestions. :-) Source backends (svm for netapp) must belong to a different destination backends availability zone, but in a single manila.conf I cannot specify more than one availability zone. For doing this I must create more share servers ....one for each availability zone. Svm1 with avz1 Svm1-dr with avz1-dr ......... Are you agree??? Thanks & Regards Ignazio Il giorno Gio 7 Feb 2019 06:11 Ignazio Cassano ha scritto: > Many thanks. > I'll check today. > Ignazio > > > Il giorno Mer 6 Feb 2019 21:26 Goutham Pacha Ravi > ha scritto: > >> On Wed, Feb 6, 2019 at 12:16 PM Tom Barron wrote: >> > >> > On 06/02/19 17:48 +0100, Ignazio Cassano wrote: >> > >The 2 openstack Installations do not share anything. The manila on >> each one >> > >works on different netapp storage, but the 2 netapp can be >> synchronized. >> > >Site A with an openstack instalkation and netapp A. >> > >Site B with an openstack with netapp B. >> > >Netapp A and netapp B can be synchronized via network. >> > >Ignazio >> > >> > OK, thanks. >> > >> > You can likely get the share data and its netapp metadata to show up >> > on B via replication and (gouthamr may explain details) but you will >> > lose all the Openstack/manila information about the share unless >> > Openstack database info (more than just manila tables) is imported. >> > That may be OK foryour use case. >> > >> > -- Tom >> >> >> Checking if I understand your request correctly, you have setup >> manila's "dr" replication in OpenStack A and now want to move your >> shares from OpenStack A to OpenStack B's manila. Is this correct? >> >> If yes, you must: >> * Promote your replicas >> - this will make the mirrored shares available. This action does >> not delete the old "primary" shares though, you need to clean them up >> yourself, because manila will attempt to reverse the replication >> relationships if the primary shares are still accessible >> * Note the export locations and Unmanage your shares from OpenStack A's >> manila >> * Manage your shares in OpenStack B's manila with the export locations >> you noted. >> >> > > >> > > >> > >Il giorno Mer 6 Feb 2019 16:32 Tom Barron ha >> scritto: >> > > >> > >> On 06/02/19 15:34 +0100, Ignazio Cassano wrote: >> > >> >Hello Tom, I think cases you suggested do not meet my needs. >> > >> >I have an openstack installation A with a fas netapp A. >> > >> >I have another openstack installation B with fas netapp B. >> > >> >I would like to use manila replication dr. >> > >> >If I replicate manila volumes from A to B the manila db on B does >> not >> > >> >knows anything about the replicated volume but only the backends on >> > >> netapp >> > >> >B. Can I discover replicated volumes on openstack B? >> > >> >Or I must modify the manila db on B? >> > >> >Regards >> > >> >Ignazio >> > >> >> > >> I guess I don't understand your use case. Do Openstack installation >> A >> > >> and Openstack installation B know *anything* about one another? For >> > >> example, are their keystone and neutron databases somehow synced? >> Are >> > >> they going to be operative for the same set of manila shares at the >> > >> same time, or are you contemplating a migration of the shares from >> > >> installation A to installation B? >> > >> >> > >> Probably it would be helpful to have a statement of the problem that >> > >> you intend to solve before we consider the potential mechanisms for >> > >> solving it. >> > >> >> > >> Cheers, >> > >> >> > >> -- Tom >> > >> >> > >> > >> > >> > >> > >> >Il giorno Dom 3 Feb 2019 11:05 Tom Barron ha >> scritto: >> > >> > >> > >> >> On 01/02/19 07:28 +0100, Ignazio Cassano wrote: >> > >> >> >Thanks Goutham. >> > >> >> >If there are not mantainers for this driver I will switch on >> ceph and >> > >> or >> > >> >> >netapp. >> > >> >> >I am already using netapp but I would like to export shares from >> an >> > >> >> >openstack installation to another. >> > >> >> >Since these 2 installations do non share any openstack component >> and >> > >> have >> > >> >> >different openstack database, I would like to know it is >> possible . >> > >> >> >Regards >> > >> >> >Ignazio >> > >> >> >> > >> >> Hi Ignazio, >> > >> >> >> > >> >> If by "export shares from an openstack installation to another" >> you >> > >> >> mean removing them from management by manila in installation A and >> > >> >> instead managing them by manila in installation B then you can do >> that >> > >> >> while leaving them in place on your Net App back end using the >> manila >> > >> >> "manage-unmanage" administrative commands. Here's some >> documentation >> > >> >> [1] that should be helpful. >> > >> >> >> > >> >> If on the other hand by "export shares ... to another" you mean to >> > >> >> leave the shares under management of manila in installation A but >> > >> >> consume them from compute instances in installation B it's all >> about >> > >> >> the networking. One can use manila to "allow-access" to >> consumers of >> > >> >> shares anywhere but the consumers must be able to reach the >> "export >> > >> >> locations" for those shares and mount them. >> > >> >> >> > >> >> Cheers, >> > >> >> >> > >> >> -- Tom Barron >> > >> >> >> > >> >> [1] >> > >> >> >> > >> >> https://netapp.github.io/openstack-deploy-ops-guide/ocata/content/manila.examples.manila_cli.single_svm.html#d6e5806 >> > >> >> > >> > >> >> >Il giorno Gio 31 Gen 2019 20:56 Goutham Pacha Ravi < >> > >> >> gouthampravi at gmail.com> >> > >> >> >ha scritto: >> > >> >> > >> > >> >> >> Hi Ignazio, >> > >> >> >> >> > >> >> >> On Thu, Jan 31, 2019 at 7:31 AM Ignazio Cassano >> > >> >> >> wrote: >> > >> >> >> > >> > >> >> >> > Hello All, >> > >> >> >> > I installed manila on my queens openstack based on centos 7. >> > >> >> >> > I configured two servers with glusterfs replocation and >> ganesha >> > >> nfs. >> > >> >> >> > I configured my controllers octavia,conf but when I try to >> create a >> > >> >> share >> > >> >> >> > the manila scheduler logs reports: >> > >> >> >> > >> > >> >> >> > Failed to schedule create_share: No valid host was found. >> Failed to >> > >> >> find >> > >> >> >> a weighted host, the last executed filter was >> CapabilitiesFilter.: >> > >> >> >> NoValidHost: No valid host was found. Failed to find a >> weighted host, >> > >> >> the >> > >> >> >> last executed filter was CapabilitiesFilter. >> > >> >> >> > 2019-01-31 16:07:32.614 159380 INFO manila.message.api >> > >> >> >> [req-241d66b3-8004-410b-b000-c6d2d3536e4a >> > >> >> 89f76bc5de5545f381da2c10c7df7f15 >> > >> >> >> 59f1f232ce28409593d66d8f6495e434 - - -] Creating message >> record for >> > >> >> >> request_id = req-241d66b3-8004-410b-b000-c6d2d3536e4a >> > >> >> >> >> > >> >> >> >> > >> >> >> The scheduler failure points out that you have a mismatch in >> > >> >> >> expectations (backend capabilities vs share type extra-specs) >> and >> > >> >> >> there was no host to schedule your share to. So a few things >> to check >> > >> >> >> here: >> > >> >> >> >> > >> >> >> - What is the share type you're using? Can you list the share >> type >> > >> >> >> extra-specs and confirm that the backend (your GlusterFS >> storage) >> > >> >> >> capabilities are appropriate with whatever you've set up as >> > >> >> >> extra-specs ($ manila pool-list --detail)? >> > >> >> >> - Is your backend operating correctly? You can list the manila >> > >> >> >> services ($ manila service-list) and see if the backend is both >> > >> >> >> 'enabled' and 'up'. If it isn't, there's a good chance there >> was a >> > >> >> >> problem with the driver initialization, please enable debug >> logging, >> > >> >> >> and look at the log file for the manila-share service, you >> might see >> > >> >> >> why and be able to fix it. >> > >> >> >> >> > >> >> >> >> > >> >> >> Please be aware that we're on a look out for a maintainer for >> the >> > >> >> >> GlusterFS driver for the past few releases. We're open to bug >> fixes >> > >> >> >> and maintenance patches, but there is currently no active >> maintainer >> > >> >> >> for this driver. >> > >> >> >> >> > >> >> >> >> > >> >> >> > I did not understand if controllers node must be connected >> to the >> > >> >> >> network where shares must be exported for virtual machines, so >> my >> > >> >> glusterfs >> > >> >> >> are connected on the management network where openstack >> controllers >> > >> are >> > >> >> >> conencted and to the network where virtual machine are >> connected. >> > >> >> >> > >> > >> >> >> > My manila.conf section for glusterfs section is the following >> > >> >> >> > >> > >> >> >> > [gluster-manila565] >> > >> >> >> > driver_handles_share_servers = False >> > >> >> >> > share_driver = >> manila.share.drivers.glusterfs.GlusterfsShareDriver >> > >> >> >> > glusterfs_target = root at 10.102.184.229:/manila565 >> > >> >> >> > glusterfs_path_to_private_key = /etc/manila/id_rsa >> > >> >> >> > glusterfs_ganesha_server_username = root >> > >> >> >> > glusterfs_nfs_server_type = Ganesha >> > >> >> >> > glusterfs_ganesha_server_ip = 10.102.184.229 >> > >> >> >> > #glusterfs_servers = root at 10.102.185.19 >> > >> >> >> > ganesha_config_dir = /etc/ganesha >> > >> >> >> > >> > >> >> >> > >> > >> >> >> > PS >> > >> >> >> > 10.102.184.0/24 is the network where controlelrs expose >> endpoint >> > >> >> >> > >> > >> >> >> > 10.102.189.0/24 is the shared network inside openstack where >> > >> virtual >> > >> >> >> machines are connected. >> > >> >> >> > >> > >> >> >> > The gluster servers are connected on both. >> > >> >> >> > >> > >> >> >> > >> > >> >> >> > Any help, please ? >> > >> >> >> > >> > >> >> >> > Ignazio >> > >> >> >> >> > >> >> >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Feb 11 18:28:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 10:28:15 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: Also, to keep everyone on the same page, this topic was discussed in the D&I WG meeting today for those interested[1]. Long story short, the organizers of the mentoring cohort program are concerned that this might take away from their efforts. We talked a little bit about who would be making use of this list, how it should be formatted, how postings enter/exit the list,etc. -Kendall (diablo_rojo) [1] http://eavesdrop.openstack.org/meetings/diversity_wg/2019/diversity_wg.2019-02-11-17.02.log.html#l-62 On Mon, Feb 11, 2019 at 9:14 AM Kendall Nelson wrote: > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez > wrote: > >> Doug Hellmann wrote: >> > Kendall Nelson writes: >> >> [...] >> >> So I think that the First Contact SIG project liaison list kind of fits >> >> this. Its already maintained in a wiki and its already a list of people >> >> willing to be contacted for helping people get started. It probably >> just >> >> needs more attention and refreshing. When it was first set up we (the >> FC >> >> SIG) kind of went around begging for volunteers and then once we >> maxxed out >> >> on them, we said those projects without volunteers will have the role >> >> defaulted to the PTL unless they delegate (similar to how other liaison >> >> roles work). >> >> >> >> Long story short, I think we have the sort of mentoring things >> covered. And >> >> to back up an earlier email, project specific onboarding would be a >> good >> >> help too. >> > >> > OK, that does sound pretty similar. I guess the piece that's missing is >> > a description of the sort of help the team is interested in receiving. >> >> I guess the key difference is that the first contact list is more a >> function of the team (who to contact for first contributions in this >> team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring >> to cover specific needs in a team. >> >> It's probably pretty close (and the same people would likely be >> involved), but I think an approach where specific people offer a >> significant amount of their time to one mentee interested in joining a >> team is a bit different. I don't think every team would have volunteers >> to do that. I would not expect a mentor volunteer to care for several >> mentees. In the end I think we would end up with a much shorter list >> than the FC list. >> > > I think our original ask for people volunteering (before we completed the > list with PTLs as stand ins) was for people willing to help get started in > a project and look after their first few patches. So I think that was kinda > the mentoring role originally but then it evolved? Maybe Matt Oliver or > Ghanshyam remember better than I do? > > >> >> Maybe the two efforts can converge into one, or they can be kept as two >> different things but coordinated by the same team ? >> >> > I think we could go either way, but that they both would live with the FC > SIG. Seems like the most logical place to me. I lean towards two lists, one > being a list of volunteer mentors for projects that are actively looking > for new contributors (the shorter list) and the other being a list of > people just willing to keep an eye out for the welcome new contributor > patches and being the entry point for people asking about getting started > that don't know anyone in the project yet (kind of what our current view > is, I think). > > >> -- >> Thierry Carrez (ttx) >> > > -Kendall (diablo_rojo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Feb 11 18:42:01 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 11 Feb 2019 10:42:01 -0800 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> Message-ID: Yeah I think the Project Team Guide makes sense. -Kendall (diablo_rojo) On Thu, 7 Feb 2019, 12:29 pm Doug Hellmann, wrote: > Kendall Nelson writes: > > > On Mon, Feb 4, 2019 at 9:26 AM Doug Hellmann > wrote: > > > >> Jeremy Stanley writes: > >> > >> > On 2019-02-04 17:31:46 +0900 (+0900), Ghanshyam Mann wrote: > >> > [...] > >> >> If I recall it correctly from Board+TC meeting, TC is looking for > >> >> a new home for this list ? Or we continue to maintain this in TC > >> >> itself which should not be much effort I feel. > >> > [...] > >> > > >> > It seems like you might be referring to the in-person TC meeting we > >> > held on the Sunday prior to the Stein PTG in Denver (Alan from the > >> > OSF BoD was also present). Doug's recap can be found in the old > >> > openstack-dev archive here: > >> > > >> > > >> > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134744.html > >> > > >> > Quoting Doug, "...it wasn't clear that the TC was the best group to > >> > manage a list of 'roles' or other more detailed information. We > >> > discussed placing that information into team documentation or > >> > hosting it somewhere outside of the governance repository where more > >> > people could contribute." (If memory serves, this was in response to > >> > earlier OSF BoD suggestions that retooling the Help Wanted list to > >> > be a set of business-case-focused job descriptions might garner more > >> > uptake from the organizations they represent.) > >> > -- > >> > Jeremy Stanley > >> > >> Right, the feedback was basically that we might have more luck > >> convincing companies to provide resources if we were more specific about > >> how they would be used by describing the work in more detail. When we > >> started thinking about how that change might be implemented, it seemed > >> like managing the information a well-defined job in its own right, and > >> our usual pattern is to establish a group of people interested in doing > >> something and delegating responsibility to them. When we talked about it > >> in the TC meeting in Denver we did not have any TC members volunteer to > >> drive the implementation to the next step by starting to recruit a team. > >> > >> During the Train series goal discussion in Berlin we talked about having > >> a goal of ensuring that each team had documentation for bringing new > >> contributors onto the team. > > > > > > This was something I thought the docs team was working on pushing with > all > > of the individual projects, but I am happy to help if they need extra > > hands. I think this is suuuuuper important. Each Upstream Institute we > > teach all the general info we can, but we always mention that there are > > project specific ways of handling things and project specific processes. > If > > we want to lower the barrier for new contributors, good per project > > documentation is vital. > > > > > >> Offering specific mentoring resources seems > >> to fit nicely with that goal, and doing it in each team's repository in > >> a consistent way would let us build a central page on > docs.openstack.org > >> to link to all of the team contributor docs, like we link to the user > >> and installation documentation, without requiring us to find a separate > >> group of people to manage the information across the entire community. > > > > > > I think maintaining the project liaison list[1] that the First Contact > SIG > > has kind of does this? Between that list and the mentoring cohort program > > that lives under the D&I WG, I think we have things covered. Its more a > > matter of publicizing those than starting something new I think? > > > > > >> > >> So, maybe the next step is to convince someone to champion a goal of > >> improving our contributor documentation, and to have them describe what > >> the documentation should include, covering the usual topics like how to > >> actually submit patches as well as suggestions for how to describe areas > >> where help is needed in a project and offers to mentor contributors. > > > >> Does anyone want to volunteer to serve as the goal champion for that? > >> > >> > > I can probably draft a rough outline of places where I see projects > diverge > > and make a template, but where should we have that live? > > > > /me imagines a template similar to the infra spec template > > Could we put it in the project team guide? > > > > > > >> -- > >> Doug > >> > >> > > [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons > > -- > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Feb 11 20:32:59 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 12 Feb 2019 09:32:59 +1300 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: <3fb28588-06b5-12a3-cc4c-a28aa758f166@redhat.com> On 12/02/19 4:32 AM, NANTHINI A A wrote: > Hi , > I have tried the below .But getting error .Please let me know how I can proceed further . > > root at cic-1:~# cat try1.yaml > heat_template_version: 2013-05-23 > description: > This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms > parameters: > resource_name_map: > - network1: NetworkA1 > network2: NetworkA2 > - network1: NetworkB1 > network2: NetworkB2 > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > root at cic-1:~# cat tryrepeat.yaml > > heat_template_version: 2013-05-23 > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 2 > resource_def: > type: try1.yaml > root at cic-1:~# > > root at cic-1:~# heat stack-create tests -f tryrepeat.yaml > WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead > ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token > found character '%' that cannot start any token > in "", line 15, column 45: > ... {get_param: [resource_name_map, %index%, network1]} That's a yaml parsing error. You just need to put quotes around the thing that starts with %, like "%index%" > Thanks in advance . > > > Thanks, > A.Nanthini > -----Original Message----- > From: Harald Jensås [mailto:hjensas at redhat.com] > Sent: Monday, February 11, 2019 7:47 PM > To: NANTHINI A A ; openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat api > > On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: >> Hi , >> We are developing heat templates for our vnf deployment .It >> includes multiple resources .We want to repeat the resource and hence >> used the api RESOURCE GROUP . >> Attached are the templates which we used >> >> Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has >> the resource group api with count . >> >> We want to access the variables of resource in set1.yaml while >> repeating it with count .Eg . port name ,port fixed ip address we want >> to change in each set . >> Please let us know how we can have a variable with each repeated >> resource . >> > > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? > > I.e in set1.yaml you can use: > > name: > list_join: > - '_' > - {get_param: 'OS::stack_name'} > - %index% > - > > > The example should resulting in something like: > stack_0_Network3, stack_0_Subnet3 > stack_1_Network0, stack_1_Subnet0 > [ ... ] > > > If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 - > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > > > %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. > > > > > > [1] > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt > > > From openstack at nemebean.com Mon Feb 11 21:03:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 15:03:36 -0600 Subject: [dev][tc] Part 2: Evaluating projects in relation to OpenStack cloud vision In-Reply-To: References: Message-ID: <87db3527-3bd7-614b-5fc6-d44092452885@nemebean.com> On 2/10/19 2:33 PM, Chris Dent wrote: > > This a "part 2" or "other half" of evaluating OpenStack projects in > relation to the technical vision. See the other threads [1][2] for > more information. > > In the conversations that led up to the creation of the vision > document [3] one of the things we hoped was that the process could > help identify ways in which existing projects could evolve to be > better at what they do. This was couched in two ideas: > > * Helping to make sure that OpenStack continuously improves, in the >   right direction. > * Helping to make sure that developers were working on projects that >   leaned more towards interesting and educational than frustrating >   and embarrassing, where choices about what to do and how to do it >   were straightforward, easy to share with others, so >   well-founded in agreed good practice that argument would be rare, >   and so few that it was easy to decide. > > Of course, to have a "right direction" you first have to have a > direction, and thus the vision document and the idea of evaluating > how aligned a project is with that. > > The other half, then, is looking at the projects from a development > standpoint and thinking about what aspects of the project are: > > * Things (techniques, tools) the project contributors would encourage >   others to try. Stuff that has worked out well. Oslo documents some things that I think would fall under this category in http://specs.openstack.org/openstack/oslo-specs/#team-policies The incubator one should probably get removed since it's no longer applicable, but otherwise I feel like we mostly still follow those policies and find them to be reasonable best practices. Some are very Oslo-specific and not useful to anyone else, of course, but others could be applied more broadly. There's also http://specs.openstack.org/openstack/openstack-specs/specs/eventlet-best-practices.html although in the spirit of your next point I would be more +1 on the "don't use Eventlet" option for new projects. It might be nice to have a document that discusses preferred Eventlet alternatives for new projects. I know there are a few Eventlet-free projects out there that could probably provide feedback on their method. > > * Things—given a clean slate, unlimited time and resources, the >   benefit of hindsight and without the weight of legacy—the project >   contributors would encourage others to not repeat. > > And documenting those things so they can be carried forward in time > some place other than people's heads, and new projects or > refactorings of existing projects can start on a good foot. > > A couple of examples: > > * Whatever we might say about the implementation (in itself and how >   it is used), the concept of a unified configuration file format, >   via oslo_config, is probably considered a good choice, and we >   should keep on doing that. I'm a _little_ biased, but +1. Things like your env var driver or the drivers for moving secrets out of plaintext would be next to impossible if everyone were using a different configuration method. > > * On the other hand, given hindsight and improvements in commonly >   available tools, using a homegrown WSGI (non-)framework (unless >   you are Swift) plus eventlet may not have been the way to go, yet >   because it is what's still there in nova, it often gets copied. And as I noted above, +1 to this too. > > It's not clear at this point whether these sorts of things should be > documented in projects, or somewhere more central. So perhaps we can > just talk about it here in email and figure something out. I'll > followup with some I have for placement, since that's the project > I've given the most attention. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002524.html > > [3] https://governance.openstack.org/tc/reference/technical-vision.html > From amy at demarco.com Mon Feb 11 21:47:23 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 11 Feb 2019 15:47:23 -0600 Subject: Fwd: UC Candidacy In-Reply-To: References: Message-ID: This email is my nomination to re-run for the OpenStack User Committee election. I have been involved with OpenStack as an operator since the Grizzly release working with both private and public cloud environments. I have been an upstream contributor since the Mitaka release cycle and I am currently a Core Reviewer for OpenStack-Ansible which works closely with operators to help them set up their deployments and insight for our direction. I believe I bring valuable insight to the User Committee being involved as both an AUC and ATC. Through my involvement with the OpenStack Upstream Institute and Diversity Working Group, I have been very active in helping to bring new members to our community and more importantly working to find new ways to keep them involved once they join. There is still work I would like to continue working on, such as the OPS Meetups and the OpenStack mentoring programs to help get more Operators involved in the community. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Feb 11 21:55:17 2019 From: aspiers at suse.com (Adam Spiers) Date: Mon, 11 Feb 2019 21:55:17 +0000 Subject: [tc][all] Train Community Goals In-Reply-To: <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> References: <66d73db6-9f84-1290-1ab8-cf901a7fb355@catalyst.net.nz> <6b498008e71b7dae651e54e29717f3ccedea50d1.camel@evrard.me> <7e69aef5-d3c1-22df-7a6f-89b35e14fb8c@nemebean.com> Message-ID: <20190211215517.ax5jktscy7ovhoz7@pacific.linksys.moosehall> Yeah thanks - I'm well looped in here through my colleague JP[1] :-) Still hoping to find some more time for this very soon, although right now I'm focused on some pressing nova work ... [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002089.html Ben Nemec wrote: >cc aspiers, who sounded interested in leading this work, pending >discussion with his employer[1]. > >1: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001750.html > >On 1/31/19 9:59 AM, Lance Bragstad wrote: >>*Healthcheck middleware* >> >>There is currently no volunteer to champion for this goal. The first >>iteration of the work on the oslo.middleware was updated [3], and a >>gap analysis was started on the mailing lists [4]. >>If you want to get involved in this goal, don't hesitate to answer >>on the ML thread there. >> >>[3] https://review.openstack.org/#/c/617924/2 >>[4] https://ethercalc.openstack.org/di0mxkiepll8 From aspiers at suse.com Mon Feb 11 22:26:41 2019 From: aspiers at suse.com (Adam Spiers) Date: Mon, 11 Feb 2019 22:26:41 +0000 Subject: [all][tc] Formalizing cross-project pop-up teams In-Reply-To: <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> References: <20190201145553.GA5625@sm-workstation> <20190205121122.lzz3xcsorr7drjvm@pacific.linksys.moosehall> <22431bc3-3612-affe-d690-46e9048ec61d@openstack.org> <20190207144227.geh7irlxi5uhy5fs@pacific.linksys.moosehall> <20190207202103.a47txuo4lborwqgy@pacific.linksys.moosehall> <20190208091829.6tiig7lgef6txcxk@pacific.linksys.moosehall> <723736DB-ED80-4600-AA98-F51FE70A8D73@gmail.com> Message-ID: <20190211222641.pney33hmai6vjoky@pacific.linksys.moosehall> Ildiko Vancsa wrote: >First of all I like the idea of pop-up teams. > >On 2019. Feb 8., at 10:18, Adam Spiers wrote: >>True. And for temporary docs / notes / brainstorming there's the >>wiki and etherpad. So yeah, in terms of infrastructure maybe IRC >>meetings in one of the communal meeting channels is the only thing >>needed. We'd still need to take care of ensuring that popups are >>easily discoverable by anyone, however. And this ties in with the >>"should we require official approval" debate - maybe a halfway >>house is the right balance between red tape and agility? For >>example, set up a table on a page like >> >> https://wiki.openstack.org/wiki/Popup_teams >> >>and warmly encourage newly forming teams to register themselves by adding a row to that table. Suggested columns: >> >> - Team name >> - One-line summary of team purpose >> - Expected life span (optional) >> - Link to team wiki page or etherpad >> - Link to IRC meeting schedule (if any) >> - Other comments >> >>Or if that's too much of a free-for-all, it could be a slightly more >>formal process of submitting a review to add a row to a page: >> >> https://governance.openstack.org/popup-teams/ >> >>which would be similar in spirit to: >> >> https://governance.openstack.org/sigs/ >> >>Either this or a wiki page would ensure that anyone can easily >>discover what teams are currently in existence, or have been in the >>past (since historical information is often useful too). Just >>thinking out aloud … > >In my experience there are two crucial steps to make a cross-project >team work successful. The first is making sure that the proposed new >feature/enhancement is accepted by all teams. The second is to have >supporters from every affected project team preferably also resulting >in involvement during both design and review time maybe also during >feature development and testing phase. > >When these two steps are done you can work on the design part and >making sure you have the work items prioritized on each side in a way >that you don’t end up with road blocks that would delay the work by >multiple release cycles. Makes perfect sense to me - thanks for sharing! >To help with all this I would start the experiment with wiki pages >and etherpads as these are all materials you can point to without too >much formality to follow so the goals, drivers, supporters and >progress are visible to everyone who’s interested and to the TC to >follow-up on. > >Do we expect an approval process to help with or even drive either of >the crucial steps I listed above? I'm not sure if it would help. But I agree that visibility is important, and by extension also discoverability. To that end I think it would be worth hosting a central list of popup initiatives somewhere which links to the available materials for each initiative. Maybe it doesn't matter too much whether that central list is simply a wiki page or a static web page managed by Gerrit under a governance repo or similar. From openstack at nemebean.com Mon Feb 11 22:28:57 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Feb 2019 16:28:57 -0600 Subject: [tc][all][self-healing-sig] Service-side health checks community goal for Train cycle In-Reply-To: References: <158c354c1d7a3e6fb261202b34d4e3233d5f39bc.camel@evrard.me> <1548671352.507178.1645094472.39B42BCA@webmail.messagingengine.com> <7cc5aa565a3a50a2d520d99e3ddcd6da5502e990.camel@evrard.me> Message-ID: <21a9a786-a530-55b3-cf74-0444899a98f2@nemebean.com> On 1/28/19 5:34 AM, Chris Dent wrote: > On Mon, 28 Jan 2019, Jean-Philippe Evrard wrote: > >> It is not a non-starter. I knew this would show up :) >> It's fine that some projects do differently (for example swift has >> different middleware, keystone is not using paste). > > Tangent so that people are clear on the state of Paste and > PasteDeploy. > > I recommend projects move away from using either. > > Until recently both were abandonware, not receiving updates, and > had issues working with Python3. > > I managed to locate maintainers from a few years ago, and negotiated > to bring them under some level of maintenance, but in both cases the > people involved are only interested in doing limited management to > keep the projects barely alive. > > pastedeploy (the thing that is more often used in OpenStack, and is > usually used to load the paste.ini file and doesn't have to have a > dependency on paste itself) is now under the Pylons project: > https://github.com/Pylons/pastedeploy > > Paste itself is with me: https://github.com/cdent/paste > >> I think it's also too big of a change to move everyone to one single >> technology in a cycle :) Instead, I want to focus on the real use case >> for people (bringing a common healthcheck "api" itself), which doesn't >> matter on the technology. > > I agree that the healthcheck change can and should be completely > separate from any question of what is used to load middleware. > That's the great thing about WSGI. > > As long as the healthcheck tooling presents are "normal" WSGI > interface it ought to either "just work" or be wrappable by other tooling, > so I wouldn't spend too much time making a survey of how people are > doing middleware. So should that question be re-worded? The current Keystone answer is accurate but unhelpful, given that I believe Keystone does enable the healthcheck middleware by default: https://docs.openstack.org/keystone/latest/admin/health-check-middleware.html Since what we care about isn't the WSGI implementation but the availability of the feature, shouldn't that question be more like "Project enables healthcheck middleware by default"? In which case Keystone's answer becomes a simple "yes" and Manila's a simple "no". > > The tricky part (but not that tricky) will be with managing how the > "tests" are provided to the middleware. > From hyangii at gmail.com Tue Feb 12 00:07:48 2019 From: hyangii at gmail.com (Jae Sang Lee) Date: Tue, 12 Feb 2019 09:07:48 +0900 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Message-ID: Hello, I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, but this time I did not get a response after removing 41 volumes. This environment variable did not fix the cinder-volume stopping. Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, and after the restart of the service, 199 volumes were deleted and one volume was manually erased. If you have a different approach to solving this problem, please let me know. Thanks. 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > Jae, > > On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > > Arne, > > I saw the messages like ''moving volume to trash" in the cinder-volume > logs and the peridic task also reports > like "Deleted from trash for backend ''" > > The patch worked well when clearing a small number of volumes. This > happens only when I am deleting a large > number of volumes. > > > Hmm, from cinder’s point of view, the deletion should be more or less > instantaneous, so it should be able to “delete” > many more volumes before getting stuck. > > The periodic task, however, will go through the volumes one by one, so if > you delete many at the same time, > volumes may pile up in the trash (for some time) before the tasks gets > round to delete them. This should not affect > c-vol, though. > > I will try to adjust the number of thread pools by adjusting the > environment variables with your advices > > Do you know why the cinder-volume hang does not occur when create a > volume, but only when delete a volume? > > > Deleting a volume ties up a thread for the duration of the deletion (which > is synchronous and can hence take very > long for ). If you have too many deletions going on at the same time, you > run out of threads and c-vol will eventually > time out. FWIU, creation basically works the same way, but it is almost > instantaneous, hence the risk of using up all > threads is simply lower (Gorka may correct me here :-). > > Cheers, > Arne > > > > Thanks. > > > 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > >> Jae, >> >> To make sure deferred deletion is properly working: when you delete >> individual large volumes >> with data in them, do you see that >> - the volume is fully “deleted" within a few seconds, ie. not staying in >> ‘deleting’ for a long time? >> - that the volume shows up in trash (with “rbd trash ls”)? >> - the periodic task reports it is deleting volumes from the trash? >> >> Another option to look at is “backend_native_threads_pool_size": this >> will increase the number >> of threads to work on deleting volumes. It is independent from deferred >> deletion, but can also >> help with situations where Cinder has more work to do than it can cope >> with at the moment. >> >> Cheers, >> Arne >> >> >> >> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: >> >> Yes, I added your code to pike release manually. >> >> >> >> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: >> >>> Hi Jae, >>> >>> You back ported the deferred deletion patch to Pike? >>> >>> Cheers, >>> Arne >>> >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >>> > >>> > Hello, >>> > >>> > I recently ran a volume deletion test with deferred deletion enabled >>> on the pike release. >>> > >>> > We experienced a cinder-volume hung when we were deleting a large >>> amount of the volume in which the data was actually written(I make 15GB >>> file in every volumes), and we thought deferred deletion would solve it. >>> > >>> > However, while deleting 200 volumes, after 50 volumes, the >>> cinder-volume downed as before. In my opinion, the trash_move api does not >>> seem to work properly when removing multiple volumes, just like remove api. >>> > >>> > If these test results are my fault, please let me know the correct >>> test method. >>> > >>> >>> -- >>> Arne Wiebalck >>> CERN IT >>> >>> >> -- >> Arne Wiebalck >> CERN IT >> >> > -- > Arne Wiebalck > CERN IT > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Tue Feb 12 00:38:42 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 11 Feb 2019 16:38:42 -0800 Subject: [loci] Loci builds functionally broken Message-ID: It appears the lastest release of virtualenv has broken Loci builds. I believe the root cause is an update in how symlinks are handled. Before the release, the python libraries installed in the: /var/lib/openstack/lib64/python2.7/lib-dynload directory (this is on CentOS, Ubuntu and Suse vary) were direct instances of the library. For example: -rwxr-xr-x. 1 root root 62096 Oct 30 23:46 itertoolsmodule.so Now, the build points to a long-destroyed symlink that is an artifact of the requirements build process. For example: lrwxrwxrwx. 1 root root 56 Feb 11 23:01 itertoolsmodule.so -> /tmp/venv/lib64/python2.7/lib-dynload/itertoolsmodule.so We will investigate how to make the build more robust, repair this, and will report back soon. Until then, you should expect any fresh builds to not be functional, despite the apparent success in building the container. Thanks, Chris [1] https://virtualenv.pypa.io/en/stable/changes/#release-history From chris at openstack.org Tue Feb 12 01:12:14 2019 From: chris at openstack.org (Chris Hoge) Date: Mon, 11 Feb 2019 17:12:14 -0800 Subject: [loci] Loci builds functionally broken In-Reply-To: References: Message-ID: <378149AB-54F2-45E7-B196-31F0505F6E0A@openstack.org> A patch for a temporary fix is up for review. https://review.openstack.org/#/c/636252/ We’ll be looking into a more permanent fix in the coming days. > On Feb 11, 2019, at 4:38 PM, Chris Hoge wrote: > > It appears the lastest release of virtualenv has broken Loci builds. I > believe the root cause is an update in how symlinks are handled. Before > the release, the python libraries installed in the: > > /var/lib/openstack/lib64/python2.7/lib-dynload > > directory (this is on CentOS, Ubuntu and Suse vary) were direct instances > of the library. For example: > > -rwxr-xr-x. 1 root root 62096 Oct 30 23:46 itertoolsmodule.so > > Now, the build points to a long-destroyed symlink that is an artifact of > the requirements build process. For example: > > lrwxrwxrwx. 1 root root 56 Feb 11 23:01 itertoolsmodule.so -> /tmp/venv/lib64/python2.7/lib-dynload/itertoolsmodule.so > > We will investigate how to make the build more robust, repair this, and > will report back soon. Until then, you should expect any fresh builds to > not be functional, despite the apparent success in building the container. > > Thanks, > Chris > > [1] https://virtualenv.pypa.io/en/stable/changes/#release-history > > From sam47priya at gmail.com Mon Feb 11 17:07:41 2019 From: sam47priya at gmail.com (Sam P) Date: Tue, 12 Feb 2019 02:07:41 +0900 Subject: [ops] OpenStack operators meetup, Berlin, March 6th,7th In-Reply-To: References: Message-ID: Hi Erik, Thanks!. I will contact Ashlee. --- Regards, Sampath On Sat, Feb 9, 2019 at 2:30 AM Erik McCormick wrote: > > Hi Sam, > > On Thu, Feb 7, 2019 at 9:07 PM Sam P wrote: > > > > Hi Chris, > > > > I need an invitation letter to get my German visa. Please let me know > > who to contact. > > > You can contact Ashlee at the foundation and she will be able to > assist you. Her email is ashlee at openstack.org. See you in Berlin! > > --- Regards, > > Sampath > > > > > > On Thu, Feb 7, 2019 at 2:38 AM Chris Morgan wrote: > > > > > > See you there! > > > > > > On Wed, Feb 6, 2019 at 12:18 PM Erik McCormick wrote: > > >> > > >> I'm all signed up. See you in Berlin! > > >> > > >> On Wed, Feb 6, 2019, 10:43 AM Chris Morgan > >>> > > >>> Dear All, > > >>> The Evenbrite for the next ops meetup is now open, see > > >>> > > >>> https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 > > >>> > > >>> Thanks for Allison Price from the foundation for making this for us. We'll be sharing more details on the event soon. > > >>> > > >>> Chris > > >>> on behalf of the ops meetups team > > >>> > > >>> -- > > >>> Chris Morgan > > > > > > > > > > > > -- > > > Chris Morgan From liliueecg at gmail.com Tue Feb 12 03:23:42 2019 From: liliueecg at gmail.com (Li Liu) Date: Mon, 11 Feb 2019 22:23:42 -0500 Subject: [Cyborg][IRC] The Cyborg IRC meeting will be held Wednesday at 0300 UTC Message-ID: Happy Chinese New Year! The IRC meeting will be resumed Wednesday at 0300 UTC, which is 10:00 pm est(Tuesday) / 7:00 pm pst(Tuesday) /11 am Beijing time (Wednesday) -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Feb 12 05:15:00 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Feb 2019 10:45:00 +0530 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Mon, Feb 11, 2019 at 9:23 PM NANTHINI A A wrote: > Hi , > I have tried the below .But getting error .Please let me know how I can > proceed further . > > root at cic-1:~# cat try1.yaml > heat_template_version: 2013-05-23 > description: > This is the template for I&V R6.1 base configuration to create neutron > resources other than sg and vm for vyos vms > parameters: > resource_name_map: > - network1: NetworkA1 > network2: NetworkA2 > - network1: NetworkB1 > network2: NetworkB2 > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > I don't think you can use %index% directly in this template. You have to pass it as resource property from tryreapet.yaml. Please check the example[1] in heat-templates repo (resource_group_index_lookup.yaml and random.yaml). [1] https://github.com/openstack/heat-templates/blob/master/hot/resource_group/resource_group_index_lookup.yaml > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > root at cic-1:~# cat tryrepeat.yaml > > heat_template_version: 2013-05-23 > > resources: > rg: > type: OS::Heat::ResourceGroup > properties: > count: 2 > resource_def: > type: try1.yaml > root at cic-1:~# > > root at cic-1:~# heat stack-create tests -f tryrepeat.yaml > WARNING (shell) "heat stack-create" is deprecated, please use "openstack > stack create" instead > ERROR: resources.rg: : Error parsing template > file:///root/try1.yaml while scanning for the next token > found character '%' that cannot start any token > in "", line 15, column 45: > ... {get_param: [resource_name_map, %index%, network1]} > > > > Thanks in advance . > > > Thanks, > A.Nanthini > -----Original Message----- > From: Harald Jensås [mailto:hjensas at redhat.com] > Sent: Monday, February 11, 2019 7:47 PM > To: NANTHINI A A ; > openstack-dev at lists.openstack.org > Subject: Re: [Heat] Reg accessing variables of resource group heat api > > On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > > Hi , > > We are developing heat templates for our vnf deployment .It > > includes multiple resources .We want to repeat the resource and hence > > used the api RESOURCE GROUP . > > Attached are the templates which we used > > > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > > the resource group api with count . > > > > We want to access the variables of resource in set1.yaml while > > repeating it with count .Eg . port name ,port fixed ip address we want > > to change in each set . > > Please let us know how we can have a variable with each repeated > > resource . > > > > Sounds like you want to use the index_var variable[1] to prefix/suffix > reource names? > > I.e in set1.yaml you can use: > > name: > list_join: > - '_' > - {get_param: 'OS::stack_name'} > - %index% > - > > > The example should resulting in something like: > stack_0_Network3, stack_0_Subnet3 > stack_1_Network0, stack_1_Subnet0 > [ ... ] > > > If you want to be more advanced you could use a list parameter in the > set1.yaml template, and have each list entry contain a dictionaly of each > resource name. The %index% variable would then be used to pick the correct > entry from the list. > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 - > > resources: > neutron_Network_1: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network1]} > neutron_Network_2: > type: OS::Neutron::Net > properties: > name: {get_param: [resource_name_map, %index%, network2]} > > > %index% is the "count" picking the 'foo' entries when %index% is 0, and > 'bar' entries when %index% is 1 and so on. > > > > > > [1] > > https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt > > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Feb 12 05:43:18 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 12 Feb 2019 14:43:18 +0900 Subject: [dev] [neutron] bug deputy report of the week of Feb 4 Message-ID: Hi neutrinos, I was a bug deputy last week. The last week was relatively quiet. The following * Needs investigation * https://bugs.launchpad.net/neutron/+bug/1815463 (New) [dev] Agent RPC version does not auto upgrade if neutron-server restart first * ovsdbapp.exceptions.TimeoutException in functional tests (gate failure) https://bugs.launchpad.net/bugs/1815142 * In Progress * https://bugs.launchpad.net/bugs/1815345 (Medium, In Progress) neutron doesnt delete port binding level when deleting an inactive port binding * Incomplete * https://bugs.launchpad.net/bugs/1815424 (Incomplete) Port gets port security disabled if using --no-security-groups I cannot reproduce it. Requesting the author more information. * FYI * https://bugs.launchpad.net/bugs/1815433 Code crash with invalid connection limit of listener neutron-lbaas bug needs to be filed to storyboard. I requested it to the bug author and he/she filed it. [1] https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas Best Regards, Akihiro Motoki (irc: amotoki) -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Feb 12 05:46:59 2019 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 12 Feb 2019 16:46:59 +1100 Subject: [cinder] Help with Fedora 29 devstack volume/iscsi issues In-Reply-To: <20190211101229.j5aqii2os5z2p2cw@localhost> References: <20190207063940.GA1754@fedora19.localdomain> <20190211101229.j5aqii2os5z2p2cw@localhost> Message-ID: <20190212054659.GA14416@fedora19.localdomain> On Mon, Feb 11, 2019 at 11:12:29AM +0100, Gorka Eguileor wrote: > It is werid that there are things missing from the logs: > > In method _get_connection_devices we have: > > LOG.debug('Getting connected devices for (ips,iqns,luns)=%s', 1 > ips_iqns_luns) > nodes = self._get_iscsi_nodes() > > And we can see the message in the logs [2], but then we don't see the > call to iscsiadm that happens as the first instruction in > _get_iscsi_nodes: > > out, err = self._execute('iscsiadm', '-m', 'node', run_as_root=True, > root_helper=self._root_helper, > check_exit_code=False) > > And we only see the error coming from parsing the output of that command > that is not logged. Yes, I wonder if this is related to a rootwrap stdout/stderr capturing or something? > I believe Matthew is right in his assessment, the problem is the output > from "iscsiadm -m node", there is a missing space between the first 2 > columns in the output [4]. > > This looks like an issue in Open iSCSI, not in OS-Brick, Cinder, or > Nova. > > And checking their code, it looks like this is the patch that fixes it > [5], so it needs to be added to F29 iscsi-initiator-utils package. Thank you! This excellent detective work has solved the problem. I did a copr build with that patch [1] and got a good tempest run [2]. Amazing how much trouble a " " can cause. I have filed an upstream bug on the package https://bugzilla.redhat.com/show_bug.cgi?id=1676365 Anyway, it has led to a series of patches you may be interested in, which I think would help future debugging efforts https://review.openstack.org/636078 : fix for quoting of devstack args (important for follow-ons) https://review.openstack.org/636079 : export all journal logs. Things like iscsid were logging to the journal, but we weren't capturing them. Includes instructions on how to use the exported journal [3] https://review.openstack.org/636080 : add a tcpdump service. With this you can easily packet capture during a devstack run. e.g. https://review.openstack.org/636082 captures all iscsi traffic and stores it [4] https://review.openstack.org/636081 : iscsid debug option, which uses a systemd override to turn up debug logging. Reviews welcome :) Thanks, -i [1] https://github.com/open-iscsi/open-iscsi/commit/baa0cb45cfcf10a81283c191b0b236cd1a2f66ee.patch [2] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/ [3] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/controller/logs/devstack.journal.README.txt [4] http://logs.openstack.org/82/636082/9/check/devstack-platform-fedora-latest/e2fac10/controller/logs/tcpdump.pcap.gz From amotoki at gmail.com Tue Feb 12 06:11:57 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 12 Feb 2019 15:11:57 +0900 Subject: [dev] [neutron] bug deputy report of the week of Feb 4 In-Reply-To: References: Message-ID: I forgot to add one bug which needs help from FWaaS team. The updated list is as follows. 2019年2月12日(火) 14:43 Akihiro Motoki : > Hi neutrinos, > > I was a bug deputy last week. > The last week was relatively quiet. The following > > > * Needs investigation > * https://bugs.launchpad.net/neutron/+bug/1815463 (New) > [dev] Agent RPC version does not auto upgrade if neutron-server > restart first > * ovsdbapp.exceptions.TimeoutException in functional tests (gate failure) > https://bugs.launchpad.net/bugs/1815142 > * Needs help from FWaaS team * https://bugs.launchpad.net/neutron/+bug/1814507 Deleting the default firewall group not deleting the associated firewall rules to the policy We need an input on the basic design policy from FWaaS team. > > * In Progress > * https://bugs.launchpad.net/bugs/1815345 (Medium, In Progress) > neutron doesnt delete port binding level when deleting an inactive > port binding > > * Incomplete > * https://bugs.launchpad.net/bugs/1815424 (Incomplete) > Port gets port security disabled if using --no-security-groups > I cannot reproduce it. Requesting the author more information. > > * FYI > * https://bugs.launchpad.net/bugs/1815433 > Code crash with invalid connection limit of listener > neutron-lbaas bug needs to be filed to storyboard. I requested it to > the bug author and he/she filed it. > [1] > https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas > > Best Regards, > Akihiro Motoki (irc: amotoki) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Feb 12 06:55:13 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 12 Feb 2019 15:55:13 +0900 Subject: [Searchlight] TC vision reflection Message-ID: Hi team, Follow by the call of the TC [1] for each project to self-evaluate against the OpenStack Cloud Vision [2], the Searchlight team would like to produce a short bullet point style document comparing itself with the vision. The purpose is to find the gaps between Searchlight and the TC vision and it is a good practice to align our work with the rest. I created a new pad [3] and welcome all of your opinions. Then, after about 3 weeks, I will submit a patch set to add the vision reflection document to our doc source. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001417.html [2] https://governance.openstack.org/tc/reference/technical-vision.html [3] https://etherpad.openstack.org/p/-tc-vision-self-eval Ping me on the channel #openstack-searchlight Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Tue Feb 12 06:55:57 2019 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 12 Feb 2019 07:55:57 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> Message-ID: <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> Jae, One other setting that caused trouble when bulk deleting cinder volumes was the DB connection string: we did not configure a driver and hence used the Python mysql wrapper instead … essentially changing connection = mysql://cinder:@:/cinder to connection = mysql+pymysql://cinder:@:/cinder solved the parallel deletion issue for us. All details in the last paragraph of [1]. HTH! Arne [1] https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > Hello, > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, > but this time I did not get a response after removing 41 volumes. This environment variable did not fix > the cinder-volume stopping. > > Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. > Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. > > This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, > and after the restart of the service, 199 volumes were deleted and one volume was manually erased. > > If you have a different approach to solving this problem, please let me know. > > Thanks. > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > Jae, > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: >> >> Arne, >> >> I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports >> like "Deleted from trash for backend ''" >> >> The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large >> number of volumes. > > Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” > many more volumes before getting stuck. > > The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, > volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect > c-vol, though. > >> I will try to adjust the number of thread pools by adjusting the environment variables with your advices >> >> Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? > > Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very > long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually > time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all > threads is simply lower (Gorka may correct me here :-). > > Cheers, > Arne > >> >> >> Thanks. >> >> >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: >> Jae, >> >> To make sure deferred deletion is properly working: when you delete individual large volumes >> with data in them, do you see that >> - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? >> - that the volume shows up in trash (with “rbd trash ls”)? >> - the periodic task reports it is deleting volumes from the trash? >> >> Another option to look at is “backend_native_threads_pool_size": this will increase the number >> of threads to work on deleting volumes. It is independent from deferred deletion, but can also >> help with situations where Cinder has more work to do than it can cope with at the moment. >> >> Cheers, >> Arne >> >> >> >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: >>> >>> Yes, I added your code to pike release manually. >>> >>> >>> >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: >>> Hi Jae, >>> >>> You back ported the deferred deletion patch to Pike? >>> >>> Cheers, >>> Arne >>> >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: >>> > >>> > Hello, >>> > >>> > I recently ran a volume deletion test with deferred deletion enabled on the pike release. >>> > >>> > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. >>> > >>> > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. >>> > >>> > If these test results are my fault, please let me know the correct test method. >>> > >>> >>> -- >>> Arne Wiebalck >>> CERN IT >>> >> >> -- >> Arne Wiebalck >> CERN IT >> > > -- > Arne Wiebalck > CERN IT > From gmann at ghanshyammann.com Tue Feb 12 08:21:09 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:21:09 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <1b6f6b9e-7d18-c730-9a46-13da0d18180c@openstack.org> Message-ID: <168e0cba6f4.1013071eb93450.6339187288167074748@ghanshyammann.com> ---- On Tue, 12 Feb 2019 02:14:56 +0900 Kendall Nelson wrote ---- > > > On Mon, Feb 11, 2019 at 8:01 AM Thierry Carrez wrote: > Doug Hellmann wrote: > > Kendall Nelson writes: > >> [...] > >> So I think that the First Contact SIG project liaison list kind of fits > >> this. Its already maintained in a wiki and its already a list of people > >> willing to be contacted for helping people get started. It probably just > >> needs more attention and refreshing. When it was first set up we (the FC > >> SIG) kind of went around begging for volunteers and then once we maxxed out > >> on them, we said those projects without volunteers will have the role > >> defaulted to the PTL unless they delegate (similar to how other liaison > >> roles work). > >> > >> Long story short, I think we have the sort of mentoring things covered. And > >> to back up an earlier email, project specific onboarding would be a good > >> help too. > > > > OK, that does sound pretty similar. I guess the piece that's missing is > > a description of the sort of help the team is interested in receiving. > > I guess the key difference is that the first contact list is more a > function of the team (who to contact for first contributions in this > team, defaults to PTL), rather than a distinct offer to do 1:1 mentoring > to cover specific needs in a team. > > It's probably pretty close (and the same people would likely be > involved), but I think an approach where specific people offer a > significant amount of their time to one mentee interested in joining a > team is a bit different. I don't think every team would have volunteers > to do that. I would not expect a mentor volunteer to care for several > mentees. In the end I think we would end up with a much shorter list > than the FC list. > > I think our original ask for people volunteering (before we completed the list with PTLs as stand ins) was for people willing to help get started in a project and look after their first few patches. So I think that was kinda the mentoring role originally but then it evolved? Maybe Matt Oliver or Ghanshyam remember better than I do? Yeah, that's right. > Maybe the two efforts can converge into one, or they can be kept as two > different things but coordinated by the same team ? > > > I think we could go either way, but that they both would live with the FC SIG. Seems like the most logical place to me. I lean towards two lists, one being a list of volunteer mentors for projects that are actively looking for new contributors (the shorter list) and the other being a list of people just willing to keep an eye out for the welcome new contributor patches and being the entry point for people asking about getting started that don't know anyone in the project yet (kind of what our current view is, I think). -- IMO, very first thing to make help-wanted list a success is, it has to be uptodate per development cycle, mentor-mapping(or with example workflow etc). By Keeping the help-wanted list in any place other than the project team again leads to existing problem for example it will be hard to prioritize, maintain and easy to get obsolete/outdated. FC SIG, D&I WG are great place to market/redirect the contributors to the list. The model I was thinking is: 1. Project team maintain the help-wanted-list per current development cycle. Entry criteria in that list is some volunteer mentor(exmaple workflow/patch) which are technically closer to that topic. 2. During PTG/developer meetup, PTL checks if planned/discussed topic needs to be in help-wanted list and who will serve as the mentor. 3. The list has to be updated in every developement cycle. It can be empty if any project team does not need help during that cycle or few items can be carry-forward if those are still a priority and have mentor mapping. 4. FC SIG, D&I WG, Mentoring team use that list and publish in all possible place. Redirect new contributors to that list depends on the contributor interested area. This will be the key role to make help-wanted-list success. -gmann > Thierry Carrez (ttx) > > -Kendall (diablo_rojo) From gmann at ghanshyammann.com Tue Feb 12 08:27:11 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:27:11 +0900 Subject: [tc] The future of the "Help most needed" list In-Reply-To: References: <713ef94c-27d9-ed66-cf44-f9aa98e49a4c@openstack.org> <168b7a27e57.af72660a286234.5558288446348080387@ghanshyammann.com> <20190204152231.qgiryyjn7omu642z@yuggoth.org> <9df5005d-22b5-c158-4f03-92e07fb47a32@openstack.org> <168c8439d24.feed3a49551.7656492683145817726@ghanshyammann.com> Message-ID: <168e0d12b9b.ec123b9093670.4194575768064979236@ghanshyammann.com> ---- On Fri, 08 Feb 2019 01:07:33 +0900 Doug Hellmann wrote ---- > Ghanshyam Mann writes: > > > ---- On Thu, 07 Feb 2019 21:42:53 +0900 Doug Hellmann wrote ---- > > > Thierry Carrez writes: > > > > > > > Doug Hellmann wrote: > > > >> [...] > > > >> During the Train series goal discussion in Berlin we talked about having > > > >> a goal of ensuring that each team had documentation for bringing new > > > >> contributors onto the team. Offering specific mentoring resources seems > > > >> to fit nicely with that goal, and doing it in each team's repository in > > > >> a consistent way would let us build a central page on docs.openstack.org > > > >> to link to all of the team contributor docs, like we link to the user > > > >> and installation documentation, without requiring us to find a separate > > > >> group of people to manage the information across the entire community. > > > > > > > > I'm a bit skeptical of that approach. > > > > > > > > Proper peer mentoring takes a lot of time, so I expect there will be a > > > > limited number of "I'll spend significant time helping you if you help > > > > us" offers. I don't envision potential contributors to browse dozens of > > > > project-specific "on-boarding doc" to find them. I would rather > > > > consolidate those offers on a single page. > > > > > > > > So.. either some magic consolidation job that takes input from all of > > > > those project-specific repos to build a nice rendered list... Or just a > > > > wiki page ? > > > > > > > > -- > > > > Thierry Carrez (ttx) > > > > > > > > > > A wiki page would be nicely lightweight, so that approach makes some > > > sense. Maybe if the only maintenance is to review the page periodically, > > > we can convince one of the existing mentorship groups or the first > > > contact SIG to do that. > > > > Same can be achieved If we have a single link on doc.openstack.org or contributor guide with > > top section "Help-wanted" with subsection of each project specific help-wanted. project help > > wanted subsection can be build from help wanted section from project contributor doc. > > > > That way it is easy for the project team to maintain their help wanted list. Wiki page can > > have the challenge of prioritizing and maintain the list. > > > > -gmann > > > > > > > > -- > > > Doug > > Another benefit of using the wiki is that SIGs and pop-up teams can add > their own items. We don't have a good way for those groups to be > integrated with docs.openstack.org right now. Nice point about SIG. pop-up teams are more of volunteer only which might have less chance to make an entry in this list. My main concern with wiki is, it easily ( and maybe most of them ) gets obsolete. Especially in this case where technical ownership is distributed. -gmann > > -- > Doug > > From gmann at ghanshyammann.com Tue Feb 12 08:41:03 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:41:03 +0900 Subject: [tc] cdent non-nomination for TC In-Reply-To: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Message-ID: <168e0dde0ad.f104a07594256.7469283881027772697@ghanshyammann.com> ---- On Mon, 11 Feb 2019 18:00:36 +0900 Thierry Carrez wrote ---- > Jeremy Stanley wrote: > > On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: > > [...] > >> I do not intend to run. I've done two years and that's enough. When > >> I was first elected I had no intention of doing any more than one > >> year but at the end of the first term I had not accomplished much of > >> what I hoped, so stayed on. Now, at the end of the second term I > >> still haven't accomplished much of what I hoped > > [...] > > > > You may not have accomplished what you set out to, but you certainly > > have made a difference. You've nudged lines of discussion into > > useful directions they might not otherwise have gone, provided a > > frequent reminder of the representative nature of our governance, > > and produced broadly useful summaries of our long-running > > conversations. I really appreciate what you brought to the TC, and > > am glad you'll still be around to hold the rest of us (and those who > > succeed you/us) accountable. Thanks! > > Jeremy said it better than I could have ! While I really appreciated the > perspective you brought to the TC, I understand the need to focus to > have the most impact. > > It's also a good reminder that the role that the TC fills can be shared > beyond the elected membership -- so if you care about a specific aspect > of governance, OpenStack-wide technical leadership or community health, > I encourage you to participate in the TC activities, whether you are > elected or not. > Thanks Chris for serving your great effort in TC and making the difference. You have been doing a lot of things during your TC terms with the actual outcome and setting an example. -gmann > -- > Thierry Carrez (ttx) > > From gmann at ghanshyammann.com Tue Feb 12 08:44:52 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Feb 2019 17:44:52 +0900 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> ---- On Fri, 08 Feb 2019 23:00:51 +0900 Sean McGinnis wrote ---- > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. > Thanks Sean for your serving as TC with one of the most humble and helpful person. -gmann > Sean > > From geguileo at redhat.com Tue Feb 12 09:24:30 2019 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 12 Feb 2019 10:24:30 +0100 Subject: [cinder][dev] Bug for deferred deletion in RBD In-Reply-To: <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> References: <69CDA0DD-B97D-4929-831F-88785A1F4281@cern.ch> <5B36EF71-C557-4BEF-B4C6-05A67922D86D@cern.ch> <3C065CFC-3E64-47C1-84C9-FB87A1F9B475@cern.ch> <93782FC6-38BE-438C-B665-40977863DEDA@cern.ch> Message-ID: <20190212092430.34q6zlr47jj6uq4c@localhost> On 12/02, Arne Wiebalck wrote: > Jae, > > One other setting that caused trouble when bulk deleting cinder volumes was the > DB connection string: we did not configure a driver and hence used the Python > mysql wrapper instead … essentially changing > > connection = mysql://cinder:@:/cinder > > to > > connection = mysql+pymysql://cinder:@:/cinder > > solved the parallel deletion issue for us. > > All details in the last paragraph of [1]. > > HTH! > Arne > > [1] https://techblog.web.cern.ch/techblog/post/experiences-with-cinder-in-production/ > Good point, using a C mysql connection library will induce thread starvation. This was thoroughly discussed, and the default changed, like 2 years ago... So I assumed we all changed that. Something else that could be problematic when receiving many concurrent requests on any Cinder service is the number of concurrent DB connections, although we also changed this a while back to 50. This is set as sql_max_retries or max_retries (depending on the version) in the "[database]" section. Cheers, Gorka. > > > > On 12 Feb 2019, at 01:07, Jae Sang Lee wrote: > > > > Hello, > > > > I tested today by increasing EVENTLET_THREADPOOL_SIZE size to 100. I wanted to have good results, > > but this time I did not get a response after removing 41 volumes. This environment variable did not fix > > the cinder-volume stopping. > > > > Restarting the stopped cinder-volume will delete all volumes that are in deleting state while running the clean_up function. > > Only one volume in the deleting state, I force the state of this volume to be available, and then delete it, all volumes will be deleted. > > > > This result was the same for 3 consecutive times. After removing dozens of volumes, the cinder-volume was down, > > and after the restart of the service, 199 volumes were deleted and one volume was manually erased. > > > > If you have a different approach to solving this problem, please let me know. > > > > Thanks. > > > > 2019년 2월 11일 (월) 오후 9:40, Arne Wiebalck 님이 작성: > > Jae, > > > >> On 11 Feb 2019, at 11:39, Jae Sang Lee wrote: > >> > >> Arne, > >> > >> I saw the messages like ''moving volume to trash" in the cinder-volume logs and the peridic task also reports > >> like "Deleted from trash for backend ''" > >> > >> The patch worked well when clearing a small number of volumes. This happens only when I am deleting a large > >> number of volumes. > > > > Hmm, from cinder’s point of view, the deletion should be more or less instantaneous, so it should be able to “delete” > > many more volumes before getting stuck. > > > > The periodic task, however, will go through the volumes one by one, so if you delete many at the same time, > > volumes may pile up in the trash (for some time) before the tasks gets round to delete them. This should not affect > > c-vol, though. > > > >> I will try to adjust the number of thread pools by adjusting the environment variables with your advices > >> > >> Do you know why the cinder-volume hang does not occur when create a volume, but only when delete a volume? > > > > Deleting a volume ties up a thread for the duration of the deletion (which is synchronous and can hence take very > > long for ). If you have too many deletions going on at the same time, you run out of threads and c-vol will eventually > > time out. FWIU, creation basically works the same way, but it is almost instantaneous, hence the risk of using up all > > threads is simply lower (Gorka may correct me here :-). > > > > Cheers, > > Arne > > > >> > >> > >> Thanks. > >> > >> > >> 2019년 2월 11일 (월) 오후 6:14, Arne Wiebalck 님이 작성: > >> Jae, > >> > >> To make sure deferred deletion is properly working: when you delete individual large volumes > >> with data in them, do you see that > >> - the volume is fully “deleted" within a few seconds, ie. not staying in ‘deleting’ for a long time? > >> - that the volume shows up in trash (with “rbd trash ls”)? > >> - the periodic task reports it is deleting volumes from the trash? > >> > >> Another option to look at is “backend_native_threads_pool_size": this will increase the number > >> of threads to work on deleting volumes. It is independent from deferred deletion, but can also > >> help with situations where Cinder has more work to do than it can cope with at the moment. > >> > >> Cheers, > >> Arne > >> > >> > >> > >>> On 11 Feb 2019, at 09:47, Jae Sang Lee wrote: > >>> > >>> Yes, I added your code to pike release manually. > >>> > >>> > >>> > >>> 2019년 2월 11일 (월) 오후 4:39에 Arne Wiebalck 님이 작성: > >>> Hi Jae, > >>> > >>> You back ported the deferred deletion patch to Pike? > >>> > >>> Cheers, > >>> Arne > >>> > >>> > On 11 Feb 2019, at 07:54, Jae Sang Lee wrote: > >>> > > >>> > Hello, > >>> > > >>> > I recently ran a volume deletion test with deferred deletion enabled on the pike release. > >>> > > >>> > We experienced a cinder-volume hung when we were deleting a large amount of the volume in which the data was actually written(I make 15GB file in every volumes), and we thought deferred deletion would solve it. > >>> > > >>> > However, while deleting 200 volumes, after 50 volumes, the cinder-volume downed as before. In my opinion, the trash_move api does not seem to work properly when removing multiple volumes, just like remove api. > >>> > > >>> > If these test results are my fault, please let me know the correct test method. > >>> > > >>> > >>> -- > >>> Arne Wiebalck > >>> CERN IT > >>> > >> > >> -- > >> Arne Wiebalck > >> CERN IT > >> > > > > -- > > Arne Wiebalck > > CERN IT > > > From bence.romsics at gmail.com Tue Feb 12 10:09:25 2019 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 12 Feb 2019 11:09:25 +0100 Subject: [Neutron] Multi segment networks In-Reply-To: References: Message-ID: Hi Ricardo, On Thu, Feb 7, 2019 at 6:45 PM Ricardo Noriega De Soto wrote: > Does it mean, that placing two VMs (with regular virtio interfaces), one in the vxlan segment and one on the vlan segment, would be able to ping each other without the need of a router? > Or would it require an external router that belongs to the owner of the infrastructure? To my limited understanding of multi-segment networks I think neutron generally does not take care of packet forwarding between the segments. So I expect your example net-create command to create a network with two disconnected segments. IIRC the first time when multi-segment networks were allowed in the API, there was no implementation of connecting the segments at all automatically. The API was merged to allow later features like the routed-networks feature of neutron [1][2]. Or to allow connecting segments administratively outside of neutron control. I'm not sure if it is well defined how the segments should be connected - on l2 or l3. I think people originally thought of mostly bridging the segments together. Then the routed networks feature went to connect them by routers. I guess it depends on your use case. Hope this helps, Bence Romsics (rubasov) [1] https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html [2] https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html From chkumar246 at gmail.com Tue Feb 12 10:26:50 2019 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 12 Feb 2019 15:56:50 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update X - Feb 12, 2019 Message-ID: Hello, Here is the 10th update (Feb 06 to Feb 12, 2019) on collaboration on os_tempest[1] role between TripleO and OpenStack-Ansible projects. Summary: This week we basically worked on clearing up/merging the existing patches like: * For debugging networking issue for os_tempest, we have router ping * Added telemetry tempest plugin support * The os_tempest overview page got rewrite: https://docs.openstack.org/openstack-ansible-os_tempest/latest/overview.html * Added use of user/password for secure image download And from myside, not to much work as busy with ruck/rover on TripleO Side. Things got merged OS_TEMPEST: * Update all plugin urls to use https rather than git - https://review.openstack.org/633752 * venv: use inventory_hostname instead of ansible_hostname - https://review.openstack.org/635187 * Add telemetry distro plugin install for aodh - https://review.openstack.org/632125 * Add user and password for secure image download (optional) - https://review.openstack.org/625266 * Ping router once it is created - https://review.openstack.org/633883 * Improve overview subpage - https://review.openstack.org/633934 python-venv_build: * Add tripleo-ci-centos-7-standalone-os-tempest job - https://review.openstack.org/634377 In Progress work: OS_TEMPEST * Use the correct heat tests - https://review.openstack.org/#/c/630695/ * Add option to disable router ping - https://review.openstack.org/636211 * Add tempest_service_available_mistral with distro packages - https://review.openstack.org/635180 * Added tempest.conf for heat_plugin - https://review.openstack.org/632021 TripleO: * Reuse the validate-tempest skip list in os_tempest - https://review.openstack.org/634380 Goal of this week: * Unblock os_heat gate due to mpi4py dependency and other issue * Complete skip list reuse on tripleo side Thanks to jrosser, mnaser, odyssey4me, guilhermesp on router_ping, os_heat help, arxcruz on reuse skip list & mkopec for improving doc. Here is the 9th update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002382.html Thanks, Chandan Kumar From moreira.belmiro.email.lists at gmail.com Tue Feb 12 10:31:39 2019 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Tue, 12 Feb 2019 11:31:39 +0100 Subject: [nova] Can we drop the cells v1 docs now? In-Reply-To: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> References: <1979b66e-7de8-9826-1145-e80af5d6a270@gmail.com> Message-ID: +1 to remove cellsV1 docs. This architecture should not be considered in new Nova deployments. As Matt described we use cellsV2 since Queens but we are still using nova-network in a significant part of the infrastructure. I was always assuming that cellsV1/nova-network code would be removed in Stein. I continue to support this plan! We will not maintain an internal fork but migrate everything to Neutron. Belmiro CERN On Mon, Feb 11, 2019 at 3:44 PM Matt Riedemann wrote: > I have kind of lost where we are on dropping cells v1 code at this > point, but it's probably too late in Stein. And technically nova-network > won't start unless cells v1 is configured, and we've left the > nova-network code in place while CERN is migrating their deployment to > neutron*. CERN is running cells v2 since Queens and I think they have > just removed this [1] to still run nova-network without cells v1. > > There has been no work in Stein to remove nova-network [2] even though > we still have a few API related things we can work on removing [3] but > that is very low priority. To be clear, CERN only cares about the > nova-network service, not the APIs which is why we started removing > those in Rocky. > > As for cells v1, if we're not going to drop it in Stein, can we at least > make incremental progress and drop the cells v1 related docs to further > signal the eventual demise and to avoid confusion in the docs about what > cells is (v1 vs v2) for newcomers? People can still get the cells v1 > in-tree docs on the stable branches (which are being published [4]). > > [1] > https://github.com/openstack/nova/blob/bff3fd1cd/nova/cmd/network.py#L43 > [2] https://blueprints.launchpad.net/nova/+spec/remove-nova-network-stein > [3] https://etherpad.openstack.org/p/nova-network-removal-rocky > [4] https://docs.openstack.org/nova/queens/user/cells.html#cells-v1 > > *I think they said there are parts of their deployment that will > probably never move off of nova-network, and they will just maintain a > fork for that part of the deployment. > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Feb 12 13:04:02 2019 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Feb 2019 18:34:02 +0530 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A wrote: > Hi , > > May I know in the following example given > > > parameters: > resource_name_map: > - network1: foo_custom_name_net1 > network2: foo_custom_name_net2 > - network1: bar_custom_name_net1 > network2: bar_custom_name_net2 > > what is the parameter type ? > > json -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Feb 12 14:35:10 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 12 Feb 2019 09:35:10 -0500 Subject: [ops] last weeks ops meetups team minutes Message-ID: Meeting ended Tue Feb 5 15:31:14 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:31 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.html 10:31 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.txt 10:31 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-02-05-15.00.log.html Next meeting is in 25 minutes on #openstack-operators Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Feb 12 15:06:40 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 12 Feb 2019 09:06:40 -0600 Subject: [tc] cdent non-nomination for TC In-Reply-To: References: Message-ID: Chris, Thank you so much for all you have done as a member of the TC! Amy (spotz) On Fri, Feb 8, 2019 at 6:41 AM Chris Dent wrote: > > Next week sees the start of election season for the TC [1]. People > often worry that incumbents always get re-elected so it is > considered good form to announce if you are an incumbent and do > not intend to run. > > I do not intend to run. I've done two years and that's enough. When > I was first elected I had no intention of doing any more than one > year but at the end of the first term I had not accomplished much of > what I hoped, so stayed on. Now, at the end of the second term I > still haven't accomplished much of what I hoped, so I think it is > time to focus my energy in the places where I've been able to get > some traction and give someone else—someone with a different > approach—a chance. > > If you're interested in being on the TC, I encourage you to run. If > you have questions about it, please feel free to ask me, but also > ask others so you get plenty of opinions. And do your due diligence: > Make sure you're clear with yourself about what the TC has been, > is now, what you would like it to be, and what it can be. > > Elections are fairly far in advance of the end of term this time > around. I'll continue in my TC responsibilities until the end of > term, which is some time in April. I'm not leaving the community or > anything like that, I'm simply narrowing my focus. Over the past > several months I've been stripping things back so I can be sure that > I'm not ineffectively over-committing myself to OpenStack but am > instead focusing where I can be most useful and make the most > progress. Stepping away from the TC is just one more part of that. > > Thanks very much for the experiences and for the past votes. > > [1] https://governance.openstack.org/election/ > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Feb 12 15:10:46 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 12 Feb 2019 09:10:46 -0600 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> References: <20190208140051.GB8848@sm-workstation> <168e0e15da8.106c8076294445.1151043233506755582@ghanshyammann.com> Message-ID: Sean, Thanks for all your hard work on the TC and hope to see you in Denver. Amy (spotz) ---- On Fri, 08 Feb 2019 23:00:51 +0900 Sean McGinnis > wrote ---- > > As Chris said, it is probably good for incumbents to make it known if > they are > > not running. > > > > This is my second term on the TC. It's been great being part of this > group and > > trying to contribute whatever I can. But I do feel it is important to > make room > > for new folks to regularly join and help shape things. So with that in > mind, > > along with the need to focus on some other areas for a bit, I do not > plan to > > run in the upcoming TC election. > > > > I would highly encourage anyone interested to run for the TC. If you > have any > > questions about it, feel free to ping me for any > thoughts/advice/feedback. > > > > Thanks for the last two years. I think I've learned a lot since joining > the TC, > > and hopefully I have been able to contribute some positive things over > the > > years. I will still be around, so hopefully I will see folks in Denver. > > > > Sean > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwinkle at salesforce.com Tue Feb 12 15:37:19 2019 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Tue, 12 Feb 2019 09:37:19 -0600 Subject: [PTLs] Got unfinished business from Berlin? Message-ID: Greetings, PTLs, cores or anyone on point for a key feature, In an effort to make the feedback loop even stronger between the dev and ops community, the UC is actively looking for any unfinished etherpads or topics from the Berlin summit that need more Ops input. We'd like to get them proposed as potential topics at the upcoming Ops Meetup (strangely enough back in Berlin) [1] If you have something your dev team needs input on, please propose it here: [2] so it can get voted on by the attendees and organizers. There is a section titled "Session Ideas" that you can list the topic in. Feel free to link an etherpad if one exists. The UC will continue to push to tie discussions at forums/PTGs to those at the Ops meetups and OpenStack Days - and vice versa. Thanks in advance! VW [1] https://www.eventbrite.com/e/openstack-ops-meetup-berlin-tickets-55034908894 [2] https://etherpad.openstack.org/p/BER-ops-meetup -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Tue Feb 12 05:44:22 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Tue, 12 Feb 2019 05:44:22 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , May I know in the following example given parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 what is the parameter type ? Thanks, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Tuesday, February 12, 2019 10:45 AM To: NANTHINI A A Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Mon, Feb 11, 2019 at 9:23 PM NANTHINI A A > wrote: Hi , I have tried the below .But getting error .Please let me know how I can proceed further . root at cic-1:~# cat try1.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: resource_name_map: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} I don't think you can use %index% directly in this template. You have to pass it as resource property from tryreapet.yaml. Please check the example[1] in heat-templates repo (resource_group_index_lookup.yaml and random.yaml). [1] https://github.com/openstack/heat-templates/blob/master/hot/resource_group/resource_group_index_lookup.yaml neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} root at cic-1:~# cat tryrepeat.yaml heat_template_version: 2013-05-23 resources: rg: type: OS::Heat::ResourceGroup properties: count: 2 resource_def: type: try1.yaml root at cic-1:~# root at cic-1:~# heat stack-create tests -f tryrepeat.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: resources.rg: : Error parsing template file:///root/try1.yaml while scanning for the next token found character '%' that cannot start any token in "", line 15, column 45: ... {get_param: [resource_name_map, %index%, network1]} Thanks in advance . Thanks, A.Nanthini -----Original Message----- From: Harald Jensås [mailto:hjensas at redhat.com] Sent: Monday, February 11, 2019 7:47 PM To: NANTHINI A A >; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Wed, 2019-02-06 at 06:12 +0000, NANTHINI A A wrote: > Hi , > We are developing heat templates for our vnf deployment .It > includes multiple resources .We want to repeat the resource and hence > used the api RESOURCE GROUP . > Attached are the templates which we used > > Set1.yaml -> has the resources we want to repeat Setrepeat.yaml -> has > the resource group api with count . > > We want to access the variables of resource in set1.yaml while > repeating it with count .Eg . port name ,port fixed ip address we want > to change in each set . > Please let us know how we can have a variable with each repeated > resource . > Sounds like you want to use the index_var variable[1] to prefix/suffix reource names? I.e in set1.yaml you can use: name: list_join: - '_' - {get_param: 'OS::stack_name'} - %index% - The example should resulting in something like: stack_0_Network3, stack_0_Subnet3 stack_1_Network0, stack_1_Subnet0 [ ... ] If you want to be more advanced you could use a list parameter in the set1.yaml template, and have each list entry contain a dictionaly of each resource name. The %index% variable would then be used to pick the correct entry from the list. parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 - resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network1]} neutron_Network_2: type: OS::Neutron::Net properties: name: {get_param: [resource_name_map, %index%, network2]} %index% is the "count" picking the 'foo' entries when %index% is 0, and 'bar' entries when %index% is 1 and so on. [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::ResourceGroup-props-opt -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanthini.a.a at ericsson.com Tue Feb 12 14:18:12 2019 From: nanthini.a.a at ericsson.com (NANTHINI A A) Date: Tue, 12 Feb 2019 14:18:12 +0000 Subject: [Heat] Reg accessing variables of resource group heat api In-Reply-To: References: <876949220f20f48d5416c2cb41f6e1aa79ee2cc0.camel@redhat.com> Message-ID: Hi , I followed the example given in random.yaml .But getting below error .Can you please tell me what is wrong here . root at cic-1:~# heat stack-create test -f main.yaml WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead ERROR: Property error: : resources.rg.resources[0].properties: : Unknown Property names root at cic-1:~# cat main.yaml heat_template_version: 2015-04-30 description: Shows how to look up list/map values by group index parameters: net_names: type: json default: - network1: NetworkA1 network2: NetworkA2 - network1: NetworkB1 network2: NetworkB2 resources: rg: type: OS::Heat::ResourceGroup properties: count: 3 resource_def: type: nested.yaml properties: # Note you have to pass the index and the entire list into the # nested template, resolving via %index% doesn't work directly # in the get_param here index: "%index%" names: {get_param: net_names} outputs: all_values: value: {get_attr: [rg, value]} root at cic-1:~# cat nested.yaml heat_template_version: 2013-05-23 description: This is the template for I&V R6.1 base configuration to create neutron resources other than sg and vm for vyos vms parameters: net_names: type: json index: type: number resources: neutron_Network_1: type: OS::Neutron::Net properties: name: {get_param: [names, {get_param: index}, network1]} Thanks, A.Nanthini From: Rabi Mishra [mailto:ramishra at redhat.com] Sent: Tuesday, February 12, 2019 6:34 PM To: NANTHINI A A Cc: hjensas at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [Heat] Reg accessing variables of resource group heat api On Tue, Feb 12, 2019 at 11:14 AM NANTHINI A A > wrote: Hi , May I know in the following example given parameters: resource_name_map: - network1: foo_custom_name_net1 network2: foo_custom_name_net2 - network1: bar_custom_name_net1 network2: bar_custom_name_net2 what is the parameter type ? json -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Tue Feb 12 16:09:53 2019 From: elfosardo at gmail.com (elfosardo) Date: Tue, 12 Feb 2019 17:09:53 +0100 Subject: [ironic] should console be renamed to seriale_console ? Message-ID: Greetings Openstackers! Currently ironic supports only one type of console: serial The current implementation also gives as assumed the support for just one type of console, but not that long ago a spec to also support a graphical console type [1] has been accepted and we're now close to see a first patch with basic support merged [2]. With the introduction of the support for the graphical console, the need to define a new parameter called "console_type" has been recognized. In practice, at the moment that would mean having "console" and "graphical" as console types, which could result in a confusing and in the end not correct implementation. With this message I'd like to start a discussion on the potential impact of the possible future renaming of everything that currently involves the serial console from "console" to "serial_console" or equivalent. Thanks, Riccardo [1] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/vnc-graphical-console.html [2] https://review.openstack.org/#/c/547356/ From kennelson11 at gmail.com Tue Feb 12 17:06:20 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Feb 2019 09:06:20 -0800 Subject: Denver PTG Attending Teams Message-ID: Hello! The results are in! Here are the list of teams that are planning to attend the upcoming PTG in Denver, following the summit. Hopefully we are getting it to you soon enough to plan travel. If you haven't already registered yet, you can do that here[1]. If you haven't booked your hotel yet, please please please use our hotel block here[2]. ----------------------------------------- Pilot Projects: - Airship - Kata Containers - StarlingX OpenStack Components: - - Barbican - Charms - Cinder - Cyborg - Docs/I18n - Glance - Heat - Horizon - Infrastructure - Ironic - Keystone - LOCI - Manila - Monasca - Neutron - Nova - Octavia - OpenStack Ansible - OpenStack QA - OpenStackClient - Oslo - Placement - Release Management - Requirements - Swift - Tacker - TripleO - Vitrage - OpenStack-Helm SIGs: - API-SIG - AutoScaling SIG - Edge Computing Group - Extended Maintenance SIG - First Contact SIG - Interop WG/RefStack - K8s SIG - Scientific SIG - Security SIG - Self-healing SIG ------------------------------------------ If your team is missing from this list, its because I didn't get a 'yes' response from your PTL/Chair/Contact Person. Have them contact me and we can try to work something out. Now that we have this list, we will start putting together a draft schedule. See you all in Denver! -Kendall (diablo_rojo) [1] https://www.eventbrite.com/e/open-infrastructure-summit-project-teams-gathering-tickets-52606153421 [2] https://www.hyatt.com/en-US/group-booking/DENCC/G-FNTE -------------- next part -------------- An HTML attachment was scrubbed... URL: From km.giuseppesannino at gmail.com Tue Feb 12 17:31:23 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 12 Feb 2019 18:31:23 +0100 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors Message-ID: Hi all, need your help. I'm trying to deploy Openstack "Queens" via kolla on a multinode system (1 controller/kolla host + 1 compute). I tried with both binary and source packages and I'm using "ubuntu" as base_distro. The first attempt of deployment systematically fails here: TASK [mariadb : Running MariaDB bootstrap container] ******************************************************************************************************************************************************************************************************** fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1"} Looking at the bootstrap_mariadb container logs I can see: ---------- Neither host 'xxyyzz' nor 'localhost' could be looked up with '/usr/sbin/resolveip' Please configure the 'hostname' command to return a correct hostname. ---------- Any idea ? Thanks a lot /Giuseppe -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 12 17:41:00 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:41:00 -0500 Subject: Placement governance switch In-Reply-To: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: Ed Leafe writes: > With PTL election season coming up soon, this seems like a good time to revisit the plans for the Placement effort to become a separate project with its own governance. We last discussed this back at the Denver PTG in September 2018, and settled on making Placement governance dependent on a number of items. [0] > > Most of the items in that list have been either completed, are very close to completion, or, in the case of the upgrade, is no longer expected. But in the time since that last discussion, much has changed. Placement is now a separate git repo, and is deployed and run independently of Nova. The integrated gate in CI is using the extracted Placement repo, and not Nova’s version. > > In a hangout last week [1], we agreed to several things: > > * Placement code would remain in the Nova repo for the Stein release to allow for an easier transition for deployments tools that were not prepared for this change > * The Placement code in the Nova tree will remain frozen; all new Placement work will be in the Placement repo. > * The Placement API is now unfrozen. Nova, however, will not develop code in Stein that will rely on any newer Placement microversion than the current 1.30. > * The Placement code in the Nova repo will be deleted in the Train release. > > Given the change of context, now may be a good time to change to a separate governance. The concerns on the Nova side have been largely addressed, and switching governance now would allow us to participate in the next PTL election cycle. We’d like to get input from anyone else in the OpenStack community who feels that a governance change would impact them, so please reply in this thread if you have concerns. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002451.html > > > -- Ed Leafe Have you talked to the election team about running a PTL election for the new team? I don't know what their expected cut-off date for having teams defined is, so we should make sure they're ready and then have the governance patch to set up the new team prepared ASAP because that requires a formal vote from the TC, which will take a while and we're about to enter TC elections. -- Doug From doug at doughellmann.com Tue Feb 12 17:44:27 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:44:27 -0500 Subject: [tc] cdent non-nomination for TC In-Reply-To: <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> References: <20190208192550.5s2sx52fnvzps4sl@yuggoth.org> <0047dff9-7138-fa7b-16a6-6bbad31a493a@openstack.org> Message-ID: Thierry Carrez writes: > Jeremy Stanley wrote: >> On 2019-02-08 12:34:18 +0000 (+0000), Chris Dent wrote: >> [...] >>> I do not intend to run. I've done two years and that's enough. When >>> I was first elected I had no intention of doing any more than one >>> year but at the end of the first term I had not accomplished much of >>> what I hoped, so stayed on. Now, at the end of the second term I >>> still haven't accomplished much of what I hoped >> [...] >> >> You may not have accomplished what you set out to, but you certainly >> have made a difference. You've nudged lines of discussion into >> useful directions they might not otherwise have gone, provided a >> frequent reminder of the representative nature of our governance, >> and produced broadly useful summaries of our long-running >> conversations. I really appreciate what you brought to the TC, and >> am glad you'll still be around to hold the rest of us (and those who >> succeed you/us) accountable. Thanks! > > Jeremy said it better than I could have ! While I really appreciated the > perspective you brought to the TC, I understand the need to focus to > have the most impact. > > It's also a good reminder that the role that the TC fills can be shared > beyond the elected membership -- so if you care about a specific aspect > of governance, OpenStack-wide technical leadership or community health, > I encourage you to participate in the TC activities, whether you are > elected or not. > > -- > Thierry Carrez (ttx) > Yes, I'm piling on a bit late so I'll keep this short and just say I agree with all of the above and have definitely found your perspective valuable. Thank you! -- Doug From ed at leafe.com Tue Feb 12 17:46:59 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 12 Feb 2019 11:46:59 -0600 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: On Feb 12, 2019, at 11:41 AM, Doug Hellmann wrote: > > Have you talked to the election team about running a PTL election for > the new team? I don't know what their expected cut-off date for having > teams defined is, so we should make sure they're ready and then have the > governance patch to set up the new team prepared ASAP because that > requires a formal vote from the TC, which will take a while and we're > about to enter TC elections. We did realize that it might be cutting it close, as nominations begin on March 5. Since the governance change would not be a new issue, we did not anticipate a lengthy debate among the TC. If it turns out that it can’t be done in time, so be it, but we at least wanted to try. -- Ed Leafe From doug at doughellmann.com Tue Feb 12 17:48:30 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 12:48:30 -0500 Subject: [tc] smcginnis non-nomination for TC In-Reply-To: <20190208140051.GB8848@sm-workstation> References: <20190208140051.GB8848@sm-workstation> Message-ID: Sean McGinnis writes: > As Chris said, it is probably good for incumbents to make it known if they are > not running. > > This is my second term on the TC. It's been great being part of this group and > trying to contribute whatever I can. But I do feel it is important to make room > for new folks to regularly join and help shape things. So with that in mind, > along with the need to focus on some other areas for a bit, I do not plan to > run in the upcoming TC election. > > I would highly encourage anyone interested to run for the TC. If you have any > questions about it, feel free to ping me for any thoughts/advice/feedback. > > Thanks for the last two years. I think I've learned a lot since joining the TC, > and hopefully I have been able to contribute some positive things over the > years. I will still be around, so hopefully I will see folks in Denver. > > Sean > Thank you, Sean. Your input and help has been valuable. I look forward to seeing your impact on the Board. :-) -- Doug From doug at stackhpc.com Tue Feb 12 17:54:26 2019 From: doug at stackhpc.com (Doug Szumski) Date: Tue, 12 Feb 2019 17:54:26 +0000 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors In-Reply-To: References: Message-ID: On 12/02/2019 17:31, Giuseppe Sannino wrote: > Hi all, > need your help. > I'm trying to deploy Openstack "Queens" via kolla on a multinode > system (1 controller/kolla host + 1 compute). > > I tried with both binary and source packages and I'm using "ubuntu" as > base_distro. > > The first attempt of deployment systematically fails here: > > TASK [mariadb : Running MariaDB bootstrap container] > ******************************************************************************************************************************************************************************************************** > fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container > exited with non-zero return code 1"} > > Looking at the bootstrap_mariadb container logs I can see: > ---------- > Neither host 'xxyyzz' nor 'localhost' could be looked up with > '/usr/sbin/resolveip' > Please configure the 'hostname' command to return a correct > hostname. > ---------- > > Any idea ? Have you checked that /etc/hosts is configured correctly? > > Thanks a lot > /Giuseppe > From lyarwood at redhat.com Tue Feb 12 18:00:21 2019 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 12 Feb 2019 18:00:21 +0000 Subject: [nova][dev] Which response code should be returned when migrate is called but the src host is offline? Message-ID: <20190212180021.nloawdf5ywvmvdgh@lyarwood.usersys.redhat.com> Hello all, I can't seem to settle on an answer for $subject as part this bugfix: compute: Reject migration requests when source is down https://review.openstack.org/#/c/623489/ 409 suggests that the user is able to address the issue while 503 suggests that n-api itself is at fault. I'd really appreciate peoples thoughts on this given I hardly ever touch n-api. Thanks in advance, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From km.giuseppesannino at gmail.com Tue Feb 12 18:18:23 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 12 Feb 2019 19:18:23 +0100 Subject: [kolla][mariadb] Multinode deployment fails due to bootstrap_mariadb or mariadb errors In-Reply-To: References: Message-ID: Hi Doug, first of all, many thanks for the fast reply. the /etc/hosts on my "host" machine is properly confiured: 127.0.0.1 localhost 127.0.1.1 hce03 # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters # BEGIN ANSIBLE GENERATED HOSTS xx.yy.zz.136 hce03 xx.yy.zz.138 hce05 # END ANSIBLE GENERATED HOSTS while the bootstrap_mariadb is attempting to start up if I check within the container I see: ()[mysql at 01ec215b2dc8 /]$ cat /etc/hosts 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.17.0.2 01ec215b2dc8 /G On Tue, 12 Feb 2019 at 18:54, Doug Szumski wrote: > > On 12/02/2019 17:31, Giuseppe Sannino wrote: > > Hi all, > > need your help. > > I'm trying to deploy Openstack "Queens" via kolla on a multinode > > system (1 controller/kolla host + 1 compute). > > > > I tried with both binary and source packages and I'm using "ubuntu" as > > base_distro. > > > > The first attempt of deployment systematically fails here: > > > > TASK [mariadb : Running MariaDB bootstrap container] > > > ******************************************************************************************************************************************************************************************************** > > fatal: [xx.yy.zz.136]: FAILED! => {"changed": true, "msg": "Container > > exited with non-zero return code 1"} > > > > Looking at the bootstrap_mariadb container logs I can see: > > ---------- > > Neither host 'xxyyzz' nor 'localhost' could be looked up with > > '/usr/sbin/resolveip' > > Please configure the 'hostname' command to return a correct > > hostname. > > ---------- > > > > Any idea ? > > Have you checked that /etc/hosts is configured correctly? > > > > > Thanks a lot > > /Giuseppe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 12 18:22:28 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Feb 2019 13:22:28 -0500 Subject: Placement governance switch In-Reply-To: References: <8BE26158-5817-497F-A9D2-60222BD9F82C@leafe.com> Message-ID: Ed Leafe writes: > On Feb 12, 2019, at 11:41 AM, Doug Hellmann wrote: >> >> Have you talked to the election team about running a PTL election for >> the new team? I don't know what their expected cut-off date for having >> teams defined is, so we should make sure they're ready and then have the >> governance patch to set up the new team prepared ASAP because that >> requires a formal vote from the TC, which will take a while and we're >> about to enter TC elections. > > We did realize that it might be cutting it close, as nominations begin on March 5. Since the governance change would not be a new issue, we did not anticipate a lengthy debate among the TC. > > If it turns out that it can’t be done in time, so be it, but we at least wanted to try. > > > -- Ed Leafe I'm not suggesting you should wait; I just want you to be aware of the deadlines. New project teams fall under the formal vote rules described in the "Motions" section of the TC charter [1]. Those call for a minimum of 7 calendar days and 3 days after reaching the minimum number of votes for approval. Assuming no prolonged debate, you'll need 7-10 days for the change to be approved. If the team is ready to go now, I suggest you go ahead and file the governance patch so we can start collecting the necessary votes. [1] https://governance.openstack.org/tc/reference/charter.html#motions -- Doug From lbragstad at gmail.com Tue Feb 12 18:25:24 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 12 Feb 2019 12:25:24 -0600 Subject: [Edge-computing] [keystone] x509 authentication In-Reply-To: References: Message-ID: Sending a quick update here that summarizes activity on this topic from the last couple of weeks. A few more bugs have trickled in regarding x509 federation support [0]. One of the original authors of the feature has started chipping away at fixing them, but they can be worked in parallel if others are interested in this work. As a reminder, there are areas of the docs that can be improved, in case you don't have time to dig into a patch. [0] https://bugs.launchpad.net/keystone/+bugs?field.tag=x509 On 1/29/19 11:55 AM, Lance Bragstad wrote: > > > On Fri, Jan 25, 2019 at 3:02 PM James Penick > wrote: > > Hey Lance, >  We'd definitely be interested in helping with the work. I'll grab > some volunteers from my team and get them in touch within the next > few days. > > > Awesome, that sounds great! I'm open to using this thread for more > technical communication if needed. Otherwise, #openstack-keystone is > always open for folks to swing by if they want to discuss things there. > > FWIW - we brought this up in the keystone meeting today and there > several other people interested in this work. There is probably going > to be an opportunity to break the work up a bit. >   > > -James > > > On Fri, Jan 25, 2019 at 11:16 AM Lance Bragstad > > wrote: > > Hi all, > > We've been going over keystone gaps that need to be addressed > for edge use cases every Tuesday. Since Berlin, Oath has > open-sourced some of their custom authentication plugins for > keystone that help them address these gaps. > > The basic idea is that users authenticate to some external > identity provider (Athenz in Oath's case), and then present an > Athenz token to keystone. The custom plugins decode the token > from Athenz to determine the user, project, roles assignments, > and other useful bits of information. After that, it creates > any resources that don't exist in keystone already. > Ultimately, a user can authenticate against a keystone node > and have specific resources provisioned automatically. In > Berlin, engineers from Oath were saying they'd like to move > away from Athenz tokens altogether and use x509 certificates > issued by Athenz instead. The auto-provisioning approach is > very similar to a feature we have in keystone already. In > Berlin, and shortly after, there was general agreement that if > we could support x509 authentication with auto-provisioning > via keystone federation, that would pretty much solve Oath's > use case without having to maintain custom keystone plugins. > > Last week, Colleen started digging into keystone's existing > x509 authentication support. I'll start with the good news, > which is x509 authentication works, for the most part. It's > been a feature in keystone for a long time, and it landed > after we implemented federation support around the Kilo > release. Chances are there won't be a need